LENGTH: STRING intrinsic that returns the dynamic data length of a STRING variable.
MARK: Used to mark the current top of the heap in dynamic data memory allocation.
MEMAVAIL: Returns the number of words between heap and stack in data memory.
MOVELEFT: Low-level intrinsic for moving mass amounts of bytes.
MOVERIGHT: Low-level intrinsic for moving mass amounts of bytes.
POS: STRING intrinsic returning the position of a pattern in a string variable.
REWRITE: Procedure for opening a new file.
RESET: Procedure for opening an existing file.
RELEASE: Intrinsic used to release memory occupied by variables dynamically allocated in the heap.
SEEK: Used for random accessing of records within a file.
SIZEOF: Function returning the number of bytes allocated to a variable.
STR: Procedure to convert a long integer to a string.
IMM Service Routines: Set of over 50 procedures that provide access to I/O and management services.
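By way of illustration only (this fragment does not appear in the patent), the following Pascal program exercises several of the intrinsics listed above: LENGTH, POS, SIZEOF, STR, REWRITE, RESET and SEEK. The heap intrinsics MARK, RELEASE and MEMAVAIL and the byte movers MOVELEFT and MOVERIGHT are omitted. The file name and data values are invented, and the dialect shown is a modern equivalent (it compiles with Free Pascal) rather than the exact syntax accepted by the program development system.

program IntrinsicsDemo;
var
  line, pattern, numText : string;
  rec : record
          id   : integer;
          name : string[20];
        end;
  f : file of integer;
  i, hit : integer;
begin
  line    := 'ERROR 42 IN SEGMENT LOADER';
  pattern := 'SEGMENT';

  writeln('dynamic length of line: ', Length(line));   { LENGTH }
  hit := Pos(pattern, line);                           { POS }
  if hit > 0 then
    writeln(pattern, ' found at position ', hit);

  Str(123456789, numText);                             { STR: long integer to string }
  writeln('as text: ', numText);

  writeln('record occupies ', SizeOf(rec), ' bytes');  { SIZEOF }

  Assign(f, 'demo.dat');
  Rewrite(f);                                          { REWRITE: open a new file }
  for i := 0 to 9 do
    write(f, i);
  Close(f);

  Reset(f);                                            { RESET: open an existing file }
  Seek(f, 5);                                          { SEEK: random access to record 5 }
  read(f, i);
  writeln('record 5 holds ', i);
  Close(f);
end.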
An example of a segmented program is the program development system itself. The root segment of the program development system simply displays a choice menu on the program development system terminal and then accepts a selection from the keyboard. This results in a segment procedure being loaded from disk that implements the choice (i.e., editor, filer, etc.).

A disadvantage of segmented programs is that their loaded code cannot be shared among a number of users, as the instantaneous state of each user's code space is not constant but depends upon the particular state of each program's segment overlay. For example, as the program development system is an overlayed program, every program development system user who logs on causes a new copy of the program development system code to be loaded into memory. If, on the other hand, a number of identical applications use a non-segmented program, then only one copy of the program code would actually exist in each general purpose processor 942, 944. The individual virtual machines created for each instance of the application would map the virtual machine code space to the single copy of code and share it.
A more rational way to organize large multiuser applications is to design a system using a number of cooperating, memory-resident processes. This brings two advantages:
(1) Better response, due to no segment-loading overhead.
(2) Reduction in the real memory space taken with multiple users, due to code-sharing of the non-segmented components of the application system.
I. Pascal Linker

The Pascal linker allows precompiled procedures to be linked to form an object code file that can be loaded and run in an ideal machine. A reference to an external procedure or function can be made within a separately compiled module, provided that the referenced procedure has been declared as external. The linker attempts to resolve references between a set of modules submitted to it, or by selecting named precompiled procedures from within a library of code modules, if one is supplied. It is also possible to specifically replace all calls to a procedure internal to a module with a procedure supplied at link-time. This allows a new or test version of a procedure or function to be globally replaced without recompiling each module.
A link stage is required for programs that run outside the program development system environment and which use the ideal machine monitor service routines, for the following reasons. First, most of the extensions to standard Pascal are implemented as functions and procedures called from the user program. Some of these calls are satisfied intrinsically by the ideal machine interpreter; an example of this is any of the character string functions. When compiled, a call to one of these functions generates a single P-code instruction (with an argument) that performs the requested function in one P-code instruction execution cycle. These functions are effectively micro-coded into the ideal machine's instruction set.

However, other extensions call internal program development system procedures implemented as P-code routines. When a program is being run under the program development system (by typing EXECUTE from the program development system terminal), many of the extensions used in a program are serviced by the program development system itself. Examples of this are the GET and PUT intrinsics: using these standard intrinsics actually generates calls to routines in the program development system that drive lower level I/O functions. Such a user program is referred to as a Level 4 process, with the program development system providing a Level 3 operating environment which divorces the Level 4 application program from the underlying architecture of the ideal machine and system 100. This is the environment that the editor operates in, and a user would normally test subsections of his application by using the program development system execute function.
Eventually, however, an application program operates in a stand-alone environment, and is initiated not from a terminal command under the program development system, but by JSAM. Therefore, any procedural routines normally supplied by the program development system to implement intrinsics and extensions (i.e., GET and PUT) must be linked with the application before it can operate independently of the program development system.
The second requirement for a link stage relates to the low-level ideal machine monitor service routines. Once again, a subset of these calls trap directly to micro-coded routines in the ideal machine monitor, typically those that call REX functions directly (for example, Send a Packet). Other routines require a P-code "front end" to map a higher level call into a specific set of primitive ideal machine monitor calls; for example, one ideal machine monitor service routine reads the memo in an indexed dataset via a sequence of basic ideal machine monitor intrinsic calls. These P-code drivers must be linked into the user's application.

Note that these procedures, when linked to the user program, reduce by a small amount the code space available for the application, as they reside in the same 64K-byte address space available in a single ideal machine. However, the amount of space taken by extension, intrinsic and ideal machine monitor service routines is not great, typically less than 2K-bytes.
When a program is running under the program development system after an EXECUTE command, the application code is co-resident with the program development system in the same virtual machine, and hence the amount of space available for the application is less than 64K-bytes (the program development system takes approximately 16K-bytes). Normally, the various segments and procedures of an application are tested in this environment.
J. Program Testing

The program development system, as an interactive single-user system, does not provide any specific testing tools for SPM programs. As the program development system resides in an ideal machine which executes P-codes, SPM code cannot be checked out in the program development system environment itself. To test SPM code, two facilities are available:
(1) Monitor functions from a system programmer's console for in-system testing.
(2) Use of an auxiliary extension board for in-system and stand-alone testing.
For Pascal programs, a high degree of static checking is carried out by the Pascal compiler, which not only checks for syntactic and semantic errors, but also provides extensive type-checking of variables across assignments, procedure calls, etc. A successfully compiled Pascal program will mostly contain only logical errors at run-time. As these programs are, by the nature of the Pascal language, of a modular design, individual sections can be tested by executing them under the program development system, with run-time errors reported to the program development system console. For execution tracing, the module can be seeded with READs and WRITEs which will interact with the program development system terminal as the program runs. The Hex Dump utility can also be used to print diagnostic lists of data files generated by the program as an alternative source of trace information. This latter method is then available for producing dumps of trace files generated by the complete application system running outside the supervision of the program development system. The interactive nature of the editor and compiler ensures a fast error correction cycle to eliminate logical operation problems.
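For instance (this fragment is illustrative and not taken from the patent), a module under test might be seeded as follows, so that intermediate values appear on the program development system terminal and execution pauses at a chosen point; the procedure and variable names are invented for this sketch.

program TraceSeedDemo;
var
  rate, hours, pay : integer;

function GrossPay(r, h: integer): integer;
begin
  writeln('TRACE: entering GrossPay with r=', r, ' h=', h);  { seeded WRITE }
  GrossPay := r * h;
end;

begin
  rate  := 12;
  hours := 40;
  pay   := GrossPay(rate, hours);
  writeln('TRACE: pay=', pay);                               { seeded WRITE }
  write('TRACE: press RETURN to continue ');
  readln;                                                    { seeded READ pauses the run }
end.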
Privileged programmers use the program development system to generate SPM programs to be run in the host space of the system 100. SPM programs are necessary when a new device handler, a hardware-specific transient, a time-critical transient or a time-critical job supervisor is required.

Authorized personnel using a system programmer's terminal 270 can load and run programs for test purposes, interacting with the processor environment in which the program is loaded. From a system programmer's terminal 270, the program, scratchpad and data memory of any processor can be displayed and altered. Packets can be sent to, and received from, the process using the test program, and dump tables can be specified in the event of the process trapping the processor. As a system programmer also has access to all of the functions available to a system operator, the system log can be used to collect run-time trace packets sent from the test program, and the verified program can then be installed in a program library. All of these functions can occur during normal system operation.
K. Ideal Machine Implementation

The implementation of the ideal machine software in a general purpose processor is presented here for interest only. Except for any desired application program, no user programming is required in a general purpose processor.
The ideal machine monitor subsystem is a collection of SPM processes resident in a general purpose processor's high-speed program memory. Two main processes (i.e., processes with an allocated scratchpad context and a system process ID (SPID) of 0) can be identified: the ideal machine monitor subprocess driver (IMMS) and the general purpose processor resource manager.

All requests for ideal machines are sent as create process requests to IMMS by JSAM, specifying a P-code program ID. The ideal machine monitor creates a subprocess within its own main context for each such request. The shared program code for this subprocess provides two functions:
(1) the ideal machine monitor interpreter driver (IMMI), and
(2) the ideal machine interpreter (IMI).
The IMMI function of the subprocess generates a request to the general purpose processor resource manager to load the program specified by the request into data memory and return to IMMI the data and code segment register contents needed to map the loaded code in memory. If the P-code is already loaded, the existing copy is pointed to. IMMI then initiates running of an ideal machine by starting the ideal machine interpreter (IMI) decoding and executing the P-codes contained in the code space now pointed to by the mapping registers.
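Purely to illustrate the kind of work IMI performs on the mapped-in code space, the following sketch runs a miniature fetch/decode/execute loop. The three opcodes and their encodings are invented for this example; they are not the P-code instruction set actually executed by the ideal machine.

program PcodeLoopSketch;
const
  opPush = 1;   { push the following byte onto the stack }
  opAdd  = 2;   { pop two values, push their sum }
  opHalt = 3;   { stop the interpreter }
var
  code   : array[0..15] of byte;
  stack  : array[0..15] of integer;
  pc, sp : integer;
begin
  { a tiny "loaded" code space: push 2, push 3, add, halt }
  code[0] := opPush;  code[1] := 2;
  code[2] := opPush;  code[3] := 3;
  code[4] := opAdd;
  code[5] := opHalt;

  pc := 0;  sp := 0;
  while code[pc] <> opHalt do            { fetch }
    case code[pc] of                     { decode and execute }
      opPush: begin
                stack[sp] := code[pc + 1];
                sp := sp + 1;  pc := pc + 2;
              end;
      opAdd:  begin
                stack[sp - 2] := stack[sp - 2] + stack[sp - 1];
                sp := sp - 1;  pc := pc + 1;
              end;
    end;
  writeln('value left on the stack: ', stack[sp - 1]);   { 5 }
end.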
IMMI provides intrinsic services on behalf of IMI during process execution, including interfacing to REX functions. IMMI also deallocates resources used by the ideal machine when the P-code process terminates, by calling the general purpose processor resource manager.

IMMI and IMI functions are provided by a single subprocess and use the suspend option when waiting for the completion of REX functions (I/O, event management, etc.).

Every P-code process running in its ideal machine employs an IMMI/IMI subprocess to implement the basic machine emulation. However, due to SPM code sharing, the only SPM program code required to be resident in the general purpose processor program memory is a single copy each of the ideal machine monitor, IMMI and IMI. The general purpose processor resource manager is a P-code process running in the system ideal machine.
Major kernel system processes implemented in Pascal use a main process invocation of IMMI/IMI. As each main process has its own scratchpad context, it can be assigned a unique system bus ID (SBID) by JSAM. This is necessary for those Level 2 system processes (for example, SYSDBV) requiring an instant global address (SBID) to which any process can send a packet. (When an IMMI/IMI process runs as a subprocess under the ideal machine monitor, the subprocess identity cannot be pre-established; thus such subprocesses cannot be assigned global addresses.) In certain cases, SBIDs are also assigned dynamically to permit use of the packet rerouting facilities provided by the inter-processor communications system to implement fault-tolerant dual processing schemes.

Finally, this approach provides a system ideal machine with its own scratchpad context, reducing the process context-switching overhead to a minimum. In contrast, a normal ideal machine operating as a subprocess must save its operating context in a shadow context in data memory, to permit sharing of the ideal machine monitor's main context.
RESIDENT EXECUTIVE (REX)

A. Overview

The resident executive (REX) is a firmware subsystem integral to each system 100 processor. It provides the resource management which software processes require to effectively use the hardware facilities of the standard processor module 500 (SPM), common to all processors. Software processes executing in a processor perform under the management of REX, which can support the operation of multiple, concurrent software processes. REX's responsibilities include the following service areas:
(1) Logical initialization of the standard processor module 500 on power-up.
(2) Loading of programs into the standard processor module 500 on request.
(3) Allocation of memory resources to processes.
(4) Scheduling and dispatching of processes.
(5) Process suspension and event management.
(6) Input/output and packet transfer services.
(7) Inter-processor utilities and subroutine calls.
(8) Timer utilities.
(9) List processing utilities.
(10) Interrupt handling.
(11) Diagnostics.
In the system 100, each processor contains an identical copy of REX, combined with a processor-specific extension to REX (called, generically, EXREX). EXREX manages the particular hardware extension of the standard processor module 500 which, together with the standard processor module 500, forms one of the five different system 100 processor types.

Communications between REX and an active resident process are via subroutine calls. Communications between a specific processor's REX and a process in another processor are via packets transmitted over the inter-processor communications (IPC) network. There is a close relationship between the multiple copies of REX and the kernel system management process, JSAM (job scheduling, allocation and monitoring subsystem manager). Each REX is periodically requested by JSAM to report the status of its resources, and this information is used to allocate required resources to the dynamically changing workload imposed on the system.
An understanding of REX services is important to users of ideal machines for applications programmed in Pascal, although such application programs do not interface with REX directly. Ideal machines created in general purpose processors 942, 944 (GAPs) provide a logically separated and hardware-fence-protected environment for Level 3 application systems. Nonetheless, most of the ideal machine monitor service routines called via Pascal program procedures make direct and obvious use of the underlying REX callable subroutines of the host general purpose processor.

A system 100 consists of a collection of independent, tightly-coupled processors, each with its own central processing unit, program, scratchpad, and data memories. Although working cooperatively with other processors in the system, each within itself is an autonomous processing unit. At a system-wide level, each provides a usable resource to which work, in the form of software processes, is allocated. At the processor level, this allocated workload, together with its hardware, must be managed by the processor itself, free of any higher-level system-wide organizational structure.
The resident executive provides this management. Each processor, of the five different types that can exist in a Delta 2 system, has up to 32K-words of program memory, 12K of which is read-only memory used to house a copy of the REX program code. Each contains an identical copy of the basic REX code and has its own personal extension to REX that manages the hardware resources specific to that processor type. This extension is referred to as EXREX and is housed in an additional 4K-words of read-only memory.

Because REX provides a set of fixed, well-defined user functions and is an integral part of the basic hardware of a standard processor module 500, it is considered to be as much a primitive component of a standard processor module 500 as are the instruction set, memories, registers, and basic logic of the board itself. Thus the standard processor module 500 processor is a self-managed hardware system providing a reliable environment in which to host multiple concurrent processes, all contending for the use of the processor's fixed resources. A user process executes compiled basic machine instructions to achieve its specific goal, but calls on a REX function to communicate with other processes, acquire additional machine resources, use timers, or perform other general activities. REX management not only provides the applications programmer freedom from concern with many hardware-specific operations, but greatly increases the overall reliability of the system, since most of the more complex activities needed by the user, and normally callable by basic machine instructions, are provided by mature, well-tried REX routines. Additionally, it reduces the problem of resource and activity contention by concurrent processes in a single processor by allowing for a structured, disciplined way to handle allocation of processor resources to processes. Thus the low-level program visibility of a processor presented to a programmer is both the standard processor module 500 instruction set and the set of REX-callable functions.

The life-cycle of a process within the system 100 is used as an example of the kinds of services REX provides within each processor.
A process is the basic unit of work from which system-wide jobs are built. To execute, a process needs a processor with the available resources necessary to host the associated program (memory, attached channels, execution time, etc.). Processes are initiated by a component of the kernel system called the job scheduling, allocation, and monitoring subsystem manager (JSAM), which bases its decision as to which processor should host a process on a comparison of the quantity and types of resources required with those available in the system's processors. JSAM maintains current resource inventories by a regular background dialogue with the REX of each live processor in the system. (Process requests can be sent to JSAM by any active process in any processor in the system.)

Having chosen a suitable host for the requested process, JSAM sends to the REX of that processor a request to initiate the process. If the program code needed by the process is not already in program memory, REX loads the code from a disk-resident program library and creates the process. Once running, the process acquires from REX whatever resources are necessary to perform its task.
REX can initiate and manage a large number of processes, suspending those awaiting resources or an event and scheduling the execution of those currently dispatchable. When a process is running, it can make subroutine calls to REX to request a wide range of services, including:
(1) Dynamic allocation of scratchpad and data memory.
(2) Management of and access to user-defined lists, allowing a process to efficiently maintain a variety of single- and multi-thread queues and stack structures.
(3) Communications with other processes in either the same or different processors.
(4) Management of any number of general purpose timers on behalf of the process.
(5) Semaphore services for user-defined resource sharing and activity synchronization.
(6) Management of hardware interrupts.
(7) A set of general purpose I/O services allowing data transfer to be performed between and among devices, datasets, and processes owned by the requesting process, regardless of where it is located in the system.
(8) A large number of general purpose utility and computational routines designed to ease the burden of user-programming of standard processor module processes.
During process activation, REX manages the interface between the process and any outside events related to it (for example, the receipt of packets destined for it). A process can be suspended, and thus become dormant, for a number of reasons: waiting for a response to a communication, a timer to fire, or a resource to become free. When a process is suspended, REX enqueues the process until the event occurs, and dispatches whichever process has become the next-most-eligible to use the machine.

When a process terminates, REX returns all intra-processor resources used back to the general pool and notifies JSAM of termination.
REX also manages hardware interrupts and provides a utility routine for each possible external interrupt condition. Many interrupts relate to the detection of hardware errors within the processor. REX manages the trapped error condition, allowing inspection from a system terminal and diagnostic processing.

When a processor is first powered up, REX takes the hardware through a predefined series of initialization and diagnostic routines. After successful completion, REX reports to JSAM that the host processor is available for work allocation.
B. Memory Management

Scratchpad management provides all resident programs in a standard processor module 500, including REX, with an orderly way of using the scratchpad memory resource. Order is preserved by requiring that all allocations and deallocations be accomplished using REX's scratchpad routines exclusively.

Each processor contains 4096 16-bit words of high-speed scratchpad memory, organized as 32 pages of 128 words each. Each page is arranged as eight packets of 16 words each (the structures of a scratchpad packet and an inter-processor packet are obviously similar and logically equivalent).
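As a small illustration of this geometry, the following sketch derives the figures quoted above. The constants come directly from the text; the address formula is an assumption made for this example, since the patent does not give one explicitly.

program ScratchpadLayout;
{ 4096 16-bit words = 32 pages x 128 words; each page = 8 packets x 16 words. }
const
  WordsPerPacket = 16;
  PacketsPerPage = 8;
  WordsPerPage   = WordsPerPacket * PacketsPerPage;   { 128 }
  PagesPerModule = 32;
  WordsPerModule = WordsPerPage * PagesPerModule;     { 4096 }

{ Assumed word address of the first word of packet 'packet' on page 'page'. }
function PacketBase(page, packet: integer): integer;
begin
  PacketBase := page * WordsPerPage + packet * WordsPerPacket;
end;

begin
  writeln('total scratchpad words: ', WordsPerModule);
  writeln('page 0 (common area) starts at word ', PacketBase(0, 0));
  writeln('page 1 (REX context) starts at word ', PacketBase(1, 0));
  writeln('page 2, packet 3 starts at word ', PacketBase(2, 3));   { 304 }
end.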
The first page of scratchpad memory, page zero, is reserved for common use. Information which must be accessible to all resident processes in the host processor, such as the date and time, is maintained here. REX uses page one as its own private scratchpad area or context. The remaining 30 pages are available for allocation by REX, either for user-process contexts or dynamic working storage.
A context is an area of scratchpad memory allocated by REX at the time each process is created. The number of scratchpad packets allocated is specified to REX in the create process request. This is generally kept to a minimum, since context packets are allocated for the life of the process and provide "registers" for system control information as well as a static working area of storage used by the process as it desires.

The initial context allocation to a process may not meet all of the process's lifetime requirements for scratchpad memory. This may occur because the size of the context is physically limited, or because it is wasteful for a process to request long-term allocation for what are often short-term needs. REX satisfies such requirements for dynamic working storage by allocating and deallocating packets to each process on demand. Such requests are made on an individual packet basis. The amount of scratchpad allocated can thus expand and contract as necessary to meet the changing demands of the process during its lifetime.
Since a context must be allocated to each main process in a processor, the number of main processes is limited to the number of contexts. To keep this number as large as possible, REX allocates dynamically from those pages from which a context has already been allocated and maintains all spare packets available from previously allocated contexts on the available packet queue. Each time a context is allocated, packets not requested from the context's eight packets are placed on this queue. Because REX prefers to have as many full-page contexts as possible, each time a context is deallocated, spare packets from the same page are removed from the available packet queue. If the packets are contiguous with the deallocated context, they extend it. If not, they are placed on a free packet stack to await the return of the rest of the packets in the page. The free packet stack is composed entirely of packets which were once on the available packet queue, but have since been removed because the context from the page of which they were a part is no longer allocated. The free packet stack is maintained as a single-linked stack, using REX's standard scratchpad list facilities. When the remaining context packets are returned, they are all removed from the free packet list and used to extend the unused context back into a full page. REX will not allocate from the free packet stack unless the available packet list is empty. In this way, REX reconsolidates complete context pages.
The page consolidation function is performed by REXIDLE, REX's permanent background process, which continually scans the available packet queue.

Besides managing scratchpad memory as a resource, REX also provides a set of functions for maintaining lists of individual packets on single-linked stacks or queues for user-defined operations.
Data memory management provides all resident processes in a processor, including REX, with an orderly way of using the data memory resource.

Each processor contains 64K 22-bit words of data memory. Each word consists of 16 bits and six error-correcting code (ECC) bits. The ECC bits permit detection of all 2-bit errors and correction of all 1-bit errors.

Initially, a minimal work area of 4K-words is assigned to REX. All remaining available memory is assigned to extension board REX (EXREX), the needs of which vary depending on processor type. If, during initialization, EXREX determines that this assignment
is excessive, the surplus memory blocks are transferred back to REX.

Whatever the size of the area initially assigned to REX, it may at times exhaust its supply of data memory. EXREX can aid REX by providing temporary extension blocks on these occasions. During physical initialization, EXREX informs REX of the addresses of its allocation and deallocation routines, its data memory area, and the maximum length of the extension blocks it can provide. Then, whenever REX requires additional data memory, it requests a block from EXREX and later, after the need for the block passes, it is returned to EXREX.
A request to load a program into program memory can be accepted regardless of the memory type of the target area. Programs cannot actually be loaded into ROM, but part of the loading operation is to verify that the expected checksum which accompanies the program being "loaded" matches the actual checksum computed from the data in memory. Thus a program on disk can be "loaded" into ROM to verify that the two are identical.

Another advantage of treating memory uniformly is that, as long as loads are requested for all programs, RAM and ROM can be interchanged without altering any code. If the target area is RAM, the area written to is changed. If the target area is ROM, it is not. This flexibility greatly facilitates testing and debugging.
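A minimal sketch of the verify-only "load" described above follows. The patent does not specify the checksum algorithm, so a simple 16-bit additive checksum is assumed here purely for illustration, and the data values are invented.

program RomVerifySketch;
type
  WordBlock = array[0..7] of word;
var
  diskCopy, romImage : WordBlock;
  i : integer;

function Checksum(const blk: WordBlock): word;
var
  k   : integer;
  sum : word;
begin
  sum := 0;
  for k := 0 to 7 do
    sum := (sum + blk[k]) and $FFFF;   { assumed 16-bit wrap-around sum }
  Checksum := sum;
end;

begin
  for i := 0 to 7 do
  begin
    diskCopy[i] := i * 3;              { the program image held on disk }
    romImage[i] := i * 3;              { what is already resident in ROM }
  end;

  if Checksum(romImage) = Checksum(diskCopy) then
    writeln('ROM contents match the program on disk')
  else
    writeln('checksum mismatch: ROM and disk copies differ');
end.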
REX allocates blocks of program memory in lengths of up to 4K-words. Blocks can be requested by length alone, or by both address and length. Each block of available program memory (whether RAM or ROM) is described by a program memory element (PME), an internal data structure maintained in a queue by REX in its own data memory area.

When a block is requested by length, REX searches the program memory queue for a block of sufficient size from which to satisfy the request. The search is begun at the head of the queue, and the first block of sufficient size encountered is selected.

If a block is selected from which to make the allocation, the program memory element for the block is changed (its length and address are adjusted so that it then describes only that portion of the original block which remains after the allocation is made). If no residual exists, the program memory element is instead deleted from the queue and returned to available data memory. On the other hand, if the allocation cannot be made because a block of sufficient size does not exist, the request is rejected and an error return to the calling process is made.
When a user requests that a previously allocated block be returned to available program memory, an attempt is made to recombine the returned block with any existing unassigned blocks on either side of it in the program memory address space.
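The first-fit search over the PME queue can be sketched as follows. The record layout, routine names and block sizes are invented for this example; the real program memory elements are kept by REX in its own data memory, and the recombination of returned blocks is not modeled.

program FirstFitSketch;
type
  PMEPtr = ^PME;
  PME = record
    address : integer;   { first word of the free block }
    length  : integer;   { length of the free block in words }
    next    : PMEPtr;
  end;
var
  head : PMEPtr;

procedure AddFree(addr, len: integer);
var
  p : PMEPtr;
begin
  new(p);
  p^.address := addr;  p^.length := len;
  p^.next := head;  head := p;
end;

{ First-fit allocation by length: returns the block address, or -1 if no
  block of sufficient size exists (the "error return" case). }
function Allocate(len: integer): integer;
var
  p, prev : PMEPtr;
begin
  Allocate := -1;
  prev := nil;  p := head;
  while p <> nil do
  begin
    if p^.length >= len then
    begin
      Allocate := p^.address;
      if p^.length > len then
      begin                              { shrink the PME to the residual }
        p^.address := p^.address + len;
        p^.length  := p^.length - len;
      end
      else
      begin                              { exact fit: delete the PME }
        if prev = nil then head := p^.next else prev^.next := p^.next;
        dispose(p);
      end;
      exit;
    end;
    prev := p;  p := p^.next;
  end;
end;

begin
  head := nil;
  AddFree(8192, 4096);                   { two free blocks of 4K words each }
  AddFree(0, 4096);
  writeln('1K block at ', Allocate(1024));    { taken from the head block }
  writeln('4K block at ', Allocate(4096));
  writeln('8K block at ', Allocate(8192));    { -1: no block large enough }
end.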
A program can consist of from one to nine relocatable load modules, each limited to a maximum length of 4K-words, and up to 32 overlay modules. Load modules can be of either of two types: primary or secondary. Primary modules are program-related; that is, only one re-entrant copy need exist in memory no matter how many processes share its use. Secondary modules are process-related, and a separate copy of each must exist for each active process. Overlay modules are loaded by explicit request and may overlay primary or secondary modules. (Generally, overlay modules do not exist in re-entrant programs.) A program must have at least one primary module loaded into program memory before a process can be created.
Programs are loaded by REX in response to create process request packets. Such requests assume the implied condition that the program be loaded only if it is not already present in program memory. Because the transfer of a program from disk is a relatively time-consuming operation, REX creates a subprocess to perform each load.

Initially, the subprocess sends SYSLOAD (part of the system directory subsystem resident in a disk data processor 934, 936) a packet requesting the transfer. SYSLOAD (via the SYSDIR program load process
The data memory management routines used by REX and available to all processes implement a buddy system of storage management. Blocks of any length up to the maximum may be requested by a process. However, available data memory is maintained only in blocks of 4, 8, 16, and so on up to 32,768 words; that is, the lengths of all available blocks are powers of two in the range 2² to 2¹⁵ inclusive. A request for a block of any other length is satisfied by allocation of the next-larger available block and successively smaller power-of-two blocks until the request is satisfied with a total allocation which exceeds the requested size by the least possible amount. Since the smallest available block is four words in length, up to three extra words may be allocated to satisfy a request. The unused memory remaining after request allocation is retained as a number of smaller available blocks.

During allocation, it is possible that an available next-larger block does not exist from which to allocate the request. In this case, a still larger available block can be successively split into two "buddies" of equal size until a block of the next-larger size is obtained. Since the length of each available block is a power of two, the length of each buddy thus obtained is also a power of two.
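Under one reading of the allocation rule just described, a request is rounded up to a multiple of four words (hence at most three words of excess) and satisfied by the power-of-two pieces that make up that total. The following sketch models only this size arithmetic; it does not track which blocks are actually free, nor the buddy splitting and recombination performed by the real routines.

program BuddySketch;
const
  MinBlock = 4;        { 2^2, smallest available block }
  MaxBlock = 32768;    { 2^15, largest available block }

{ Round a request up to a multiple of MinBlock, then list the power-of-two
  pieces (each between MinBlock and MaxBlock) that make up that total. }
procedure ShowAllocation(request: longint);
var
  total, piece : longint;
begin
  total := ((request + MinBlock - 1) div MinBlock) * MinBlock;
  write('request ', request, ' -> total ', total, ' = ');
  piece := MaxBlock;
  while total > 0 do
  begin
    if piece <= total then
    begin
      write(piece, ' ');
      total := total - piece;
    end
    else
      piece := piece div 2;
  end;
  writeln;
end;

begin
  ShowAllocation(5);      { -> 8            (3 extra words) }
  ShowAllocation(100);    { -> 64 32 4      (exact)         }
  ShowAllocation(4100);   { -> 4096 4       (exact)         }
  ShowAllocation(32768);  { -> 32768        (exact)         }
end.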
When a previously assigned block is deallocated, unless its original length was a power of two, the block cannot be made directly available. Instead, smaller blocks, the lengths of which are each successively larger powers of two, are split off from the deallocated block until the length of the remainder is also a power of two. It can then be made available.

Each time a new available block is created, either by deallocation or as a result of splitting, all other available blocks of the same length are examined to determine if the bu
