US005832304A

United States Patent [19]
Bauman et al.

[11] Patent Number: 5,832,304
[45] Date of Patent: Nov. 3, 1998

[54] MEMORY QUEUE WITH ADJUSTABLE PRIORITY AND CONFLICT DETECTION

[75] Inventors: Mitchell A. Bauman, Circle Pines; Jerome G. Carlin, Shoreview; Roger L. Gilbertson, Minneapolis, all of Minn.

[73] Assignee: Unisys Corporation, Blue Bell, Pa.

[21] Appl. No.: 404,791

[22] Filed: Mar. 15, 1995

[51] Int. Cl.6 .................................... G06F 13/18

[52] U.S. Cl. ............... 395/860; 395/859; 711/151; 711/158

[58] Field of Search ............... 395/478, 485, 444, 449, 872, 250, 474, 475, 477, 297, 728, 729, 732, 857; 370/60, 61, 92, 93, 94.1, 95.3, 112, 113; 711/151, 158

[56] References Cited

U.S. PATENT DOCUMENTS

4,400,771   8/1983  Suzuki et al. ............... 395/478
4,598,362   7/1986  Kinjo et al. ................ 395/250
4,707,731  11/1987  Sullivan et al. ............. 395/474
4,735,352   4/1988  Clark et al. ................ 370/58
5,088,053   2/1992  Sprague et al.
5,165,021  11/1992  Wu et al. ................... 395/250
5,193,193   3/1993  Iyer ........................ 395/297
5,214,769   5/1993  Uchida et al. ............... 395/478
5,218,688   6/1993  Nishida ..................... 395/478
5,235,688   8/1993  Taniguchi et al. ............ 395/445
5,241,632   8/1993  O'Connell et al. ............ 395/297
5,28_,___   _/1994  Hsieh et al. ................ 395/297
5,4__,___   7/1995  Bauman et al.
5,530,835   6/1996  Vishlitzky et al. ........... 395/474

Primary Examiner—Eddie P. Chan
Assistant Examiner—Hiep T. Nguyen
Attorney, Agent, or Firm—Charles A. Johnson; Mark T. Starr

[57] ABSTRACT

An improved memory request storage and allocation system using parallel queues to retain different categories of memory requests until they can be acted on by the main memory. Memory requests in the parallel queues are allowed to access the main memory according to a queue priority scheme. The queue priority scheme is based on an adjustable ratio, which determines the rate at which memory requests from one queue are allowed to access the main memory versus memory requests from other queues. Registers associated with each queue indicate whether the queue holds any memory requests, providing a bypass that prevents an attempt to retrieve a non-existent memory request from a queue. Conflict detection circuitry maintains proper instruction order in the parallel queue architecture by ensuring that subsequent memory requests, which have the same address as a memory request already in the queue, are placed in the same queue in the order that they were entered into the queue.

25 Claims, 9 Drawing Sheets
[Representative drawing: block diagram of the storage controller data path, showing the input MUX, input register, second level cache, the Main Storage Memory Queue with its IP FIFO queue and I/O FIFO queue, the queue priority logic and output MUX, and the remote interface]
`
0001

Volkswagen 1004
`

`
`U.S. Patent
`
`Nov. 3, 1998
`
`Sheet 1 of 9
`
`5,832,304
`
[FIG. 1: fundamental block diagram of the Data Processing System, showing the Instruction Processor, Storage Controller, Main Storage Unit, and Input/Output Adapter with Input/Output Processors]
`

`
[Sheet 2 of 9, FIG. 2: memory hierarchy, showing the Instruction Processor (IP) with First Level Cache, Storage Controller (SC), Input/Output Adapter (IOA), and Main Storage Unit (MSU)]
`

`
[Sheet 3 of 9, FIG. 3 (PRIOR ART): storage controller with a single FIFO queue receiving external memory requests]
`

`
[Sheet 4 of 9, FIG. 4: multi-cabinet data processing system with remote interfaces between storage controllers]
`

`
[Sheet 5 of 9, FIG. 5: parallel IP and I/O memory queues within the Main Storage Memory Queue, fed from the input register and second level cache]
`

`
[Sheet 6 of 9, FIG. 6: Queue Priority logic, with ratio counter, priority select register, and I/O output valid register; also the truth table relating the IP conflict, I/O queue conflict, and valid signals to the memory queue selected]
`

`
[Sheet 7 of 9, FIG. 7: the parallel queue structure with address conflict detection circuitry added]
`

`
[Sheet 8 of 9, FIG. 8: IP Request Conflict Detection and I/O Request Conflict Detection circuitry]
`

`
[Sheet 9 of 9, FIG. 9: Queue Write Selection Logic 170, with I/O memory request indicator, remote interface indicator, IP queue conflict, and I/O queue conflict inputs, and queue write enable outputs]
`

`
MEMORY QUEUE WITH ADJUSTABLE PRIORITY AND CONFLICT DETECTION
`
`BACKGROUND OF THE INVENTION
`
`1. Field of the Invention
`
`This invention relates to memory access queues, and more
`particularly to providing adjustable-ratio priorities to paral-
`lel memory access queues, and to detecting, maintaining and
`managing the order of memory references in the queues.
`2. Description of the Prior Art
`Data processing systems typically perform central pro-
`cessing and input/output (hereinafter “I/O”) functions. The
`central processing units in a data processing system are often
`referred to as “instruction processors”. The instruction pro-
`cessors carry out data manipulations and control functions at
`the heart of the data processing system. The instruction
`processors are heavily dependent upon the main storage
`(i.e., memory) within the data processing system, since they
`require data on which to operate, and resulting calculations
`must be stored for future use. I/O functions are also very
`reliant on the main storage of the system, since data is
`transferred between peripheral devices and the main
`memory to allow the instruction processors to control the
`peripheral devices coupled to it.
A problem with many data processing systems is that the
`rate at which the instruction processors can process data is
`faster than the rate at which I/O devices can access and
`transfer data. Although memory caching can help increase
`I/O rates, the main storage must still be accessed on cache
`“misses”. Both the instruction processors and the I/O
`devices in the system must therefore access the main stor-
`age. Where instruction processor memory requests,
`I/O
`memory requests, or any other memory requests are com-
`peting for main storage access,
`it is very likely that the
requests will accumulate faster than the main storage can
`manage the requests. In such cases, memory queues are
`implemented to temporarily store the various memory
`requests until the main storage can individually act on the
`memory requests. These “memory requests” are simply
`addresses to data stored in a memory device. It is known in
the art to utilize a single first-in-first-out (FIFO) queue to
`store these memory requests, such as the FIFO shown in
`FIG. 3. However, storing the I/O memory requests in a
`single FIFO along with instruction processor memory
`requests can decrease the I/O rate. This is because prior
`instruction processor memory requests in the FIFO queue
`are handled before later I/O memory requests. This addi-
`tional delay for I/O activity produces the undesirable result
of lowering I/O rates. Furthermore, the I/O in the system
`often includes devices which have a maximum allowable
`memory access time, and if this access time is exceeded,
`errors can occur, such as device overrun errors.
`The present invention overcomes this problem by utiliz-
`ing parallel memory queues, and by recognizing memory
`requests from these queues based on an adjustable priority
`scheme. A different queue can be assigned to a particular
`type of memory request. For example, one queue can be
`assigned to store I/O memory requests, and another queue
can be appointed as the IP memory request queue. A system having approximately equal IP to I/O memory requests
`could have a priority ratio of 50%, meaning that one I/O
`memory request will be handled for every one IP memory
`request. In a system which has heavy input/output activity,
`the queue assigned to I/O memory requests could be given
a higher priority, such as 67%. In such a case, two I/O memory requests would be handled for every one IP memory request. In this way, the data processing system can be efficiently "tuned" to the most desirable ratio. This design proves to be extremely effective where critical errors would occur if the I/O memory requests were not acknowledged fast enough to avoid device overruns. In this case, faster input/output rates can be achieved even under heavily loaded IP conditions.
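In software terms, the ratio scheme described above can be sketched as follows; the queue names, the `drain` helper, and the use of Python in place of hardware are illustrative assumptions, not the patented circuitry:

```python
from collections import deque

def drain(ip_queue, io_queue, io_per_ip=2):
    """Grant io_per_ip I/O requests for every one IP request
    (a 67% I/O priority), until both queues are empty."""
    order = []
    while ip_queue or io_queue:
        for _ in range(io_per_ip):      # the I/O queue gets its share first
            if io_queue:
                order.append(io_queue.popleft())
        if ip_queue:                    # then one IP request is handled
            order.append(ip_queue.popleft())
    return order

ip = deque(["IP1", "IP2", "IP3"])
io = deque(["IO1", "IO2", "IO3"])
print(drain(ip, io))  # ['IO1', 'IO2', 'IP1', 'IO3', 'IP2', 'IP3']
```

Setting `io_per_ip=1` gives the 50% ratio described above for systems with balanced IP and I/O traffic.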
`Some types of priority schemes for memory queues exist
`in prior art, wherein preselected memory requests are given
`a “fixed” priority over other memory requests. One such
`arrangement is shown in U.S. Pat. No. 5,088,053 by Sprague
`et al., issued on Feb. 11, 1992. Sprague et al. appears to
`disclose the use of two input FIFOs for normal memory
`requests and urgent memory requests. However, this design
`apparently uses a “fixed priority scheme”, unlike the present
`invention’s priority which is determined by preemptive
`ratios. Another such arrangement is disclosed in U.S. Pat.
`No. 5,165,021 by Wu et al., issued on Nov. 17, 1992. This
`reference appears to describe the use of a fixed priority
`scheme wherein requests are given a priority, and if the
`queue receives too many requests, requests of the lowest
priority are "discarded" (col. 6, lines 60-63). The present
`invention separates the high and low priority requests into
`separate parallel queues. The advantage of the present
`invention is that the parallel queues are permitted access to
`the main memory by an adjustable priority scheme.
This allows the priority to change depending on the particular characteristics of the system, which increases efficiency by
`balancing the amount of idle time that the memory requests
`in the parallel queues must endure.
`The use of parallel memory queues creates a concern in
maintaining proper order of memory references. Prior
`memory requests having the same address as the incoming
`memory request must access the main storage before the
`incoming memory request does. This is because the earlier
`memory request will affect the data at the main storage
`address, and the subsequent memory request must wait until
`that occurs. In a single FIFO queue, this poses no problem
`because all memory requests will be acknowledged in the
`order that
`they were put on the queue. However, where
`parallel queues are used,
`it is possible that a subsequent
`memory request from a higher priority queue could be
`allowed to access the main storage before an earlier memory
`request having the same address from the other queue. This
`could result
`in incorrect data being read from the main
`storage, or valid data in the main storage being overwritten.
`In order to avoid such a problem, the present invention
employs a conflict detection scheme. Each memory request from the parallel memory queues is compared to the incoming memory request. If the conflict detection circuitry
`determines that an incoming memory request has the same
`address as a prior memory request already queued,
`the
`incoming memory request is routed to the memory queue
`that is storing that prior memory request. By putting the
`subsequent memory request in the same FIFO queue as the
`prior memory request,
`the prior memory request will be
`allowed to access the main storage before the subsequent
`memory request.
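A minimal sketch of this routing decision, assuming hypothetical queue names and a simple list-of-addresses representation (the patent implements this with comparator circuitry, not software):

```python
def route_request(addr, default_queue, queues):
    """Return the queue an incoming request is written to: the queue
    already holding a request for the same address, else the default."""
    for name, pending in queues.items():
        if addr in pending:              # address conflict detected
            queues[name].append(addr)    # keep same-address requests in order
            return name
    queues[default_queue].append(addr)
    return default_queue

queues = {"ip": [0x100, 0x104], "io": [0x200]}
# An incoming I/O request to 0x104 conflicts with a queued IP request,
# so it is forced into the IP queue, behind the earlier request.
assert route_request(0x104, "io", queues) == "ip"
assert route_request(0x300, "io", queues) == "io"
```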
`The present invention provides a memory queuing archi-
`tecture that allows separation of predetermined types of
`memory requests. The order of memory references is main-
`tained through the use of conflict detection circuitry. Main
`storage access by queued memory requests takes place
`pursuant
`to an adjustable priority ratio, where the ratio
`defines how often one memory queue is enabled relative to
`the other queue(s) in the parallel queue structure. Memory
request rates for particular memory requests can be increased by simply adjusting the priority ratio. This provides flexibility and greater efficiency in handling memory requests of different types, such as instruction processor and input/output memory requests.
`
`OBJECTS
`
`It is a primary objective of this invention to provide an
`improved address queuing and allocation apparatus.
`It is another object of the present invention to provide an
address queuing scheme which increases the speed with which
`predetermined categories of memory requests can be recog-
`nized.
`
`Yet another object of the invention is to store different
`categories of memory requests in parallel queues, and to
`allow preemptive priority for predetermined categories of
`memory requests.
`It is still another object of the present invention to provide
`for memory request retrievals from the parallel queues
`according to a predetermined ratio.
`It is another object of the invention to allow the ratio of
`memory request retrievals to be changed to best suit the
`memory request rates of the system.
`It is yet another object of the present invention to use a
`counter and a predetermined count value to generate the
ratio of memory request retrievals, by retrieving a memory
`request from a different parallel queue when the counter
`reaches the predetermined count value.
`It is a further object of the invention to provide memory
`request bypass capabilities to override the current memory
`request retrieval ratio through continuous selection of one of
`the parallel queues.
`Still another object of the invention is to provide address
`conflict detection to allow queue loading without disrupting
`proper instruction order.
`It is still another object of the present invention to store
`equivalent memory requests in the same one of the parallel
`queues upon detection of an address conflict, so that proper
`order of memory references can be maintained.
`Other more detailed objectives will become apparent from
`a consideration of the Drawings and the Detailed Descrip-
`tion of the Preferred Embodiment.
`
`SUMMARY OF THE INVENTION
`
`The Memory Queue With Adjustable Priority and Conflict
`Detection provides flexibility, efficiency, and increased rela-
`tive speeds of system memory request recognition. The
`system in which the invention resides is a data processing
`system having a main memory and multiple memory
`requesting devices,
`including instruction processor and
`input/output memory requesters. Because memory requests
`may be occurring faster than the main memory can act on
`them, queuing must be used to store the memory requests
`until the main memory is ready to accept them. The present
`invention utilizes parallel queues for retaining memory
`requests until they can be processed by the main memory.
`The memory requests are selected one at a time to be
`directed to one of the parallel queues. The memory requests
`are transmitted to one of the parallel queues according to the
type of memory request it is. For instance, instruction
`processor memory requests can be transmitted to one of the
`queues, and input/output memory requests can be transmit-
`ted to another one of the parallel queues. This parallel
`queuing structure allows the queue priority circuitry to select
`memory requests from any of the parallel queues. The queue
`
`priority circuitry generates control signals which control a
`selection circuit, such as a multiplexer, which in turn will
select a memory request from one of the parallel queues.
`The queue priority circuitry includes a ratio selector, that
`can be set so that memory requests are retrieved from the
`parallel queues according to a predetermined queue ratio.
`For instance, the queue ratio could be set so as to retrieve
`one memory request from a first queue,
`three memory
`requests from a second queue, and so forth. The ratio
`selector of the present invention can be modified during
`periods of inactivity, or during normal operation, so that the
`ratio of memory requests could be changed at any time. This
`provides the ability to efficiently set the ratio according to
`the unique characteristics of the data processing system.
`The ratio selector of the present invention uses a loadable
`count value, and a counter to generate the desired ratio. The
`counter continually increments until a comparator indicates
`that the counter is equal to one of the count values, at which
`time a control signal is generated to direct the selection
`circuit to choose the corresponding queue as the one to
`provide a memory request to the main memory.
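The counter-and-comparator arrangement can be sketched as follows; the class name and the two-queue framing are illustrative assumptions:

```python
class RatioSelector:
    """Select queue "B" once each time the counter reaches the
    loadable count value; select queue "A" on every other cycle."""
    def __init__(self, count_value):
        self.count_value = count_value
        self.counter = 0

    def next_queue(self):
        self.counter += 1
        if self.counter == self.count_value:  # comparator signals a match
            self.counter = 0                  # counter restarts
            return "B"
        return "A"

sel = RatioSelector(count_value=3)
print([sel.next_queue() for _ in range(6)])  # ['A', 'A', 'B', 'A', 'A', 'B']
```

Reloading `count_value` changes the ratio, mirroring the loadable register described above.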
`The queue priority circuit also includes other loadable
`registers to determine which queue should be selected as the
`one to provide a memory request to the main memory. A
`priority select register is provided to invert
`the ratio in
`systems having two parallel queues. Therefore, if the origi-
`nal ratio was three-to-one, an active signal in the priority
`select register would invert the ratio to one-to-three, thus
`swapping the priorities of the queues. Also included in the
queue priority circuitry are output-valid registers, which
`indicate whether there are any memory requests in their
`associated queues. If there are no memory requests in a
`particular one of the parallel queues, its associated output-
`valid register will cause the ratio in the queue priority to be
`temporarily changed to disregard requests to that queue.
`This in effect is a bypass mode, which saves valuable time
`by prohibiting the queue priority circuitry from attempting
to retrieve a non-existent memory request from a queue.
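Combining the priority-select inversion and the output-valid bypass for a two-queue system might look like this sketch; the function and signal names are assumptions, not the patent's nomenclature:

```python
def select_queue(base_pick, invert, a_valid, b_valid):
    """Apply the priority select register, then the output-valid
    bypass so an empty queue is never chosen."""
    pick = base_pick
    if invert:                              # priority select register set:
        pick = "B" if pick == "A" else "A"  # swap the queue priorities
    if pick == "A" and not a_valid:         # bypass: queue A holds nothing
        pick = "B"
    if pick == "B" and not b_valid:         # bypass: queue B holds nothing
        pick = "A"
    return pick

assert select_queue("A", invert=True, a_valid=True, b_valid=True) == "B"
assert select_queue("A", invert=False, a_valid=False, b_valid=True) == "B"
```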
`The present
`invention also includes conflict detection
`circuitry. This circuitry maintains proper memory request
`order in the parallel-queue system as seen by the storage
`controller. For instance, an instruction processor may issue
`a memory write request to the storage controller, which is
`placed on the memory request queue behind other requests
`to memory. A subsequent memory read request from a
`remote interface to the same address as the memory write
`request can not be allowed to access the main memory
`before the prior memory write request has a chance to write
`to that address. For such cases, the conflict detection cir-
`cuitry ensures that the subsequent memory read request is
`placed in the same queue as the prior memory write request
`having the same address. The conflict detection circuitry
`performs this function by comparing each of the memory
`requests in the parallel queues to the incoming memory
`request. If a match is found, the incoming memory request
`is put in the same queue in which the equivalent memory
`request was located, and the memory requests will be
`received by the main memory in the proper order.
`Still other objects and advantages of the present invention
`will become readily apparent to those skilled in this art from
`the following detailed description, where the preferred
`embodiment of the invention is shown by way of illustration
`of the best mode contemplated of carrying out the invention.
`As will be realized, the invention is capable of other and
`different embodiments, and its details are capable of modi-
`fication without departing from the invention. Accordingly,
the drawing and description are to be regarded as illustrative in nature, and not as restrictive.
`BRIEF DESCRIPTION OF THE DRAWINGS
`
FIG. 1 is a fundamental block diagram of the Data Processing System according to the present invention;
FIG. 2 is a diagram of the memory hierarchy within the
`Data Processing System;
`FIG. 3 is a block diagram of a known method of queuing
`memory requests;
`FIG. 4 is a block diagram of the particular data processing
`system encompassing the present invention;
`FIG. 5 is a block diagram that shows how instruction
`processor and input/output memory requests in the system of
`FIG. 4 are separated to prevent storage access domination by
`the instruction processors;
`FIG. 6 is a block diagram of the Queue Priority logic,
`which specifies which memory queue will supply a memory
`request to the main storage;
`FIG. 7 is a block diagram of the invention which further
`implements address conflict detection circuitry;
FIG. 8 is a block diagram of the IP Request Conflict Detection and the I/O Request Conflict Detection circuitry;
`FIG. 9 is a schematic diagram of the Queue Write
`Selection Logic;
`FIG. 10 is a truth table indicating the memory queue that
`will be selected by the Queue Write Selection Logic in
`response to the various states of the I/O MEMORY
`REQUEST, REMOTE INTERFACE,
`IP QUEUE
`CONFLICT, and I/O QUEUE CONFLICT signals.
`DETAILED DESCRIPTION OF THE
`PREFERRED EMBODIMENT
`
`FIG. 1 is a fundamental block diagram of the Data
`Processing System 10 according to the present invention.
`Data Processing System 10 includes an Instruction Proces-
`sor 12 (hereinafter IP), a Storage Controller 14 (hereinafter
`SC), and a Main Storage Unit 16 (hereinafter MSU). The IP
`12 is the central processing unit for the execution of program
`and system software instructions. The IP 12 is coupled to the
`SC 14 via Interface 18, which provides storage request
`priority and data routing between the major components of
`the system. The MSU 16, which provides the main storage
`capability for the Data Processing System 10, is connected
`to the SC 14 via Interface 20.
`
`The Data Processing System 10 of FIG. 1 also includes
`input/output capabilities. The SC 14 is connected to the
`Input/Output Adapter 22 (hereinafter IOA) by Interface 24.
`The IOA 22 provides the SC 14 with storage requests from
`the Input/Output Processors 26 (hereinafter IOP). The IOP
`26 comprises multiple individual IOPs, shown in FIG. 1 as
`IOPA 28, IOPB 30, through IOPn 32. The IOA 22 provides
`the interface from the IOP 26 to the SC 14.
`
`FIG. 2 is a diagram of the memory hierarchy within the
`Data Processing System 10. The IP 12 contains a First Level
`Cache 34, which represents the first level of the memory
hierarchy. Some cache accesses associated with normal
`instruction execution will take place in the First Level Cache
`34. The IP 12 requests access of a particular data element
`which may or may not be within the First Level Cache 34.
`If the data element
`is present and valid,
`the access is
`completed at that level. If not, access is made to the SC 14
`via Interface 18 to access the desired data element.
`
`The second level of the memory hierarchy is within the
SC 14, and is shown as the Second Level Cache 36. Any memory request to SC 14 is routed to the directory of the
`Second Level Cache 36 to determine if the desired data
`element is present and valid. A “memory request” is an
`address to a particular data bit, byte, word, longword, etc.
`stored in the MSU 16. Memory requests are made to the
`Second Level Cache 36 from the IP 12 via Interface 38, and
`from the IOA 22 via Interface 40. If the data element is
present and valid, the memory access is completed at that
`level, and the MSU 16 will not be accessed. If the data
`element is not present and valid within the Second Level
`Cache 36, the data must be requested from the MSU 16.
`The MSU 16 represents the third level of the memory
`hierarchy of the Data Processing System 10. Where cache
`“misses” occur in the Second Level Cache 36,
`the data
`element must be retrieved from the MSU 16. The Second
`Level Cache 36 can handle data requests much faster than
`the MSU 16. Since the Second Level Cache 36 is faster than
`the MSU 16, it must be “buffered” from the slower MSU to
`allow the Second Level Cache to operate during an MSU
`fetch. The advantage of using a fast cache (the Second Level
`Cache 36) would be thwarted if it had to suspend operation
`to wait for the MSU 16 to complete a memory access cycle.
`To allow the Second Level Cache 36 to operate indepen-
`dently of the MSU 16, the Main Storage Memory Queue 42
`is provided to buffer the Second Level Cache 36 from the
`MSU 16. Memory requests from the IP 12 are made to the
`Main Storage Memory Queue 42 via Interface 44, and
`memory requests from the IOA 22 are made to the Main
`Storage Memory Queue 42 via Interface 46. The Main
`Storage Memory Queue 42 is connected to the MSU 16 via
`Interface 20. Therefore, when there is a Second Level Cache
`36 “miss”, the data request is stored in the Main Storage
`Memory Queue 42 to await handling by the MSU 16.
`FIG. 3 is a block diagram of a known method of queuing
`memory requests. Within the SC 14 is a Selection Circuit 48
`which receives memory requests from the IP 12, the IOA 22,
`and other External Memory Requests 50. The selection
`method used for the Selection Circuit 48 is not important,
`and any type of selection method could be used. In the
`preferred embodiment, a multiplexer was used. The External
`Memory Requests are individually routed to the Input Reg-
`ister 52 where they are latched. As previously described, an
`attempt to locate the requested data element from the Second
`Level Cache 36 will take place. If the data element is not
`within the Second Level Cache 36, the memory request is
queued in the FIFO Queue 54. The FIFO Queue 54 represents a strict "first-in, first-out" queue which is well-known
`in the art. The memory requests entered into the queue are
`used to access the MSU 16 in the order that they were input
`into the FIFO Queue.
`A memory queuing architecture as in FIG. 3 will direct
`memory requests to the MSU 16 in the order that they are
`received by the FIFO Queue 54. For example,
`if five
`consecutive memory requests were made from the IP 12, and
then one memory request was made from the IOA 22, all five
`IP memory requests would be allowed to access the MSU 16
`before the IOA memory request was recognized. This limi-
`tation can have adverse effects in a data processing system.
`This results from the fact that the input/output in the system
`may require a certain maximum allowable access time. If
`this maximum access time is exceeded, errors such as device
`overruns can occur. Some device overruns may cause sys-
`tem operation to be terminated. The IPs do not have a
`maximum allowable access time, and they simply suspend
`operation rather than terminate system operation. Therefore,
`where both IP and I/O memory requests are occurring, the
`I/O memory requests should be allowed to preempt the IP
`memory requests to some extent.
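The five-IP-request example above is easy to demonstrate with a single FIFO; this toy sequence (the request labels are illustrative) shows the lone I/O request waiting behind every earlier IP request:

```python
from collections import deque

fifo = deque(["IP1", "IP2", "IP3", "IP4", "IP5", "IO1"])  # arrival order

# Requests reach the MSU strictly in arrival order.
service_order = [fifo.popleft() for _ in range(len(fifo))]

# The I/O request is serviced last, after all five IP requests.
assert service_order.index("IO1") == 5
```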
`
`FIG. 4 is a block diagram of the particular data processing
`system encompassing the present invention. The large multi-
`processing system in which the present invention resides is
`capable of expansion as shown in FIG. 4. The fundamental
`Data Processing System 10 of FIG. 1 can be duplicated to
`provide for parallel instruction processing, additional input/
`output capabilities, and increased memory capacity.
`FIG. 4 shows one embodiment of this expansion, wherein
`two of the Data Processing Systems 10 of FIG. 1 are housed
in two separate cabinets. Cabinet A 56 includes two instruction processors labeled IP-A1 58 and IP-A2 60, a storage
`controller labeled SC-A 62, two main storage units labeled
MSU-A1 64 and MSU-A2 66, and an input/output adapter
`labeled IOA-A 68. Similarly, Cabinet B 70 includes two
`instruction processors labeled IP-B1 72 and IP-B2 74, a
`storage controller labeled SC-B 76, two main storage units
labeled MSU-B1 78 and MSU-B2 80, and an input/output adapter labeled IOA-B 82. The instruction processors IP-A1 58 and IP-A2 60, and the input/output adapter IOA-A 68 of
`Cabinet A 56 request data elements from the Second Level
Cache of SC-A 62, and from MSU-A1 64 and MSU-A2 66
`as was described in FIG. 2.
In the same manner, the instruction processors IP-B1 72 and IP-B2 74, and the input/output adapter IOA-B 82 of Cabinet B 70 request data
`elements from the Second Level Cache of SC-B 76, and
`from MSU-B1 78 and MSU-B2 80.
The Second Level Cache 36 described in FIG. 2 is a
`“shared” cache for all
`instruction processors and input/
`output processors in the entire data processing system. In
`other words, the Second Level Cache 36 in SC-A 62 will
`hold the same data as the Second Level Cache 36 in SC-B
`76. Furthermore, the IPs and IOAs of one cabinet can access
`memory from the MSUs of the other cabinet. This is
`performed by way of the REMOTE INTERFACE on Inter-
`faces 84a and 84b.
`It should be noted that continued
`expansion of cabinets could occur, and in the system accom-
`panying the present invention up to four cabinets may be
`present. Each storage controller (SC-A 62, SC-B 76, etc.)
`would be coupled to each of the other storage controllers in
`this situation.
`
`FIG. 5 is a block diagram that shows how instruction
`processor and input/output memory requests in the system of
`FIG. 4 are separated to prevent storage access domination by
`the instruction processors. The Main Storage Memory
`Queue 42 (see also FIG. 2) is comprised of parallel FIFO
`queues in both SC-A 62 and SC-B 76. SC-A 62 will be used
`for the discussion of FIG. 5, but it should be understood that
`this discussion applies equally to SC-B 76 with regards to
`the utilization of parallel memory request queues.
`The Instruction Processors, labeled IP-A1 58 and IP-A2
`60, which are associated with SC-A 62, are inputs to the
`multiplexer labeled MUX 86 via Interfaces 88 and 90
`respectively. Input/Output Adapter IOA-A 68 is also coupled
`to the MUX 86, by way of Interface 92. Finally, memory
`requests from other instruction processors and input/output
`processors are input
`to the MUX 86 via the REMOTE
`INTERFACE on Line 84a. The MUX 86 selects one of the
`inputs to be passed to the Input Register 94. Selection by the
`MUX 86 is accomplished through control signals (not
`shown) which indicate which of the MUX 86 inputs is to be
`selected. The binary control signals can simply count from
`binary 0 to its maximum binary value to individually
`sequence through each memory request input (i.e., binary
00, 01, 10, and 11 to sequence through four memory request
`inputs). Other criteria could be used to select memory
`request inputs upon recognition of the presence of a memory
`request. The exact criteria used to control the MUX 86 is not
`
relevant to the present invention; it is only important that a
`selection method be used to pass one memory request at a
`time to the Input Register 94.
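A simple software analogue of this input sequencing, with four hypothetical request sources standing in for the MUX 86 inputs:

```python
# Sweep the four inputs in a fixed order (binary 00, 01, 10, 11),
# latching at most one pending request per input per visit.
inputs = {"IP-A1": ["a"], "IP-A2": [], "IOA-A": ["b"], "REMOTE": ["c"]}
latched = []
for name in list(inputs) * 2:          # two full selection sweeps
    if inputs[name]:                   # a request is present at this input
        latched.append((name, inputs[name].pop(0)))

assert latched == [("IP-A1", "a"), ("IOA-A", "b"), ("REMOTE", "c")]
```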
The Input Register 94 latches the selected memory request from the MUX 86. At this time, the memory request is sent to the Second Level Cache 36 across Interface 96 to see if the desired data is present within the Second Level Cache. If so, the data is retrieved from the Second Level Cache 36, and the memory request will not be loaded into the Main Storage Memory Queue 42. If the desired data is not present in the Second Level Cache 36, the memory request will be loaded into one of the two queues within the Main Storage Memory Queue 42. As a general rule, memory requests from IP-A1 58 and IP-A2 60 will be loaded into the IP FIFO Queue 98 via Interface 100, and memory requests from IOA-A 68 will be loaded into the I/O FIFO Queue 102 via Interface 104. The circuitry used to direct the memory request to the desired queue in the Main Storage Memory Queue 42 is discussed in the description of FIG. 7.
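The cache-check-then-enqueue flow above can be sketched as follows. This is a minimal model of the described behavior, not the FIG. 7 circuitry; the dictionary-as-cache, the request fields, and the source-name test are assumptions made for the example.

```python
# Sketch of the request flow: consult the second-level cache first; on a
# miss, route the request into the IP or I/O FIFO queue based on its source.
from collections import deque

def dispatch(request, cache, ip_queue, io_queue):
    """Return cached data on a hit; otherwise enqueue the request and return None."""
    if request["address"] in cache:
        return cache[request["address"]]   # second-level cache hit: no enqueue
    # Cache miss: load the request into the queue matching its source.
    if request["source"].startswith("IP"):
        ip_queue.append(request)           # instruction processor request
    else:
        io_queue.append(request)           # input/output adapter request
    return None

cache = {0x100: "data"}
ip_q, io_q = deque(), deque()
assert dispatch({"source": "IP-A1", "address": 0x100}, cache, ip_q, io_q) == "data"
dispatch({"source": "IOA-A", "address": 0x200}, cache, ip_q, io_q)
assert len(io_q) == 1 and len(ip_q) == 0
```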
By separating the instruction processor (IP-A1 58 and IP-A2 60) memory requests from the input/output (IOA-A 68) memory requests, either the I/O requests or the IP requests can be given priority over the other. In other words, one queue can be used to access the main storage (MSU-A1 64 and MSU-A2 66) while memory request operation from the other queue is temporarily suspended. Therefore, subsequent I/O memory requests can access the main storage prior to IP memory requests which are already in the IP FIFO Queue 98.
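The Abstract describes the priority between the two queues as an adjustable ratio; the suspension behavior above can be sketched under that reading. This is an illustration under stated assumptions, not the claimed priority circuit: the `ratio` parameter, credit counter, and queue contents are all hypothetical.

```python
# Sketch of adjustable-ratio priority between the parallel queues: for every
# `ratio` requests drained from the IP queue, one request is drained from the
# I/O queue, so neither source can dominate main storage access.
from collections import deque

def drain(ip_queue, io_queue, ratio=2, count=6):
    """Return up to `count` requests in priority order: `ratio` IP per one I/O."""
    served = []
    ip_credit = 0
    while count and (ip_queue or io_queue):
        if ip_queue and (ip_credit < ratio or not io_queue):
            served.append(ip_queue.popleft())
            ip_credit += 1                 # spend one IP slot of the ratio
        elif io_queue:
            served.append(io_queue.popleft())
            ip_credit = 0                  # I/O turn taken; reset the IP credit
        count -= 1
    return served

ip_q = deque(["ip1", "ip2", "ip3", "ip4"])
io_q = deque(["io1", "io2"])
assert drain(ip_q, io_q) == ["ip1", "ip2", "io1", "ip3", "ip4", "io2"]
```

Raising `ratio` favors instruction processor traffic; a ratio of zero would suspend the IP queue entirely, matching the temporary-suspension behavior described above.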
The instruction processor output register, labeled IP REG 106, receives and latches the memory request from the IP FIFO Queue 98 that has been in the IP FIFO Queue for the longest time. Similarly, the input/output output register, labeled I/O REG 108, latches the memory request from the I/O FIFO Queue 102 that has been in the I/O FIFO Queue for the longest time. The IP FIFO Queue 98 and the I/O FIFO Queue 102 are shown in FIG. 5 to hold four and three memory requests respectively, but the size of the queues could be as large as needed or desired. The multiplexer labeled MUX 110 then selects a memory request from either the IP REG 106 or the I/O REG 108 depending on the state of a control signal on Line 112. The co