WORLD INTELLECTUAL PROPERTY ORGANIZATION
International Bureau

INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT)

(51) International Patent Classification 6: G06F 17/16

(11) International Publication Number: WO 96/11440 A1

(43) International Publication Date: 18 April 1996 (18.04.96)

(21) International Application Number: PCT/US95/12933

(22) International Filing Date: 6 October 1995 (06.10.95)

(81) Designated States: AU, CA, CN, JP, KR, MX, European patent (AT, BE, CH, DE, DK, ES, FR, GB, GR, IE, IT, LU, MC, NL, PT, SE).

(30) Priority Data: 08/319,231    6 October 1994 (06.10.94)    US

(71) Applicant: VIRC, INC. [US/US]; 4910 Keller Springs Road, Dallas, TX 75248 (US).
`
(72) Inventor: WHAY, Lim; 1120 Galloway Street, Pacific Palisades, CA 90272 (US).
`
(74) Agent: HOWISON, Gregory, M.; Thompson & Howison, L.L.P., Suite 995, 12225 Greenville Avenue, Dallas, TX 75243 (US).

Published
With international search report.
Before the expiration of the time limit for amending the claims and to be republished in the event of the receipt of amendments.

(54) Title: SHARED MEMORY SYSTEM
`
[Front-page drawing: block diagram showing the EISA host processor and the shared memory system]
(57) Abstract

A shared memory system (10) interfaces a shared memory bus (14) and an EISA processor bus (20). The shared memory system (10) has an associated arbitration logic circuit (78) that services bus requests from each of the memory interfaces (16). In one of two arbitration modes, the shared memory system (10) allows a bus request from one of the memory interfaces (16) to transfer one byte of data, after which its priority is lowered and it relinquishes the bus to another one of the memory interfaces (16). This allows a byte-by-byte transfer without allowing any memory interface (16) to seize the bus. In another mode, each of the memory interfaces (16) is allowed to seize the bus to continuously transfer data, with a priority system implemented to allow a higher priority one to seize the bus away from a lower priority one.
`
Page 1 of 34
`
`HTC—LG-SAMSUNG EXHIBIT 1018
`
`

`
FOR THE PURPOSES OF INFORMATION ONLY

Codes used to identify States party to the PCT on the front pages of pamphlets publishing international applications under the PCT.

AT Austria
AU Australia
BB Barbados
BE Belgium
BF Burkina Faso
BG Bulgaria
BJ Benin
BR Brazil
BY Belarus
CA Canada
CF Central African Republic
CG Congo
CH Switzerland
CI Côte d'Ivoire
CM Cameroon
CN China
CS Czechoslovakia
CZ Czech Republic
DE Germany
GB United Kingdom
GE Georgia
GN Guinea
GR Greece
HU Hungary
IE Ireland
IT Italy
JP Japan
KE Kenya
KG Kyrgyzstan
KP Democratic People's Republic of Korea
KR Republic of Korea
KZ Kazakhstan
LI Liechtenstein
LK Sri Lanka
LU Luxembourg
LV Latvia
MC Monaco
MD Republic of Moldova
MG Madagascar
ML Mali
MN Mongolia
MR Mauritania
MW Malawi
NE Niger
NL Netherlands
NO Norway
NZ New Zealand
PL Poland
PT Portugal
RO Romania
RU Russian Federation
SD Sudan
SE Sweden
SI Slovenia
SK Slovakia
SN Senegal
TD Chad
TG Togo
TJ Tajikistan
TT Trinidad and Tobago
UA Ukraine
US United States of America
UZ Uzbekistan
VN Viet Nam
`

`
SHARED MEMORY SYSTEM
`
In large integrated computer networks, large storage systems are typically disposed in a server-based system with multiple peripheral systems allowed to operate independently and access the server main memory. One typical way of integrating such a network is that utilized in Local Area Networks (LANs). In networks of this type, a single broadband communication bus or medium is provided through which all signals are passed. These LANs provide some type of protocol to prevent bus conflicts. In this manner, they provide an orderly method to allow the peripheral systems to "seize" the bus and access the server main memory. However, during the time that one of the peripheral systems has seized the bus, the other peripheral systems are denied access to the server main memory.

In the early days of computers, this was a significant problem in computer centers in that a computer operator determined which program was loaded on the computer, which in turn determined how the computer resources were utilized. The computer operator would assign priority to certain programs, such as those from a well-known professor in a university system. In such an environment, it was quite common for priority to be assigned such that the computer could be tied up for an entire evening working on a problem for an individual with such a high priority. Students in the university-based system, of course, had the lowest priority and, therefore, their programs were run only when the system resources were available. The problem with this type of system was that an extremely small program that took virtually no time to run was required to sit on the shelf for anywhere from five to twenty hours waiting for the larger, higher priority program to run. Although it would have been desirable to have the system operator interrupt the higher priority program for a relatively short time to run a number of the fairly short programs, this was not an available option. Even if this interruption might have extended the higher priority program by a fairly short time, it would clearly have provided a significantly higher level of service to the low priority small program users.
`
`
Present networks are seldom comprised of a single LAN system due to the fact that these networks are now distributed. For example, a single system at a given site utilizing a local network that operates over, for example, an Ethernet® cable, would have a relatively high data transfer rate on the local cable. The Ethernet® cables in those systems provide a means to access remote sites via the telephone lines or other communication links. However, these communication links tend to have significantly slower access times. Even though they can be routed through a relatively high speed Ethernet® bus, they still must access and transmit instructions through the lower speed communication link. With the advent of multimedia, the need for much larger memories that operate in a shared memory environment has increased. In the multimedia world, the primary purpose of the system is data exchange. As such, the rate of data transfer from the server memory to multiple systems is important. However, regardless of the type of memory system or the type of data transfer performed, the system still must transfer the data stored in the server memory in a serial manner; that is, only one word of data can be accessed and transferred out of the memory (or written thereto) on any given instruction cycle associated with the memory. When multiple systems are attempting to access a given server memory, it is necessary to control the access to the server memory by the peripheral systems in an orderly manner to ensure all peripheral systems are adequately served.

In typical systems that serve various communication links to allow those communication links to access the server memory, separate coprocessors are typically provided to handle the communication link.
`
The server processor is therefore required to control access to the server main memory. Requiring the server processor to handle access control limits the amount of data that can be transferred between the server and the communication coprocessor, and thus to the peripheral.
`
`
SUMMARY OF THE INVENTION
`
The present invention, disclosed and claimed herein, comprises a shared memory system that includes a centrally located memory. The shared memory has a plurality of storage locations, each for storing data of a finite data size as a block of data. The block of data is accessible by an address associated with the storage location of the block of data for storage of data therein or retrieval of data therefrom. The centrally located memory is controlled by a memory access control device. A plurality of peripheral devices are disposed remote to the shared memory system, each operable to access the centrally located memory and generate addresses for transmittal thereto to address a desired memory location in the centrally located memory system. The peripheral device is then allowed to transfer data thereto or retrieve data therefrom. A memory interface device is disposed between each of the peripheral devices and the centrally located memory system and is operable to control the transmittal of addresses from the associated peripheral device to the centrally located memory and the transfer of data therebetween. The memory interface device has a unique ID which is transmitted to the centrally located memory. Associated with the centrally located memory is an arbitration device that is operable to determine which of the peripheral devices is allowed to access the centrally located memory. The arbitration device operates on a block-by-block basis to allow each peripheral unit to access the centrally located memory for only a block of data before relinquishing access, wherein all requesting ones of the peripheral devices will have access to at least one block of data prior to any of the peripheral devices having access to the next block of data requested thereby.
`
In an alternate embodiment of the present invention, each block of data comprises a byte of data. Further, each memory interface device is given a priority based upon its unique ID. The arbitration device operates in a second mode to allow the highest priority one of the requesting peripheral devices to seize the bus away from any of the other peripheral devices to access all of the data requested thereby.
`
`
BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of the present invention and the advantages thereof, reference is now made to the following description taken in conjunction with the accompanying Drawings in which:

FIGURE 1 illustrates an overall block diagram of the system;
FIGURE 2 illustrates a perspective view of the physical configuration of the system;
FIGURES 3a and 3b illustrate views of the shared memory board and peripheral interface board, respectively;
FIGURE 4 illustrates a diagram of the shared memory system;
FIGURE 5 illustrates a diagram of the shared memory interface;
FIGURE 5a illustrates a memory map for the system of the present invention;
FIGURES 6 and 7 illustrate timing diagrams for the memory access;
FIGURE 8 illustrates a flowchart for the operation of the system;
FIGURE 9 illustrates a prior art configuration for the overall system;
FIGURE 10 illustrates the configuration for the system of the present invention;
FIGURE 11 illustrates an alternate block diagram of the present invention; and
FIGURES 12a and 12b illustrate block diagrams of the CIM illustrated in FIGURE 5.
`
`
DETAILED DESCRIPTION OF THE INVENTION

Referring now to FIGURE 1, there is illustrated an overall block diagram of the system of the present invention. At the heart of the system is a shared memory system 10 which, as will be described hereinbelow, provides a global memory that is accessible by a plurality of peripheral systems 12. The shared memory system 10, as will be described in more detail hereinbelow, is operable to serve each of the peripheral systems by receiving requests for data transfer, i.e., reading or writing of data to the global memory in the shared memory system 10, and arbitrating the service such that all peripheral units 12 are served in an even manner. Also, as will be described hereinbelow, various priorities are given to the peripheral units 12. The shared memory system 10 is operable to interface with each of the peripheral systems 12 through a shared memory bus 14. Each of the shared memory buses 14 is connected to the various peripheral systems 12 through an interface 16. In addition, in the preferred embodiment a host processor 18 is provided. This host processor 18 is given the highest priority in the system, and is operable to interface with the shared memory system 10 via an EISA bus 20. The EISA host processor 18 functions similarly to the peripheral units 12 and, in fact, is logically a peripheral system to the shared memory system 10. The host system 18 does, however, have additional functions with respect to initializing the operation, etc.
`
Referring now to FIGURE 2, there is illustrated a perspective view of the physical configuration for the system of FIGURE 1. An EISA host processor/bus board 26 is provided which is operable to contain the host processor 18 and the EISA bus 20. A plurality of EISA bus connectors 28 are provided, into which a plurality of peripheral interface boards 30 are inserted. Each of the peripheral interface boards is provided with an EISA interface connector 34, which is disposed on the lower edge of the peripheral interface board 30 and inserted into an EISA bus connector 28. However, this could be any type of computer bus architecture, such as ISA, PCI, etc. A shared memory bus connector 36 is disposed on one end of the peripheral interface boards 30 and operable to be inserted into a common shared memory bus connector 38. In addition to the peripheral interface boards 30, a shared memory board 40 is disposed in one of the EISA memory bus connectors 38 on the EISA host processor/bus board 26. The shared
`
`

`
`
memory board 40 has a shared memory bus connector 36 associated therewith which is also operable to be interfaced with the common shared memory bus connector 38. Additionally, each of the peripheral interface boards 30 has associated therewith a peripheral interface connector 42. The peripheral interface connector 42 is operable to interface with the peripheral device 12.

The peripheral devices 12 can be of many different types. One example is an RS232 interface, which allows any type of peripheral device 12 that utilizes an RS232 communication protocol to access the shared memory system 10. When the peripheral device 12 generates an address to access the shared memory, the peripheral interface board 30 is operable to service this request and generate the instructions necessary to the shared memory board 40 in order to access the memory disposed thereon, this also including any memory mapping function that may be required. It is noted that each of the peripheral interfaces 30 has a unique ID on the shared memory bus. The unique ID determines both the priority and the ID of the board and, hence, it determines how the shared memory system 10 services memory access requests from the peripheral interface boards 30. As will be described hereinbelow, each of the peripheral interface boards 30 is operable to buffer memory requests from the peripheral device 12 until they are served by the shared memory system 10.
`
Referring now to FIGURES 3a and 3b, there are illustrated general layouts for the boards themselves. In FIGURE 3a, the shared memory board 40 is illustrated, depicting on one edge thereof the male shared memory bus connector 36 and on the lower edge thereof the host EISA bus interface connector 28. In FIGURE 3b, the peripheral interface board 30 is illustrated. The peripheral interface board 30 is comprised of two sections, a shared memory interface section 46 and a Communication Interface Module (CIM) 48. The shared memory interface portion 46 is operable to interface with the shared memory bus 14 and the communication interface module 48. The communication interface module 48 determines the nature of the peripheral interface board 30. For example, if the peripheral interface board 30 of FIGURE 3b were associated with an RS232 peripheral device 12, the communication interface module 48 would be operable to convert between a parallel data system and a serial data system, generating the various transmission signals necessary to handle the protocol associated with an RS232 interface.
`
This allows data to be transmitted in an RS232 data format. Additionally, the CIM 48 is operable to receive data in RS232 format and convert it to a parallel word for transmission to the shared memory interface portion 46.

The shared memory interface portion 46 contains various processors for allowing the shared memory interface portion 46 to interface with the shared memory system 10 via the shared memory bus 14. If the CIM 48 were associated with an ISDN function, for example, the CIM 48 would also provide the interface between a parallel bus and an ISDN format. The functionality of a CIM 48 is quite similar to that associated with peripheral boards in an ISA bus architecture, i.e., it allows for conversion between a parallel bus and the communication entity. In the present embodiment, the EISA architecture utilizes a 32-bit bus structure.
`
Referring now to FIGURE 4, there is illustrated a block diagram of the shared memory system 10. The shared memory system 10 has associated therewith a number of buses. At the heart of the shared memory system 10 is a global random access memory (RAM) 50. The global RAM 50 has an address input, a data input and a control input for receiving the Read/Write signal and various control signals for the Read operation such as the Column Address Strobe (CAS) and the Row Address Strobe (RAS). These are conventional signals. The global RAM 50 occupies its own memory space such that when it receives an address, this address will define one of the memory locations of the global RAM 50. This is conventional. The data input of the global RAM 50 is interfaced with a global RAM data (GRD) bus 52, and the address input of the global RAM 50 is interfaced with a global RAM address (GRA) bus 54. The control input of the global RAM 50 is interfaced with a control bus 56. Data is not transferred directly from the GRD bus 52, nor are addresses transferred directly from the GRA bus 54, to the connector 36 associated with the shared memory board 40, which is then relayed to the other connectors 36 associated with the peripheral interface boards 30 via the connector 38. The connector 36 provides for control inputs, address inputs and data inputs. The address inputs and data inputs are interfaced with the shared memory bus 14. The shared memory bus 14 is comprised of a global address bus 58 and a global data bus 60. The global data bus 60 is interfaced through a transceiver 64 to the GRD bus 52. The transceiver 64 allows for the transfer of data from the global data bus 60 to
`
the GRD bus 52 and also from the GRD bus 52 to the global data bus 60. The global address bus 58 is connected through a buffer 66 to the GRA bus 54 to allow the peripheral systems 12 to address the global RAM 50. As will be described hereinbelow, the address space in the peripheral system is mapped into the memory space of the global RAM 50, such that the global RAM 50 merely becomes an extension of the memory system at the peripheral system 12.

The EISA data is provided on an EISA data bus 68, which EISA data bus 68 is interfaced through a transceiver 70 to the GRD bus 52. The EISA address is input to an EISA address bus 72 and then input to the GRA bus 54 through a buffer 74. The transceiver 70 and buffer 74 are "gated" devices, as are the transceiver 64 and buffer 66. This allows the shared memory system 10 to prevent bus contention and service or receive addresses from only the peripheral units 12 or the host processor 18. However, it should be understood that the host processor 18 could merely have been defined as a peripheral device and interfaced through a peripheral interface board 30 to the shared memory bus 14. Each of the peripheral systems 12 and the EISA host processor 18 is interfaced with the control bus 56 to allow control signals to be passed therebetween.
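The gating described above can be modeled in software. The sketch below is an illustrative behavioral model, not the actual hardware logic: each transceiver or buffer has an output enable, and the arbitration logic asserts at most one enable at a time so that only one device drives the shared bus.

```python
# Behavioral sketch of the "gated device" idea used for transceivers 64/70
# and buffers 66/74: at most one gate may be open onto the shared bus at a
# time. This is an illustrative software model, not the hardware design.

def drive_shared_bus(sources):
    """sources: list of (enabled, value) pairs, one per gated device.
    Returns the value driven onto the bus, or None if no gate is open.
    Raises an error if two gates are open, which in hardware would be
    bus contention."""
    driven = [value for enabled, value in sources if enabled]
    if len(driven) > 1:
        raise RuntimeError("bus contention: more than one gate enabled")
    return driven[0] if driven else None
```

For example, with only the EISA-side gate enabled, `drive_shared_bus([(False, 0x12), (True, 0xAB)])` returns `0xAB`; enabling two gates at once raises the contention error that the gating is designed to prevent.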
`
The shared memory system 10 is controlled primarily by logic, with the processor function being distributed to the memory interfaces 16. The shared memory system 10 provides the operations of arbitration and priority determination. In the arbitration function, the shared memory system 10 utilizes various bus signals generated by the shared memory interfaces 16 to determine how to service these requests and which one is serviced at a given time. Additionally, each of the peripheral systems is assigned a priority such that the arbitration function is based upon priority. This will be described in much more detail hereinbelow. The arbitration function is provided by an arbitration logic block 78, with the priority provided by a priority logic block 80. The various control functions for the global RAM 50 are provided by a RAM control block 82. The logic blocks 78 and 80 are provided by programmable logic devices (PLDs). This function is provided by an integrated circuit such as the Intel N85C-220-80, a conventional PLD.
`
`
Referring now to FIGURE 5, there is illustrated a block diagram of the peripheral interface board 30. The peripheral interface board 30, as described above, is operable to perform a slave function between the peripheral device 12 and the shared memory system 10. The peripheral interface board has at the heart thereof a central processing unit (CPU) 96. In actuality, the CPU 96 is based on a 32-bit Extended Industry Standard Architecture (EISA) bus architecture and utilizes three Motorola MC68302s. These are conventional chips and have onboard a 16 MHz 68000 processor core, which operates on a 68000 32-bit bus. The 32-bit bus has associated therewith an interrupt controller, a general purpose DMA control block and timers. The 32-bit bus is operable to interface with off-chip memory and other control systems, providing both data and address in addition to control functions. The internal 68000 32-bit bus interfaces through various DMA channels with a RISC processor bus. Attached to the RISC processor bus is a 16 MHz RISC communication processor. This processor is operable to interface with such things as ISDN support circuits, etc. Again, this is a conventional design and, in general, the three processors that make up the CPU 96 are divided up such that one operates as a master and the other two are slaves to distribute various processing functions. However, a single CPU could be utilized.

The CPU 96 interfaces with an onboard processor bus 98, which processor bus is comprised of an address bus, a data bus and a control bus. The processor bus 98 is interfaced with an input/output circuit 100, which is generally realized with a peripheral input/output device manufactured by Intel, Part No. 82C55. This provides for a local input/output function that allows the processor bus 98 to communicate with the communication interface module 48, the communication interface module 48 then being operable to interface through the connector 42 with the peripheral unit 12. Local memory 102 is also provided, which local memory occupies an address space on the processor bus 98. As will be described hereinbelow, the address space associated with local memory 102 also occupies the address space of the peripheral unit 12, i.e., it is directly mapped thereto. In order for the processor bus 98 to interface with the shared memory connector 36 and the shared memory system 10, the data portion of the processor bus 98 is interfaced with an intermediate data bus 106 via data buffers 108 for transferring data from the data portion of the processor bus 98 to the intermediate data bus 106, and a data latch 110 for transferring data from the intermediate data bus 106 to the data portion of the processor
`
`

`
`
bus 98. A bi-directional transceiver 114 is provided for connecting the intermediate data bus 106 with a global data bus 116 when the global data bus 116 is interfaced with the shared memory connector 36. Similarly, a global address bus 118 is interfaced with the shared memory connector 36 for receiving addresses from the address portion of the processor bus 98. However, as will be described in more detail hereinbelow, the address on the address portion of the processor bus 98 is mapped or translated via an address translator block 120 to basically map one Megabyte portions of the address space to the address space in the shared memory system. This address translation is facilitated via a static random access memory (SRAM) of the type TC6688J. This is a 32K memory for translating the eight Megabyte portions to the desired portions of the shared memory address space.

In addition to translating the address that is input from the peripheral unit 12 to the peripheral interface board 30, the address that is generated from the processor bus 98 and relayed to the global address bus 118 also has control information associated therewith. There are a number of higher order address bits that are not required for the available memory space in the shared memory system 10. These address bits are used as control bits and are generated by a control bit generator 124. Each unit has an ID that is input via a DIP switch 126, which allows the user to set the ID for a given board. Therefore, whenever an address is sent to the shared memory system 10, it not only contains an address as translated into the memory space of the shared memory system 10 but also contains the control bits. Various other controls are input along a global control bus 130 that is connected to the control portion of the processor bus 98 and also to the shared memory connector 36.
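The scheme of carrying a board ID and control bits in the otherwise unused high-order address bits can be sketched as follows. This is an illustrative model only; the bit positions, field widths and names below are assumptions chosen for the example, not taken from the patent.

```python
# Illustrative sketch: a 32 MB shared memory needs 25 address bits, leaving
# the high-order bits of a 32-bit global address word free to carry control
# bits and the board ID (set by a DIP switch). Field widths are assumed.

SHARED_MEM_BITS = 25      # 32 Megabytes = 2**25 addressable bytes
CTRL_BITS = 3             # assumed number of control bits

def pack_global_address(board_id, control, offset):
    """Combine a board ID, control bits and a shared-memory offset into
    one word for the global address bus."""
    assert 0 <= offset < (1 << SHARED_MEM_BITS)
    return ((board_id << (SHARED_MEM_BITS + CTRL_BITS))
            | (control << SHARED_MEM_BITS)
            | offset)

def unpack_global_address(word):
    """Recover (board_id, control, offset) on the shared-memory side."""
    offset = word & ((1 << SHARED_MEM_BITS) - 1)
    control = (word >> SHARED_MEM_BITS) & ((1 << CTRL_BITS) - 1)
    board_id = word >> (SHARED_MEM_BITS + CTRL_BITS)
    return board_id, control, offset
```

The point of the layout is that a single address-bus transfer delivers the memory offset, the requesting board's identity and the control information together, with no extra bus cycles.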
`
`
Referring now to FIGURE 5a, there is illustrated a diagrammatic view of the address space of the shared memory system and the peripheral interface board 30. A 32 Megabyte shared memory map 136 is provided, representing the memory map of the shared memory system 10, although this could be any size. In general, in the memory space associated with the shared memory system 10, the first location in the memory map 136 is represented by the "0" location. The highest order bit would be the Hex value for 32 Megabytes. By comparison, the peripheral interface board 30 has associated therewith a 16 Megabyte map 138. This therefore allows up to 16 Megabytes
`
`

`
`
`
`
of memory to be associated with the peripheral interface board 30. This local memory provides the ability for the peripheral interface board 30 to carry out multiple processing functions at the peripheral interface board level. These operations will be described in some detail hereinbelow. However, it is important that when data is being transferred to the peripheral interface board there be no conflict between the two memory spaces. Therefore, data that is being transmitted to the shared memory system 10 must have a different address, above the physical eight Megabyte memory space. This is facilitated by defining the address space for the shared memory system 10, relative to the input to the peripheral interface board from the peripheral unit 12, as being at a higher address. Therefore, the "0" location in the address space of the map 136 appears to be in a different portion of the memory space of the peripheral interface board 30, above the eight Megabyte memory space. This is represented by a virtual memory space 142. When an address exists in this space, it is recognized and transmitted to the shared memory system 10 after translation thereof to the address space of the shared memory system 10. This allows an address to be generated at the peripheral unit 12 and then transmitted directly to the shared memory system 10.
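The virtual-window translation just described can be sketched as a page-table lookup. The sketch below is an illustrative software model under assumed sizes (an 8 Megabyte local space, translated one Megabyte at a time); the page-table contents are hypothetical, and in the hardware this lookup role is played by the SRAM-based address translator.

```python
# Illustrative model of the address-space split: addresses below the local
# memory space stay on the peripheral interface board; addresses above it
# fall in the virtual window and are translated, one Megabyte page at a
# time, into the shared memory address space. Sizes and the page-table
# contents are assumptions for illustration only.

MB = 1 << 20
LOCAL_SPACE = 8 * MB      # assumed physical local space on the board
PAGE = 1 * MB             # translation granularity

# Hypothetical mapping: window page number -> shared-memory page number
# (this is the role the translation SRAM plays in the hardware)
page_table = {0: 4, 1: 5, 2: 31}

def translate(local_addr):
    """Return ('local', addr) for an on-board access, or ('shared', addr)
    with the address translated into the shared memory space."""
    if local_addr < LOCAL_SPACE:
        return ("local", local_addr)      # no conflict with the shared map
    window_page = (local_addr - LOCAL_SPACE) // PAGE
    offset = local_addr % PAGE
    return ("shared", page_table[window_page] * PAGE + offset)
```

Because the window sits above the physical local space, the peripheral can emit one flat address and the interface board decides, purely from the address value, whether the access is local or must be forwarded to the shared memory.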
`
Referring now to FIGURES 6 and 7, there are illustrated timing diagrams for the arbitration sequence for two modes of operation, a byte-by-byte arbitration mode and a priority based bus seizing mode. The two modes are facilitated by a control bit referred to as a "Fair" bit that, when set to "1", causes the system to operate in the byte-by-byte mode and, when set to "0", forces the system to operate in the priority based bus seizing mode. FIGURE 6 illustrates the byte-by-byte mode and FIGURE 7 illustrates the priority based bus seizing mode.
`
With further reference to FIGURE 6, there are illustrated five bus accessing systems, the host system and four peripheral units 12. As described above, the host processor 18 essentially operates as a peripheral unit with the exception that it is given the highest priority, as will be described hereinbelow. Whenever memory is accessed, it typically requires four memory access cycles. The first operation is a bus request that is sent from the peripheral interface board to the shared memory system 10. When this is processed, a bus grant signal is then sent back from the shared memory system 10 to the peripheral interface board 30. On the next cycle, an address is transmitted by the
`
`

`
`
peripheral unit 12, followed by data in the next cycle. This is then repeated for the next byte of information that is transmitted. As such, four cycles in the timing diagram of FIGURE 6 are required for each byte of data that is transmitted. However, the arbitration logic 78 operates to provide a pseudo-concurrence of data transfer. This pseudo-concurrence is provided in that each peripheral board is allowed to seize the bus for the purpose of transferring one byte of data. Thereafter, the bus is relinquished to another peripheral interface board 30 to allow it to transfer a byte of information, and so on. As such, this provides a relatively fair use for a fully loaded system such that a single peripheral interface board 30 cannot seize and occupy the bus. In applications such as a massive transfer of video or image information, it is necessary to allow all peripheral systems to have as much access to the data as possible. This is true especially for such systems as interactive applications. For example, when two systems are accessing the same database such that two peripheral systems interact with each other, it is important that one system be able to write to a memory location in one cycle, i.e., the four uninterruptible memory access cycles required for the memory system, and that another peripheral system then be able to access the data on the next data transfer cycle. This provides for the maximum flexibility in an interactive system utilizing a shared memory. If, on the other hand, one system were allowed to seize the bus, it would virtually isolate another peripheral system from the memory while it has seized control of it. This, therefore, detracts from the interactive nature of any type of shared memory application.
`
`u-
`
`10
`
`20
`
In the system illustrated in FIGURE 6, the peripheral unit P3 initially generates a bus request, followed by receipt of the bus grant and then transmission of address and data. Upon the next cycle, three bus requests are received, one from peripheral unit P2, one from peripheral unit P4 and one from peripheral unit P5. Due to the priority nature of this system, the bus is released to peripheral unit P2 for the transfer of data. However, the bus requests for P4 and P5 remain and, upon the next data transfer cycle, the bus is relinquished to P4; at the end of this cycle, a decision is made as to the next peripheral unit to receive it. At the end of the data transfer cycle for P4, the host generates a bus request. Although the system is operable to grant the request to the first requesting peripheral unit, the host has maximum priority and bus access is granted to the host. However, at the end of the host's transfer cycle, the host is forced to release the bus and it is given to peripheral unit P5. However,
since the host still desires to transfer information and its bus request is maintained at the end of the data transfer of information by P5, the bus is released back to the host. It is important to note that if all peripheral units, including the host, generate bus requests constantly, the system divides the operation up such that all five units have the same amount of access to the bus in an alternating timeslot method. This provides the maximum throughput efficiency for all systems.
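The host-priority behavior traced in the FIGURE 6 walkthrough, in which the host wins any arbitration it requests but is forced to release the bus after one transfer slot so that waiting peripherals interleave, can be sketched as follows. All names are illustrative, and the one-slot holdoff rule applied to the host is an assumption inferred from the walkthrough above, not a mechanism stated explicitly in the text.

```python
class HostPriorityArbiter:
    """Sketch of host-priority arbitration with forced release (illustrative)."""

    def __init__(self, host, peripherals):
        self.host = host
        self.queue = list(peripherals)   # rotating peripheral priority
        self.pending = set()
        self.last = None                 # holder of the previous slot

    def request(self, unit):
        self.pending.add(unit)

    def grant(self):
        # The host wins any slot it requests, unless it held the previous
        # slot and a peripheral is waiting (assumed forced-release rule).
        peripheral_waiting = any(p in self.pending for p in self.queue)
        if self.host in self.pending and not (
            self.last == self.host and peripheral_waiting
        ):
            self.pending.discard(self.host)
            self.last = self.host
            return self.host
        for p in list(self.queue):
            if p in self.pending:
                self.pending.discard(p)
                self.queue.remove(p)
                self.queue.append(p)     # lower this peripheral's priority
                self.last = p
                return p
        self.last = None
        return None


arb = HostPriorityArbiter("HOST", ["P2", "P3", "P4", "P5"])
slots = []
for _ in range(6):
    # every unit, including the host, requests the bus constantly
    for unit in ("HOST", "P2", "P3", "P4", "P5"):
        arb.request(unit)
    slots.append(arb.grant())
print(slots)  # ['HOST', 'P2', 'HOST', 'P3', 'HOST', 'P4']
```

Under constant requests this sketch interleaves the host with the rotating peripheral queue, showing how the forced release keeps the high-priority host from starving the peripherals.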
With further reference to FIGURE 7, there is illustrated the system wherein a priori
