ARM_VPT_IPR_00000001
ARM Ex. 1001
IPR Petition - USP 5,463,750

U.S. Patent, Oct. 31, 1995, Sheet 1 of 4, 5,463,750

[Drawing sheet: INSTRUCTION ISSUING UNIT and DATA TRANSFER UNIT (partial FIG. 1); FIG. 2A: VIRTUAL MEMORY, 4G BYTE (2^32); FIG. 2B: REAL MEMORY, 16M BYTE (2^24), 2^12 PAGES, PAGE = 2^12 BYTES]

U.S. Patent, Oct. 31, 1995, Sheet 2 of 4, 5,463,750

[FIG. 3 drawing: virtual address fields 31-22 VA DIRECTORY (10), 21-12 PAGE (10), 11-0 DISP (12); PDO REGISTER 108; PAGE DIRECTORY ENTRY ACCUMULATOR 112; PAGE TABLE ENTRY ADDRESS ACCUMULATOR 116; PAGE TABLES 0 through 1023; TLB; REAL ADDRESS ACCUMULATOR producing a 32 BIT REAL ADDRESS. Legend: PF = PAGE FAULT, PL = ACCESS PROTECTION, ST = SYSTEM TAGS, D = DIRTY FLAG, R = REFERENCED FLAG]

U.S. Patent, Oct. 31, 1995, Sheet 3 of 4, 5,463,750

[FIG. 4 drawing: VA<18:12> indexes the TLB; DATA TRANSFER UNIT TO MAIN MEMORY; real address output used FOR RA TAG COMPARISON TO DETERMINE CACHE HIT/MISS; VIRTUAL ADDRESS fields 31-19, 18-12, 11-0; VIRTUAL ADDRESSES FROM OTHER PIPELINES. Legend: VAT = VIRTUAL ADDRESS TAG, RA = REAL ADDRESS (BITS <31:12>), VA = VIRTUAL ADDRESS]

U.S. Patent, Oct. 31, 1995, Sheet 4 of 4, 5,463,750

[FIG. 5 drawing: apparatus 200 with LOAD INSTRUCTION PIPELINE 210A, LOAD INSTRUCTION PIPELINE 210B, and STORE INSTRUCTION PIPELINE 210C, each feeding an ADDRESS REGISTER and a TLB]

5,463,750

METHOD AND APPARATUS FOR TRANSLATING VIRTUAL ADDRESSES IN A DATA PROCESSING SYSTEM HAVING MULTIPLE INSTRUCTION PIPELINES AND SEPARATE TLB'S FOR EACH PIPELINE

BACKGROUND OF THE INVENTION

The present invention relates to computing systems and, more particularly, to a method and apparatus for translating virtual addresses in a computing system having multiple instruction pipelines.

FIG. 1 is a block diagram of a typical computing system 10 which employs virtual addressing of data. Computing system 10 includes an instruction issuing unit 14 which communicates instructions to a plurality of (e.g., eight) instruction pipelines 18A-H over a communication path 22. The data referred to by the instructions in a program are stored in a mass storage device 30 which may be, for example, a disk or tape drive. Since mass storage devices operate very slowly (e.g., a million or more clock cycles per access) compared to instruction issuing unit 14 and instruction pipelines 18A-H, data currently being worked on by the program is stored in a main memory 34 which may be a random access memory (RAM) capable of providing data to the program at a much faster rate (e.g., 30 or so clock cycles). Data stored in main memory 34 is transferred to and from mass storage device 30 over a communication path 42. The communication of data between main memory 34 and mass storage device 30 is controlled by a data transfer unit 46 which communicates with main memory 34 over a communication path 50 and with mass storage device 30 over a communication path 54.

Although main memory 34 operates much faster than mass storage device 30, it still does not operate as quickly as instruction issuing unit 14 or instruction pipelines 18A-H. Consequently, computing system 10 includes a high speed cache memory 60 for storing a subset of data from main memory 34, and a very high speed register file 64 for storing a subset of data from cache memory 60. Cache memory 60 communicates with main memory 34 over a communication path 68 and with register file 64 over a communication path 72. Register file 64 communicates with instruction pipelines 18A-H over a communication path 76. Register file 64 operates at approximately the same speed as instruction issuing unit 14 and instruction pipelines 18A-H (e.g., a fraction of a clock cycle), whereas cache memory 60 operates at a speed somewhere between register file 64 and main memory 34 (e.g., approximately two or three clock cycles).

FIGS. 2A-B are block diagrams illustrating the concept of virtual addressing. Assume computing system 10 has 32 bits available to address data. The addressable memory space is then 2^32 bytes, or four gigabytes (4 GB), as shown in FIG. 2A. However, the physical (real) memory available in main memory 34 typically is much less than that, e.g., 1-256 megabytes. Assuming a 16 megabyte (16 MB) real memory, as shown in FIG. 2B, only 24 address bits are needed to address the memory. Thus, multiple virtual addresses inevitably will be translated to the same real address used to address main memory 34. The same is true for cache memory 60, which typically stores only 1-36 kilobytes of data. Register file 64 typically comprises, e.g., 32 32-bit registers, and it stores data from cache memory 60 as needed. The registers are addressed by instruction pipelines 18A-H using a different addressing scheme.

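The address-space arithmetic above can be checked directly. This short sketch is illustrative only (the constant names are ours, not the patent's), using the 16 MB real memory assumed in FIG. 2B:

```python
# Address-space arithmetic from FIGS. 2A-2B (illustrative check, not part
# of the patent). A 32-bit virtual address reaches 4 GB; a 16 MB real
# memory needs only 24 address bits.

VIRTUAL_ADDRESS_BITS = 32
REAL_MEMORY_BYTES = 16 * 2**20  # 16 MB, as assumed in FIG. 2B

virtual_space = 2**VIRTUAL_ADDRESS_BITS
gigabytes = virtual_space // 2**30          # 4 GB of virtual space

# Number of bits needed to address the real memory:
real_address_bits = REAL_MEMORY_BYTES.bit_length() - 1   # 24
```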
To accommodate the difference between virtual addresses and real addresses and the mapping between them, the physical memory available in computing system 10 is divided into a set of uniform-size blocks, called pages. If a page contains 2^12 or 4 kilobytes (4 KB), then the full 32-bit address space contains 2^20 or 1 million (1M) pages (4 KB × 1M = 4 GB). Of course, if main memory 34 has 16 megabytes of memory, only 2^12 or 4K of the 1 million potential pages actually could be in memory at the same time (4K × 4 KB = 16 MB).

Computing system 10 keeps track of which pages of data from the 4 GB address space currently reside in main memory 34 (and exactly where each page of data is physically located in main memory 34) by means of a set of page tables 100 (FIG. 3) typically stored in main memory 34. Assume computing system 10 specifies 4 KB pages and each page table 100 contains 1K entries for providing the location of 1K separate pages. Thus, each page table maps 4 MB of memory (1K × 4 KB = 4 MB), and 4 page tables suffice for a machine with 16 megabytes of physical main memory (16 MB / 4 MB = 4).

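The page and page-table arithmetic in the preceding paragraphs can be verified in a few lines; the constant names below are ours, not the patent's:

```python
# Page and page-table arithmetic from the text (illustrative, assuming the
# 4 KB pages and 1K-entry page tables described above).

PAGE_BYTES = 4 * 1024             # 2^12 bytes per page
ENTRIES_PER_TABLE = 1024          # 1K page table entries per table
REAL_MEMORY = 16 * 2**20          # 16 MB of physical memory
VIRTUAL_SPACE = 2**32             # 4 GB of virtual address space

pages_in_virtual_space = VIRTUAL_SPACE // PAGE_BYTES       # 2^20 = 1M pages
pages_resident = REAL_MEMORY // PAGE_BYTES                 # 4K pages at once
bytes_mapped_per_table = ENTRIES_PER_TABLE * PAGE_BYTES    # 4 MB per table
tables_for_real_memory = REAL_MEMORY // bytes_mapped_per_table   # 4 tables
```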
The set of potential page tables are tracked by a page directory 104 which may contain, for example, 1K entries (not all of which need to be used). The starting location of this directory (its origin) is stored in a page directory origin (PDO) register 108.

To locate a page in main memory 34, the input virtual address is conceptually split into a 12-bit displacement address (VA<11:0>), a 10-bit page table address (VA<21:12>) for accessing page table 100, and a 10-bit directory address (VA<31:22>) for accessing page directory 104. The address stored in PDO register 108 is added to the directory address VA<31:22> of the input virtual address in a page directory entry address accumulator 112. The address in page directory entry address accumulator 112 is used to address page directory 104 to obtain the starting address of page table 100. The starting address of page table 100 is then added to the page table address VA<21:12> of the input virtual address in a page table entry address accumulator 116, and the resulting address is used to address page table 100. An address field in the addressed page table entry gives the starting location of the page in main memory 34 corresponding to the input virtual address, and a page fault field PF indicates whether the page is actually present in main memory 34. The location of data within each page is typically specified by the 12 lower-order displacement bits of the virtual address.

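The two-level walk just described can be sketched in software. The dictionary-based directory and tables and the field names (`present`, `page_base`) are hypothetical stand-ins; the patent implements this walk with hardware accumulators 112 and 116:

```python
# Software sketch of the two-level translation described above. The
# page_directory and page_tables mappings are hypothetical stand-ins for
# page directory 104 and page tables 100.

def translate(va, pdo, page_directory, page_tables):
    """Translate a 32-bit virtual address to a real address."""
    dir_index = (va >> 22) & 0x3FF     # 10-bit directory address VA<31:22>
    table_index = (va >> 12) & 0x3FF   # 10-bit page table address VA<21:12>
    displacement = va & 0xFFF          # 12-bit displacement VA<11:0>

    # PDO register contents plus the directory address select a page
    # directory entry, which holds the starting address of a page table.
    table_base = page_directory[pdo + dir_index]

    # Page table start plus the page table address selects a page table entry.
    entry = page_tables[table_base + table_index]
    if not entry["present"]:           # PF field: page not in main memory
        raise LookupError("page fault")

    # Page starting location plus the displacement gives the real address.
    return entry["page_base"] | displacement
```

For example, with `page_directory = {1: 100}` and `page_tables = {101: {"present": True, "page_base": 0x55000}}`, virtual address `0x00401ABC` translates to `0x55ABC`.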
When an instruction uses data that is not currently stored in main memory 34, a page fault occurs, and the faulting instruction abnormally terminates. Thereafter, data transfer unit 46 must find an unused 4 KB portion of memory in main memory 34, transfer the requested page from mass storage device 30 into main memory 34, and make the appropriate update to the page table (indicating both the presence and location of the page in memory). The program then may be restarted.

FIG. 4 is a block diagram showing how virtual addresses are translated in the computing system shown in FIG. 1. Components which remain the same as FIGS. 1 and 3 retain their original numbering. The apparatus includes an address register 154 which receives an input virtual address which references data used by an instruction issued to one of instruction pipelines 18A-H, a translation memory (e.g., a translation lookaside buffer (TLB)) 158 and comparator 170 for initially determining whether data requested by the input virtual address resides in main memory 34, and a dynamic translation unit (DTU) 162 for accessing page tables in main memory 34. Bits VA[18:12] of the input virtual address are communicated to TLB 158 over a communication path 166, bits VA[31:12] of the input virtual address are communicated to DTU 162 over a communication path 174, and bits VA[31:19] are communicated to comparator 170 over a communication path 176.
TLB 158 includes a plurality of addressable storage locations 178 that are addressed by bits VA[18:12] of the input virtual address. Each storage location stores a virtual address tag (VAT) 180, a real address (RA) 182 corresponding to the virtual address tag, and control information (CNTRL) 184. How much control information is included depends on the particular design and may include, for example, access protection flags, dirty flags, referenced flags, etc.
The addressed virtual address tag is communicated to comparator 170 over a communication path 186, and the addressed real address is output on a communication path 188. Comparator 170 compares the virtual address tag with bits VA[31:19] of the input virtual address. If they match (a TLB hit), then the real address output on communication path 188 is compared with a real address tag (not shown) of a selected line in cache memory 60 to determine if the requested data is in the cache memory (a cache hit). An example of this procedure is discussed in U.S. Pat. No. 4,933,835 issued to Howard G. Sachs, et al. and incorporated herein by reference. If there is a cache hit, then the pipelines may continue to run at their highest sustainable speed. If the requested data is not in cache memory 60, then the real address bits on communication path 188 are combined with bits [11:0] of the input virtual address and used to obtain the requested data from main memory 34.
If the virtual address tag does not match bits VA[31:19] of the input virtual address, then comparator 170 provides a miss signal on a communication path 190 to DTU 162. The miss signal indicates that the requested data is not currently stored in main memory 34, or else the data is in fact present in main memory 34 but the corresponding entry in TLB 158 has been deleted.

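The lookup and tag compare just described can be modeled as follows. The `TLB` class is an illustrative stand-in for TLB 158, assuming 128 storage locations indexed by VA[18:12] and tagged by VA[31:19]:

```python
# Model of the TLB lookup described above: entries indexed by VA[18:12]
# (7 bits -> 128 slots), tag-compared against VA[31:19]. Illustrative only.

class TLB:
    def __init__(self):
        self.entries = [None] * 128    # storage locations 178

    @staticmethod
    def index(va):
        return (va >> 12) & 0x7F       # VA[18:12] selects a slot

    @staticmethod
    def tag(va):
        return va >> 19                # VA[31:19] is the virtual address tag

    def lookup(self, va):
        """Return the real page address (RA) on a hit, or None on a miss."""
        entry = self.entries[self.index(va)]
        if entry is not None and entry["vat"] == self.tag(va):
            return entry["ra"]         # real address bits <31:12>
        return None                    # miss: comparator signals the DTU

    def update(self, va, real_page):
        # A new translation replaces whatever occupies the indexed slot.
        self.entries[self.index(va)] = {"vat": self.tag(va), "ra": real_page}
```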
When the miss signal is generated, DTU 162 accesses the page tables in main memory 34 to determine whether in fact the requested data is currently stored in main memory 34. If not, then DTU 162 instructs data transfer unit 46 through a communication path 194 to fetch the page containing the requested data from mass storage device 30. In any event, TLB 158 is updated through a communication path 196, and instruction issuing resumes.

TLB 158 has multiple ports to accommodate the addresses from the pipelines needing address translation services. For example, if two load instruction pipelines and one store instruction pipeline are used in computing system 10, then TLB 158 has three ports, and the single memory array in TLB 158 is used to service all address translation requests.
As noted above, new virtual-to-real address translation information is stored in TLB 158 whenever a miss signal is generated by comparator 170. The new translation information typically replaces the oldest and least used entry presently stored in TLB 158. While this mode of operation is ordinarily desirable, it may have disadvantages when a single memory array is used to service address translation requests from multiple pipelines. For example, if each pipeline refers to different areas of memory each time an address is to be translated, then the translation information stored in TLB 158 for one pipeline may not get very old before it is replaced by the translation information obtained by DTU 162 for the same or another pipeline at a later time. This increases the chance that DTU 162 will have to be activated more often, which degrades performance. The effect is particularly severe and counterproductive when a first pipeline repeatedly refers to the same general area of memory, but the translation information is replaced by the other pipelines between accesses by the first pipeline.

SUMMARY OF THE INVENTION

The present invention is directed to a method and apparatus for translating virtual addresses in a computing system having multiple pipelines wherein a separate TLB is provided for each pipeline requiring address translation services. Each TLB may operate independently so that it contains its own set of virtual-to-real address translations, or else each TLB in a selected group may be simultaneously updated with the same address translation information whenever the address translation tables in main memory are accessed to obtain address translation information for any other TLB in the group.
In one embodiment of the present invention, a TLB is provided for each load/store pipeline in the system, and an address translator is provided for each such pipeline for translating a virtual address received from its associated pipeline into corresponding real addresses. Each address translator comprises a translation buffer accessing circuit for accessing the TLB, a translation indicating circuit for indicating whether translation data for the virtual address is stored in the translation buffer, and an update control circuit for activating the direct address translation circuit when the translation data for the virtual address is not stored in the TLB. The update control circuit also stores the translation data retrieved from the main memory into the TLB. If it is desired to have the same translation information available for all the pipelines in a group, then the update control circuit also updates all the other TLB's in the group.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a known computing system;
FIGS. 2A and 2B are each diagrams illustrating virtual addressing;
FIG. 3 is a diagram showing how page tables are accessed in the computing system shown in FIG. 1;
FIG. 4 is a block diagram illustrating how virtual addresses are translated in the computing system shown in FIG. 1; and
FIG. 5 is a block diagram of a particular embodiment of a multiple TLB apparatus for translating virtual addresses in a computing system according to the present invention.

BRIEF DESCRIPTION OF THE PREFERRED EMBODIMENTS

FIG. 5 is a block diagram of a particular embodiment of an apparatus 200 according to the present invention for translating virtual addresses in a computing system such as computing system 10 shown in FIG. 1. Apparatus 200 includes, for example, a load instruction pipeline 210A, a load instruction pipeline 210B, and a store instruction pipeline 210C. These pipelines may be three of the pipelines 18A-H shown in FIG. 1. Pipelines 210A-C communicate virtual addresses to address registers 214A-C over respective communication paths 218A-C. Relevant portions of the virtual addresses stored in address registers 214A-C are communicated to TLB's 222A-C and to comparators 230A-C over communication paths 226A-C and 228A-C, respectively. TLB's 222A-C are accessed in the manner noted in the Background of the Invention, and the addressed virtual address tags in each TLB are communicated to comparators 230A-C over respective communication paths 234A-C. Comparators 230A-C compare the virtual address tags to the higher order bits of the respective virtual addresses and provide hit/miss signals on communication paths 238A-C to an update control circuit 240.
Update control circuit 240 controls the operation of DTU 162 through a communication path 244 and updates TLB's 222A-C through respective update circuits 241-243 and communication paths 248A-C whenever there is a miss signal generated on one or more of communication paths 238A-C. That is, update control circuit 240 activates DTU 162 whenever a miss signal is received over communication path 238A and stores the desired translation information in TLB 222A through communication path 248A; update control circuit 240 activates DTU 162 whenever a miss signal is received over communication path 238B and stores the desired translation information in TLB 222B through communication path 248B; and update control circuit 240 activates DTU 162 whenever a miss signal is received over communication path 238C and stores the desired translation information in TLB 222C through communication path 248C.

If desired, each TLB 222A-C may be updated independently of the others, which results in separate and independent sets of virtual-to-real address translation data in each TLB. Thus, if, for example, pipeline 210A tends to refer to a particular area of memory more than the other pipelines 210B-C, then TLB 222A will store a set of virtual-to-real address translations that maximize the hit rate for pipeline 210A. Even if pipeline 210A does not favor a particular area of memory, having a separate and independent set of virtual-to-real address translation data eliminates the possibility that needed translation information in TLB 222A is deleted and replaced by translation data for another pipeline.
If all three pipelines tend to refer to a common area of memory, then update control circuit 240 can be hardware or software programmed to simultaneously update all TLB's with the same translation data whenever the address translation tables in main memory are accessed to obtain address translation information for any other TLB. That is, every time DTU 162 is activated for translating a virtual address supplied by pipeline 210A, then update control circuit 240 stores the translation data in each of TLB's 222A-C. While this mode of operation resembles that described for a multi-ported TLB as described in the Background of the Invention, this embodiment still has benefits in that three separate single-port TLB's are easier to implement than one multi-port TLB and take up only slightly more chip area.
If one group of pipelines tends to refer to a common area of memory and other pipelines do not, then update control circuit 240 can be hardware or software programmed to maintain a common set of translations in the TLB's associated with the group while independently updating the other TLB's. For example, if load pipelines 210A and 210B tend to refer to a common area in memory and store pipeline 210C tends to refer to a different area of memory (or to random areas of memory), then update control circuit 240 activates DTU 162 whenever a miss signal is received over communication path 238A and stores the desired translation information in both TLB 222A and TLB 222B. Similarly, update control circuit 240 activates DTU 162 whenever a miss signal is received over communication path 238B and stores the desired translation information in both TLB 222A and TLB 222B. On the other hand, update control circuit 240 activates DTU 162 whenever a miss signal is received over communication path 238C and stores the desired translation information only in TLB 222C.

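The update policies described above (independent, fully shared, and a mixed grouping) can be sketched as a single dispatch routine. The function and its arguments are hypothetical, modeling the behavior of update control circuit 240 and DTU 162 in software:

```python
# Hypothetical software model of update control circuit 240: on a miss
# signal from a pipeline, activate the DTU (modeled as a callable that
# walks the page tables) and store the returned translation in that
# pipeline's TLB, or in every TLB of its shared group.

def handle_miss(pipeline_id, va, dtu_translate, tlbs, shared_group=()):
    """Update TLBs after a miss.

    tlbs:          mapping of pipeline id -> dict standing in for a TLB
    shared_group:  pipeline ids whose TLBs are updated in common
    """
    translation = dtu_translate(va)    # DTU walks the page tables
    if pipeline_id in shared_group:
        targets = set(shared_group)    # common update (e.g., 210A and 210B)
    else:
        targets = {pipeline_id}        # independent update (e.g., 210C)
    for pid in targets:
        tlbs[pid][va >> 12] = translation
    return translation
```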
While the above is a complete description of a preferred embodiment of the present invention, various modifications may be employed. For example, signals on a communication path 260 could be used to control which TLB's are commonly updated and which TLB's are separately updated (e.g., all TLB's updated independently, TLB's 222A and 222C updated in common while TLB 222B is updated independently, or TLB's 222A-C all updated in common). That is useful when common memory references by the pipelines are application or program dependent. Consequently, the scope of the invention should be ascertained by the following claims.

What is claimed is:

1. An apparatus for translating virtual addresses in a computing system having at least a first and a second instruction pipeline and a direct address translation unit for translating virtual addresses into real addresses, the direct address translation unit including a master translation memory for storing translation data, the direct address translation unit for translating a virtual address into a corresponding real address, comprising:
a first translation buffer, associated with the first instruction pipeline, for storing a first subset of translation data from the master translation memory;
a first address translator, coupled to the first instruction pipeline and to the first translation buffer, for translating a first virtual address received from the first instruction pipeline into a corresponding first real address, the first address translator comprising:
first translation buffer accessing means for accessing the first translation buffer;
first translation indicating means, coupled to the first translation buffer accessing means, for indicating whether translation data for the first virtual address is stored in the first translation buffer; and
first direct address translating means, coupled to the first translation indicating means and to the direct address translation unit, for activating the direct address translation unit to translate the first virtual address when the first translation indicating means indicates that the translation data for the first virtual address is not stored in the first translation buffer, the first direct address translating means including first translation buffer storing means, coupled to the first translation buffer, for storing the translation data for the first virtual address from the master translation memory into the first translation buffer;
a second translation buffer, associated with the second instruction pipeline, for storing a second subset of translation data from the master translation memory; and
a second address translator, coupled to the second instruction pipeline and to the second translation buffer, for translating a second virtual address received from the second instruction pipeline into a corresponding second real address, the second address translator comprising:
second translation buffer accessing means for accessing the second translation buffer;
second translation indicating means, coupled to the second translation buffer accessing means, for indicating whether translation data for the second virtual address is stored in the second translation buffer; and
second direct address translating means, coupled to the
second translation indicating means and to the direct address translation unit, for activating the direct address translation unit to translate the second virtual address when the second translation indicating means indicates that the translation data for the second virtual address is not stored in the second translation buffer, the second direct address translating means including second translation buffer storing means, coupled to the second translation buffer, for storing the translation data for the second virtual address from the master translation memory into the second translation buffer.
2. The apparatus according to claim 1,
wherein the first direct address translating means further comprises second translation buffer storing means, coupled to the second translation buffer, for storing the translation data for the first virtual address from the master translation memory into the second translation buffer.

3. The apparatus according to claim 2,
wherein the second direct address translating means further comprises first translation buffer storage means, coupled to the first translation buffer, for storing the translation data for the second virtual address from the master translation memory into the first translation buffer.

4. The apparatus according to claim 3 further comprising:
a third translation buffer, associated with a third instruction pipeline, for storing a third subset of translation data from the master translation memory;
a third address translator, coupled to the third instruction pipeline and to the third translation buffer, for translating a third virtual address received from the third instruction pipeline into a corresponding third real address, the third address translator comprising:
third translation buffer accessing means for accessing the third translation buffer;
third translation indicating means, coupled to the third translation buffer accessing means, for indicating whether translation data for the third virtual address is stored in the third translation buffer; and
third direct address translating means, coupled to the third translation indicating means and to the direct address translation unit, for activating the direct address translation unit to translate the third virtual address when the third translation indicating means indicates that the translation data for the third virtual address is not stored in the third translation buffer, the third direct address translating means including third translation buffer storing means, coupled to the third translation buffer, for storing the translation data for the third virtual address from the master translation memory into the third translation buffer.
5. The apparatus according to claim 4,
wherein the third translation buffer storing means is the only means for storing translation data into the third translation buffer.
6. The apparatus according to claim 5,
wherein the first instruction pipeline comprises a first load instruction pipeline for processing instructions which cause data to be loaded from a memory; and
wherein the third instruction pipeline comprises a store instruction pipeline for processing instructions which cause data to be stored into the memory.
7. The apparatus according to claim 6,
wherein the second instruction pipeline comprises a second load instruction pipeline for processing instructions which cause data to be loaded from the memory.
8. A method for translating virtual addresses in a computing system having at least a first and a second instruction pipeline and a direct address translation unit for translating virtual addresses into real addresses, the direct address translation unit including a master translation memory for storing translation data, the direct address translation unit for translating a virtual address into a corresponding real address, comprising the steps of:
storing a first subset of translation data from the master translation memory into a first translation buffer associated with the first instruction pipeline;
translating a first virtual address received from the first instruction pipeline into a corresponding first real address, wherein the first virtual address translating step comprises the steps of:
accessing the first translation buffer;
indicating whether translation data for the first virtual address is stored in the first translation buffer;
activating the direct address translation unit to translate the first virtual address when the translation data for the first virtual address is not stored in the first translation buffer; and
storing the translation data for the first virtual address from the master translation memory into the first translation buffer;
storing a second subset of translation data from the master translation memory into a second translation buffer associated with the second instruction pipeline; and
translating a second virtual address received from the second instruction pipeline into a corresponding second real address, wherein the second virtual address translating step comprises the steps of:
accessing the second translation buffer;
indicating whether translation data for the second virtual address is stored in the second translation buffer;
activating the direct address translation unit to translate the second virtual address when the translation data for the second virtual address is not stored in the second translation buffer; and
storing the translation data for the second virtual address from the master translation memory into the second translation buffer.

9. The method according to claim 8 further comprising the step of:
storing the translation data for the first virtual address from the master translation memory into the second translation buffer whenever translation data for the first virtual address from the master translation memory is stored into the first translation buffer.

10. The method according to claim 9 further comprising the step of:
storing the translation data for the second virtual address from the master translation memory into the first translation buffer whenever translation data for the second virtual address from the master translation memory is stored into the second translation buffer.

11. The method according to claim 10 further comprising the steps of:
storing a third subset of translation data from the master translation memory into a third translation buffer associated with the third instruction pipeline; and
translating a third virtual address received from the third instruction pipeline into a corresponding third real address, wherein the third virtual address translating
step comprises the steps of:
accessing the third translation buffer;
indicating whether translation data for the third virtual address is stored in the third translation buffer;
activating the direct address translation unit to translate the third virtual address when the translation data for the third virtual address is not stored in the third translation buffer; and
storing the translation data for the third virtual address from the master translation memory into the third translation buffer.
12. The method according to claim 11,
wherein the step of storing the translation data for the third virtual address comprises the step of storing translation data for only the third virtual address in the third translation buffer.

13. The method according to claim 12,
wherein the first instruction pipeline comprises a first load instruction pipeline for processing instructions which cause data to be loaded from a memory; and
wherein the third instruction pipeline comprises a store instruction pipeline for processing instructions which cause data to be stored in the memory.
14. The method according to claim 13,
wherein the second instruction pipeline comprises a second load instruction pipeline for processing instructions which cause data to be loaded from the memory.

* * * * *