`VUMA Standard - Hardware Specifications
`
`Video Electronics Standards Association
`
`2150 North First Street, Suite 440
`San Jose, CA 95131-2029
`
`Phone: (408) 435-0333
`FAX: (408) 435-8225
`
`VESA Unified Memory Architecture (VUMA) Standard
`Hardware Specifications
`
`Version: 1.0
`
`March 8, 1996
`
Important Notice: This is a document from the Video Electronics Standards Association
(VESA) Unified Memory Architecture (VUMA) Committee. It has been ratified by the VESA general
membership.
`
`Purpose
`
To enable silicon vendors to design interoperable core logic chipset and VUMA device products,
resulting in Unified Memory Architecture systems from computer manufacturers.
`
`Summary
`
`This document contains a specification for core logic chipset and VUMA devices’ hardware interface.
`It includes logical and electrical interface specifications. The BIOS protocol is described in VESA
`document VESA Unified Memory Architecture Standard - BIOS Extension Specifications Ver. 1.0.
`
`ASUS Exhibit 1010 - Page 1
`
`
`
`
`VESA Confidential
`
`Scope
`
`This document contains a specification for core logic chipset and VUMA devices’ hardware interface.
`It includes logical and electrical interface specifications. This document cannot be considered complete
`or accurate in all respects although every effort has been made to minimize errors.
`
`Intellectual Property
`
`© Copyright 1995 – Video Electronics Standards Association. Duplication of this document within
`VESA member companies for review purposes is permitted. All other rights are reserved.
`
`Trademarks
`
`All trademarks used in this document are the property of their respective owners. VESA and VUMA
`are trademarks owned by the Video Electronics Standards Association.
`
`Patents
`
`The proposals and standards developed and adopted by VESA are intended to promote uniformity and
`economies of scale in the video electronics industry. VESA strives for standards that will benefit both
`the industry and end users of video electronics products. VESA cannot ensure that the adoption of a
`standard; the use of a method described as a standard; or the making, using, or selling of a product in
`compliance with the standard does not infringe upon the intellectual property rights (including patents,
`trademarks, and copyrights) of others. VESA, therefore, makes no warranties, expressed or implied,
`that products conforming to a VESA standard do not infringe on the intellectual property rights of
`others, and accepts no liability direct, indirect or consequential, for any such infringement.
`
`
`Support For This Specification
`
If you have a product that incorporates VUMA(TM), you should ask the company that manufactured your
product for assistance. If you are a manufacturer of the product, VESA can assist you with any
clarification that you may require. All questions must be sent in writing to VESA via:
`
(The following list is the preferred order for contacting VESA.)

VESA World Wide Web Page:   www.vesa.org

Fax:                        (408) 435-8225

Mail:                       VESA
                            2150 North First Street
                            Suite 440
                            San Jose, California 95131-2029

Acknowledgments

This document would not have been possible without the efforts of the members of the VESA Unified
Memory Architecture Committee and the professional support of the VESA staff.
`
`Work Group Members
`
`Any industry standard requires information from many sources. The following list recognizes
`members of the VUMA Committee, which was responsible for combining all of the industry input into
`this proposal.
`
Chairperson
Rajesh Shakkarwar       OPTi

Members
Alan Mormann            Micron Technology Inc.
Andy Daniel             Alliance Semiconductor
Dean Hays               Weitek
Derek Johnson           Cypress
Don Pannell             Sierra Semiconductor
Jim Jirgal              VLSI Technology Inc.
Jonathan Claman         S3 Inc.
Larry Alchesky          Mitsubishi
Long Nguyen             Oak Technology
Neil Trevett            3Dlabs
Peter Cheng             Samsung Electronics
Robert Tsay             Pacific Micro Computing Inc.
Solomon Alemayehu       Hitachi America Ltd.
Sunil Bhatia            Mentor Arc
Tony Tong               S3 Inc.
Wallace Kou             Western Digital
`
`
`Revision History
`
Initial Revision 0.1p                                           Sept. 21 '95

Revision 0.2p                                                   Oct 5 '95
Added sync DRAM support
Electrical Section
Boot Protocol
Reformatted document

Revision 0.3p                                                   Oct 19 '95
Graphics controller replaced with VUMA device
MD[n:0] changed to t/s
Modified Aux Memory description
Added third solution to Memory Expansion Problem
Sync DRAM burst length changed to 2/4
Modified all the bus hand off diagrams
Added DRAM Driver Characteristics section

Revision 0.4p                                                   Oct 31 '95
Sync DRAM Burst Length changed to 1/2/4
DRAM controller pin multiplexing added
Changed AC timing parameters

Revision 0.5p                                                   Jan 9 '96
Clock reference added to section 2.2
Modified all the timing diagrams to reflect output valid delay from driving
clock edge
Added figures to section 4.4
Corrected Figure 5-9
AC Timing Parameters conditions specified and parameters modified
Modified section 7.5
`
`
`TABLE OF CONTENTS
`
1.0 INTRODUCTION

2.0 SIGNAL DEFINITION
2.1 SIGNAL TYPE DEFINITION
2.2 ARBITRATION SIGNALS
2.3 FAST PAGE MODE, EDO AND BEDO DRAMS
2.4 SYNCHRONOUS DRAM

3.0 PHYSICAL INTERFACE
3.1 PHYSICAL SYSTEM MEMORY SHARING
3.2 MEMORY REGIONS
3.3 PHYSICAL CONNECTION

4.0 ARBITRATION
4.1 ARBITRATION PROTOCOL
4.2 ARBITER
4.3 ARBITRATION EXAMPLES
4.4 LATENCIES

5.0 MEMORY INTERFACE
5.1 MEMORY DECODE
5.2 MAIN VUMA MEMORY MAPPING
5.3 FAST PAGE, EDO AND BEDO
5.4 SYNCHRONOUS DRAM
5.5 MEMORY PARITY SUPPORT
5.6 MEMORY CONTROLLER PIN MULTIPLEXING

6.0 BOOT PROTOCOL
6.1 MAIN VUMA MEMORY ACCESS AT BOOT
6.2 RESET STATE

7.0 ELECTRICAL SPECIFICATION
7.1 SIGNAL LEVELS
7.2 AC TIMING
7.3 PULLUPS
7.4 STRAPS
7.5 DRAM DRIVER CHARACTERISTICS GUIDELINES
`
`
`1.0 Introduction
`
The concept of VESA Unified Memory Architecture (VUMA) is to share physical
system memory (DRAM) between the system and an external device, a VUMA device, as
shown in Figure 1-1. A VUMA device could be any type of controller that needs to
share physical system memory (DRAM) with the system and access it directly. One
example of a VUMA device is a graphics controller. In a VUMA system, the graphics
controller incorporates its graphics frame buffer in physical system memory (DRAM);
in other words, the VUMA device uses a part of physical system memory as its
frame buffer, thus sharing it with the system and accessing it directly. This eliminates
the need for separate graphics memory, resulting in cost savings. Memory sharing is
achieved by physically connecting the core logic chipset (hereafter referred to as core
logic) and the VUMA device to the same physical system memory DRAM pins. Though
the current version covers sharing of physical system memory only between core logic
and a motherboard VUMA device, the next version will cover an expansion connector
connected to the physical system memory DRAM pins. An OEM will be able to connect
any type of device to the physical system memory DRAM pins through the expansion
connector.

Though a VUMA device could be any type of controller, the discussion in this
specification emphasizes a graphics controller, as it will be the first VUMA system
implementation.
`
Figure 1-1 VUMA System Block Diagram

[Block diagram: the CPU and the PCI Bus connect to Core Logic; Core Logic and the
VUMA Device (e.g. Graphics Controller) both connect to Physical System Memory
(DRAM).]
`
`
`2.0 Signal Definition
`
`2.1 Signal Type Definition
`
in      Input is a standard input-only signal.

out     Totem Pole Output is a standard active driver.

t/s     Tri-State is a bi-directional, tri-state input/output pin.

s/t/s   Sustained Tri-state is an active low or active high tri-state signal owned and
        driven by one and only one agent at a time. The agent that drives an s/t/s pin
        active must drive it high for at least one clock before letting it float. A pullup is
        required to sustain the high state until another agent drives it. Either an internal
        or an external pullup must be provided by core logic. A VUMA device can also
        optionally provide an internal or external pullup.
`
`2.2 Arbitration Signals
`
MREQ#    in/out   MREQ# is out for VUMA device and in for core logic. This
                  signal is used by VUMA device to inform core logic that it
                  needs to access the shared physical system memory bus.
                  MREQ# is driven by VUMA device on a rising edge of
                  CPUCLK. MREQ# is sampled by core logic on a rising
                  edge of CPUCLK.

MGNT#    in/out   MGNT# is out for core logic and in for VUMA device. This
                  signal is used by core logic to inform VUMA device that it can
                  access the shared physical system memory bus. MGNT# is
                  driven by core logic on a rising edge of CPUCLK. MGNT# is
                  sampled by VUMA device on a rising edge of CPUCLK.

CPUCLK   in       CPUCLK is driven by a clock driver. CPUCLK is in for core
                  logic, VUMA device and synchronous DRAM.
`
`2.3 Fast Page Mode, EDO and BEDO DRAMs
`
RAS#       s/t/s  Active low row address strobe for memory banks. Core logic will
                  have multiple RAS#s to support multiple banks. VUMA device
                  could have a single RAS# or multiple RAS#s. These signals are
                  shared by core logic and VUMA device. They are driven by
                  current bus master.
`
`
CAS[n:0]#  s/t/s  Active low column address strobes, one for each byte lane. In
                  case of Pentium-class systems n is 7. These signals are shared
                  by core logic and VUMA device. They are driven by current
                  bus master.

WE#        s/t/s  Active low write enable. This signal is shared by core logic
                  and VUMA device. It is driven by current bus master.

OE#        s/t/s  Active low output enable. This signal exists only on EDO and
                  BEDO. This signal is shared by core logic and VUMA device.
                  It is driven by current bus master.

MA[n:0]    s/t/s  Multiplexed memory address. These signals are shared by core
                  logic and VUMA device. They are driven by current bus master.

MD[n:0]    t/s    Bi-directional memory data bus. In case of Pentium-class
                  systems n is 63. These signals are shared by core logic and
                  VUMA device. They are driven by current bus master.
`
`2.4 Synchronous DRAM
`
CPUCLK     in     CPUCLK is the master clock input (referred to as CLK in
                  synchronous DRAM data books). All DRAM input/output
                  signals are referenced to the CPUCLK rising edge.

CKE        s/t/s  CKE determines validity of the next CPUCLK. If CKE is high,
                  the next CPUCLK rising edge is valid; otherwise it is invalid.
                  This signal also plays a role in entering power down mode and
                  refresh modes. This signal is shared by core logic and VUMA
                  device. It is driven by current bus master.

CS#        s/t/s  CS# low starts the command input cycle. CS# is used to select a
                  bank of Synchronous DRAM. Core logic will have multiple CS#s
                  to support multiple banks. VUMA device could have a single
                  CS# or multiple CS#s. These signals are shared by core logic
                  and VUMA device. They are driven by current bus master.

RAS#       s/t/s  Active low row address strobe. This signal is shared by core
                  logic and VUMA device. It is driven by current bus master.

CAS#       s/t/s  Active low column address strobe. This signal is shared by core
                  logic and VUMA device. It is driven by current bus master.

WE#        s/t/s  Active low write enable. This signal is shared by core logic and
                  VUMA device. It is driven by current bus master.

MA[n:0]    s/t/s  Multiplexed memory address. These signals are shared by core
                  logic and VUMA device. They are driven by current bus master.

DQM[n:0]   s/t/s  I/O buffer control signals, one for each byte lane. In case of
                  Pentium-class systems n is 7. In read mode they control the
                  output buffers. In write mode, they control the word mask.
                  These signals are shared by core logic and VUMA device. They
                  are driven by current bus master.

MD[n:0]    t/s    Bi-directional memory data bus. In case of Pentium-class
                  systems n is 63. These signals are shared by core logic and
                  VUMA device. They are driven by current bus master.
`
`3.0 Physical Interface
`
`3.1 Physical System Memory Sharing
`
`Figure 3-1 depicts the VUMA Block Diagram. Core logic and VUMA device are
`physically connected to the same DRAM pins. Since they share a common resource,
`they need to arbitrate for it. PCI/VL/ISA external masters also need to access the same
`DRAM resource. Core logic incorporates the arbiter and takes care of arbitration
`amongst various contenders.
`
Figure 3-1 VUMA Block Diagram

[Block diagram: the CPU connects to Core Logic over the host address/control and host
data buses; Core Logic connects to the PCI Bus; a clock generator drives CPUCLK to
Core Logic and the VUMA Device (e.g. Graphics Controller); Core Logic and the VUMA
Device exchange MREQ#/MGNT# and each drive MA[n:0], memory control signals and the
Memory Data Bus to Physical System Memory (DRAM), which contains O.S. Memory, Main
VUMA Memory (e.g. Frame Buffer) and Optional Aux VUMA Memory.]
`
As shown in Figure 3-1, VUMA device arbitrates with core logic for access to the
shared physical system memory through a three-signal arbitration scheme, viz.
MREQ#, MGNT# and CPUCLK. MREQ# is a signal driven by VUMA device to core
logic and MGNT# is a signal driven by core logic to VUMA device. MREQ# and
MGNT# are active low signals driven and sampled synchronous to CPUCLK, which is
common to both core logic and VUMA device.
`
`Core logic is always the default owner and ownership will be transferred to VUMA
`device upon demand. VUMA device could return ownership to core logic upon
`completion of its activities or park on the bus. Core logic can always preempt VUMA
`device from the bus.
`
VUMA device needs to access physical system memory for different reasons, and the
urgency of the needed accesses varies. If VUMA device were given access to physical
system memory right away every time it asked, CPU performance would suffer; and
since the access may not actually be needed right away by the VUMA device, VUMA
device performance would not improve. Hence two levels of priority are defined, viz.
low priority and high priority. Both priorities are conveyed to core logic through a
single signal, MREQ#.
`
`3.2 Memory Regions
`
`As shown in Figure 3-1, physical system memory can contain two separate physical
`memory blocks, Main VUMA Memory and Auxiliary (Aux) VUMA Memory. As
`cache coherency for Main VUMA Memory and Auxiliary VUMA Memory is handled
`by this standard, a VUMA device can access these two physical memory blocks
`without any separate cache coherency considerations. If a VUMA device needs to
`access other regions of physical system memory, designers need to take care of cache
`coherency.
`
Main VUMA Memory is programmed as a non-cacheable region to avoid cache
coherency overhead. How Main VUMA Memory is used depends on the type of
VUMA device; e.g., when VUMA device is a graphics controller, Main VUMA
Memory will be used for the frame buffer.
`
Auxiliary VUMA Memory is optional for both core logic and VUMA device. If
supported, it can be programmed as a non-cacheable region or a write-through region.
Auxiliary VUMA Memory can be used to pass data between core logic and VUMA
device without copying it to Main VUMA Memory or passing it through the slower PCI
bus. This capability has significant advantages for more advanced devices. How
Auxiliary VUMA Memory is used depends on the type of VUMA device; e.g., when
VUMA device is a 3D graphics controller, Auxiliary VUMA Memory will be used for
texture mapping.

When core logic programs the Auxiliary VUMA Memory area as non-cacheable, VUMA
device can read from or write to it. When core logic programs the Auxiliary VUMA
Memory area as write-through, VUMA device can read from it but cannot write to it.
`
Both core logic and VUMA device have the option of either supporting or not
supporting the Auxiliary VUMA Memory feature. Whether Auxiliary VUMA Memory
is supported or not should be transparent to an application. The following algorithm
explains how it is made transparent. The algorithm is only included to explain the
feature. Refer to the latest VUMA BIOS Extension Specifications for the most up to
date BIOS calls:
`
1. When an application needs this feature, it needs to make a BIOS call, <Report
VUMA - core logic capabilities (refer to VUMA BIOS Extension Specifications)>,
to find out if core logic supports the feature.
2. If core logic does not support the feature, the application needs to use some alternate
method.
3. If core logic supports the feature, the application can probably use it and should do
the following:

a. Request the operating system for a physically contiguous block of memory of the
required size.
b. If not successful in getting a physically contiguous block of memory of the required
size, use some alternate method.
c. If successful, get the start address of the block of memory.
d. Read <VUMA BIOS signature string (refer to VUMA BIOS Extension
Specifications)>, to find out if VUMA device can access the bank in which
Auxiliary VUMA Memory has been assigned.
e. If VUMA device cannot access that bank, the application needs to either retry the
procedure from "step a" to "step c" until it can get Auxiliary VUMA Memory in a
VUMA device accessible bank, or use some alternate method.
f. If VUMA device can access that bank, make the BIOS call <Set (Request)
VUMA Auxiliary memory (refer to VUMA BIOS Extension Specifications)>, to
ask core logic to flush the Auxiliary VUMA Memory block of the needed size from
the start address from "step c" and change it to either non-cacheable or write
through. How a core logic flushes cache for the block of memory and programs it
as non-cacheable/write through is implementation specific.
g. Use the VUMA Device Driver to give VUMA device the Auxiliary VUMA Memory
parameters, viz. size, start address from "step c" and whether the block should be
non-cacheable or write through.
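The steps above can be sketched in Python. The objects `bios`, `os_alloc` and `driver` and their method names (`report_core_logic_caps`, `request_contiguous_block`, `set_aux_vuma_memory`, `configure_aux_memory`) are hypothetical stand-ins for the BIOS calls, operating system service and VUMA Device Driver interface named in the steps; the actual entry points are defined in the VUMA BIOS Extension Specifications.

```python
def setup_aux_vuma_memory(bios, os_alloc, driver, size, mode="non-cacheable"):
    """Sketch of the transparency algorithm for Auxiliary VUMA Memory.

    bios, os_alloc and driver are hypothetical stand-ins for the BIOS
    calls, OS allocator and VUMA Device Driver referenced in the text.
    Returns the start address of the Auxiliary VUMA Memory block, or
    None if the application must fall back to an alternate method.
    """
    # Step 1: ask core logic whether it supports the feature at all.
    if not bios.report_core_logic_caps().aux_memory_supported:
        return None                      # Step 2: use an alternate method.

    while True:
        # Steps 3a-3c: get a physically contiguous block and its start address.
        start = os_alloc.request_contiguous_block(size)
        if start is None:
            return None                  # Step 3b: use an alternate method.

        # Step 3d: can the VUMA device reach the bank holding this block?
        if bios.vuma_device_can_access_bank(start):
            break                        # Found a VUMA device accessible bank.
        os_alloc.release_block(start)    # Step 3e: retry from step 3a.

    # Step 3f: flush the block from cache and mark it non-cacheable or
    # write-through (how core logic does this is implementation specific).
    bios.set_aux_vuma_memory(start, size, mode)

    # Step 3g: hand the parameters to the VUMA device via its driver.
    driver.configure_aux_memory(start=start, size=size, mode=mode)
    return start
```

A caller would supply real BIOS and driver bindings; the loop mirrors the retry in "step e" and gives up only when the allocator can no longer supply a block.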
`
`3.3 Physical Connection
`
`A VUMA device can be connected in two ways:
`
1. VUMA device can only access one bank of physical system memory - VUMA
device is connected to a single bank of physical system memory. In case of Fast Page
Mode, EDO and BEDO, VUMA device has a single RAS#. In case of Synchronous
DRAM, VUMA device has a single CS#. Main VUMA Memory resides in this memory
bank. If supported, Auxiliary VUMA Memory can only be used if it is assigned to this
bank.
`
`2. VUMA device can access all of the physical system memory - VUMA device has as
`many RAS# (for Fast Page Mode, EDO and BEDO)/CS# (for Synchronous DRAM)
`lines as core logic and is connected to all banks of the physical system memory. Both
`Main VUMA memory and Auxiliary VUMA Memory (if supported) can be assigned
`to any memory bank.
`
`4.0 Arbitration
`
`4.1 Arbitration Protocol
`
`There are three signals establishing the arbitration protocol between core logic and
`VUMA device. MREQ# signal is driven by VUMA device to core logic to indicate it
`needs to access the physical system memory bus. It also conveys the level of priority
`of the request. MGNT# is driven by core logic to VUMA device to indicate that it can
`access the physical system memory bus. Both MREQ# and MGNT# are driven
`synchronous to CPUCLK.
`
`As shown in Figure 4-1, low level priority is conveyed by driving MREQ# low. A
`high level priority can only be generated by first generating a low priority request. As
`shown in Figure 4-2 and Figure 4-3, a low level priority is converted to a high level
`priority by driving MREQ# high for one CPUCLK clock and then driving it low.
`
Figure 4-1 Low Level Priority

[Timing diagram: CPUCLK cycles 1-9 with the MREQ# waveform; MREQ# is driven low
and held low to signal a low priority request.]
`
Figure 4-2 High Level Priority

[Timing diagram: CPUCLK cycles 1-9 with the MREQ# waveform; MREQ# is driven high
for one clock and then driven low again to signal a high priority request.]
`
Figure 4-3 A Pending Low Level Priority converted to a High Level Priority

[Timing diagram: CPUCLK cycles 1-9 with the MREQ# waveform; a pending low priority
request (MREQ# low) is converted to a high priority request by driving MREQ# high
for one clock and then low again.]
`
`4.2 Arbiter
`
The arbiter, housed in core logic, needs to understand the arbitration protocol. The
state machine for the arbiter is depicted in Figure 4-4. As shown in Figure 4-4, the
arbiter state machine is reset with PCI_Reset. Explanation of the arbiter is as follows:
`
`1. HOST State - The physical system memory bus is with core logic and no bus
`request from VUMA device is pending.
`
`2. Low Priority Request (LPR) State - The physical system memory bus is with core
`logic and a low priority bus request from the VUMA device is pending.
`
`3. High Priority Request (HPR) State - The physical system memory bus is with core
`logic and a pending low priority bus request has turned into a pending high priority
`bus request.
`
`4. Granted (GNTD) State - Core logic has relinquished the physical system memory
`bus to VUMA device.
`
`5. Preempt (PRMT) State - The physical system memory bus is owned by VUMA
`device, however, core logic has requested VUMA device to relinquish the bus and
`that request is pending.
`
`Figure 4-4 Arbiter State Machine
`
`
`Note:
`
`1. Only the conditions which will cause a transition from one state to another have
`been shown. Any other condition will keep the state machine in the current state.
`
`4.2.1 Arbitration Rules
`
1. VUMA device asserts MREQ# to generate a low priority request and keeps it
asserted until the VUMA device obtains ownership of the physical system memory
bus through the assertion of MGNT#, unless the VUMA device wants to either raise
a high priority request or raise the priority of an already pending low priority
request. In the latter case,
`
`a. If MGNT# is sampled asserted the VUMA device will not deassert MREQ#.
`Instead, the VUMA device will gain physical system memory bus ownership and
`maintain MREQ# asserted until it wants to relinquish the physical system
`memory bus.
`
`b. If MGNT# is sampled deasserted, the VUMA device will deassert MREQ# for
`one clock and assert it again irrespective of status of MGNT#. After reassertion,
`the VUMA device will keep MREQ# asserted until physical system memory bus
`ownership is transferred to the VUMA device through assertion of MGNT#
`signal.
`
`
2. VUMA device may assert MREQ# only for the purpose of accessing the unified
memory area. Once asserted, MREQ# should not be deasserted before MGNT#
assertion for any reason other than raising the priority of the request (i.e., low to
high). No speculative requests and no request abortion are permitted. If MREQ# is
deasserted to raise the priority, it should be reasserted in the next clock and kept
asserted until MGNT# is sampled asserted.
`
`3. Once MGNT# is sampled asserted by VUMA device, it gains and retains physical
`system memory bus ownership until MREQ# is deasserted.
`
4. The case where VUMA device completes its required transactions before core logic
needs the physical system memory bus back can be handled in two ways:
`
`a. VUMA device can deassert MREQ#. In response, MGNT# will be deasserted in
`the next clock edge to change physical system memory bus ownership back to
`core logic.
`b. VUMA device can park on the physical system memory bus. If core logic needs
`the physical system memory bus, it should preempt VUMA device.
`
`5. In case core logic needs the physical system memory bus before VUMA device
`releases it on its own, arbiter can preempt VUMA device from the bus. Preemption
`is signaled to VUMA device by deasserting MGNT#. VUMA device can retain
`ownership of the bus for a maximum of 60 CPUCLK clocks after it has been
`signaled to preempt. VUMA device signals release of the physical system memory
`bus by deasserting MREQ#.
`
`6. When VUMA device deasserts MREQ# to transfer bus ownership back to core
`logic, either on its own or because of a preemption request, it should keep MREQ#
`deasserted for at least two clocks of recovery time before asserting it again to raise a
`request.
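The rules above, together with the states of Figure 4-4, can be condensed into a behavioral sketch of the arbiter. This is illustrative Python, not RTL; the `grant_ok` and `preempt` inputs are assumptions standing in for the implementation specific core logic policy that decides when to release or reclaim the bus.

```python
class VumaArbiter:
    """Behavioral sketch of the core logic arbiter state machine (Figure 4-4).

    States: HOST, LPR, HPR, GNTD, PRMT. One call to clock() models one
    CPUCLK rising edge. mreq_asserted is the sampled MREQ# level
    (True = driven low, i.e. asserted).
    """

    def __init__(self):
        self.state = "HOST"          # PCI_Reset leaves the bus with core logic
        self.mgnt_asserted = False

    def clock(self, mreq_asserted, grant_ok=True, preempt=False):
        if self.state == "HOST":
            if mreq_asserted:
                self.state = "LPR"   # low priority request pending
        elif self.state == "LPR":
            if not mreq_asserted:
                # One-clock deassertion raises the pending request's priority.
                self.state = "HPR"
            elif grant_ok:
                self.state, self.mgnt_asserted = "GNTD", True
        elif self.state == "HPR":
            if grant_ok:             # must occur within 35 CPUCLKs (section 4.4)
                self.state, self.mgnt_asserted = "GNTD", True
        elif self.state == "GNTD":
            if not mreq_asserted:
                # VUMA device released the bus on its own (rule 4a).
                self.state, self.mgnt_asserted = "HOST", False
            elif preempt:
                # Core logic reclaims the bus by deasserting MGNT# (rule 5).
                self.state, self.mgnt_asserted = "PRMT", False
        elif self.state == "PRMT":
            if not mreq_asserted:
                # Release must come within 60 CPUCLKs of preemption.
                self.state = "HOST"
        return self.state
```

Stepping this model through the waveforms of section 4.3 reproduces the Arbiter State rows shown under each example.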
`
`4.3 Arbitration Examples
`
`1. Low priority request and immediate bus release to VUMA device
`
[Timing diagram: CPUCLK cycles 1-9 with MREQ# and MGNT# waveforms. Bus Owner:
Core Logic -> Float -> VUMA Device -> Float -> Core Logic. Arbiter State:
HOST -> LPR -> GNTD -> HOST.]
`
`2. Low priority request and immediate bus release to VUMA device with
`preemption where removal of MGNT# and removal of MREQ# coincide
`
[Timing diagram: CPUCLK cycles 1-9 with MREQ# and MGNT# waveforms. Bus Owner:
Core Logic -> Float -> VUMA Device -> Float -> Core Logic. Arbiter State:
HOST -> LPR -> GNTD -> HOST.]
`
`3. Low priority request and immediate bus release to VUMA device with
`preemption where MREQ# is removed after the current transaction because of
`preemption
`
[Timing diagram: CPUCLK cycles 1-9 with MREQ# and MGNT# waveforms. Bus Owner:
Core Logic -> Float -> VUMA Device -> Float -> Core Logic. Arbiter State:
HOST -> LPR -> GNTD -> PRMT -> HOST.]
`
`4. Low priority request and delayed bus release to VUMA device
`
[Timing diagram: CPUCLK cycles 1-9 with MREQ# and MGNT# waveforms. Bus Owner:
Core Logic -> Float -> VUMA Device -> Float -> Core Logic. Arbiter State:
HOST -> LPR -> GNTD -> HOST.]
`
`5. High priority request and immediate bus release to VUMA device
`
[Timing diagram: CPUCLK cycles 1-9 with MREQ# and MGNT# waveforms. Bus Owner:
Core Logic -> Float -> VUMA Device -> Float -> Core Logic. Arbiter State:
HOST -> LPR -> GNTD -> HOST.]
`
`6. High priority request and immediate bus release to VUMA device with
`preemption where MGNT# and MREQ# removal coincides.
`
[Timing diagram: CPUCLK cycles 1-9 with MREQ# and MGNT# waveforms. Bus Owner:
Core Logic -> Float -> VUMA Device -> Float -> Core Logic. Arbiter State:
HOST -> LPR -> GNTD -> HOST.]
`
7. High priority request and immediate bus release to VUMA device with
preemption where MREQ# is removed after the current transaction because of
preemption.

[Timing diagram: CPUCLK cycles 1-9 with MREQ# and MGNT# waveforms. Bus Owner:
Core Logic -> Float -> VUMA Device -> Float -> Core Logic. Arbiter State:
HOST -> LPR -> GNTD -> PRMT -> HOST.]
`
`8. High priority request and one clock delayed bus release to VUMA device
`
[Timing diagram: CPUCLK cycles 1-9 with MREQ# and MGNT# waveforms. Bus Owner:
Core Logic -> Float -> VUMA Device -> Float -> Core Logic. Arbiter State:
HOST -> LPR -> HPR -> GNTD -> HOST.]
`
`9. High priority request and one clock delayed bus release to VUMA device with
`preemption where MREQ# and MGNT# removal do not coincide
`
[Timing diagram: CPUCLK cycles 1-9 with MREQ# and MGNT# waveforms. Bus Owner:
Core Logic -> Float -> VUMA Device -> Float -> Core Logic. Arbiter State:
HOST -> LPR -> HPR -> GNTD -> PRMT -> HOST.]
`
`10. High priority request and delayed bus release to VUMA device
`
[Timing diagram: CPUCLK cycles 1-9 with MREQ# and MGNT# waveforms. Bus Owner:
Core Logic -> Float -> VUMA Device -> Float -> Core Logic. Arbiter State:
HOST -> LPR -> HPR -> GNTD -> HOST.]
`
`4.4 Latencies
`
1. High Priority Request - As shown in Figure 4-5, the worst case latency for VUMA
device to receive a grant after generating a high priority request is 35 CPUCLK
clocks, i.e., after the arbiter receives a high priority request from VUMA device, core
logic does not need to relinquish the physical system memory bus right away and
can keep the bus for up to 35 CPUCLK clocks.
`
`Figure 4-5 Worst Case Latency for High Priority Request
`
[Timing diagram: CPUCLK cycles 1 through 40 (with a break between cycles 5 and 37)
showing MREQ# and MGNT# waveforms; MGNT# is asserted no later than 35 CPUCLK
clocks after the high priority request.]
`
`2. Low Priority Request - No worst case latency number has been defined by this
`specification for low priority request. VUMA devices should incorporate some
`mechanism to avoid a low priority request being starved for an unreasonable time.
`The mechanism is implementation specific and not covered by the standard. One
`simple reference solution is as follows:
`
`VUMA device incorporates a programmable timer. The timer value is set at the boot
`time. The timer gets loaded when a low priority request is generated. When the
`timer times out, the low priority request is converted to a high priority request.
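As an illustration, the MREQ# pattern that such a timer produces can be modeled as follows. This is a sketch of the reference solution only: it assumes MGNT# never arrives before the timeout, and `timeout_clocks` is a hypothetical parameter corresponding to the programmable timer value set at boot time.

```python
def mreq_sequence(timeout_clocks, total_clocks):
    """Model the MREQ# level per CPUCLK (True = asserted/low) for a low
    priority request promoted by the reference starvation-avoidance timer.

    The request is held asserted; when the timer expires, MREQ# is
    deasserted for exactly one clock and then reasserted, converting the
    pending low priority request into a high priority request (rule 1b).
    """
    levels = []
    for clk in range(total_clocks):
        if clk == timeout_clocks:
            levels.append(False)   # one-clock deassertion raises the priority
        else:
            levels.append(True)    # request held asserted
    return levels
```

The single False sample is the one-clock high pulse of Figure 4-3; after it, MREQ# stays asserted until MGNT# is sampled asserted.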
`
3. Preemption Request to VUMA device - As shown in Figure 4-6, the worst case
latency for VUMA device to relinquish the physical system memory bus after receiving
a preemption request is 60 CPUCLK clocks, i.e., after core logic requests VUMA
device to relinquish the physical system memory bus, VUMA device does not need
to relinquish the bus right away and can keep the bus for up to 60 CPUCLK clocks.
`
`
`Figure 4-6 Worst Case Latency for Preemption Request
`
[Timing diagram: CPUCLK cycles 1 through 63 showing MREQ# and MGNT# waveforms;
MREQ# is deasserted no later than 60 CPUCLK clocks after MGNT# is deasserted.]