(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2003/0076320 A1
     Collodi                           (43) Pub. Date: Apr. 24, 2003

(54) PROGRAMMABLE PER-PIXEL SHADER WITH LIGHTING SUPPORT

(76) Inventor: David Collodi, Taylorville, IL (US)

Correspondence Address:
William F. Prendergast
Brinks Hofer Gilson & Lione
NBC Tower, Suite 3600
P.O. Box 10395
Chicago, IL 60610 (US)

(21) Appl. No.: 10/037,992

(22) Filed: Oct. 18, 2001

Publication Classification

(51) Int. Cl.7 .............................. G06T 15/50
(52) U.S. Cl. .............................. 345/426

(57) ABSTRACT

A programmable hardware per-pixel shading device for use in real-time 3D graphics applications containing additional asynchronous hardware units to assist in real-time per-pixel lighting. The standard programmable shading unit contains hardware for execution of a user-defined sequence of texture lookup and color blend operations. The programmable shader is assisted by a Vector Generation Unit responsible for generating normalized shading vectors and providing said vectors for use within said programmable shading unit. One or more Vector Shading Units operate in parallel with the programmable shading unit, providing hardware accelerated lighting/color calculations and making results available for use in the programmable shading unit.
[Front-page figure (FIG. 1): a rasterizer 6 supplies interpolated vertex values 7 to a Programmable Shading Unit 2, which is connected to a texture memory 3 and to a Lighting Sequence Unit 1 containing a vector generator 8 and a coefficient generator 9; the Programmable Shading Unit outputs the pixel color 5.]
[Drawing Sheet 1 of 3: FIG. 1, an overview of the per-pixel processor (rasterizer 6, vertex values 7, Lighting Sequence Unit 1 containing vector generator 8 and coefficient generator 9, Programmable Shading Unit 2, texture memory 3, pixel color output 5); FIG. 2, the operation of the Vector Generation Unit (combine interpolated normal data with the processed bump map, generate the surface angle vector, process the eye vector, generate the view reflection vector).]
[Drawing Sheet 2 of 3: FIG. 3, the Point Light Unit (generate raw light vector, calculate length, normalize light vector, calculate distance scalar); FIG. 4, the Vector Shading Unit (calculate vector dot product, scale vector dot product, scale color value).]
[Drawing Sheet 3 of 3: FIG. 5, the Accumulation Unit (combine diffuse color vectors, combine specular color vectors); FIG. 6, the Lighting Sequence Unit (Vector Generation Unit 71, Vector Shading Unit 74, Accumulation Unit 75).]
PROGRAMMABLE PER-PIXEL SHADER WITH LIGHTING SUPPORT

BACKGROUND OF THE INVENTION

[0001] The present invention relates in general to the field of real-time computer generated graphics hardware. In particular, the present invention relates to the field of per-pixel shading and programmable per-pixel shading within real-time image generation devices. Until recently, most real-time 3D graphics hardware was limited to per-vertex lighting and shading operations, such as Gouraud shading and equivalents. Per-vertex operations, while computationally efficient, tend to produce visual inaccuracies for the lighting of some models and prohibit the rendering of complex surface effects such as true surface curvature, bump mapping, and proper specular reflections. More complicated effects such as Phong shading and bump mapping require that shading operations are done for substantially each drawn pixel on a polygon surface.
[0002] Recently, advances in hardware design and processor speed have allowed the application of a limited number of per-pixel effects to be produced in real-time graphics hardware. Recent prior art shading devices have included user-selected multi-texturing capabilities wherein end users were able to direct the texture-blending pipeline to implement a number of per-pixel effects. One such prior art approach is the implementation of Environment Mapped Bump Mapping (EMBM). In a standard EMBM implementation, two texture maps are provided: a bump map representing surface perturbation and an environment map representing the light from the surrounding environment. Initially, the bump map value(s) corresponding to the rendered pixel are found by addressing the bump map from interpolated vertex coordinate values. The bump map value(s) are used to compute an environment map address, which is used in turn to address the environment map. The environment map value is then taken as the pixel color.
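
As an illustration of the EMBM sequence just described, the following minimal sketch shows a bump map value being turned into an environment map address and then into a pixel color. The texture layout, coordinate handling, and offset scale are assumptions made for the example, not details taken from the patent.

```python
def embm_pixel_color(bump_map, env_map, u, v, env_u, env_v, scale=1.0):
    """Minimal Environment Mapped Bump Mapping sketch.

    bump_map: 2-D array of (du, dv) perturbation pairs, indexed [row][col].
    env_map:  2-D array of RGB colors representing the surrounding environment.
    (u, v):   interpolated bump map coordinates for the rendered pixel.
    (env_u, env_v): unperturbed environment map address for the pixel.
    """
    # 1. Address the bump map with the interpolated coordinates.
    du, dv = bump_map[v][u]

    # 2. Use the bump value to compute an environment map address.
    eu = int(env_u + scale * du) % len(env_map[0])
    ev = int(env_v + scale * dv) % len(env_map)

    # 3. The environment map value is taken as the pixel color.
    return env_map[ev][eu]
```
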
[0003] Currently, state of the art pixel shaders allow for a more involved form of user configurability. In some cases, massively parallel pixel architectures are provided, consisting of a plurality of pixel processors wherein each processor is capable of performing a user-specified sequence of instructions and mathematical operations on local data to ultimately determine pixel color. In such parallel systems, complex pixel operations can be performed quickly because the number of parallel processors can overcome the time to perform the pixel computations. However, in systems such as these, the hardware cost of supporting multiple general purpose pixel processors ultimately tends to outweigh the benefits of supporting such complex pixel operations. Other state-of-the-art programmable per-pixel shading systems provide a tighter compromise between hardwired functionality and general purpose operation by allowing a limited, user-programmable sequence of specialized pixel operations to be performed within a more traditional pixel rendering pipeline. The Microsoft DirectX8 graphics API presents a standard set of pixel operations for use in current and future programmable per-pixel shading hardware. Compliant hardware provides for the implementation of said pixel operations, memory units to hold operation result values, and memory to hold a user-defined sequence of operations (a program).
[0004] Programmable per-pixel shading architectures such as the DirectX8 compliant hardware previously described offer a great deal of user customizability while retaining the speed advantages of dedicated 3D graphics hardware. The architecture, however, presents a number of limitations as well. Most vector operations are typically limited to 8-12 bit accuracy, which is insufficient for some lighting applications such as point lighting and high order specularity. Although programmable per-pixel architectures can be user-defined to perform a variety of operations, they lack parallel processing support for many general-purpose lighting calculations. While programmable architectures allow for the calculation of standard per-pixel surface normal, N, and eye reflection, R, vectors, using the programmable hardware to do so can waste valuable calculation and texture resources and can severely limit the accuracy of the resultant vectors. Likewise, although pixel operations may be used to perform dot product operations for standard diffuse and specular light intensity calculations, the general purpose nature of the per-pixel operations does not lend itself to the parallelism necessary to process multiple light sources per pixel. Additionally, general purpose per-pixel operations offer very limited support for point lights and complex specularity functions. There exists a need, therefore, for a programmable per-pixel architecture capable of providing parallel processing support for general purpose lighting computations wherein the results of said lighting computations are made available for use within a user-programmable framework.

BRIEF SUMMARY OF THE INVENTION
[0005] The present invention details a hardware extension to a programmable per-pixel shading device. The programmable shading unit is assisted by a lighting sequence unit comprising a vector generation unit, one or more point light units, one or more vector shading units and an optional accumulation unit. The vector generation unit interpolates a surface normal vector, combines it with an optional bump map vector and produces a normalized surface angle vector and a normalized view reflection vector, wherein each vector may be used in the programmable unit. The point light unit supplies a normalized point light vector and a fade-off coefficient value to the programmable unit. The vector shading unit performs light-surface dot product calculations, optionally scales the dot product value and uses the scaled dot product to modify a color value. The accumulation unit combines colors into two separate channels, a diffuse color and a specular color, and provides said colors for use in the programmable unit.

BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS
[0006] FIG. 1 is an overview of a preferred embodiment of the present invention.
[0007] FIG. 6 depicts the internal components of the Lighting Sequence Unit.
[0008] FIG. 2 illustrates the operation of the Vector Generation Unit.
[0009] FIG. 3 shows a logic view of the operation of the Point Light Unit.
[0010] FIG. 4 illustrates the operation of the Vector Shading Unit.
[0011] FIG. 5 illustrates the operation of the Accumulation Unit.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0012] The present invention performs per-pixel calculations within real-time 3D graphics hardware. Most traditional 3D graphics hardware systems take polygonal (usually triangular) primitives, convert the primitives from world-space to screen-space if necessary, and rasterize the transformed primitive to the screen. During rasterization, the primitive is broken up into a plurality of pixels and each visible pixel is drawn to screen memory. Per-pixel processing units are used to determine pixel color before each pixel is written to screen memory. Per-pixel processing units need not necessarily operate on every pixel in the polygon or the scene, as some pixels may be excluded for several reasons including, but not limited to, occlusion by a nearer surface. The per-pixel processing units generally determine pixel color from a combination of texture map lookups, lighting calculations, and interpolated vertex color values. Prior art per-pixel processors are capable of performing a user-programmable sequence of pixel commands which operate on pixel data to produce the final written color value. The present invention details an improved programmable per-pixel processor able to calculate per-pixel lighting effects faster and more efficiently than prior art implementations.
[0013] The present invention seeks to improve lighting performance by the inclusion of dedicated hardware lighting sequence processing while maintaining and enhancing user customization by allowing said dedicated lighting hardware to provide feedback data to hardware registers which are readable (and potentially writeable) by the user-programmable sequence of pixel operations.
[0014] FIG. 1 presents an overview of the logic elements of a per-pixel processor that is a preferred embodiment of the present invention. Said per-pixel processor performs the same general functionality as many prior art per-pixel processors in that it takes, as input, a number of vertex values from a rasterizer, is connected to a texture memory, and ultimately outputs a pixel color. The per-pixel processor, as with prior art approaches, operates on each drawn pixel of the rendered surface primitive. It may be desirable to increase rendering efficiency by providing multiple per-pixel processors running in parallel in order to multiply the pixel throughput. The preferred embodiment detailed herein represents only a single per-pixel processor, primarily for the purpose of providing a clear example. As will be recognized by those skilled in the art, multiple copies of the processor detailed herein, as well as those of alternate embodiments, may be used in parallel within a graphics system.
[0015] As illustrated by FIG. 1, the preferred embodiment of the present invention is comprised of a Programmable Shading Unit 2 operatively connected to a texture memory 3, a rasterizer 6 and a Lighting Sequence Unit 1. The rasterizer 6, whose operation is well known to those skilled in the art, iterates through the pixels of the current polygon, optionally performs depth buffering operations and provides interpolated vertex values for each pixel. The Programmable Shading Unit 2 is responsible for executing a user-defined sequence of pixel operations. The available operations in a preferred embodiment of the present invention include, but are not limited to, operations for: using a vector value to address a texture, cube, or sphere map; calculation of a 3 or 4 dimensional dot product value; addition and subtraction of scalar and vector values; vector-scalar multiplication and division; and component-wise vector multiplication. Alternate embodiments provide different sets of pixel operations. Said pixel operations are performed on vector and scalar values stored in local memory registers. The Programmable Shading Unit has a number of registers to store scalar and vector values for per-pixel operations. Certain registers are also used to communicate data back and forth with the Lighting Sequence Unit 1. The general embodiment of the Lighting Sequence Unit illustrated in FIG. 1 contains dedicated hardware elements for the generation of normalized surface angle and reflection vectors 8, and hardware elements for the production of light coefficient values 9 for one or more light sources. Vertex values input from the rasterizer 6 to the Programmable Shading Unit at 7 consist of scalar or vector quantities interpolated from values given at each vertex of said polygonal primitive. The Programmable Shading Unit produces a pixel color value at 5, consisting of an RGB color value or an RGBA color-alpha value.
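
To make the kinds of pixel operations listed above concrete, the sketch below runs a user-defined operation sequence against a small set of named registers. The opcode names, register model, and texture addressing are hypothetical illustrations; they are not the patent's instruction set or the DirectX8 encoding.

```python
def run_pixel_program(program, registers, textures):
    """Execute a user-defined sequence of pixel operations on local registers.

    program:   list of (opcode, destination, source, ...) tuples.
    registers: dict of named scalar or 3/4-component vector values.
    textures:  dict of named 2-D arrays of color vectors.
    """
    for op, dst, *src in program:
        if op == "texld":                      # address a texture map with a vector value
            tex, addr = textures[src[0]], registers[src[1]]
            rows, cols = len(tex), len(tex[0])
            u, v = int(addr[0] * (cols - 1)), int(addr[1] * (rows - 1))
            registers[dst] = tex[v][u]
        elif op in ("dp3", "dp4"):             # 3- or 4-dimensional dot product
            n = 3 if op == "dp3" else 4
            a, b = registers[src[0]], registers[src[1]]
            registers[dst] = sum(x * y for x, y in zip(a[:n], b[:n]))
        elif op == "add":                      # vector addition
            a, b = registers[src[0]], registers[src[1]]
            registers[dst] = tuple(x + y for x, y in zip(a, b))
        elif op == "sub":                      # vector subtraction
            a, b = registers[src[0]], registers[src[1]]
            registers[dst] = tuple(x - y for x, y in zip(a, b))
        elif op == "mul":                      # component-wise vector multiplication
            a, b = registers[src[0]], registers[src[1]]
            registers[dst] = tuple(x * y for x, y in zip(a, b))
        elif op == "scale":                    # vector-scalar multiplication
            a, s = registers[src[0]], registers[src[1]]
            registers[dst] = tuple(x * s for x in a)
    return registers
```
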
[0016] FIG. 6 illustrates a preferred embodiment of the internal operations of the Lighting Sequence Unit. The Vector Generation Unit 71 is responsible for the calculation of a 3-dimensional, unit-length surface normal vector, N, and a 3-dimensional, unit-length view reflection vector, R. The final N vector value is computed from an interpolated vertex value and is optionally perturbed by a bump map vector. The R vector value is found by reflecting an optionally variable view (eye) vector around the surface normal N vector. One or more Point Light Units 76 provide normalized point light vectors and scalar distance coefficients. One or more Vector Shading Units 74 are responsible for dot product and color scaling operations for light source color and direction vectors. An optional Accumulation Unit 75 sums two separate channels of color values to produce diffuse and specular color values. Communications between the Programmable Unit and the Lighting Sequence Unit are done, in a preferred embodiment, through shared registers. The main goal of the aforementioned hardware units is to provide parallel hardware support for commonly used lighting tasks rather than putting the burden entirely on the Programmable Shading Unit. Hardware lighting units can be assembled in a parallel, pipelined fashion to allow for a completion rate of one pixel per clock cycle. Pipelining the user-programmable operations in the Programmable Shading Unit is more difficult since the sequence of operations is variable. The use of the aforementioned hardware units is advantageous since it removes much of the burden of lighting from the Programmable Shading Unit, freeing its resources for the production of user-defined special effects. The Programmable Unit has even more flexibility since the shading values produced by the various other hardware units are made available for use within the Programmable Unit. This arrangement enhances the user customization of shading routines while providing base per-pixel lighting at rates of 1 pixel per clock cycle or more.
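
A minimal sketch of the shared-register communication described above, with invented register names and unit interfaces: the dedicated lighting units publish per-pixel results that the user-programmable sequence can then read.

```python
class SharedRegisters:
    """Registers readable by both the Lighting Sequence Unit and the
    Programmable Shading Unit (the register names below are hypothetical)."""

    def __init__(self):
        self.values = {}

    def write(self, name, value):   # written by the dedicated lighting units
        self.values[name] = value

    def read(self, name):           # read by the user-programmable pixel program
        return self.values[name]


def lighting_sequence_pass(regs, vector_generation, point_lights, vector_shaders):
    """Run the dedicated units for one pixel and publish their results."""
    n, r = vector_generation()                 # normalized surface and reflection vectors
    regs.write("N", n)
    regs.write("R", r)
    for i, point_light in enumerate(point_lights):
        regs.write("P%d" % i, point_light())   # normalized light vector + distance scalar
    for i, shade in enumerate(vector_shaders):
        regs.write("C%d" % i, shade(regs))     # scaled light color for this source
```
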
[0017] FIG. 2 illustrates the operation of the Vector Generation Unit. At 11, interpolated normal data is received. Several formats of interpolated normal data may be used. In a preferred embodiment, said normal data consists of a 2-dimensional vector interpolated from polygon vertex normal vectors transformed to 2-dimensional vectors by the process described in the U.S. patent application filed in the name of David J. Collodi, on Jan. 18, 2001, bearing Ser. No.
09/766,277 and entitled Method and System For Improved Per-Pixel Shading in a Computer Graphics System, the disclosure of which is hereby incorporated by reference. The interpolated normal vector may be in either fixed point or floating point format, but if a fixed point representation is used, at least 12 bits of accuracy are assumed. At 12, the interpolated normal vector is received and passed on to the next stage.
[0018] At 14, the interpolated normal vector is (optionally) perturbed by a bump-map vector. A bump map vector is passed in at 13. In a preferred embodiment, a 2-dimensional bump map vector is passed in from the Programmable Shading Unit. The Programmable Shading Unit may be configured to generate the bump map vector procedurally or provide the bump map vector as a result of a texture map lookup (or a combined result of multiple texture map lookups), wherein said texture map contains 2-dimensional vector values or contains height values which are subsequently converted to a two-dimensional vector value. The timing of the Programmable and Lighting Sequence Units must be configured so that the Programmable Unit is able to provide a stable value at this stage of execution. Before being combined with the interpolated normal vector, the bump map vector may optionally be transformed by a 2x2 matrix supplied by the Programmable Shading Unit. The bump-map vector b is then combined with said interpolated normal vector n by vector addition to produce composite surface angle vector c, where:

c=b+n

[0019] The c vector is a 2-dimensional vector that represents the surface direction of the current pixel influenced by surface curvature and (optionally) a bump map value. If bump mapping is not used for the current pixel, then the c vector is simply set equal to the n vector. The composite surface vector is output at 15.
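
A minimal sketch of this combination stage, assuming 2-D vectors are plain tuples and the optional 2x2 matrix is given in row-major order (both conventions are illustrative, not specified by the patent):

```python
def composite_surface_vector(n, b=None, m=None):
    """Combine the interpolated 2-D normal n with an optional 2-D bump vector b.

    m, if supplied, is a 2x2 matrix ((m00, m01), (m10, m11)) from the
    Programmable Shading Unit applied to b before the combination.
    Returns the composite surface angle vector c = b + n (or c = n if no bump).
    """
    if b is None:
        return n                                  # bump mapping not used: c = n
    if m is not None:                             # optional 2x2 transform of b
        b = (m[0][0] * b[0] + m[0][1] * b[1],
             m[1][0] * b[0] + m[1][1] * b[1])
    return (n[0] + b[0], n[1] + b[1])             # c = b + n
```
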
[0020] At 17, the surface angle vector is generated from the composite surface vector c received at 15. A preferred embodiment uses the component values of vector c to address a 2-dimensional lookup table, wherein said lookup table value is used to produce the substantially normalized 3-dimensional surface angle vector, N. The term substantially normalized is used henceforth to describe a vector whose length is in the range of 0.8-1.2, i.e. the vector is approximately unit length. A preferred method for obtaining a substantially normalized 3-dimensional vector from a 2-dimensional lookup table is detailed in the above-referenced U.S. patent application filed in the name of David J. Collodi, on Jan. 18, 2001, and bearing Ser. No. 09/766,277.

[0021] Alternate embodiments use dedicated hardware to directly calculate the aforementioned N vector from an interpolated 3-dimensional polygon vertex vector combined with a 3-dimensional bump map vector.
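
The sketch below illustrates converting a 2-D composite vector into a substantially normalized 3-D vector through a precomputed 2-D lookup table. The particular table construction shown (treating the two components as x/y offsets and storing a normalized (x, y, 1) direction per cell) is only an assumption for the example; the patent defers the exact method to the referenced application Ser. No. 09/766,277.

```python
import math

def build_normal_lut(size=64, max_offset=1.0):
    """Precompute a size x size table mapping 2-D offsets to unit 3-D vectors."""
    lut = []
    for j in range(size):
        row = []
        for i in range(size):
            # Map table indices back to signed 2-D offsets in [-max_offset, max_offset].
            x = (2.0 * i / (size - 1) - 1.0) * max_offset
            y = (2.0 * j / (size - 1) - 1.0) * max_offset
            length = math.sqrt(x * x + y * y + 1.0)
            row.append((x / length, y / length, 1.0 / length))  # normalized (x, y, 1)
        lut.append(row)
    return lut

def surface_angle_vector(c, lut, max_offset=1.0):
    """Address the table with the components of c to obtain a substantially
    normalized 3-D surface angle vector N."""
    size = len(lut)

    def index(component):
        component = max(-max_offset, min(max_offset, component))  # clamp to table range
        return int(round((component + max_offset) / (2.0 * max_offset) * (size - 1)))

    return lut[index(c[1])][index(c[0])]
```
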
[0022] The normalized surface angle vector N is output at 19 for use within the Programmable Shading Unit. Before N is output, it may need to be converted to/from fixed-point format and/or reduced in accuracy, dependent on the specification for the Programmable Shading Unit registers and operations. In cases where the N vector is reduced in accuracy for output to the Programmable Shading Unit, a full precision copy of N is kept and used for subsequent lighting computations in the Vector Shading Units.

[0023] At 16, eye vector information is received. A preferred embodiment receives the screen coordinates for the current pixel, which are then converted at 16 into a 2-dimensional offset vector, e. At 18, a normalized view reflection vector is generated. The 2-dimensional offset vector, e, is input as well as the composite surface vector c. The e and c vectors are then combined to produce a 2-dimensional reflection vector x, where:

[0024] Next the x vector is used to produce the substantially normalized view reflection vector R. The conversion of x to R may be accomplished by the aforementioned procedures used to produce the N vector from the c vector. At 20, the R vector is output to the Programmable Shading Unit. As with the N vector, a lower precision vector may be given to the Programmable Shading Unit and a full precision copy of R may be kept for subsequent operations. Alternate embodiments may use different algorithms for the generation of the R and N vectors provided that they are capable of ensuring a consistent format for each vector.
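
Because the specific combination of e and c that forms x is given in the referenced application rather than reproduced above, the sketch below treats that combination as a caller-supplied function (a plain vector sum is used only as a placeholder, not as the patent's formula) and reuses the surface_angle_vector helper from the previous sketch for the conversion of x to R:

```python
def view_reflection_vector(e, c, lut, combine=None, max_offset=1.0):
    """Produce a substantially normalized view reflection vector R.

    e: 2-D eye offset vector derived from the pixel's screen coordinates.
    c: 2-D composite surface vector from the previous stage.
    combine: how e and c form the 2-D reflection vector x; the actual formula
             comes from the referenced application, so a simple sum is used
             here purely as a placeholder.
    """
    if combine is None:
        combine = lambda ev, cv: (ev[0] + cv[0], ev[1] + cv[1])  # placeholder only
    x = combine(e, c)
    # Convert x to R the same way the N vector is produced from c.
    return surface_angle_vector(x, lut, max_offset)
```
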
[0025] FIG. 3 illustrates the operation of a single Point Light Unit, although a preferred embodiment of the present invention may contain multiple Point Light Units. At 36, point light data is received. In a preferred embodiment, said point light data consists of a 3-dimensional surface position vector, s, representing the position in some fixed coordinate space (such as world space or view space) of the current pixel, and a point light position vector, p, representing the position (in the same coordinate space) of a point light. In cases where multiple Point Light Units exist, the same s vector is received by each unit but each unit may receive a separate p vector. At 30, the non-normalized point light vector, l, is generated where:

l=p-s

[0026] At 31, the scalar distance-squared value, d, is calculated where:

d=l·l

[0027] At 32, the normalized light vector, P, is calculated by normalizing the l vector. The normalization of the l vector is performed, in a preferred manner, by the operations detailed in the above-referenced U.S. patent application filed in the name of David J. Collodi, on Jan. 18, 2001, and bearing Ser. No. 09/766,277. However, since the l vector can vary drastically in length, it may be desirable to scale the l vector before normalization. The substantially normalized point light vector, P, is output at 34.

[0028] At 33, a distance scalar value, h, is calculated where:

h=c/d

[0029] where c is a user selected scalar value. It may be desirable to clamp the d value to a lower bound of 1 before calculating the h value. The h value represents the intensity of light at the surface point associated with the current pixel. At 35, said distance scalar value, h, is output. In a preferred practice of the present invention, the P vector is output as a 4-dimensional vector with the h value used as the 4th vector component.
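
A minimal sketch of one Point Light Unit pass under the reading of the equations above (l = p - s, d = l·l, h = c/d with d clamped to at least 1); these formulas are reconstructed from the surrounding text, and a direct square-root normalization stands in for the table-based normalization of the referenced application.

```python
import math

def point_light_unit(s, p, c=1.0):
    """Point Light Unit for one pixel.

    s: 3-D surface position of the current pixel.
    p: 3-D position of the point light (same coordinate space as s).
    c: user-selected intensity scalar.
    Returns a 4-D vector (Px, Py, Pz, h): the substantially normalized point
    light vector with the distance scalar h as the 4th component.
    """
    # Raw (non-normalized) point light vector l = p - s.
    l = (p[0] - s[0], p[1] - s[1], p[2] - s[2])

    # Distance-squared value d = l . l, clamped to a lower bound of 1.
    d = max(1.0, l[0] * l[0] + l[1] * l[1] + l[2] * l[2])

    # Normalize l (shown here with a direct square root for illustration).
    length = math.sqrt(l[0] * l[0] + l[1] * l[1] + l[2] * l[2])
    P = tuple(x / length for x in l) if length > 0.0 else (0.0, 0.0, 0.0)

    # Distance scalar h = c / d: light intensity at the surface point.
    h = c / d

    return P + (h,)
```
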
[0030] FIG. 4 illustrates the operation of the Vector Shading Unit. At 41 and 42, a light vector, L, and a surface vector, S, are input. The light vector may be a user-provided light vector or a point light vector generated from the Point Light Unit. As a preferred practice of the present invention, it is assumed that the L vector is a 4-dimensional vector value where the 4th vector component consists of the aforementioned scalar h value in the case of a point light vector and consists of a user defined scalar value (usually, but not necessarily, 1.0) in the case of a user-provided light vector. The surface vector, S, typically consists of either the surface angle vector, N, or the view reflection vector, R, output by the Vector Generation Unit as previously detailed. However, a user-provided S value may also be used. In either case, the S vector is assumed to be a 4-dimensional vector value. In the case where the S vector is taken as the N or R vector, as previously described, a value of 1.0 may be taken as the 4th vector component to assure that S is a 4-dimensional vector. In a preferred embodiment, the Vector Shading Unit may be programmed to receive the L and S vectors from user-specified hardware registers, wherein said registers may be either special-purpose registers designated to hold the output vectors of the Point Light/Vector Generation Units or general-purpose registers in the Programmable Shading Unit.
[0031] At 43, a raw scalar dot product value, r, is calculated where:

r=L·S

[0032] The r value may be provided for use in the Programmable Shading Unit at 46. At 44, the raw dot product value, r, is optionally scaled. The purpose of this stage is primarily to incorporate the effects of shadows and surface specularity functions. Two scalar values, y and z, may be user-provided. A 2-dimensional specular map may also be specified, wherein said map may reside in texture memory or local shader memory. The r and y values are used as a coordinate pair (r,y) to address said specular map, which provides a scalar value w. If a specular map is not used, the w value is set equal to r. The w value is then multiplied by the z value, producing the scaled dot product value q. At 47, the q value is provided for use in the Programmable Shading Unit. At 45, a 3 or 4-dimensional color vector c is scaled by said q value. Said color vector is user-provided through the Programmable Shading Unit, i.e., a general or special purpose register may be used to hold said color vector. A resulting color vector e is calculated where:

e=qc

[0033] The e value is output at 48.
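
A sketch of one Vector Shading Unit pass following the steps above; the 4-D dot product r = L·S is reconstructed from the text, and the specular-map addressing convention (clamping r and y into [0, 1] before indexing) is an assumption made for the example.

```python
def vector_shading_unit(L, S, color, y=0.0, z=1.0, specular_map=None):
    """Vector Shading Unit for one pixel.

    L, S:  4-D light and surface vectors.
    color: 3- or 4-D color vector supplied through a shading register.
    y, z:  user-provided scalars for shadow/specularity scaling.
    specular_map: optional 2-D table addressed by the pair (r, y).
    Returns (r, q, e): raw dot product, scaled dot product, scaled color.
    """
    # Raw 4-dimensional dot product r = L . S.
    r = sum(a * b for a, b in zip(L, S))

    # Optional scaling stage: look up w from the specular map, then q = w * z.
    if specular_map is not None:
        rows, cols = len(specular_map), len(specular_map[0])
        ri = int(max(0.0, min(1.0, r)) * (cols - 1))   # illustrative addressing
        yi = int(max(0.0, min(1.0, y)) * (rows - 1))
        w = specular_map[yi][ri]
    else:
        w = r
    q = w * z

    # Scale the user-provided color vector: e = q * c.
    e = tuple(q * component for component in color)
    return r, q, e
```
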
[0034] An alternate embodiment implements the Point Light Unit and Vector Shading Unit as fully programmable hardware elements wherein the sub-tasks of said hardware units may be executed independently through commands from the Programmable Shading Unit. The Point Light and Vector Shading Units should be designed to provide a throughput rate of one operation per clock cycle provided their subtask operations are performed in a pre-defined sequence.
[0035] FIG. 5 illustrates the operation of the Accumulation Unit. The Accumulation Unit is responsible for the combination of specular and diffuse light colors. At 55, one or more diffuse color vectors are input, wherein said color vectors are 3 or 4-dimensional vector values. In a preferred practice of the present invention, the Accumulation Unit is configurable to receive diffuse color vectors from a user-defined set of general or special purpose registers. At 59, said diffuse color vectors are combined through vector addition to produce a combined diffuse color vector. Said combined diffuse color vector is output at 57. At 56, one or more specular color vectors are input. At 60, said specular color vectors are combined through vector addition, producing a combined specular color vector which is output at 58. Alternate embodiments exclude the Accumulation Unit and allow the user to combine diffuse and specular colors through the Programmable Shading Unit.
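
A minimal sketch of the Accumulation Unit, assuming 3-component color vectors for brevity:

```python
def accumulation_unit(diffuse_colors, specular_colors):
    """Sum the diffuse and specular color vectors into two combined channels."""
    def vector_sum(colors):
        total = [0.0, 0.0, 0.0]
        for color in colors:
            for i in range(3):
                total[i] += color[i]
        return tuple(total)

    return vector_sum(diffuse_colors), vector_sum(specular_colors)
```
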
[0036] After the combined diffuse and specular color vectors are arrived at, the user may use them to modify the pixel color in the Programmable Shading Unit. A preferred embodiment provides dedicated hardware to perform a component-wise multiplication of a surface color vector and the combined diffuse color vector, followed by the addition of the combined specular color value. Using dedicated hardware for this task, as opposed to general purpose Programmable Shading operations, allows a standard pixel lighting sequence to be completed at a rate of one pixel per clock cycle (or higher). Alternate embodiments use general purpose Programmable Shading operations to perform diffuse and specular light combination.
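
The final combination described above (surface color modulated component-wise by the combined diffuse color, then the combined specular color added) can be sketched as:

```python
def apply_lighting(surface_color, combined_diffuse, combined_specular):
    """pixel = surface_color * diffuse + specular, computed component-wise."""
    return tuple(surface_color[i] * combined_diffuse[i] + combined_specular[i]
                 for i in range(3))
```
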
[0037] The pixel color may be further modified, e.g., to incorporate fogging effects, or directly output from the Programmable Shading Unit as the final pixel color. Practices of the present invention have allowed for configurable per-pixel lighting at rates of one pixel per clock cycle or higher to be achieved in real-time 3D graphics hardware.
[0038] The detailed description presented herein is primarily for the purposes of example and, as should be recognized by those skilled in the applicable art, changes and modifications may be made to the disclosed implementation without departing from the scope of the present invention as defined by the appended claims and their equivalents.
1. A per-pixel graphics processing unit for use in a system for lighting a plurality of polygon surfaces in a rendering system, the graphics processing unit comprising:

a. dedicated hardware logic operable to perform a sequence of lighting calculations that generate per-pixel lighting equation lighting coefficients for a plurality of the drawn pixels; and

b. per-pixel user programmable hardware logic communicating with the dedicated hardware logic to receive the per-pixel lighting coefficients and perform additional shading calculations using the lighting coefficients.

2. The graphic processing unit of claim 1 wherein the dedicated hardware logic communicates with the programmable hardware logic through one or more shared registers.

3. The graphic processing unit of claim 1 wherein the dedicated hardware logic comprises logic that uses the lighting coefficients in the calculation of a color value.

4. The graphic processing unit of claim 1 wherein the dedicated hardware logic includes a vector generation unit that receives vertex values for the polygon surfaces and calculates a 3-dimensional, unit-length surface normal vector.
5. The graphic processing unit of claim 4 wherein the vector generation unit calculates a 3-dimensional, unit-length view reflection vector.

6. The graphic processing unit of claim 1 wherein the dedicated hardware logic includes a point light unit that calculates normalized point light vectors.

7. The graphic processing unit of claim 6 wherein the point light unit calculates scalar distance coefficients.

8. The graphic processing unit of claim 1 wherein the dedicated hardware logic includes a vector shading unit that performs vector dot product operations.

9. The graphic processing unit of claim 8 wherein the vector shading unit performs color scaling operations.

10. The graphic processing unit of claim 4 wherein the vector generation unit receives a bump map vector and combines the bump map vector with the normal vector to produce a composite surface angle vector.

11. The graphic processing unit of claim 4 wherein the vector shading unit receives eye vector information and generates a view reflection vector therefrom.

12. The graphic processing unit of claim 1 further comprising a texture memory in communication with the programmable hardware logic.

13. A per-pixel graphics processing unit for use in a system for lighting a plurality of polygon surfaces in a rendering system, the graphics processing unit comprising:

a. dedicated hardware logic operable to perform a sequence of lighting calculations that generate per-pixel specular lighting value coefficients for a plurality of the drawn pixels; and

b. per-pixel user programmable hardware logic communicating with the dedicated hardware logic to receive the per-pixel lighting coefficients and perform additional shading calculations using the specular lighting value coefficients.

14. The graphic processing unit of claim 13 wherein the dedicated hardware logic communicates with the programmable hardware logic through one or more shared registers.

15. The graphic processing unit of claim 13 wherein the dedicated hardware logic comprises logic that uses the lighting coefficients in the calculation of a color value.

16. The graphic processing unit of claim 13 wherein the dedicated hardware logic includes a vector generation unit that receives vertex values for the polygon surfaces and calculates a 3-dimensional, unit-length surface normal vector.

17. The graphic processing unit of claim 16 wherein the vecto
