`
TOMAS AKENINE-MÖLLER
ERIC HAINES
`
`
`
`
Real-Time Rendering
Second Edition

Tomas Akenine-Möller
`
`Eric Haines
`
`
`
`A K Peters
`Wellesley, Massachusetts
`
`
`
`
`
Editorial, Sales, and Customer Service Office
A K Peters, Ltd.
888 Worcester Street, Suite 230
Wellesley, MA 02482
www.akpeters.com
`
`Copyright © 2002 by AK Peters, Ltd.
`
All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without written permission from the copyright owner.
`
Library of Congress Cataloging-in-Publication Data
`
Möller, Tomas, 1971–
  Real-time rendering / Tomas Möller, Eric Haines.—2nd ed.
    p. cm.
  Includes bibliographical references and index.
  ISBN 1-56881-182-9
  1. Computer graphics. 2. Real-time data processing. I. Haines, Eric, 1958– . II. Title.
`
T385 .M635 2002
006.6'773—dc21    2002025151

The chameleon image on the cover: Courtesy of NVIDIA Corp.

Printed in India

08 07 06    10 9 8 7 6 5
`
`
`
`
`
`
Chapter 5
Texturing

"All it takes is for the rendered image to look right."
—Jim Blinn
`
A surface's texture is its look and feel—just think of the texture of an oil painting. In computer graphics, texturing is a process that takes a surface and modifies its appearance at each location using some image, function, or other data source. As an example, instead of precisely representing the geometry of a brick wall, a color image of a brick wall is applied to a single polygon. When the polygon is viewed, the color image appears where the polygon is located. Unless the viewer gets close to the wall, the lack of geometric detail (e.g., the fact that the image of bricks and mortar is on a smooth surface) will not be noticeable. Huge modeling, memory, and speed savings are obtained by combining images and surfaces in this way. Color image texturing also provides a way to use photographic images and animations on surfaces.
However, some textured brick walls can be unconvincing for reasons other than lack of geometry. For example, if the bricks are supposed to be shiny, whereas the mortar is not, the viewer will notice that the shininess is the same for both materials. To produce a more convincing experience, a specular highlighting image texture can also be applied to the surface. Instead of changing the surface's color, this sort of texture changes the wall's shininess depending on location on the surface. Now the bricks have a color from the color image texture and a shininess from this new texture. Once the shiny texture has been applied, however, the viewer may notice that now all the bricks are shiny and the mortar is not, but each brick face appears to be flat. This does not look right, as bricks normally have some irregularity to their surfaces. By applying bump mapping, the surface normals of the bricks may be varied so that when they are rendered, they do not appear to be perfectly smooth. This sort of texture wobbles the direction of the polygon's original surface normal for purposes of computing lighting.
`
`
`
`
`
`
`These are just three examples of the types of problems that can be
`solved with textures. In this chapter, texturing techniques are covered in
`detail. First, a general framework of the texturing process is presented.
`Next, we focus on using images to texture surfaces, since this is the most
`popular form of texturing used in real-time work. The various techniques
`for improving the appearance of image textures are detailed, and then
`methods of getting textures to affect the surface are explained.
`
5.1 Generalized Texturing
`
Texturing, at its simplest, is a technique for efficiently modeling the surface's properties. One way to approach texturing is to think about what happens for a single sample taken at a vertex of a polygon. As seen in the previous chapter, the color is computed by taking into account the lighting and the material, as well as the viewer's position. If present, transparency also affects the sample, and then the effect of fog is calculated. Texturing works by modifying the values used in the lighting equation. The way these values are changed is normally based on the position on the surface. So, for the brick wall example, the color at any point on the surface is replaced by a corresponding color in the image of a brick wall, based on the surface location. The specular highlight texture modifies the shininess value, and the bump texture changes the direction of the normal, so each of these changes the result of the lighting equation.

Texturing can be described by a generalized texture pipeline. Much terminology will be introduced in a moment, but take heart: Each piece of the pipeline will be described in detail. This full texturing process is not performed by most current real-time rendering systems, though as time goes by, more parts of the pipeline will be incorporated. Once we have presented the entire process, we will examine the various simplifications and limitations of real-time texturing.
A location in space is the starting point for the texturing process. This location can be in world space, but is more often in the model's frame of reference, so that as the model moves, the texture moves along with it. Using Kershaw's terminology [418], this point in space then has a projector function applied to it to obtain a set of numbers, called parameter-space values, that will be used for accessing the texture. This process is called mapping, which leads to the phrase texture mapping.¹ Before these new values may be used to access the texture, one or more corresponder functions can be used to transform the parameter-space values to texture space.
`
¹Sometimes the texture image itself is called the texture map, though this is not strictly correct.
`
`
`
`
`Figure 5.1. The generalized texture pipeline for a single texture.
`
These texture-space values are used to obtain values from the texture, e.g., they may be array indices into an image texture to retrieve a pixel. The retrieved values are then potentially transformed yet again by a value transform function, and finally these new values are used to modify some property of the surface, such as the material or shading normal. Figure 5.1 shows this process in detail for the application of a single texture. The reason for the complexity of the pipeline is that each step provides the user with a useful control.
Using this pipeline, this is what happens when a polygon has a brick wall texture and a sample is generated on its surface (see Figure 5.2). The (x, y, z) position in the object's local frame of reference is found; say it is (−2.3, 7.1, 88.2). A projector function is then applied to this position. Just as a map of the world is a projection of a three-dimensional object into two dimensions, the projector function here typically changes the (x, y, z) vector into a two-element vector (u, v).
Figure 5.2. Pipeline for a brick wall. [The figure shows the point (−2.3, 7.1, 88.2) in object space mapped by the projector to (u, v) = (0.32, 0.29) in parameter space, then to texel (81, 74) in image space, where the texel color (0.9, 0.8, 0.7) is retrieved.]
`
`
`
`
`
`
The projector function used for this example is an orthographic projection (see Section 2.3.3), acting essentially like a slide projector shining the brick wall image onto the polygon's surface. To return to the wall, a point on its plane could be transformed into a pair of values ranging from 0 to 1. Say the values obtained are (0.32, 0.29). These parameter-space values are to be used to find what the color of the image is at this location. The resolution of our brick texture is, say, 256 × 256, so the corresponder function multiplies the (u, v) by 256 each, giving (81.92, 74.24). Dropping the fractions, pixel (81, 74) is found in the brick wall image, and is of color (0.9, 0.8, 0.7). The original brick wall image is too dark, so a value transform function that multiplies the color by 1.1 is then applied, giving a color of (0.99, 0.88, 0.77). This color modifies the surface properties by directly replacing the surface's original diffuse color, which is then used in the illumination equation.
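To make the stages concrete, here is a minimal C++ sketch of this pipeline for the brick-wall example. Every name here (projector, correspondAndFetch, valueTransform) and the wall extents are invented for illustration; real systems implement these stages in fixed-function hardware or shaders.

    struct Color { float r, g, b; };

    const int kWidth = 256, kHeight = 256;
    Color image[kHeight][kWidth];   // the brick image texture

    // Projector function: an orthographic projection onto the wall's
    // plane, mapping object-space (x,y) to parameter-space (u,v). The
    // wall extents are invented so that (-2.3, 7.1) lands near the
    // (0.32, 0.29) of the example.
    void projector(float x, float y, float& u, float& v) {
        const float minX = -10.0f, minY = 0.0f;    // assumed extents
        const float sizeX = 24.0f, sizeY = 25.0f;  // assumed extents
        u = (x - minX) / sizeX;
        v = (y - minY) / sizeY;
    }

    // Corresponder function: scale by the image resolution and drop
    // the fractions, e.g., (0.32, 0.29) -> (81.92, 74.24) -> (81, 74).
    Color correspondAndFetch(float u, float v) {
        int px = (int)(u * kWidth);
        int py = (int)(v * kHeight);
        return image[py][px];
    }

    // Value transform function: brighten the too-dark brick image.
    Color valueTransform(Color c) {
        return { c.r * 1.1f, c.g * 1.1f, c.b * 1.1f };
    }

    // The whole pipeline; the result directly replaces the surface's
    // diffuse color before the illumination equation is evaluated.
    Color textureSample(float x, float y) {
        float u, v;
        projector(x, y, u, v);
        return valueTransform(correspondAndFetch(u, v));
    }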
The first step in the texture process is obtaining the surface's location and projecting it into parameter space. Projector functions typically work by converting a three-dimensional point in space into texture coordinates. Projector functions commonly used in modeling programs include spherical, cylindrical, and planar projections [59, 418, 465]. Other inputs can be used by a projector function. For example, the surface normal can be used to choose which of six planar projection directions is used for the surface. Other projector functions are not projections at all, but are an implicit part of surface formation; for example, parametric curved surfaces have a natural set of (u, v) values as part of their definition. See Figure 5.3. The texture coordinates could also be generated from all sorts of different parameters, such as the view direction, temperature of the surface, or anything else imaginable. The goal of the projector function is to generate texture coordinates. Deriving these as a function of position is just one way to do it.

Noninteractive renderers often call these projector functions as part of the rendering process itself. A single projector function may suffice for the whole model, but often the artist has to use tools to subdivide the model and apply various projector functions separately [611]. See Figure 5.4.
In real-time work, projector functions are usually applied at this modeling stage, and the results of the projection are stored at the vertices. This is not always the case; for example, OpenGL's glTexGen routine provides a few different projector functions, including spherical and planar. Having the accelerator perform the projection on the fly has the advantage that texture coordinates then do not have to be sent to the accelerator, thereby saving bandwidth. Some rendering methods, such as environment mapping (see Section 5.7.4), have specialized projector functions of their own that are evaluated per vertex or per pixel.
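For instance, OpenGL's fixed-function sphere-map projector can be enabled with a few calls; a minimal sketch:

    #include <GL/gl.h>

    // Turn on OpenGL's built-in spherical projector function; the
    // driver then generates (s,t) per vertex, so the application
    // need not send texture coordinates at all.
    void enableSphereMapProjector() {
        glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
        glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
        glEnable(GL_TEXTURE_GEN_S);
        glEnable(GL_TEXTURE_GEN_T);
    }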
`
`
`
`
`
Figure 5.3. Different texture projections. Spherical, cylindrical, planar, and natural (u, v) projections are shown left to right. The bottom row shows each of these projections applied to a single object (which has no natural projection).
`
More generally, vertex shaders (Section 6.5) can be used to compute a near-infinite variety of complex projector functions on the graphics accelerator itself.
The spherical projection casts points onto an imaginary sphere centered around some point. This projection is the same as the one used in Blinn and Newell's environment mapping scheme (Section 5.7.4), so Equation 5.4 on page 154 describes this function. This projection method suffers from the same problems of vertex interpolation described in that section.

Cylindrical projection computes the u texture coordinate the same as spherical projection, with the v texture coordinate computed as the distance along the cylinder's axis. This projection is useful for objects that have a natural axis, such as surfaces of revolution. Distortion occurs when surfaces are near-perpendicular to the cylinder's axis.
The planar projection is like an x-ray slide projector, projecting along a direction and applying the texture to all surfaces. It uses orthographic projection (Section 3.5.1). This function is commonly used to apply texture maps to characters, treating the model as if it were a paper doll, gluing separate textures onto its front and rear.
As there is severe distortion for surfaces that are edge-on to the projection direction, the artist often must manually decompose the model into near-planar pieces.
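The cylindrical and planar projectors can be written down compactly. The following C++ sketch is illustrative only; the axis choice, extents, and function names are assumptions, not a fixed convention:

    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Vec2 { float u, v; };

    // Cylindrical projector: u comes from the angle around an assumed
    // z axis, v from the distance along that axis (scaled by height).
    Vec2 cylindricalProjector(Vec3 p, float height) {
        const float pi = 3.14159265f;
        float u = (std::atan2(p.y, p.x) + pi) / (2.0f * pi);
        float v = p.z / height;
        return { u, v };
    }

    // Planar projector: orthographic projection along the assumed z
    // axis, remapping x and y from a known rectangle to [0,1).
    Vec2 planarProjector(Vec3 p, Vec2 minCorner, Vec2 size) {
        return { (p.x - minCorner.u) / size.u,
                 (p.y - minCorner.v) / size.v };
    }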
`
`MEDIATEK,Ex. 1015, Page 8
`IPR2018-00101
`
`MEDIATEK, Ex. 1015, Page 8
`IPR2018-00101
`
`
`
`
`
`
Figure 5.4. How various texture projections are used on a single model. (Images courtesy of Tito Pagán.)
`
Figure 5.5. A number of smaller textures for the statue model, saved in two larger textures. The right figure shows how the polygonal mesh is unwrapped and displayed on the texture to aid in its creation. (Images courtesy of Tito Pagán.)
`
`
`
`
`
There are also tools that help minimize distortion by unwrapping the mesh, creating a near-optimal set of planar projections, or otherwise aiding this process. The goal is to give each polygon a fairer share of a texture's area, while also maintaining as much mesh connectivity as possible to ease the artist's work [465, 611]. Figure 5.5 shows the workspace used to create the statue in Figure 5.4. Eckstein et al. [204] give a brief overview of research to date, and give a way of fixing features (such as eye and mouth texture locations) onto a mesh while otherwise minimizing distortion. Work such as the "lapped textures" of Praun et al. holds promise for applying material textures to surfaces in a convincing fashion [632].
Texture coordinate values are sometimes presented as a three-element vector, (u, v, w), with w being depth along the projection direction. Other systems use up to four coordinates, often designated (s, t, r, q) [600]; q is used as the fourth value in a homogeneous coordinate (see Section A.4) and can be used for spotlighting effects [694]. To allow each separate texture map to have its own input parameters during a rendering pass, APIs allow multiple sets of texture coordinates (see Section 5.5 on multitexturing). However the coordinate values are applied, the idea is the same: These parameter values are interpolated across the surface and used to retrieve texture values. Before being interpolated, however, these parameter values are transformed by corresponder functions.
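The q coordinate plays the same role as w does for a homogeneous point: it is divided out before the texture lookup, which is what makes projective spotlight textures possible. A small illustrative sketch (the names are hypothetical):

    struct Vec2 { float u, v; };
    struct Vec4 { float s, t, r, q; };

    // Treat the texture coordinate as a homogeneous point. Dividing
    // by q projects it, just as dividing by w projects a position;
    // running coordinates through a spotlight's view-projection
    // matrix and then dividing "shines" its texture onto the scene.
    Vec2 projectiveLookupCoords(Vec4 c) {
        return { c.s / c.q, c.t / c.q };   // valid when q > 0
    }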
Corresponder functions convert parameter-space values to texture-space locations. They provide flexibility in applying textures to surfaces. One example of a corresponder function is to use the API to select a portion of an existing texture for display; only this subimage will be used in subsequent operations.
Another corresponder is an optional matrix transformation. The use of a 4 × 4 matrix is supported explicitly in OpenGL, and is simple enough to support at the application stage under any API. This transform is useful for the sorts of procedures that transforms normally do well at: It can translate, rotate, scale, shear, and even project the texture on the surface.²

²As discussed in Section 3.1.5, the order of transforms matters. Surprisingly, the order of transforms for textures must be the reverse of the order one would expect. This is because texture transforms actually affect the space that determines where the image is seen. The image itself is not an object being transformed; the space defining the image's location is being changed.
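In OpenGL, for example, this matrix lives on its own matrix stack. A minimal sketch that scrolls and rotates a texture (the particular angle and offsets are arbitrary):

    #include <GL/gl.h>

    // Set a corresponder matrix that rotates the texture 30 degrees
    // and scrolls it by (0.5, 0.25). As footnote 2 notes, texture
    // transforms apply in the reverse of the intuitive order.
    void setTextureMatrix() {
        glMatrixMode(GL_TEXTURE);
        glLoadIdentity();
        glTranslatef(0.5f, 0.25f, 0.0f);
        glRotatef(30.0f, 0.0f, 0.0f, 1.0f);
        glMatrixMode(GL_MODELVIEW);   // restore the usual matrix mode
    }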
Another class of corresponder functions controls the way an image is applied. We know that an image will appear on the surface where (u, v) are in the [0, 1) range. But what happens outside of this range? Corresponder functions determine the behavior. In OpenGL, this type of corresponder function is called the "wrapping mode"; in Direct3D, it is called the "texture addressing mode."³ Common corresponder functions of this type are:
`
`
`
`
`
• wrap (DirectX), repeat (OpenGL), or tile — The image repeats itself across the surface; algorithmically, the integer part of the parameter value is dropped. This function is useful for having an image of a material repeatedly cover a surface, and is often the default.

• mirror — The image repeats itself across the surface, but is mirrored (flipped) on every other repetition. For example, the image appears normally going from 0 to 1, then is reversed between 1 and 2, then is normal between 2 and 3, then is reversed, etc. This provides some continuity along the edges of the texture.

• clamp (DirectX) or clamp to edge (OpenGL) — Values outside the range [0, 1) are clamped to this range. This results in the repetition of the edges of the image texture. This function is useful for avoiding accidentally taking samples from the opposite edge of a texture when bilinear interpolation happens near a texture's edge [600].⁴

• border (DirectX) or clamp to border (OpenGL) — Parameter values outside [0, 1) are rendered with a separately defined border color, or using the edge of the texture as a border. This function can be good for rendering decals onto surfaces, for example, as the edge of the texture will blend smoothly with the border color. Border textures can be used to smoothly stitch together adjoining texture maps, e.g., for terrain rendering. The texture coordinate is clamped to half a texel inside [0, 1) for clamp-to-edge and half a texel outside [0, 1) for clamp-to-border.
See Figure 5.6. These corresponder functions can be assigned differently for each texture axis, e.g., the texture could repeat along the u axis and be clamped on the v axis.
³Confusingly, Direct3D also has a feature called "texture wrapping," which is used with Blinn environment mapping. See Section 5.7.4.
⁴OpenGL's original GL_CLAMP was not well-specified prior to version 1.2. As defined, points outside of the texture's border are a blend of half the border color and half the edge pixels during bilinear interpolation. GL_CLAMP_TO_EDGE was introduced as its replacement in OpenGL 1.2 to rectify this problem, and GL_CLAMP_TO_BORDER_ARB properly always samples only the border beyond the texture's boundaries. However, much hardware does not support borders, so implements GL_CLAMP as if it were GL_CLAMP_TO_EDGE. Newer hardware (e.g., the GeForce3) implements GL_CLAMP correctly, but the result is normally not what is desired. The gist: Use GL_CLAMP_TO_EDGE unless you know you need a different clamping behavior.
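In code, these corresponders reduce to simple functions of a single coordinate. The following sketch shows one plausible implementation of repeat, mirror, and clamp-to-edge for one axis; border handling, with its half-texel offset outside [0, 1), is omitted:

    #include <cmath>

    // repeat/tile: drop the integer part, keeping the fraction in [0,1).
    float repeatWrap(float u) {
        return u - std::floor(u);
    }

    // mirror: unchanged on even integer intervals, reversed on odd ones.
    float mirrorWrap(float u) {
        float f = u - std::floor(u);
        bool odd = (static_cast<long>(std::floor(u)) & 1) != 0;
        return odd ? 1.0f - f : f;
    }

    // clamp to edge: restrict the coordinate to half a texel inside
    // [0,1), so bilinear filtering never reads past the boundary.
    float clampToEdge(float u, int textureSize) {
        float halfTexel = 0.5f / textureSize;
        if (u < halfTexel) return halfTexel;
        if (u > 1.0f - halfTexel) return 1.0f - halfTexel;
        return u;
    }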
`
`
`
`
`
`
`
`
Figure 5.6. Image texture repeat, mirror, clamp, and border functions in action.
`
DirectX 8.0 also includes a mirror once texture addressing mode that mirrors the texture once, then clamps to the range (−1, 1). As of the end of 2001, only ATI supports this corresponder function in hardware. The motivation for mirror once is three-dimensional texture light maps.
For real-time work, the last corresponder function applied is implicit, and is derived from the image's size. A texture is normally applied within the range [0, 1) for u and v. As shown in the brick wall example, by multiplying parameter values in this range by the resolution of the image, one may obtain the pixel location. The pixels in the texture are often called texels, to differentiate them from the pixels on the screen. The advantage of being able to specify (u, v) values in a range of [0, 1) is that image textures with different resolutions can be swapped in without having to change the values stored at the vertices of the model.
The set of corresponder functions uses parameter-space values to produce texture coordinates. For image textures, the texture coordinates are used to retrieve texel information from the image. This process is dealt with extensively in Section 5.2. Two-dimensional images constitute the vast majority of texture use in real-time work, but there are other texture functions. A direct extension of image textures is three-dimensional image data that is accessed by (u, v, w) (or (s, t, r)) values. For example, medical imaging data can be generated as a three-dimensional grid; by moving a polygon through this grid, one may view two-dimensional slices of this data.
Covering an arbitrary three-dimensional surface cleanly with a two-dimensional image is often difficult or impossible [59]. As the texture is applied to some solid object, the image is stretched or compressed in places to fit the surface. Obvious mismatches may be visible as different pieces of the texture meet. A solid cone is a good example of both of these problems: The image bunches up at the tip, while the texture on the flat face of the cone does not match up with the texture on the curved face. One solution is to synthesize texture patches that tile the surface seamlessly while minimizing distortion. Performing this operation on complex surfaces
`
`
`
`
is technically challenging and is an active area of research. See Turk [756] for one approach and an overview of past research.
The advantage of three-dimensional textures is that they avoid the distortion and seam problems that two-dimensional texture mappings can have. A three-dimensional texture can act as a material such as wood or marble, and the model may be textured as if it were carved from this material. The texture can also be used to modify other properties, for example, changing (u, v) coordinates in order to create warping effects [536].
Three-dimensional textures can be synthesized by a variety of techniques. One of the most common is using one or more noise functions to generate values [201]. See Figure 5.7. Because of the cost of evaluating the noise function, often the lattice points in the three-dimensional array are precomputed and used to interpolate texture values. There are also methods that use the accumulation buffer or color buffer blending to generate these arrays [536]. However, such arrays can be large to store and often lack sufficient detail.
`
`
`
Figure 5.7. Two examples of real-time procedural texturing using a volume texture. The marble texture is continuous over the surface, with no mismatches at edges. The object on the left is formed by cutting a pair of spheres with a plane and using the stencil buffer to fill in the gap. (Images courtesy of Evan Hart, ATI Technologies Inc.)
`
`
`
`
`
Miné and Neyret [550] use lower-resolution three-dimensional Perlin noise textures and combine them to create marble, wood, and other effects. Hart has created a pixel shader that computes Perlin noise on the fly, albeit slowly (it takes 375 passes on a GeForce2, though surprisingly, this still gives a display rate of faster than a frame a second) [325]. Hart et al. have also done other research on hardware solutions for procedural textures [324].
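To illustrate the precomputed-lattice approach mentioned above, here is a sketch of value noise: random values stored at integer lattice points and trilinearly interpolated between them. The lattice size and the wrap-around behavior are arbitrary choices for this example:

    #include <cstdlib>
    #include <cmath>

    const int N = 32;                // assumed lattice resolution
    float lattice[N][N][N];          // precomputed random values

    void initLattice() {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++)
                    lattice[i][j][k] = std::rand() / (float)RAND_MAX;
    }

    // Value noise: trilinearly interpolate the eight lattice values
    // surrounding (x,y,z). The lattice wraps around, so the texture
    // tiles seamlessly in all three directions.
    float valueNoise(float x, float y, float z) {
        int xi = (int)std::floor(x);
        int yi = (int)std::floor(y);
        int zi = (int)std::floor(z);
        float fx = x - xi, fy = y - yi, fz = z - zi;
        float result = 0.0f;
        for (int dx = 0; dx <= 1; dx++)
            for (int dy = 0; dy <= 1; dy++)
                for (int dz = 0; dz <= 1; dz++) {
                    float w = (dx ? fx : 1.0f - fx)
                            * (dy ? fy : 1.0f - fy)
                            * (dz ? fz : 1.0f - fz);
                    result += w * lattice[((xi + dx) % N + N) % N]
                                         [((yi + dy) % N + N) % N]
                                         [((zi + dz) % N + N) % N];
                }
        return result;
    }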
Two-dimensional texture functions can also be used to generate textures, but here the major advantages are some storage savings (and bandwidth savings from not having to send down the corresponding image texture) and the fact that such textures have essentially infinite resolution and potentially no repeatability.
It is also worth noting that one-dimensional texture images and functions have their uses. For example, these include contour lines [301, 600] and coloration determined by altitude (e.g., the lowlands are green; the mountain peaks are white). Also, lines can be textured; one use of this is to render rain as a set of long lines textured with a semitransparent image.
The texture is accessed and a set of values is retrieved from it. When performing Gouraud interpolation, the texture coordinates are not linearly interpolated, as this will cause distortions [330]. Instead, perspective correction is performed on the texture coordinate values. This process is discussed further in Section 15.2.
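The standard correction, covered in Section 15.2, is to interpolate u/w and 1/w linearly in screen space and divide per pixel, where w is the homogeneous depth from the projection. A sketch for interpolating between two vertices:

    struct TexCoord { float u, v; };

    // Perspective-correct interpolation between vertices a and b at
    // screen-space fraction t. Interpolating u and v directly would
    // distort the texture; interpolating u/w, v/w, and 1/w linearly
    // and dividing per pixel gives the correct result.
    TexCoord perspectiveCorrect(TexCoord a, float wa,
                                TexCoord b, float wb, float t) {
        float invW   = (1 - t) / wa + t / wb;
        float uOverW = (1 - t) * (a.u / wa) + t * (b.u / wb);
        float vOverW = (1 - t) * (a.v / wa) + t * (b.v / wb);
        return { uOverW / invW, vOverW / invW };
    }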
`
The most straightforward data to return from a retrieval is an RGB triplet that is used to replace or modify the surface color; similarly, a single grayscale value could be returned. Another type of data to return is RGBα, as described in Section 4.5. The α (alpha) value is normally the opacity of the color, which determines the extent to which the color may affect the pixel. There are certainly other types of data that can be stored in image textures, as will be seen when bump mapping is discussed in detail (Section 5.7.5).
Once the texture values have been retrieved, they may be used directly or further transformed. The resulting values are used to modify one or more surface attributes. Recall that almost all real-time systems use Gouraud shading, meaning that only certain values are interpolated across a surface, so these are the only values that the texture can modify. Normally, we modify the RGB result of the lighting equation, since this equation was evaluated at each vertex and the color is then interpolated. However, other values can be modified. For example, as will be discussed in Section 5.7.5, values such as the light's direction can be interpolated and combined with a texture to make the surface appear bumpy.
`
`
`
`
`
Most real-time systems let us pick one of a number of methods for modifying the surface. These methods, called combine functions or texture blending operations, for gluing an image texture onto a surface include:
`
• replace — Simply replace the original surface color with the texture color. Note that this removes any lighting computed for the surface, unless the texture itself includes it.

• decal — Like replace, but when an α texture value is available, the texture is blended with the underlying color, while the original α value is not modified. As the name implies, it is useful for applying decals (see Section 5.7.1).

• modulate — Multiply the surface color by the texture color. The shaded surface is modified by the color texture, giving a shaded, textured surface.
`
These three are the most common methods for simple color texture mapping. Using replace for texturing in an illuminated environment is sometimes called using a glow texture, since the texture's color always appears the same, regardless of changing light conditions. There are other property modifiers, which will be discussed as other texture techniques are introduced.
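A sketch of these three combine functions, operating on the interpolated lit surface color and a fetched RGBα texel; the function names are illustrative, not those of any particular API:

    struct RGBA { float r, g, b, a; };

    // replace: the texel overrides the lit surface color entirely.
    RGBA combineReplace(RGBA surface, RGBA texel) {
        return texel;
    }

    // decal: blend RGB by the texel's alpha; keep the surface's alpha.
    RGBA combineDecal(RGBA surface, RGBA texel) {
        float t = texel.a;
        return { surface.r + t * (texel.r - surface.r),
                 surface.g + t * (texel.g - surface.g),
                 surface.b + t * (texel.b - surface.b),
                 surface.a };
    }

    // modulate: multiply, so lighting and texture both contribute.
    RGBA combineModulate(RGBA surface, RGBA texel) {
        return { surface.r * texel.r, surface.g * texel.g,
                 surface.b * texel.b, surface.a * texel.a };
    }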
Modulating the entire shade of a surface by a texture can be unconvincing, because the texture will dim both the diffuse and specular terms. In reality, a material can reflect a highlight even if its diffuse color is dark. So, on some systems the diffuse and specular shading colors can be interpolated separately, with only the diffuse shade being modified by the texture. See Plate V (following page 274).
Revisiting the brick wall texture example, here is what happens in a typical real-time system. A modeler sets the (u, v) parameter values once in advance for the wall model vertices. The texture is read into the renderer, and the wall polygons are sent down the rendering pipeline. A white material is used in computing the illumination at each vertex. This color and the (u, v) values are interpolated across the surface. At each pixel, the proper brick image's texel is retrieved and modulated (multiplied) by the illumination color and displayed. In our original example, this texture was multiplied by 1.1 at this point to make it brighter; in practice, this color boost would probably be performed on the material or texture itself in the modeling stage. In the end, a lit, textured brick wall is displayed.
`
`
`
`
`
5.2 Image Texturing

In image texturing, a two-dimensional image is effectively glued onto the surface of a polygon and rendered. We have walked through the process with respect to the polygon; now we will address the issues surrounding the image itself and its application to the surface. For the rest of this chapter, the image texture will be referred to simply as the texture. In addition, when we refer to a pixel's cell here, we mean the screen grid cell surrounding that pixel. As mentioned in Section 4.4.2, a pixel is actually a displayed color value that can (and should, for better quality) be affected by samples outside of its grid cell.
The texture image size used in hardware accelerators is usually restricted to 2^m × 2^n texels, or sometimes even 2^m × 2^m square, where m and n are nonnegative integers.⁵ Graphics accelerators have different upper limits on texture size.

Assume that we have an image of size 256 × 256 pixels and that we want to use it as a texture on a square. As long as the projected square on the screen is roughly the same size as the texture, the texture on the square looks almost like the original image. But what happens if the projected square covers ten times as many pixels as the original image contains (called magnification), or if the projected square covers only a fraction of the pixels (minification)? The answer is that it depends on what kind of sampling and filtering methods you use for these two separate cases.
`
5.2.1 Magnification

In Figure 5.8, a texture of size 32 × 64 texels is textured onto a rectangle, and the rectangle is viewed rather closely with respect to the texture size, so the underlying graphics system has to magnify the texture. The most common filtering techniques for magnification are nearest neighbor (the actual filter is called a box filter—see Section 4.4.1) and bilinear interpolation.⁶
In the left part of Figure 5.8, the nearest neighbor method is used. One characteristic of this magnification technique is that the individual texels may become apparent. This effect is called pixelation, and occurs because the method takes the value of the nearest texel to each pixel center when magnifying, resulting in a blocky appearance. While the quality of this method is sometimes poor, it requires only one texel to be fetched per pixel.

⁵One exception to the powers-of-two rule is NVIDIA's texture rectangle extension, which allows any size texture to be stored and used. See Section 6.6.
⁶There is also cubic convolution, which uses the weighted sum of a 4 × 4 array of texels, but it is currently not commonly available.
`
`
`
`
`
`
`
Figure 5.8. Texture magnification. Here, a texture of size 32 × 64 was applied to a rectangle, which was viewed very closely (with respect to texture size). Therefore, the texture had to be magnified. On the left, the nearest neighbor filter is used, which simply selects the nearest texel to each pixel. Bilinear interpolation is used on the rectangle on the right. Here, each pixel is computed from a bilinear interpolation of the closest four neighbor texels.
`
In the right part of the same figure, bilinear interpolation (sometimes called linear interpolation) is used. For each pixel, this kind of filtering finds the four neighboring texels and linearly interpolates in two dimensions to find a blended value for the pixel. The result is blurrier, and much of the jaggedness from using the nearest neighbor method has disappeared.⁷
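Here is a sketch of both magnification filters for a single-channel texture; the half-texel center offset and the edge clamping are simplifications of what real hardware does:

    #include <cmath>
    #include <algorithm>

    const int W = 32, H = 64;        // texture size from Figure 5.8
    float tex[H][W];                 // single-channel texture

    // Nearest neighbor: take the texel whose center is closest.
    float sampleNearest(float u, float v) {
        int x = std::min((int)(u * W), W - 1);
        int y = std::min((int)(v * H), H - 1);
        return tex[y][x];
    }

    // Bilinear: blend the four surrounding texels by the fractional
    // position between their centers.
    float sampleBilinear(float u, float v) {
        float px = u * W - 0.5f, py = v * H - 0.5f; // texel centers
        int x0 = std::max((int)std::floor(px), 0);
        int y0 = std::max((int)std::floor(py), 0);
        int x1 = std::min(x0 + 1, W - 1);
        int y1 = std::min(y0 + 1, H - 1);
        float fx = px - std::floor(px), fy = py - std::floor(py);
        float top = tex[y0][x0] + fx * (tex[y0][x1] - tex[y0][x0]);
        float bot = tex[y1][x0] + fx * (tex[y1][x1] - tex[y1][x0]);
        return top + fy * (bot - top);
    }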
Returning to the brick texture example on page 120: Without dropping the fractions, we obtained (p_u, p_v) = (81.92, 74.24). These fractions are
`
⁷Looking at these images with eyes squinted has approximately the same effect as a low-pass filter, and reveals the face a bit more.
`
`
`
`
`
`
`
Figure 5.9. Notation for bilinear interpolation. The four texels involved are illustrated by the four squares; the sample point (p_u, p_v) lies among them.
`
used in computing the bilinear combination of the four closest pixels, which thus range from (x_l, y_b) = (81, 74) to (x_r, y_t) = (82, 75). See Figure 5.9 for notation. First, the decimal part is computed as: (u', v') = (p_u − ⌊p_u⌋, p_v − ⌊p_v⌋). This is equal to (u', v') = (0.92, 0.24) for our example. Assuming we can access the texels' colors in the texture as t(x, y), where x and y are integer