COMPUTER GRAPHICS
PRINCIPLES AND PRACTICE
THIRD EDITION

JOHN F. HUGHES · ANDRIES VAN DAM · MORGAN MCGUIRE
DAVID F. SKLAR · JAMES D. FOLEY · STEVEN K. FEINER · KURT AKELEY

3SHAPE EXHIBIT 2005
Exocad v. 3Shape
IPR2018-00788
Computer Graphics
Principles and Practice
Third Edition

JOHN F. HUGHES
ANDRIES VAN DAM
MORGAN MCGUIRE
DAVID F. SKLAR
JAMES D. FOLEY
STEVEN K. FEINER
KURT AKELEY
Addison-Wesley

Upper Saddle River, NJ · Boston · Indianapolis · San Francisco
New York · Toronto · Montreal · London · Munich · Paris · Madrid
Capetown · Sydney · Tokyo · Singapore · Mexico City
Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed with initial capital letters or in all capitals.

The authors and publisher have taken care in the preparation of this book, but make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein.

The publisher offers excellent discounts on this book when ordered in quantity for bulk purchases or special sales, which may include electronic versions and/or custom covers and content particular to your business, training goals, marketing focus, and branding interests. For more information, please contact:

U.S. Corporate and Government Sales
governmentsales@pearsoned.com

For sales outside the United States, please contact:

International Sales
intlcs@pearson.com

Visit us on the Web: informit.com/aw
Library of Congress Cataloging-in-Publication Data
Hughes, John F., 1955-
Computer graphics : principles and practice / John F. Hughes, Andries van Dam, Morgan McGuire, David F. Sklar, James D. Foley, Steven K. Feiner, Kurt Akeley. -- Third edition.
pages cm
Revised ed. of: Computer graphics / James D. Foley . . . [et al.]. -- 2nd ed. -- Reading, Mass. : Addison-Wesley, 1995.
Includes bibliographical references and index.
ISBN 978-0-321-39952-6 (hardcover : alk. paper) -- ISBN 0-321-39952-8 (hardcover : alk. paper)
1. Computer graphics. I. Title.
T385.C5735 2014
006.6--dc23
2012045569

Copyright © 2014 Pearson Education, Inc.

All rights reserved. Printed in the United States of America. This publication is protected by copyright, and permission must be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise. For information regarding permissions, request forms and the appropriate contacts within the Pearson Education Global Rights & Permissions Department, please visit www.pearsoned.com/permissions/.

ISBN-13: 978-0-321-39952-6
ISBN-10: 0-321-39952-8
Contents at a Glance

Contents .......................................................................................... ix
Preface ......................................................................................... xxxv
About the Authors ........................................................................... xlv
1  Introduction ................................................................................. 1
2  Introduction to 2D Graphics Using WPF .................................... 35
3  An Ancient Renderer Made Modern ........................................... 61
4  A 2D Graphics Test Bed .............................................................. 81
5  An Introduction to Human Visual Perception ........................... 101
6  Introduction to Fixed-Function 3D Graphics and Hierarchical Modeling ... 117
7  Essential Mathematics and the Geometry of 2-Space and 3-Space ... 149
8  A Simple Way to Describe Shape in 2D and 3D ........................ 187
9  Functions on Meshes ................................................................. 201
10 Transformations in Two Dimensions ........................................ 221
11 Transformations in Three Dimensions ...................................... 263
12 A 2D and 3D Transformation Library for Graphics ................. 287
13 Camera Specifications and Transformations ............................ 299
14 Standard Approximations and Representations ....................... 321
15 Ray Casting and Rasterization ................................................. 387
16 Survey of Real-Time 3D Graphics Platforms ........................... 451
17 Image Representation and Manipulation .................................. 481
18 Images and Signal Processing .................................................. 495
19 Enlarging and Shrinking Images .............................................. 533
20 Textures and Texture Mapping ................................................. 547
21 Interaction Techniques ............................................................. 567
22 Splines and Subdivision Curves ............................................... 595
23 Splines and Subdivision Surfaces ............................................. 607
24 Implicit Representations of Shape ........................................... 615
25 Meshes ...................................................................................... 635
26 Light ......................................................................................... 669
27 Materials and Scattering .......................................................... 711
28 Color ........................................................................................ 745
29 Light Transport ........................................................................ 783
30 Probability and Monte Carlo Integration ................................ 801
31 Computing Solutions to the Rendering Equation: Theoretical Approaches ... 825
32 Rendering in Practice ............................................................... 881
33 Shaders ..................................................................................... 927
34 Expressive Rendering ............................................................... 945
35 Motion ...................................................................................... 963
36 Visibility Determination ........................................................... 1023
37 Spatial Data Structures ............................................................ 1065
38 Modern Graphics Hardware .................................................... 1103
List of Principles .......................................................................... 1145
Bibliography .................................................................................. 1149
Index ............................................................................................. 1183
Chapter 1

Introduction

This chapter introduces computer graphics quite broadly and from several perspectives: its applications, the various fields that are involved in the study of graphics, some of the tools that make the images produced by graphics so effective, some numbers to help you understand the scales at which computer graphics works, and the elementary ideas required to write your first graphics program. We'll discuss many of these topics in more detail elsewhere in the book.
1.1 An Introduction to Computer Graphics
Computer graphics is the science and art of communicating visually via a computer's display and its interaction devices. The visual aspect of the communication is usually in the computer-to-human direction, with the human-to-computer direction being mediated by devices like the mouse, keyboard, joystick, game controller, or touch-sensitive overlay. However, even this is beginning to change: Visual data is starting to flow back to the computer, with new interfaces being based on computer vision algorithms applied to video or depth-camera input. But for the computer-to-user direction, the ultimate consumers of the communications are human, and thus the ways that humans perceive imagery are critical in the design of graphics¹ programs; features that humans ignore need not be presented (nor computed!). Computer graphics is a cross-disciplinary field in which physics, mathematics, human perception, human-computer interaction, engineering, graphic design, and art all play important roles. We use physics to model light and to perform simulations for animation. We use mathematics to describe shape. Human perceptual abilities determine our allocation of resources: we don't want to spend time rendering things that will not be noticed. We use engineering in optimizing the allocation of bandwidth, memory, and processor time. Graphic design and art combine with human-computer interaction to make the computer-to-human direction of communication most effective.

1. Throughout this book, when we use the term "graphics" we mean "computer graphics."

In this chapter,
we discuss some application areas, how conventional graphics systems work, and how each of these disciplines influences work in computer graphics.
A narrow definition of computer graphics would state that it refers to taking a model of the objects in a scene (a geometric description of the things in the scene and a description of how they reflect light) and a model of the light emitted into the scene (a mathematical description of the sources of light energy, the directions of radiation, the distribution of light wavelengths, etc.), and then producing a representation of a particular view of the scene (the light arriving at some imaginary eye or camera in the scene). In this view, one might say that graphics is just glorified multiplication: One multiplies the incoming light by the reflectivities of objects in the scene to compute the light leaving those objects' surfaces and repeats the process (treating the surfaces as new light sources and recursively invoking the light-transport operation), determining all light that eventually reaches the camera. (In practice, this approach is unworkable, but the idea remains.) In contrast, computer vision amounts to factoring: given a view of a scene, the computer vision system is charged with determining the illumination and/or the scene's contents (which a graphics system could then "multiply" together to reproduce the same image). In truth, of course, the vision system cannot solve the problem as stated and typically works with assumptions about the scene, or the lighting, or both, and may also have multiple views of the scene from different cameras, or multiple views from a single camera but at different times.
In the field of computer graphics, the word "model" can refer to a geometric model or a mathematical model. A geometric model is a model of something we plan to have appear in a picture: We make a model of a car, or a house, or an armadillo. The geometric model is enhanced with various other attributes that describe the color or texture or reflectance of the materials involved in the model. Starting from nothing and creating such a model is called modeling, and the geometric-plus-other-information description that is the result is called a model.

A mathematical model is a model of a physical or computational process. For instance, in Chapter 27 we describe various models of how light reflects from glossy surfaces. We also have models of how objects move and models of things like the image-acquisition process that happens in a digital camera. Such models may be faithful (i.e., may provide a predictive and correct mathematical model of the phenomenon) or not; they may be physically based, derived from first principles, or perhaps empirical or phenomenological, derived from observations or even intuition.
In actual fact, graphics is far richer than the generalized multiplication process of rendering a view, just as vision is richer than factorization. Much of the current research in graphics is in methods for creating geometric models, methods for representing surface reflectance (and subsurface reflectance, and reflectances of participating media such as fog and smoke, etc.), the animation of scenes by physical laws and by approximations of those laws, the control of animation, interaction with virtual objects, the invention of nonphotorealistic representations, and, in recent years, an increasing integration of techniques from computer vision. As a result, the fields of computer graphics and computer vision are growing increasingly closer to each other. For example, consider Raskar's work on a
Figure 1.1: A nonphotorealistic camera can create an artistic rendering of a scene by applying computer vision techniques to multiple flash-photo images and then rerendering the scene using computer graphics techniques. At left is the original scene; at right is the new rendering of the scene. (Courtesy of Ramesh Raskar; ©2004 ACM, Inc. Included here by permission.)
nonphotorealistic camera: The camera takes multiple photos of a single scene, illuminated by differently placed flash units. From these various images, one can use computer vision techniques to determine contours and estimate some basic shape properties for objects in the scene. These, in turn, can be used to create a nonphotorealistic rendering of the scene, as shown in Figure 1.1.

In this book, we emphasize realistic image capture and rendering because this is where the field of computer graphics has had the greatest successes, representing a beautiful application of relatively new computer science to the simulation of relatively old physics models. But there's more to graphics than realistic image capture and rendering. Animation and interaction, for instance, are equally important, and we discuss these disciplines throughout many chapters in this book as well as address them explicitly in their own chapters. Why has success in the nonsimulation areas been so comparatively hard to achieve? Perhaps because these areas are more qualitative in nature and lack existing mathematical models like those provided by physics.

This book is not filled with recipes for implementing lots of ideas in computer graphics; instead, it provides a higher-level view of the subject, with the goal of teaching you ideas that will remain relevant long after particular implementations are no longer important. We believe that by synthesizing decades of research, we can elucidate principles that will help you in your study and use of computer graphics. You'll generally need to write your own implementations or find them elsewhere.
This is not, by any means, because we disparage such information or the books that provide it. We admire such work and learn from it. And we admire those who can synthesize it into a coherent and well-presented whole. With this in mind, we strongly recommend that as you read this book, you keep a copy of Akenine-Möller, Haines, and Hoffman's book on real-time rendering [AMHH08] next to you. An alternative, but less good, approach is to take any particular topic that interests you and search the Internet for information about it. The mathematician Abel claimed that he managed to succeed in mathematics because he made a practice of reading the works of the masters rather than their students, and we advise that you follow his lead. The aforementioned real-time rendering book is written by masters of the subject, while a random web page may be written by anyone.
We believe that it's far better, if you want to grab something from the Internet, to grab the original paper on the subject.

Having promised principles, we offer two right away, courtesy of Michael Littman:
✓ THE KNOW YOUR PROBLEM PRINCIPLE: Know what problem you are solving.

✓ THE APPROXIMATE THE SOLUTION PRINCIPLE: Approximate the solution, not the problem.
Both are good guides for research in general, but for graphics in particular, where there are so many widely used approximations that it's sometimes easy to forget what the approximation is approximating, working with the unapproximated entity may lead to a far clearer path to a solution to your problem.
1.1.1 The World of Computer Graphics

The academic side of computer graphics is dominated by SIGGRAPH, the Association for Computing Machinery's Special Interest Group on Computer Graphics and Interactive Techniques; the annual SIGGRAPH conference is the premier venue for the presentation of new results in computer graphics, as well as a large commercial trade show and several colocated conferences in related areas. The SIGGRAPH proceedings, published by the ACM, are the most important reference works that a practitioner in the field can have. In recent years these have been published as an issue of the ACM Transactions on Graphics.

Computer graphics is also an industry, of course, and it has had an enormous impact in the areas of film, television, advertising, and games. It has also changed the way we look at information in medicine, architecture, industrial process control, network operations, and our day-to-day lives as we see weather maps and other information visualizations. Perhaps most significantly, the graphical user interfaces (GUIs) on our telephones, computers, automobile dashboards, and many home electronics devices are all enabled by computer graphics.
1.1.2 Current and Future Application Areas

Computer graphics has rapidly shifted from a novelty to an everyday phenomenon. Even throwaway devices, like the handheld digital games that parents give to children to keep them occupied on airplane trips, have graphical displays and interfaces. This corresponds to two phenomena: First, visual perception is powerful, and visual communication is incredibly rapid, so designers of devices of all kinds want to use it; and second, the cost to manufacture computer-based devices is decreasing rapidly. (Roy Smith [Smi], discussing in the 1980s various claims that a GPS unit was so complex that it could never cost less than $1000, said, "Anything made of silicon will someday cost five dollars." It's a good rule of thumb.)

As graphics has become more prevalent, user expectations have risen. Video games display many millions of polygons per second, and special effects in films are now so good that they're no longer readily distinguishable from
Figure 17.1: An actor, photographed in front of a green screen, is to be composited into a scene. (Jackson Lee/Splash News/Corbis)

Figure 17.2: The actor, composited atop an outdoor scene. The detail shows how the horse's tail obscures part of the background, while some background shows through. (Jackson Lee/Splash News/Corbis)
For programs that manipulate images, the choice of image format is almost always irrelevant: You almost certainly want to represent an image as an array of double-precision floating-point numbers (or one such array per channel, or perhaps a single three-index array where the third index selects the channel). The reason to favor floating-point representations is that we often perform operations in which adjacent pixel values are averaged; averaging integer or fixed-point values, especially when it's done repeatedly, may result in unacceptable accumulated roundoff errors.

There are two exceptions to the "use floating point" rule.

• If the data associated to each pixel is of a type for which averaging makes no sense (e.g., an object identifier telling which object is visible at that pixel, a so-called object ID channel), then it is better to store the value in a form for which arithmetic operations are undefined (such as enumerated types), as a preventive measure against silly programming errors.

• If the pixel data will be used in a search procedure, then a fixed-point representation may make more sense. If, for example, one is going to look through the image for all pixels whose neighborhoods "look like" the neighborhood of a given pixel, integer equality tests may make more sense than floating-point equality tests, which must almost always be implemented as "near-equality" tests (i.e., "Is the difference less than some small value ε?").
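The roundoff concern above is easy to demonstrate. The following is an illustrative sketch, not code from the book: it repeatedly halves a pixel value in 8-bit-style integer arithmetic and in double-precision floating point (the function names are ours).

```python
# Illustrative sketch (not from the book): repeated averaging drifts in
# integer arithmetic but stays exact here in double-precision floats.

def average_int(a, b):
    # Integer average; floor division discards the fractional half.
    return (a + b) // 2

def average_float(a, b):
    return (a + b) / 2.0

pix_int, pix_float = 255, 255.0
for _ in range(8):  # e.g., eight successive 50% blends with black
    pix_int = average_int(pix_int, 0)
    pix_float = average_float(pix_float, 0.0)

print(pix_int)    # 0 -- truncation error has swallowed the value entirely
print(pix_float)  # 0.99609375, i.e., exactly 255 / 2**8
```

Eight exact halvings of 255 should leave 255/256 ≈ 0.996; the integer version loses half a unit per step and reaches zero.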
17.4 Image Compositing

Movie directors often want to film a scene in which actors are in some type of interesting situation (e.g., a remote country, a spaceship, an exploding building, etc.). In many cases, it's not practical to have the actors actually be in these situations (e.g., for insurance reasons it's impossible to arrange for top-paid actors to stand inside exploding buildings). Hollywood uses a technique called blue screening (see Figure 17.1) to address this. With this technique, the actors are recorded in a featureless room in which the back wall is of some known color (originally blue, now often green; we'll use green in our description). From the resultant digital images, any pixel that's all green is determined to be part of the background; pixels that have no green are "actor" pixels, and those that are a mix of green and some other color are "part actor, part background" pixels. Then the interesting situation (e.g., the exploding building) is also recorded. Finally, the image of the actors is composited atop the images of the interesting situation: Every green pixel in the actor image is replaced by the color of the situation-image pixel; every nongreen pixel remains. And the partially green pixels are replaced by a combination of a color extracted from the actor image and the situation image (see Figure 17.2). The resultant composite appears to show the actor in front of the exploding building.

There are some limitations to this approach: The lighting on the actors does not come from the lighting in the situation (or it must be carefully choreographed to approximate it), and things like shadows present real difficulties. Furthermore, at the part actor, part background pixels, we have to estimate the color that's to be associated to the actors, and the fraction of coverage. The result is a foreground image and a mask, whose pixel values indicate what fraction of the pixel is covered by foreground content: A pixel containing an actor has mask value 1; a pixel
showing the background has mask value 0, and a pixel at the edge (e.g., in the actor's hair) has some intermediate value.

In computer graphics, we often perform similar operations: We generate a rendering of some scene, and we want to place other objects (which we also render) into the scene after the fact. Porter and Duff [PD84] gave the first published description of the details of these operations, but they credit the ideas to prior work at the New York Institute of Technology. Fortunately, in computer graphics, as we render these foreground objects we can usually compute the mask value at the same time, rather than having to estimate it; after all, we know the exact geometry of our object and the virtual camera. The mask value in computer graphics is typically denoted by the letter α, so that pixels are represented by a 4-tuple (R, G, B, α). Images with pixels of this form are referred to as RGBA images and as RGBα images.

Porter and Duff [PD84] describe a wide collection of image composition operations; we'll follow their development after first concentrating on the single operation described above: If U and V are images, the image "U over V" corresponds to "actor over (i.e., in front of) situation."
17.4.1 The Meaning of a Pixel During Image Compositing

The value α represents the opacity of a single pixel of the image. If we regard the image as being composed of tiny squares, then α = 0.75 for some square tells us that the square is three-quarters covered by some object (i.e., 3/4 opaque) but 1/4 uncovered (i.e., 1/4 transparent). Thus, if our rendering is of an object consisting of a single pure-red triangle whose interior covers three-quarters of some pixel, the α-value for that pixel would be 0.75, while the R value would indicate the intensity of the red light from the object, and G and B would be zero.

With a single number, α, we cannot indicate anything more than the opacity; we cannot, for instance, indicate whether it is the left or the right half of the pixel that is most covered, or whether it's covered in a striped or polka-dot pattern. We therefore make the assumption that the coverage is uniformly distributed across the pixel: If you picked a point at random in the pixel, the probability that it is opaque rather than transparent is α. We make the further assumption that there is no correlation between these probabilities in the two images; that is, if α_U = 0.5 and α_V = 0.75, then the probability that a random point is opaque in both images is 0.5 · 0.75 = 0.375, and the probability that it is transparent in both is 0.125. The red, green, and blue values represent the intensity of light that would arise from the pixel if it were fully opaque, that is, if α = 1.
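The independence assumption above amounts to two lines of arithmetic; here is a small sketch (ours, not the book's) using the numbers from the text:

```python
# Coverage treated as an independent per-point opacity probability in
# each image (numbers from the text: alpha_U = 0.5, alpha_V = 0.75).
alpha_u, alpha_v = 0.5, 0.75

p_opaque_in_both = alpha_u * alpha_v                   # 0.5 * 0.75 = 0.375
p_transparent_in_both = (1 - alpha_u) * (1 - alpha_v)  # 0.5 * 0.25 = 0.125

print(p_opaque_in_both, p_transparent_in_both)  # 0.375 0.125
```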
17.4.2 Computing U over V

Because compositing is performed one pixel at a time, we can illustrate our computation with a single pixel. Figure 17.3 shows an example in which α_U = 0.4 and α_V = 0.3. The fraction of the image covered by both is 0.4 · 0.3 = 0.12, while the fraction covered by V but not U is 0.6 · 0.3 = 0.18.

To compute U over V, we must assign both an α-value and a color to the resultant pixel. The coverage, α, will be 0.4 + 0.18 = 0.58, representing that all the opaque parts of U persist in the composite, as do the parts of V not obscured by U. In more generality, we have
α = α_U + (1 − α_U)α_V = α_U + α_V − α_U α_V.    (17.1)

Figure 17.3: (a) A pixel from an image U, 40% covered. Properly speaking, the covered area should be shown scattered randomly about the pixel square. (b) A pixel from the image V, 30% covered. (c) The two pixels, drawn in a single square; the overlap area is 12% of the pixel. (d) The compositing result for U over V: All of the opaque part of U shows (covering 40% of the pixel), and the nonhidden opaque part of V shows (covering 18% of the pixel).

What about the color of the resultant pixel (i.e., the intensity of red, green, and blue light)? Well, the light contributed by the U portion of the pixel (i.e., the fraction α_U containing opaque bits from U) is α_U · (R_U, G_U, B_U), where the subscript indicates that these are the RGB values from the U pixel. The light contributed by the V part of the pixel is (1 − α_U)α_V · (R_V, G_V, B_V). Thus, the total light is

α_U · (R_U, G_U, B_U) + (1 − α_U)α_V · (R_V, G_V, B_V),    (17.2)

while the total opacity is α = α_U + (1 − α_U)α_V. If the pixel were totally opaque, the resultant light would be brighter by a factor of α; to avoid this brightness change, we must divide by α, so the RGB values for the pixel are

[α_U · (R_U, G_U, B_U) + (1 − α_U)α_V · (R_V, G_V, B_V)] / [α_U + (1 − α_U)α_V].    (17.3)

These compositing equations tell us how to associate an opacity or coverage value and a color to each pixel of the U over V composite.

17.4.3 Simplifying Compositing

Porter and Duff [PD84] observe that in these equations, the color of U always appears multiplied by α_U, and similarly for V. Thus, if instead of storing the values (R, G, B, α) at each pixel, we stored (αR, αG, αB, α), the computations would simplify. Denoting these by (r, g, b, α) (so that r denotes Rα, for instance), the compositing equations become

α = 1 · α_U + (1 − α_U) · α_V and
(r, g, b) = 1 · (r_U, g_U, b_U) + (1 − α_U) · (r_V, g_V, b_V),

where the fraction has disappeared because the new (r, g, b) values must include the premultiplied α-value.

The form of these two equations is identical: The data for U are multiplied by 1, and the data for V are multiplied by (1 − α_U). Calling these F_U and F_V, the "over" compositing rule becomes

(r, g, b, α) = F_U · (r_U, g_U, b_U, α_U) + F_V · (r_V, g_V, b_V, α_V).    (17.4)
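The "over" computation can be sketched for a single pixel in both forms. This is our illustration, not the book's code: over_straight follows Equations 17.1 and 17.3 on un-premultiplied pixels, and over_premultiplied follows Equation 17.4 with F_U = 1 and F_V = 1 − α_U.

```python
# Sketch (not from the book) of single-pixel "U over V" compositing.

def over_straight(rgb_u, a_u, rgb_v, a_v):
    # Straight (un-premultiplied) alpha: Eq. 17.1 for coverage,
    # Eq. 17.3 for color (note the division by the resultant alpha).
    a = a_u + (1 - a_u) * a_v
    rgb = tuple((a_u * cu + (1 - a_u) * a_v * cv) / a
                for cu, cv in zip(rgb_u, rgb_v))
    return rgb, a

def over_premultiplied(p_u, p_v):
    # Premultiplied pixels (r, g, b, a): Eq. 17.4 with F_U = 1 and
    # F_V = 1 - alpha_U; all four components follow the same rule.
    f_v = 1 - p_u[3]
    return tuple(cu + f_v * cv for cu, cv in zip(p_u, p_v))

# The numbers from Figure 17.3: alpha_U = 0.4, alpha_V = 0.3.
rgb, a = over_straight((1.0, 0.0, 0.0), 0.4, (0.0, 1.0, 0.0), 0.3)
# a is 0.4 + 0.6 * 0.3 = 0.58, matching the text.
```

Note that when both pixels are fully transparent, Equation 17.3's division by α is ill-defined; the premultiplied form needs no division, which is one of its practical advantages.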
17.4.4 Other Compositing Operations

Porter and Duff define other compositing operations as well; almost all have the same form as Equation 17.4, with the values F_U and F_V varying. One can think of each point of the pixel as being in the opaque part of neither U nor V, the opaque part of just U, of just V, or of both. For each, we can think of taking the color from U, from V, or from neither, but to use the color of V on a point where only U is opaque seems nonsensical, and similarly for the points that are transparent in both. Writing a quadruple to describe the chosen color, we have choices like (0, U, V, U) representing U over V and (0, U, V, 0) representing U xor V (i.e., show the part of the image that's in either U or V but not both). Figure 17.4, following Porter and Duff, lists the possible operations, the associated quadruples, and the multipliers F_U and F_V associated to each. The table in the figure omits symmetric operations (i.e., we show U over V, but not V over U).

Finally, there are other compositing operations that do not follow the blending-by-Fs rule. One of these is the darken operation, which makes the opaque part of an image darker without changing the coverage:

darken(U, s) = (s r_U, s g_U, s b_U, α_U).    (17.5)

Closely related is the dissolve operation, in which the pixel retains its color, but the coverage is gradually reduced:

dissolve(U, s) = (s r_U, s g_U, s b_U, s α_U).    (17.6)

Figure 17.4: Compositing operations, and the multipliers for each, to be used with colors premultiplied by α (following Porter and Duff).
Inline Exercise 17.2: Explain why, in the dissolve operation, we had to multiply the "rgb" values by s, even though we were merely altering the opacity of the pixel.

The dissolve operation can be used to create a transition from one image to another:

blend(U, V, s) = dissolve(U, 1 − s) + dissolve(V, s),    (17.7)

where component-by-component addition is indicated by the + sign, and the parameter s varies from 0 (a pure-U result) to 1 (a pure-V result).
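The three operations above can be sketched in a few lines. This is our illustration, not the book's code, operating on premultiplied (r, g, b, α) pixels per Equations 17.5 through 17.7:

```python
# Sketch (not from the book) of Eqs. 17.5-17.7 on premultiplied pixels.

def darken(p, s):
    # Eq. 17.5: scale the color components, keep the coverage alpha.
    r, g, b, a = p
    return (s * r, s * g, s * b, a)

def dissolve(p, s):
    # Eq. 17.6: scale all four premultiplied components uniformly.
    return tuple(s * c for c in p)

def blend(p_u, p_v, s):
    # Eq. 17.7: component-by-component sum of two dissolves.
    return tuple(cu + cv for cu, cv in zip(dissolve(p_u, 1 - s),
                                           dissolve(p_v, s)))

u = (0.8, 0.0, 0.0, 0.8)  # mostly opaque red (premultiplied)
v = (0.0, 0.0, 0.5, 0.5)  # half-covered blue
print(blend(u, v, 0.0))   # s = 0 reproduces U exactly
print(blend(u, v, 1.0))   # s = 1 reproduces V exactly
```

Sweeping s from 0 to 1 over successive frames yields the familiar cross-fade between the two images.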
17.4.5 Physical Units and Compositing

We've spoken about blending light "intensity" using α-values. This has really been a proxy for the idea of blending radiance values (discussed in Chapter 26), which are the values that represent the measurement of light in terms of energy. If, instead, our pixels' red values are simply numbers between 0 and 255 that represent a range from "no red at all" to "as much red as we can represent," then combining them with linear operations is meaningless. Worse still, if they do not correspond to radiance, but to some power of radiance (e.g., its square root), then linear combinations definitely produce the wrong results. Nonetheless, image composition using pixel values directly, whatever they might mean, was done for many years; once again, it's a testament to the visual system's adaptivity that we found the results so convincing. When people began to composite real-world imagery and computer-generated imagery together, however, some problems became apparent; the industry standard is now to do compositing "in linear space," that is, representing color values by something that's a scalar multiple of a physically meaningful and linearly combinable unit [Rob].
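The square-root example above can be made concrete. In this sketch (ours, not the book's), stored pixel values are assumed to be the square root of radiance; blending the stored values directly gives a different, darker result than blending in linear (radiance) space and re-encoding.

```python
# Sketch (not from the book): a 50/50 blend of white and black, when
# the stored value is the square root of radiance.

def blend_encoded(e_u, e_v, s=0.5):
    # Wrong: linearly blend the encoded (sqrt-of-radiance) values.
    return (1 - s) * e_u + s * e_v

def blend_linear(e_u, e_v, s=0.5):
    # Right: decode to radiance, blend there, then re-encode.
    radiance = (1 - s) * e_u ** 2 + s * e_v ** 2
    return radiance ** 0.5

print(blend_encoded(1.0, 0.0))  # 0.5, which decodes to radiance 0.25
print(blend_linear(1.0, 0.0))   # sqrt(0.5), i.e., radiance 0.5 as desired
```

The naive blend produces only a quarter of the intended radiance, which is exactly the kind of error that becomes visible when real and synthetic imagery meet in one frame.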
Inline Exercise 17.3: Explain why, if α_U and α_V are both between zero and one, the resultant α-value will be as well, so that the resultant pixel is meaningful.

Image operations like these, and their generalizations, are the foundation of image editing programs like Adobe Photoshop [Wik].
17.4.4.1 Problems with Premultiplied Alpha

Suppose you wrote a compositing program that converted ordinary RGBA images into premultiplied-α images internally, performed compositing operations, and then wrote out the images after conversion back to unpremultiplied-α images. What would happen if someone used your program to operate
