Computer Vision: A Modern Approach

David A. Forsyth
University of California at Berkeley

Jean Ponce
University of Illinois at Urbana-Champaign

An Alan R. Apt Book

Prentice Hall
Upper Saddle River, New Jersey 07458
Library of Congress Cataloging-in-Publication Data

CIP data on file.

Vice President and Editorial Director, ECS: Marcia J. Horton
Publisher: Alan Apt
Associate Editor: Toni D. Holm
Editorial Assistant: Patrick Lindner
Vice President and Director of Production and Manufacturing, ESM: David W. Riccardi
Executive Managing Editor: Vince O'Brien
Assistant Managing Editor: Camille Trentacoste
Production Editor: Leslie Galen
Director of Creative Services: Paul Belfanti
Creative Director: Carole Anson
Art Director: Wanda Espana
Art Editor: Greg Dulles
Manufacturing Manager: Trudy Pisciotti
Manufacturing Buyer: Lynda Castillo
Marketing Manager: Pamela Shaffer
Marketing Assistant: Barrie Reinhold

About the cover: Image courtesy of Gamma/Superstock

© 2003 by Pearson Education, Inc.
Pearson Education, Inc.
Upper Saddle River, NJ 07458

All rights reserved. No part of this book may be reproduced, in any form or by any means, without permission in writing from the publisher.
The author and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The author and publisher make no warranty of any kind, expressed or implied, with regard to these programs or the documentation contained in this book. The author and publisher shall not be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing, performance, or use of these programs.

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1
ISBN 0-13-085198-1

Pearson Education Ltd., London
Pearson Education Australia Pty. Ltd., Sydney
Pearson Education Singapore, Pte. Ltd.
Pearson Education North Asia Ltd., Hong Kong
Pearson Education Canada, Inc., Toronto
Pearson Educación de México, S.A. de C.V.
Pearson Education-Japan, Tokyo
Pearson Education Malaysia, Pte. Ltd.
Pearson Education, Upper Saddle River, New Jersey
Sources, Shadows, and Shading

5.4 APPLICATION: PHOTOMETRIC STEREO

Photometric stereo is a method for recovering a representation of a surface from several different images of the surface in a fixed view illuminated by different sources. This method recovers the height of the surface at points corresponding to each pixel; in computer vision circles, the resulting representation is often known as a height map, depth map, or dense depth map.

Fix the camera and the surface in position and illuminate the surface using a point source that is far away compared with the size of the surface. We adopt a local shading model and assume that there is no ambient illumination (more about this later) so that the radiosity at a point P on the surface is

$$B(P) = \rho(P)\, N(P) \cdot S_1,$$
where N is the unit surface normal and S_1 is the source vector. With our camera model, there is only one point P on the surface for each point (x, y) in the image, and we can write B(x, y) for B(P). Now we assume that the response of the camera is linear in the surface radiosity, so the value of a pixel at (x, y) is

$$I(x, y) = k B(x, y) = k \rho(x, y)\, N(x, y) \cdot S_1 = g(x, y) \cdot V_1,$$

where k is the constant connecting the camera response to the input radiance, g(x, y) = \rho(x, y) N(x, y), and V_1 = k S_1.

In these equations, g(x, y) describes the surface and V_1 is a property of the illumination and of the camera. We have a dot product between a vector field g(x, y) and a vector V_1, which could be measured; with enough of these dot products, we could reconstruct g and so the surface.
5.4.1 Normal and Albedo from Many Views

Now if we have n sources, for each of which V_i is known, we stack each of these V_i into a known matrix V, where

$$\mathcal{V} = \begin{pmatrix} V_1^T \\ V_2^T \\ \vdots \\ V_n^T \end{pmatrix}.$$

For each image point, we stack the measurements into a vector

$$i(x, y) = \{I_1(x, y), I_2(x, y), \dots, I_n(x, y)\}^T.$$

Notice that we have one vector per image point; each vector contains all the image brightnesses observed at that point for different sources. Now we have

$$i(x, y) = \mathcal{V} g(x, y),$$

and g is obtained by solving this linear system-or rather, one linear system per point in the image. Typically, n > 3 so that a least squares solution is appropriate. This has the advantage that the residual error in the solution provides a check on our measurements.
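For illustration, a minimal NumPy sketch of this per-pixel least-squares solve (the source vectors and pixel values here are invented; the returned residual provides the measurement check just mentioned):

import numpy as np

# n = 5 known source vectors V_i, stacked as the rows of the matrix V.
V = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 1.0],
              [0.0, 0.5, 1.0],
              [-0.5, 0.0, 1.0],
              [0.0, -0.5, 1.0]])

i = np.array([0.9, 1.2, 1.0, 0.5, 0.8])  # brightnesses at one pixel

# Least-squares solve of i = V g; the residual checks the measurements.
g, residual, *_ = np.linalg.lstsq(V, i, rcond=None)
print(g, residual)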
The difficulty with this approach is that substantial regions of the surface may be in shadow for one or the other light (see Figure 5.10). There is a simple trick that deals with shadows. If there really is no ambient illumination, then we can form a matrix from the image vector and multiply both sides by this matrix; this zeroes out any equations from points that are in shadow.
Figure 5.10 Five synthetic images of a sphere, all obtained in an orthographic view from the same viewing position. These images are shaded using a local shading model and a distant point source. This is a convex object, so the only view where there is no visible shadow occurs when the source direction is parallel to the viewing direction. The variations in brightness occurring under different sources code the shape of the surface.
We form

$$\mathcal{I}(x, y) = \begin{pmatrix} I_1(x, y) & 0 & \cdots & 0 \\ 0 & I_2(x, y) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & I_n(x, y) \end{pmatrix}$$

and

$$\mathcal{I}\, i = \mathcal{I} \mathcal{V} g(x, y),$$

and I has the effect of zeroing the contributions from shadowed regions, because the relevant elements of the matrix are zero at points that are in shadow. Again, there is one linear system per point in the image; at each point, we solve this linear system to recover the g vector at that point.
Measuring Albedo  We can extract the albedo from a measurement of g because N is the unit normal. This means that |g(x, y)| = ρ(x, y). This provides a check on our measurements as well. Because the albedo is in the range zero to one, any pixels where |g| is greater than one are suspect-either the pixel is not working or V is incorrect. Figure 5.11 shows albedo recovered using this method for the images of Figure 5.10.
Recovering Normals  We can extract the surface normal from g because the normal is a unit vector:

$$N(x, y) = \frac{1}{|g(x, y)|}\, g(x, y).$$

Figure 5.12 shows normal values recovered for the images of Figure 5.10.
Figure 5.11 The magnitude of the vector field g(x, y) recovered from the input data of Figure 5.10 represented as an image-this is the reflectance of the surface.
5.4.2 Shape from Normals

The surface is (x, y, f(x, y)), so the normal as a function of (x, y) is

$$N(x, y) = \frac{1}{\sqrt{1 + \left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2}} \left\{ \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, 1 \right\}^T.$$
To recover the depth map, we need to determine f(x, y) from measured values of the unit normal. Assume that the measured value of the unit normal at some point (x, y) is (a(x, y), b(x, y), c(x, y)). Then

$$\frac{\partial f}{\partial x} = \frac{a(x, y)}{c(x, y)} \quad \text{and} \quad \frac{\partial f}{\partial y} = \frac{b(x, y)}{c(x, y)}.$$
Figure 5.12 The normal field recovered from the input data of Figure 5.10.
We have another check on our data set, because

$$\frac{\partial^2 f}{\partial x \partial y} = \frac{\partial^2 f}{\partial y \partial x},$$

so we expect that

$$\frac{\partial}{\partial x}\left(\frac{b(x, y)}{c(x, y)}\right) - \frac{\partial}{\partial y}\left(\frac{a(x, y)}{c(x, y)}\right)$$

should be small at each point. In principle it should be zero, but we would have to estimate these partial derivatives numerically and so should be willing to accept small values. This test is known as a test of integrability, which in vision applications always boils down to checking that mixed second partials are equal.
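A small numerical version of this test might look like the following sketch (the finite-difference approximation via np.gradient is our own choice):

import numpy as np

def integrability_error(a, b, c):
    """Mixed-partial check: d/dx (b/c) - d/dy (a/c) should be near zero.

    a, b, c: 2D arrays holding the components of the measured unit normal.
    Returns the squared error at each point.
    """
    p = a / c                       # measured df/dx
    q = b / c                       # measured df/dy
    dq_dx = np.gradient(q, axis=1)  # x varies along columns
    dp_dy = np.gradient(p, axis=0)  # y varies along rows
    return (dq_dx - dp_dy) ** 2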
Algorithm 5.1: Photometric Stereo

Obtain many images in a fixed view under different illuminants
Determine the matrix V from source and camera information
Create arrays for albedo, normal (3 components),
    p (measured value of ∂f/∂x) and
    q (measured value of ∂f/∂y)
For each point in the image array
    Stack image values into a vector i
    Construct the diagonal matrix I
    Solve I V g = I i to obtain g for this point
    Albedo at this point is |g|
    Normal at this point is g/|g|
    p at this point is N_1/N_3
    q at this point is N_2/N_3
end
Check: is (∂p/∂y - ∂q/∂x)^2 small everywhere?
Top left corner of height map is zero
For each pixel in the left column of height map
    height value = previous height value + corresponding q value
end
For each row
    For each element of the row except for leftmost
        height value = previous height value + corresponding p value
    end
end
Shape by Integration  Assuming that the partial derivatives pass this sanity test, we can reconstruct the surface up to some constant depth error. The partial derivative gives the change in surface height with a small step in either the x or the y direction. This means we can get the surface by summing these changes in height along some path through the image, as the final loops of Algorithm 5.1 do.
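To make the procedure concrete, here is a compact NumPy sketch of Algorithm 5.1 (function and array names are our own; it assumes linear images and known source vectors, and uses the shadow-weighting trick described earlier):

import numpy as np

def photometric_stereo(images, sources):
    """Recover albedo and unit normals per pixel (Algorithm 5.1).

    images:  (n, h, w) array of linear image intensities, one per source.
    sources: (n, 3) array of the known vectors V_i.
    Shadows are handled by weighting each equation by its own image
    value, which is the diagonal matrix I from the text.
    """
    n, h, w = images.shape
    g = np.zeros((h, w, 3))
    for y in range(h):
        for x in range(w):
            i = images[:, y, x]
            weighted_V = i[:, None] * sources   # I V
            weighted_i = i * i                  # I i
            g[y, x], *_ = np.linalg.lstsq(weighted_V, weighted_i, rcond=None)
    albedo = np.linalg.norm(g, axis=2)
    normal = g / np.maximum(albedo[..., None], 1e-12)
    return albedo, normal

def integrate_normals(normal):
    """Height map by summing p and q along the paths in Algorithm 5.1."""
    p = normal[..., 0] / normal[..., 2]         # measured df/dx
    q = normal[..., 1] / normal[..., 2]         # measured df/dy
    height = np.zeros(p.shape)
    height[1:, 0] = np.cumsum(q[1:, 0])         # down the left column
    height[:, 1:] = height[:, :1] + np.cumsum(p[:, 1:], axis=1)  # across rows
    return height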
Color
Figure 6.11 The linear model of the color system allows a variety of useful constructions. If we have two lights whose CIE coordinates are A and B, all the colors that can be obtained from non-negative mixtures of these lights are represented by the line segment joining A and B. In turn, given B, C, and D, the colors that can be obtained by mixing them lie in the triangle formed by the three points. This is important in the design of monitors-each monitor has only three phosphors, and the more saturated the color of each phosphor, the bigger the set of colors that can be displayed. This also explains why the same colors can look quite different on different monitors. The curvature of the spectral locus gives the reason that no set of three real primaries can display all colors without subtractive matching.
that is, blue. Notice that W + W = W because we assume that ink cannot cause paper to reflect more light than it does when uninked. Practical printing devices use at least four inks (cyan, magenta, yellow, and black) because mixing color inks leads to a poor black, it is difficult to ensure good enough registration between the three color inks to avoid colored haloes around text, and color inks tend to be more expensive than black inks. Getting really good results from a color printing process is still difficult: Different inks have significantly different spectral properties, different papers have different spectral properties too, and inks can mix nonlinearly.
6.3.2 Non-linear Color Spaces

The coordinates of a color in a linear space may not necessarily encode properties that are common in language or are important in applications. Useful color terms include: hue-the property of a color that varies in passing from red to green; saturation-the property of a color that varies in passing from red to pink; and brightness (sometimes called lightness or value)-the property that varies in passing from black to white. For example, if we are interested in checking whether a color lies in a particular range of reds, we might wish to encode the hue of the color directly.

Another difficulty with linear color spaces is that the individual coordinates do not capture human intuitions about the topology of colors; it is a common intuition that hues form a circle, in the sense that hue changes from red through orange to yellow and then green and from there to cyan, blue, purple, and then red again. Another way to think of this is to think of local hue relations: Red is next to purple and orange; orange is next to red and yellow;
Figure 6.12 On the left, we see the RGB cube; this is the space of all colors that can be obtained by combining three primaries (R, G, and B-usually defined by the color response of a monitor) with weights between zero and one. It is common to view this cube along its neutral axis-the axis from the origin to the point (1, 1, 1)-to see a hexagon, shown in the middle. This hexagon codes hue (the property that changes as a color is changed from green to red) as an angle, which is intuitively satisfying. On the right, we see a cone obtained from this cross-section, where the distance along a generator of the cone gives the value (or brightness) of the color, angle around the cone gives the hue, and distance out gives the saturation of the color.
yellow is next to orange and green; green is next to yellow and cyan; cyan is next to green and blue; blue is next to cyan and purple; and purple is next to blue and red. Each of these local relations works, and globally they can be modeled by laying hues out in a circle. This means that no individual coordinate of a linear color space can model hue, because that coordinate has a maximum value that is far away from the minimum value.
Hue, Saturation, and Value  A standard method for dealing with this problem is to construct a color space that reflects these relations by applying a nonlinear transformation to the RGB space. There are many such spaces. One, called HSV space (for hue, saturation, and value), is obtained by looking down the center axis of the RGB cube. Because RGB is a linear space, brightness-called value in HSV-varies with scale out from the origin. We can flatten the RGB cube to get a 2D space of constant value and, for neatness, deform it to be a hexagon. This gives the structure shown in Figure 6.12, where hue is given by an angle that changes as one goes round the neutral point and saturation changes as one moves away from the neutral point.
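For concreteness, a sketch of the usual hexcone RGB-to-HSV conversion consistent with the construction above (this is the standard formulation, not code from the book):

def rgb_to_hsv(r, g, b):
    """Hexcone HSV from RGB values in [0, 1]: value is the largest
    channel, saturation is chroma over value, hue is the hexagon angle."""
    v = max(r, g, b)
    c = v - min(r, g, b)                # chroma
    s = 0.0 if v == 0 else c / v
    if c == 0:
        h = 0.0                         # hue undefined on the neutral axis
    elif v == r:
        h = ((g - b) / c) % 6.0
    elif v == g:
        h = (b - r) / c + 2.0
    else:
        h = (r - g) / c + 4.0
    return 60.0 * h, s, v               # hue as an angle in degrees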
There are a variety of other possible changes of coordinate between linear color spaces, or from linear to nonlinear color spaces (the recent book of Fairchild, 1998 is a good reference). There is no obvious advantage to using one set of coordinates over another (particularly if the difference between coordinate systems is just a one-one transformation) unless one is concerned with coding, bit rates, and the like, or with perceptual uniformity.
Uniform Color Spaces  Usually one cannot reproduce colors exactly. This means it is important to know whether a color difference would be noticeable to a human viewer; it is generally useful to compare the significance of small color differences. It is usually dangerous to try and compare large color differences; consider trying to answer the question, "Is the blue patch more different from the yellow patch than the red patch is from the green patch?"
One can determine just noticeable differences by modifying a color shown to an observer until they can only just tell it has changed in a comparison with the original color. When these differences are plotted on a color space, they form the boundary of a region of colors that are indistinguishable from the original colors. Usually ellipses are fitted to the just noticeable differences. It turns out that in CIE xy space these ellipses depend quite strongly on where in the space the difference occurs, as the MacAdam ellipses in Figure 6.13 illustrate.

This means that the size of a difference in (x, y) coordinates, given by $\sqrt{(\Delta x)^2 + (\Delta y)^2}$, is a poor indicator of the significance of a difference in color (if it were a good indicator, the ellipses representing indistinguishable colors would be circles). A uniform color space is one in which the distance in coordinate space is a fair guide to the significance of the difference between two colors-in such a space, if the distance in coordinate space were below some threshold, a human observer would not be able to tell the colors apart.
A more uniform space can be obtained from CIE XYZ by using a projective transformation to skew the ellipses; this yields the CIE u'v' space, illustrated in Figure 6.14. The coordinates are:

$$(u', v') = \left( \frac{4X}{X + 15Y + 3Z},\ \frac{9Y}{X + 15Y + 3Z} \right).$$

Generally, the distance between coordinates in u', v' space is a fair indicator of the significance of the difference between two colors. Of course, this omits differences in brightness.
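A direct transcription of these coordinates as a sketch (function names are our own):

def xyz_to_u_v_prime(X, Y, Z):
    """CIE 1976 u'v' chromaticity from tristimulus values."""
    d = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / d, 9.0 * Y / d

def chromaticity_distance(c1, c2):
    """Euclidean distance in (u', v'); a fair indicator of how
    noticeable a chromaticity difference is (brightness is omitted)."""
    return ((c1[0] - c2[0]) ** 2 + (c1[1] - c2[1]) ** 2) ** 0.5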
Figure 6.13 This figure shows variations in color matches on a CIE x, y space. At the center of the ellipse is the color of a test light; the size of the ellipse represents the scatter of lights that the human observers tested would match to the test color; the boundary shows where the just noticeable difference is. The ellipses in the figure on the left have been magnified 10x for clarity; on the right they are plotted to scale. The ellipses are known as MacAdam ellipses after their inventor. Notice that the ellipses at the top are larger than those at the bottom of the figure, and that they rotate as they move up. This means that the magnitude of the difference in x, y coordinates is a poor guide to the difference in color. Ellipses are plotted using data from MacAdam (1942).
Most significant are assimilation-where surrounding colors cause the color reported for a surface patch to move toward the color of the surrounding patch-and contrast-where surrounding colors cause the color reported for a surface patch to move away from the color of the surrounding patch. These effects appear to be related to coding issues within the optic nerve and to color constancy (Section 6.5).
6.4 A MODEL FOR IMAGE COLOR

To interpret the color values reported by a camera, we need some understanding of what cameras do and of what physical effects we wish to model. Our model supports several quite simple and powerful inference algorithms.
6.4.1 Cameras

Most color cameras contain a single imaging device. At each sensory element, there is one of three filters to give it the desired spectral sensitivity function (roughly, red, green, and blue). These filters are arranged in a mosaic; a variety of different patterns is used. The output of the CCD is then processed to reconstruct full red, green, and blue images. Of course, some signal information is lost in this process; what is lost depends on the details of the camera and of the mosaic. Typically, these losses do not affect the perceived quality of the image to a viewer, but they can significantly affect various spatial computations-for example, the ability of an edge detector to localize an edge (because the spatial resolution in intensity may not be what it seems).

It is desirable to have the system of CCD element and filter have a spectral sensitivity that is within a linear transform of the CIE XYZ color matching functions. This property would mean that the camera would match colors in the same way that people do. The requirement seems to be quite hard to meet in practice because experimental evidence suggests that most cameras don't really have this desirable property.
CCDs are intrinsically linear devices. However, most users are used to film, which tends to compress the incoming dynamic range (brightness differences at the top end of the range are reduced, as are those at the bottom end of the range). The output of a linear device tends to look too harsh (the darks are too dark and the lights are too light), so manufacturers apply various forms of compression to the output. The most common is called gamma correction. This is a form of compression originally intended to account for nonlinearities within monitors. Typically, the intensity of a monitor goes as $V_{in}^{\gamma}$, where $V_{in}$ is the input voltage at the electron gun ($\gamma = 2.2$ for CRT monitors). In most computer display devices, the voltage supplied to the electron gun is a linear function of the value in the framebuffer. This means that, if we desire an intensity I, the value to use in the framebuffer is proportional to $I^{1/\gamma}$. Typically, cameras are gamma corrected, meaning that one can take the value reported by a camera and put it directly in the framebuffer to get the right intensity. This means that the output of the camera is not a linear function of the input.
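A sketch of both directions of this correction, assuming the simple power-law model above (real cameras may need a calibrated curve instead):

import numpy as np

GAMMA = 2.2  # typical CRT exponent, as in the text

def framebuffer_value(intensity):
    """Value to write to the framebuffer so a monitor whose output
    goes as V_in**GAMMA displays the desired intensity I."""
    return np.clip(intensity, 0.0, 1.0) ** (1.0 / GAMMA)

def linearize(camera_value):
    """Undo a camera's gamma correction to recover values that are
    linear in the input radiance."""
    return np.clip(camera_value, 0.0, 1.0) ** GAMMA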
There was a time when CCD cameras came with a separate box of control electronics, which had a switch that turned off the nonlinearities; those happy days are now past. Typically, the input-output relationship of a camera needs to be calibrated because there are a variety of possible nonlinearities (e.g., Barnard and Funt, 2002, Holst, 1998, or Vora, Farell, Tietz and Brainard, 1997). Extremists occasionally tinker with the camera electronics, a solution not for the faint of heart. In what follows, we assume that the input-output relationship of the camera is known and has been accounted for in a way that makes the camera seem linear.
Figure 6.15 If a patch of perfectly diffuse surface with diffuse spectral reflectance ρ(λ) is illuminated by a light whose spectrum is E(λ), the spectrum of the reflected light is ρ(λ)E(λ) (multiplied by some constant to do with surface orientation, which we have already decided to ignore). Thus, if a linear photoreceptor of the kth type sees this surface patch, its response is $p_k = \int_\Lambda \sigma_k(\lambda)\rho(\lambda)E(\lambda)\,d\lambda$, where Λ is the range of all relevant wavelengths and $\sigma_k(\lambda)$ is the sensitivity of the kth photoreceptor.
6.4.2 A Model for Image Color

The color of light arriving at a camera is determined by two factors: first, the spectral reflectance of the surface that the light is leaving; and second, the spectral radiance of the light falling on that surface. If a patch of perfectly diffuse surface with diffuse spectral reflectance ρ(λ) is illuminated by a light whose spectrum is E(λ), the spectrum of the reflected light is ρ(λ)E(λ) (multiplied by some constant to do with surface orientation, which we have already decided to ignore). Thus, if a linear photoreceptor of the kth type sees this surface patch, its response is:

$$p_k = \int_\Lambda \sigma_k(\lambda)\,\rho(\lambda)\,E(\lambda)\,d\lambda,$$

where Λ is the range of all relevant wavelengths and $\sigma_k(\lambda)$ is the sensitivity of the kth photoreceptor.
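This response is easy to approximate numerically; the following sketch uses sampled spectra on a wavelength grid (the example spectra are made up for illustration):

import numpy as np

def receptor_response(sensitivity, reflectance, illuminant, wavelengths):
    """p_k = integral of sigma_k * rho * E over wavelength,
    approximated on a sampled grid by the trapezoid rule."""
    return np.trapz(sensitivity * reflectance * illuminant, wavelengths)

# Example on a 400-700 nm grid with invented smooth spectra.
lam = np.linspace(400.0, 700.0, 61)
rho = 0.5 + 0.4 * np.exp(-((lam - 650.0) / 40.0) ** 2)   # reddish surface
E = np.ones_like(lam)                                    # flat illuminant
sigma = np.exp(-((lam - 600.0) / 50.0) ** 2)             # long-wave receptor
print(receptor_response(sigma, rho, E, lam))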
The color of the light falling on surfaces can vary widely-from blue fluorescent light indoors, to warm orange tungsten lights, to orange or even red light at sunset-so that the color of the light arriving at the camera can be quite a poor representation of the color of the surfaces being viewed (Figures 6.16, 6.17, 6.18, and 6.19).

By suppressing details in the physical models of Chapter 5, we can model the value at a camera pixel as

$$C(x) = g_d(x)\, d(x) + g_s(x)\, s(x) + i(x).$$

In this model,

• d(x) is the image color of an equivalent flat frontal surface viewed under the same light;
• g_d(x) is a term that varies over space and accounts for the change in brightness due to the orientation of the surface;
• s(x) is the image color of the specular reflection from an equivalent flat frontal surface;
• g_s(x) is a term that varies over space and accounts for the change in the amount of energy specularly reflected;
Figure 6.16 Light sources can have quite widely varying colors. This figure shows the color of the four light sources of Figure 6.2 (metal halide, standard fluorescent, moon white fluorescent, and daylight fluorescent), compared with the color of a uniform spectral power distribution, plotted in CIE x, y coordinates.
Figure 6.17 Surfaces have significantly different colors when viewed under different lights. These figures show the colors taken on by the blue and violet flowers of Figure 6.3 when viewed under the four different sources of Figure 6.2 and under a uniform spectral power distribution.
Figure 6.18 Surfaces have significantly different colors when viewed under different lights. These figures show the colors taken on by the yellow and orange flowers of Figure 6.3 when viewed under the four different sources of Figure 6.2 and under a uniform spectral power distribution.
Figure 6.19 Surfaces have significantly different colors when viewed under different lights. These figures show the colors taken on by the white petal of Figure 6.3 and one of the leaves of Figure 6.4 when viewed under the four different sources of Figure 6.2 and under a uniform spectral power distribution.
• and i(x) is a term that accounts for colored interreflections, spatial changes in illumination, and the like.

We are primarily interested in information that can be extracted from color at a local level, and so we are ignoring the detailed structure of the terms g_d(x) and i(x). Nothing is known about how to extract information from i(x); all evidence suggests that this is difficult. The term can sometimes be quite small with respect to other terms and usually changes quite slowly over space.
We ignore this term, and so must assume that it is small (or that its presence does not disrupt our algorithms too severely). However, specularities are small and bright, and can be found.
6.4.3 Application: Finding Specularities

Specularities can have strong effects on an object's appearance. Typically, they appear as small, bright patches, called highlights. Highlights have a substantial effect on human perception of surface properties; the addition of small, highlight-like patches to a figure makes the object depicted look glossy or shiny. Specularities are often sufficiently bright to saturate the camera, so that the color can be hard to measure. However, because the appearance of a specularity is quite strongly constrained, there are a number of effective schemes for marking them, and the results can be used as a shape cue.

The dynamic range of practically available albedoes is relatively small. Surfaces with very high or very low albedo are difficult to make. Uniform illumination is common too, and most cameras are reasonably close to linear within their operating range. This means that very bright patches cannot be due to diffuse reflection; they must be either sources (of one form or another-perhaps a stained glass window with the light behind it) or specularities. Furthermore, specularities tend to be small. Thus, looking for small, bright patches can be an effective way to find specularities (Brelstaff and Blake, 1988).
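A sketch of this idea (the brightness threshold and the small-patch filter are our own assumptions, not the method of Brelstaff and Blake):

import numpy as np
from scipy import ndimage

def find_specular_candidates(image, brightness_frac=0.95, max_area=100):
    """Mark small, very bright patches as candidate specularities.

    image: 2D array of linear intensities.
    """
    bright = image > brightness_frac * image.max()
    labels, n = ndimage.label(bright)         # connected bright patches
    mask = np.zeros_like(bright)
    for region in range(1, n + 1):
        patch = labels == region
        if patch.sum() <= max_area:           # keep only small patches
            mask |= patch
    return mask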
In color images, specularities on dielectric and conductive materials often look quite different. This link to conductivity occurs because electric fields cannot penetrate conductors (the electrons inside just move around to cancel the field), so that light striking a metal surface can be either absorbed or specularly reflected. Dull metal surfaces look dull because of surface roughness effects, and shiny metal surfaces have shiny patches that have a characteristic color because the conductor absorbs energy in different amounts at different wavelengths. However, light striking a dielectric surface can penetrate it.
Many dielectric surfaces can be modeled as a clear matrix with randomly embedded pigments; this is a particularly good model for plastics and some paints. In this model, there are two components of reflection that correspond to our specular and diffuse notions: body reflection, which comes from light penetrating the matrix, striking various pigments, and then leaving; and surface reflection, which comes from light specularly reflected from the surface. Assuming the pigment is randomly distributed (small, not on the surface, etc.) and the matrix is reasonable, the body reflection component behaves like a diffuse component with a spectral albedo that depends on the pigment, and the surface component is independent of wavelength.
Assume we are looking at a single dielectric object with a single color. We expect that the interreflection term can be ignored, and our model of camera pixel brightnesses becomes

$$p(x) = g_d(x)\, d + g_s(x)\, s,$$

where s is the color of the source and d is the color of the diffuse reflected light, g_d(x) is a geometric term that depends on the orientation of the surface, and g_s(x) is a term that gives the extent of the specular reflection. If the object is curved, then g_s(x) is small over much of the surface and large only around specularities; g_d(x) varies more slowly with the orientation of the surface. We now map the colors produced by this surface in receptor response space and look at the structures that appear there (Figure 6.20).
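One way to probe this structure in receptor response space, as a hedged sketch (the uncentered line fit and the threshold are our own choices, not an algorithm from the book):

import numpy as np

def diffuse_line_direction(pixels):
    """Fit the dominant line through pixel colors in receptor space.

    pixels: (n, 3) array of RGB responses from one uniformly colored object.
    For a matte, curved object, g_d(x) d sweeps out a line through the
    origin, so the principal direction approximates the diffuse color d.
    """
    # Principal direction of the (uncentered) color cloud.
    _, _, vt = np.linalg.svd(pixels, full_matrices=False)
    return vt[0]

def specular_candidates(pixels, threshold=0.05):
    """Flag pixels far from the diffuse line; on a curved dielectric
    these come from the g_s(x) s specular component."""
    d = diffuse_line_direction(pixels)
    proj = pixels @ d
    residual = pixels - np.outer(proj, d)
    return np.linalg.norm(residual, axis=1) > threshold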
The term g_d(x) d produces a line that should extend to pass through the origin, because it represents the same vector of receptor responses multiplied by a constant that varies over space. If there is a specularity, then we expect to see a second line due to g_s(x) s. This does not, in general, pass through the origin (because of the diffuse term). This is a line, rather than a planar region, because g_s(x) is large over only a small range of surface normals. We expect that, because the surface is curved, this corresponds to a small region of surface. The term g_d(x)
Figure 6.20 Assume we have a picture of a single uniformly colored surface. Our model of reflected light should lead to a gamut that looks like the drawing. [The drawing shows, in R, G, B receptor space, a line of diffuse components and a second line, labeled S, in the illuminant color.]