17.4 Geometric Transformations of Images
/* Expand the width of an image by p/q. */
void WeimanExpansion(
    const grayscalePixmap source,       /* Source image, size n x k */
    grayscalePixmap target,             /* Target image, width at least k * p/q */
    int n, int k,                       /* Size of source image */
    int p, int q)                       /* Scale factor is p/q */
{
    char roth[MAX];                     /* The array must hold at least p items. */
    int i, j, s;                        /* Loop indices */

    /* Source image is n x k; target image is to be n x ceil(k * p/q).3 */
    int targetWidth = ceil(k * p / (double) q);

    Rothstein(roth, p, q);              /* Store the Rothstein code for p/q in array roth. */
    SetToBlank(target, n, targetWidth); /* Clear the target array. */
    for (i = 0; i < p; i++) {           /* For several passes through the algorithm... */
        int sourceCol = 0;
        Permute(roth);                  /* Apply cyclic permutation to Rothstein code. */
        for (j = 0; j < targetWidth; j++) {     /* For each column of the target */
            if (roth[j] == 1) {                 /* If code says to copy source column */
                for (s = 0; s < n; s++)         /* Copy all the pixels. */
                    target[s][j] += source[s][sourceCol];
                sourceCol++;                    /* Go to next column. */
            }
        }
    }
    /* Divide by q to compensate for adding each source column to target q times. */
    for (i = 0; i < n; i++)
        for (j = 0; j < targetWidth; j++)
            target[i][j] = rint(target[i][j] / (double) q);
}   /* WeimanExpansion */
`
`Fig. 17.5 The Weiman algorithm for expanding an image.
`
perform the identity transformation on an image, it blurs the values. In Section 17.5.3, we
discuss this and other drawbacks of transformation algorithms.4
At this point, it is worthwhile to separate two aspects of image transformation. The first
is computing which point in the source image is mapped to the center of the pixel in the
`
`"The ceiling of a number is the smallest integer greater than or equal to the number; ceiling(l.6) = 2.
`ceiling( I. I) = 2, and ceiling(6.0) = 6.
`'Feibush, Levoy and Cook note that any filter can be used, but describe the algorithm in terms of a
`filter of diameter 2. The algorithm generally performs better wit.h this filter than it does with a
`unit-area box filter.
`
`
target image.

Fig. 17.6 The relationship of the source space, source image, target space, and target
image in the Feibush-Levoy-Cook algorithm.

The second is computing the value for the pixel in the target image. The first
`task is merely algebraic, in that it involves computing values (and inverse values) of a
`transformation. This may be done efficiently by various incremental methods. The second
`task also has numerous solutions, all of which involve choosing some filtering function to
`apply to the original image. The method described in the next few pages assumes a filtering
`function that is circularly symmetric and has a modest size (i.e., is nonzero only on a small
`part of the plane).
`The algorithm starts with a source image (thought of as lying in one copy of the
Euclidean plane, called the source space), a projective map5 from another copy of the
`Euclidean plane (the target space) to the source space, and a polygonal region in the target
`space. The target image is the collection of pixels in the target space that are near the
`polygonal region, and it is these pixels whose values need to be assigned (see Fig. 17.6).
`Note that the projective map here goes from target to source, the reverse of the usual
`naming convention for mathematical functions.
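
As a concrete sketch of how such a target-to-source map is applied per point (the structure
and function names below are illustrative, not part of the algorithm as published), the 3 x 3
homogeneous matrix of Chapter 5 needs only a few multiplications and one divide:

/* A minimal sketch, assuming the 3 x 3 homogeneous matrix of Chapter 5,
   stored row-major, maps target space to source space. */
typedef struct { double m[3][3]; } ProjectiveMap;

/* Map the target-space point (tx, ty) to its source-space image (sx, sy). */
static void ApplyProjective(const ProjectiveMap *T, double tx, double ty,
                            double *sx, double *sy)
{
    /* Treat (tx, ty) as the homogeneous point (tx, ty, 1). */
    double x = T->m[0][0] * tx + T->m[0][1] * ty + T->m[0][2];
    double y = T->m[1][0] * tx + T->m[1][1] * ty + T->m[1][2];
    double w = T->m[2][0] * tx + T->m[2][1] * ty + T->m[2][2];
    *sx = x / w;    /* divide by w to return to ordinary coordinates */
    *sy = y / w;
}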
`To start, we choose a symmetric filter function that is nonzero only for (x, y) very close
`to (0, 0) (perhaps within a 2-pixel distance). The support of this filter function is the set of
`points on which it is nonzero. We take a copy of the bounding rectangle for the support of
`the filter and translate it to each pixel in the target space. Whenever this rectangle intersects
`the target polygon, the pixel is considered to be in the target image. This translated
`rectangle is called the bounding rectangle for the target pixel, and the translated support of
`the filter function is called the pixel's convolution mask (see Fig. 17.7).
`The vertices of the target polygon are transformed to source space just once, for
`repeated use. The resulting polygon is called the source polygon. The bounding rectangle of
`each target pixel is transformed to the source space, where it becomes a quadrilateral. A
`bounding rectangle for this quadrilateral is computed, then is clipped by the source polygon
`(because clipping a rectangle by the source polygon is much easier than clipping a general
`quadrilateral). The pixels in the source space that lie in this clipped quadrilateral are
`transformed to the target space; only those that fall within the target pixel's bounding
`rectangle are retained.
`
5A projective map is a map represented by a 3 x 3 matrix operating on the plane using homogeneous
coordinates, in the manner described in Chapter 5.
`
`
Fig. 17.7 Terms used in the Feibush-Levoy-Cook algorithm.
`
These transformed pixels are then averaged together by the weights given by the filter to
yield a value for the target pixel. This target pixel value is correct only if the entire pixel is
within the transformed image boundaries. If the image has been rotated, for example, then
the transformed edges of the image may cut across pixels (more precisely, across their
convolution masks).
Thus pixels are not entirely determined by the value just computed; that value only
contributes to the pixel's value, in proportion to the coverage of the pixel. The contribution
can be determined analytically. Figure 17.8 shows the transformed edge of the source
image passing through a pixel's bounding rectangle, and within that rectangle passing
through the pixel's convolution mask. To find the contribution of the computed value to the
pixel's final value, we do the following:
`
1. Clip the image polygon against the bounding rectangle for the pixel (see Fig. 17.9).
The points of intersection with the edges of the bounding rectangle were already
computed in determining whether the pixel was in the target image.

Fig. 17.8 Filtering for a pixel at the edge of the polygon.

Fig. 17.9 The steps in filtering an edge.

2. For each vertex of the clipped polygon (in Fig. 17.9, a single triangle with vertices
labeled A, B, and C, in clockwise order), construct a triangle with sides BASE (the edge
from this vertex to the next vertex in clockwise order around the polygon), SIDE1 (the
edge from this vertex to the center of the pixel) and SIDE2 (the edge from the next
vertex to the center of the pixel).
3. Consider the filter function as being placed in a third dimension above the convolution
mask. The weight a region contributes to the total for a pixel is proportional to the
volume above that region and under the graph of the filter. So we now compute the
volumes above each triangle. As Fig. 17.9 shows, some of these volumes must be
added and some subtracted to create the correct total contribution for the region. The
rule is that the volume is added if the cross-product of SIDE1 and SIDE2 points into the
page; otherwise, it is subtracted (see Exercise 17.4).
`
`Computing the volumes in step 3 is easier than it might appear, in that they can be
`precomputed and then extracted from a look-up table during the actual filtering process.
`Exercise 17.5 shows how to do this precomputation.
`
`17 .4.3 Other Pattern Mapping Techniques
`
The Feibush-Levoy-Cook algorithm provides excellent results for pattern mapping onto
polygons, but requires computing a filtered value at each point, so that for each pixel in the
image a filtering computation is performed. In a perspective picture of a plane (receding to a
vanishing point) a single pixel in the final image may correspond to thousands of pixels in
the source pattern, and thus require an immense filtering computation. Several techniques
have been developed to produce more rapid (if sometimes slightly less accurate) filtering.
Williams [WILL83] takes the source image and creates a MIP (multum in parvo, many
things in a small place) map, which occupies 4/3 of the memory of the original. If the
original image is a 512 by 512 pixel, 24-bit true color image, using 8 bits each for the red,
green, and blue information, the MIP map is a 1024 by 1024, 8-bit image. The red,
green, and blue parts of the original image each occupy one quarter of the MIP map, and
the remaining quarter is filled with filtered versions of these, as shown in Fig. 17.10. When
a target pixel is covered by a collection of source pixels, the MIP map pixels corresponding
most closely to this collection are used to give a filtered value. Linear interpolation between
levels of filtering is used to further smooth the values.

Fig. 17.10 A MIP map. The red, green, and blue channels of the original image fill three
quarters of the MIP map. Each is filtered by a factor of 4, and the three resulting images
fill up three quarters of the remaining quarter. The process is continued until the MIP
map is filled.
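
The following sketch suggests how such a lookup might work. The pyramid layout and
names here are illustrative only (Williams packs the levels into a single image, as Fig. 17.10
shows), and the level computation assumes that coverage approximates the number of
source pixels under the target pixel:

#include <math.h>

/* Illustrative pyramid: levels[0] is the full image; each subsequent level
   is half the width and height of the one before it. */
typedef struct {
    int numLevels;
    int baseSize;            /* width = height of level 0 */
    unsigned char **levels;  /* levels[k] is a (baseSize >> k)-square image */
} MipPyramid;

static double MipFetch(const MipPyramid *mip, int level, double x, double y)
{
    int size = mip->baseSize >> level;          /* x, y are in [0, 1) */
    return mip->levels[level][(int)(y * size) * size + (int)(x * size)];
}

/* Linear interpolation between the two levels of filtering that bracket the
   desired amount, as described in the text. */
static double MipSample(const MipPyramid *mip, double x, double y, double coverage)
{
    double d = 0.5 * log2(coverage > 1.0 ? coverage : 1.0);  /* level as a real */
    int lo = (int)d;
    double t;
    if (lo > mip->numLevels - 2) lo = mip->numLevels - 2;    /* clamp to pyramid */
    t = d - lo;
    if (t > 1.0) t = 1.0;
    return (1.0 - t) * MipFetch(mip, lo, x, y) + t * MipFetch(mip, lo + 1, x, y);
}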
Crow [CROW84] devised a scheme by which box filtering of an image over any aligned
rectangle can be done rapidly. For quick pattern mapping, this suffices in many cases: a
rectangular box corresponding closely to the shape of the transformed target pixel is used to
compute a filtered pattern value for the pixel. The scheme is based on the algebraic identity
(x + a)(y + b) - (x + a)y - x(y + b) + xy = ab. Interpreted geometrically, this says that
the area of the small white rectangle in Fig. 17.11 can be computed by taking the area of the
large rectangle and subtracting the areas of both the vertically and the horizontally shaded
rectangles, and then adding back in the crosshatched rectangle (which has been subtracted
twice). By taking the source image and creating a new image, whose value at pixel (x, y) is
the sum of all the values in the source image in the rectangle with corners (0, 0) and (x, y),
we create a summed area table, S. We can now compute the sum of the pixels in the
rectangle with corners at (x, y) and (x + a, y + b), for example, by taking S[x + a, y + b]
- S[x + a, y] - S[x, y + b] + S[x, y].
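
A summed area table is simple to build in one pass and to query in constant time. Here is a
minimal sketch (the names are ours, and the table is stored in doubles to avoid overflow),
using the four-term identity just given:

#include <stdlib.h>

/* Build a summed area table: S[y][x] holds the sum of all source values in
   the rectangle with corners (0, 0) and (x, y), inclusive. */
double **BuildSummedAreaTable(unsigned char **src, int w, int h)
{
    int x, y;
    double **S = malloc(h * sizeof *S);
    for (y = 0; y < h; y++) {
        S[y] = malloc(w * sizeof **S);
        for (x = 0; x < w; x++) {
            double s = src[y][x];
            if (x > 0) s += S[y][x - 1];
            if (y > 0) s += S[y - 1][x];
            if (x > 0 && y > 0) s -= S[y - 1][x - 1];
            S[y][x] = s;
        }
    }
    return S;
}

/* Sum over the rectangle with opposite corners (x0, y0) and (x1, y1),
   inclusive, by the four-term identity in the text. */
double BoxSum(double **S, int x0, int y0, int x1, int y1)
{
    double s = S[y1][x1];
    if (x0 > 0) s -= S[y1][x0 - 1];
    if (y0 > 0) s -= S[y0 - 1][x1];
    if (x0 > 0 && y0 > 0) s += S[y0 - 1][x0 - 1];
    return s;
}
/* Dividing BoxSum by the rectangle's area gives the box-filtered value. */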
`Glassner [GLAS86] observes that if the transformed pixel is not approximately an
`aligned rectangle, then summed area tables may blur the result excessively. He therefore
`develops a system in which the excess area in the aligned bounding box for the pixel is
`systematically trimmed, in order to provide a more accurate estimate of the filtered source
`image at the point. This requires detecting the geometry of the inverse-mapped target pixel
`relative to its bounding box.
Fig. 17.11 The area of the small white rectangle in the image is computed by
subtracting the horizontally and vertically shaded areas from the area of the large
rectangle, and then adding back in the area of the crosshatched rectangle.

Heckbert [HECK86a] proposes a system using both the Feibush-Levoy-Cook method
and MIP maps. He maps the target pixel's filter support (which is supposed to be circular,
and is defined by a quadratic function) to an elliptical region in the source image (defined
by a different quadratic). Depending on the size of this region in the source image, an
appropriate level in a MIP map for the source image is selected, and the pixels within it are
collected in a weighted sum over the elliptical region. This weighted sum is the value
assigned to the target pixel. This combines the accuracy of the Feibush-Levoy-Cook
technique with the efficiency of the MIP map system. A comparison of pattern-mapping
results is shown in Fig. 17.12.

Fig. 17.12 (a) Point sampling of the source image. (b) MIP map filtering. (c) Summed
area table filtering. (d) Elliptical weighted average using MIP maps and a Gaussian filter.
(Courtesy of P. Heckbert.)
`
`17.5 MULTIPASS TRANSFORMATIONS
`
Suppose that we take the image shown in Fig. 17.13(a) and apply a vertical shearing
transformation to it, as shown in part (b), and then we follow this with a horizontal shearing
transformation,6 as shown in part (c). Provided we choose the right transformations, the net
effect is to rotate the image as shown [CATM80]. Such a two-pass technique may be much
faster to compute than a direct application of the rotation transformation, since it operates
on one vertical or horizontal line of pixels at a time, and the computations within each such
line can be performed incrementally. Also, in many cases, the filtering necessary to avoid
aliasing artifacts can be performed line by line as well. More important still is that a wide
class of transformations can be implemented as multipass transformations [CATM80,
SMIT87]. This multipass technique has been implemented in the Ampex digital optics
(ADO) machine [BENN84], which is widely used in video production. A survey of this and
other image warping techniques is given in [WOLB90].
Implementing two-pass (or multipass) transformations can be divided into two
subtasks: finding the correct transformations for the individual passes, which is a purely
algebraic problem, and applying the correct filtering to generate new pixels, which is an
antialiasing problem. Since the second part will depend on the solution to the first, we
begin by solving the first problem in the case of a rotation.

6The two shearing transformations are actually shear-and-scale transformations. The first takes a
column of pixels and translates and compresses it in the vertical direction. The second does the same
for a row of pixels.

Fig. 17.13 A rotation may be expressed as a composition of a column-preserving and a
row-preserving transformation. (a) The original image. (b) A column-preserving
transformation has been applied to the image. (c) A row-preserving transformation has
been applied to the second image. (Courtesy of George Wolberg, Columbia University.)
`
`
`17.5 .1 The Algebra of Multipass Transforms
`
To simplify the discussion, we will use three different sets of coordinates. The original
image will be written in (x, y) coordinates, the vertically sheared image in (u, v)
coordinates, and the final image in (r, s) coordinates. The first shearing transformation will
be called A, the second B, and their composition, which is the rotation, will be called T.
Thus,

$\begin{pmatrix} u \\ v \end{pmatrix} = A \begin{pmatrix} x \\ y \end{pmatrix}$  and  $\begin{pmatrix} r \\ s \end{pmatrix} = B \begin{pmatrix} u \\ v \end{pmatrix} = B\left(A \begin{pmatrix} x \\ y \end{pmatrix}\right) = T \begin{pmatrix} x \\ y \end{pmatrix}.$

From Chapter 5, we know the formula for T:

$\begin{pmatrix} r \\ s \end{pmatrix} = T \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \cos\phi - y \sin\phi \\ x \sin\phi + y \cos\phi \end{pmatrix}.$

From this, we will determine the formulae for A and B.
The transformation A is supposed to be column preserving; that is, it must send each
column of the original image into the corresponding column of the transformed image.
Thus, if the pixel (x, y) is sent to (u, v) by A, then u = x. In other words, A must be written
in the form

$A \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} x \\ f(x, y) \end{pmatrix}$

for some function f. In the same way, B is supposed to be row preserving, so B must be
written in the form

$B \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} g(u, v) \\ v \end{pmatrix}$

for some function g. To determine the formulae for A and B, we need to find the functions f
and g.
Writing out the composite, we have

$\begin{pmatrix} r \\ s \end{pmatrix} = B \begin{pmatrix} u \\ v \end{pmatrix} = B\left(A \begin{pmatrix} x \\ y \end{pmatrix}\right) = B \begin{pmatrix} x \\ f(x, y) \end{pmatrix} = \begin{pmatrix} g(x, f(x, y)) \\ f(x, y) \end{pmatrix}.$

From this equation, we see that s and f(x, y) are equal. Thus, the formula for s in terms of x
and y gives the formula for f(x, y): f(x, y) = x sin φ + y cos φ. Determining the formula for
g(u, v) is more complex. We know that, in terms of x and y, we can write g(u, v) = x cos φ
- y sin φ. To write this in terms of u and v, we must solve for x and y in terms of u and v and
substitute. Solving for x is easy, since we observed previously that u = x. Solving for y is
slightly more difficult: v = f(x, y) = x sin φ + y cos φ, so y = (v - x sin φ) / cos φ =
`
`
(v - u sin φ) / cos φ. Substituting this result into the formula for g(u, v) in terms of x and y,
we get

$g(u, v) = u \cos\phi - \frac{v - u \sin\phi}{\cos\phi} \sin\phi = u \sec\phi - v \tan\phi.$

In summary, if we define

$A \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ x \sin\phi + y \cos\phi \end{pmatrix}$  and  $B \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} u \sec\phi - v \tan\phi \\ v \end{pmatrix},$

then computing the composite gives

$B\left(A \begin{pmatrix} x \\ y \end{pmatrix}\right) = \begin{pmatrix} x \cos\phi - y \sin\phi \\ x \sin\phi + y \cos\phi \end{pmatrix},$

as desired.
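
At the level of coordinates, the two passes are easy to state in code. This is only a sketch of
the algebra (the names are ours), not of the per-scanline filtering that an actual
implementation interleaves with it:

#include <math.h>

/* First pass A: column preserving. (x, y) -> (x, x sin(phi) + y cos(phi)). */
static void PassA(double phi, double x, double y, double *u, double *v)
{
    *u = x;
    *v = x * sin(phi) + y * cos(phi);
}

/* Second pass B: row preserving. (u, v) -> (u sec(phi) - v tan(phi), v). */
static void PassB(double phi, double u, double v, double *r, double *s)
{
    *r = u / cos(phi) - v * tan(phi);
    *s = v;
}

/* Composite: equals (x cos(phi) - y sin(phi), x sin(phi) + y cos(phi)). */
static void Rotate2Pass(double phi, double x, double y, double *r, double *s)
{
    double u, v;
    PassA(phi, x, y, &u, &v);
    PassB(phi, u, v, r, s);
}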
To do this for a general transformation T, we must do exactly the same work. If

$T \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} t_1(x, y) \\ t_2(x, y) \end{pmatrix},$

then we define u = x and v = f(x, y) = t₂(x, y). To define g(u, v), we need to solve for y in
terms of u and v, using these definitions; that is, to find a function h such that (u, v) =
(x, t₂(x, y)) is equivalent to (x, y) = (u, h(u, v)). When we have found h, the formula for
g(u, v) is just g(u, v) = t₁(u, h(u, v)).
The difficult part of the process is finding h. In fact, in our example, h(u, v) =
(v - u sin φ) / cos φ, which is undefined if cos φ = 0, that is, if φ = 90° or 270°, so that
finding h may be impossible. Fortunately, rotating by 90° is very easy (just map (x, y) to
(-y, x)), so that this is not a problem. In fact, we shall see that, to rotate nearly 90°, it is
better to rotate the full 90° and then to rotate a small amount back; thus, to rotate 87°, we
would rotate 90° and then -3°. Algebraically, there is no difference between the two maps;
at the pixel level, however, where filtering is involved, the difference is significant.
A rotation can also be broken into three transformations so as to avoid this bottleneck
problem [PAET86; TANA86; WOLB90]. The decomposition for a rotation by φ is

$\begin{bmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{bmatrix} = \begin{bmatrix} 1 & -\tan(\phi/2) \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ \sin\phi & 1 \end{bmatrix} \begin{bmatrix} 1 & -\tan(\phi/2) \\ 0 & 1 \end{bmatrix}.$

Note that each transformation involves a computation with one multiplication and one
addition. Also, when φ > 90°, we can do the rotation by first rotating by 180° and then by
φ - 180°, so that the argument of the tangent function is never greater than 45°.7

7The tangent function is well behaved for angles near 0°, but has singularities at ±90°. Evaluating it
for angles near 0° is therefore preferable.
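
A quick numerical check of this decomposition (a throwaway sketch, not production code)
is to multiply the three shear matrices and compare the product with the rotation matrix:

#include <math.h>
#include <stdio.h>

static void MatMul2(const double a[2][2], const double b[2][2], double out[2][2])
{
    int i, j;
    for (i = 0; i < 2; i++)
        for (j = 0; j < 2; j++)
            out[i][j] = a[i][0] * b[0][j] + a[i][1] * b[1][j];
}

int main(void)
{
    double phi = 0.7;   /* any angle away from the tangent's singularities */
    double t = tan(phi / 2);
    double shearX[2][2] = { { 1, -t }, { 0, 1 } };        /* horizontal shear */
    double shearY[2][2] = { { 1, 0 }, { sin(phi), 1 } };  /* vertical shear */
    double tmp[2][2], rot[2][2];

    MatMul2(shearX, shearY, tmp);
    MatMul2(tmp, shearX, rot);
    /* rot should now be [[cos phi, -sin phi], [sin phi, cos phi]]. */
    printf("got    %f %f %f %f\n", rot[0][0], rot[0][1], rot[1][0], rot[1][1]);
    printf("expect %f %f %f %f\n", cos(phi), -sin(phi), sin(phi), cos(phi));
    return 0;
}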
`
`
To show that the multipass technique is not limited to rotations, let us factor a different
map, which distorts a square into a trapezoid. (Such maps arise in the perspective
transformations described in Chapter 6.) As an example, we take

$T \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x/(y + 1) \\ y/(y + 1) \end{pmatrix}.$

Just as before, we wish to find functions

$A \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ f(x, y) \end{pmatrix}$  and  $B \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} g(u, v) \\ v \end{pmatrix}$

such that B(A(x, y)) = T(x, y). In this case, v = f(x, y) = t₂(x, y) = y/(y + 1). We need to find
g(u, v) so that g(u, v) = x/(y + 1). Solving the equation v = f(x, y) for y, we get y = -v/(v - 1).
Thus (recalling that u = x), we can write g(u, v) = u/(-v/(v - 1) + 1) = u/(-1/(v - 1)) =
u(1 - v). Our two passes become

$A \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ y/(y + 1) \end{pmatrix}$  and  $B \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} u(1 - v) \\ v \end{pmatrix}.$
`
`You should check that the composition of these transformations is really the original
`transformation T.
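
One way to perform that check (a small numerical sketch, with names of our own choosing)
is to compare the two routes at a sample point:

#include <stdio.h>

/* The trapezoid map T(x, y) = (x/(y+1), y/(y+1)) and its two factors. */
static void MapT(double x, double y, double *r, double *s)
{
    *r = x / (y + 1);
    *s = y / (y + 1);
}

static void MapA(double x, double y, double *u, double *v)   /* column preserving */
{
    *u = x;
    *v = y / (y + 1);
}

static void MapB(double u, double v, double *r, double *s)   /* row preserving */
{
    *r = u * (1 - v);
    *s = v;
}

int main(void)
{
    double x = 0.3, y = 0.8, u, v, r1, s1, r2, s2;
    MapT(x, y, &r1, &s1);
    MapA(x, y, &u, &v);
    MapB(u, v, &r2, &s2);
    /* The two results should agree (up to roundoff). */
    printf("T: (%f, %f)  B(A): (%f, %f)\n", r1, s1, r2, s2);
    return 0;
}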
The technique has been generalized to handle other maps by Smith and colleagues
[SMIT87]. Translation, rotation, scaling, and shearing all work easily. In addition, Smith
considers functions of the form

T(x, y) = S(m(x) h₁(y), m(x) h₂(y)),

where S is a standard computer graphics transform, that is, a transformation of the plane
by translation, scaling, rotation, and perspective transformations, and m(x), h₁(y), and h₂(y)
are arbitrary. He also considers maps T whose component functions t₁(x, y) and t₂(x, y) are
bicubic functions of x and y, under the special hypothesis that T is injective (i.e., no two
(x, y) points map to the same (r, s) point).
`
`1 7 .5.2 Generating Transformed Images with Filtering
`
When we transform an image by a row-preserving (or column-preserving) transformation,
the source pixels are likely not to map exactly to the target pixels. For example, the pixels in
a row might all be translated by π pixels to the right. In this case, we must compute values
for the target pixels by taking combinations of the source pixels. What we are doing, in
effect, is considering the values of the source pixels as samples of a function on a real line
(the row); the values at the target pixels will be different samples of this same function.
Hence, the process is called resampling.
The theoretically ideal resampling process is to take, for a given target pixel, a weighted
average of the source pixels whose transformed positions are near it. The weights associated
with each source pixel should be sinc(kd), where d is the distance from the transformed
source pixel to the target pixel and k is some constant. Unfortunately, this requires that
every source pixel in a row contribute to every target pixel. As usual, we can instead work
with various approximations to the sinc filter. The simplest is a box filter: Each source pixel
is assumed to represent an interval in its row, and the endpoints of this interval are
transformed. The contribution to the target pixel is the overlap of this transformed interval
with the target pixel's interval multiplied by the value of the source pixel.
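
As a sketch of such box-filter resampling for a single row under an affine map x -> m*x + b
(the function and parameter names are illustrative; real passes also handle this bookkeeping
incrementally rather than recomputing overlaps):

/* Resample one row of n source pixels into outN target pixels under the
   forward map x -> m*x + b, using a box filter. A sketch: assumes m > 0. */
static void ResampleRowBox(const double *src, int n, double m, double b,
                           double *dst, double *weight, int outN)
{
    int i, j;
    for (j = 0; j < outN; j++) { dst[j] = 0; weight[j] = 0; }

    for (i = 0; i < n; i++) {
        /* Transform the source pixel's interval [i, i+1). */
        double lo = m * i + b, hi = m * (i + 1) + b;
        int j0 = (int)lo, j1 = (int)hi;
        if (j0 < 0) j0 = 0;
        if (j1 >= outN) j1 = outN - 1;
        for (j = j0; j <= j1; j++) {
            /* Overlap of [lo, hi) with the target interval [j, j+1). */
            double a = lo > j ? lo : j;
            double c = hi < j + 1 ? hi : j + 1;
            if (c > a) { dst[j] += src[i] * (c - a); weight[j] += (c - a); }
        }
    }
    /* Normalize by the accumulated weights. */
    for (j = 0; j < outN; j++)
        if (weight[j] > 0) dst[j] /= weight[j];
}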
Using this method, each target pixel has a value that is a weighted average of a short
span of source pixels (the length of the span depends on the exact transformation). In a
two-pass rotation, a pixel in (r, s)-space has a value that is a weighted average of a
horizontal span of (u, v)-pixels. Figure 17.14(a) shows a pixel in (r, s) space, and (b) shows
the span of pixels in (u, v) space that contribute to the value of that pixel. Each of these
(u, v) pixels, however, has a value that is an average of a vertical span of pixels in (x, y)
space. Figure 17.14(c) shows these pixels in (x, y) space. Notice that the vertical spans in
(x, y) space form a rhombus rather than a square, since the transformation from (x, y) to
(u, v) is a shearing transformation.

Fig. 17.14 The pixels that contribute to an output pixel in a two-pass rotation form a
small area in (x, y) space. (a) A single pixel in (r, s) space; (b) the horizontal span of pixels
in (u, v) space that contribute to the value of that pixel; (c) the pixels in (x, y) space that
contribute to the values of the pixels in (u, v) space.

We know from Chapter 14 that the pixels contributing to an output pixel really ought to
form a circular shape8 (i.e., the filter should be radially symmetric). If the rhombus is too
different from a square, the filtering will begin to degenerate and will produce bad results.
The result of such a separation of the filtering process into two component filters is
discussed further in [MITC88].
In addition, we want to avoid the bottlenecking problem described previously, where
many pixels in the source for a transformation contribute to each output pixel. In the case of
a shear-and-scale operation, this occurs when the scale factor is small.
Thus, in doing two-pass rotations (or any other multipass transformation) we want to
avoid extreme shearing or bottlenecking in our transformations. This is why rotating by 90°
and then by -3° is superior to rotating by 87°. In general, when we are constructing a
two-pass transform, we can transform by rows and then columns, or by columns and then
rows, or can rotate 90° before doing either of these. According to Smith, one of these
approaches appears always to resolve the bottlenecking problem, at least for standard
operations such as translation, rotation, scaling, and shearing [SMIT87]. Nonetheless,
there are more general transformations where this technique may not succeed: If one
portion of an image is rotated 90° while some other stays fixed (imagine bending a long thin
rectangle into a quarter-circle shape), then there is certain to be bottlenecking at some
point, no matter what the order in which the transformations are applied.

8This is particular to the case of a rotation. For a general transformation, the source pixels
contributing to an output pixel should consist of those pixels that are transformed into a small disk
about the target pixel; these pixels may or may not constitute a disk in the source image.
Wolberg and Boult [WOLB89] have developed a technique in which two simultaneous
versions of a multipass transform are done at once. The original image is first transformed
by a row-preserving map and a column-preserving map, and then also is transformed by a
90° rotation and a different pair of maps. For both transformation sequences, the method
records the amount of bottlenecking present at each pixel.
Thus, each output pixel can be computed in two different ways. Wolberg and Boult
select, for each pixel, the route that has less bottlenecking, so that some portions of the
image may be row-column transformed, whereas others are column-row transformed.
Since the two sets of transformations can be performed simultaneously in parallel
processors, this technique is ideally suited to implementation in hardware.
`
`17 .5 .3 Evaluating Transformation Methods
`There are several criteria for judging image-transfom1ation algorithms. Filtering theory
`tells us that an image can be reconstructed from its samples, provided the original image
`had no high-frequency components. Indeed, from any set of samples, one can reconstruct
`som~ image with no high-frequency components. If the original image had no high
`frequencies, then "e get the original back; if it did contain high frequencies, then the
`sampled image contained aliases, and the reconstructed image is likely to differ from the
`original (see Exercise 17.6).
`So how should we judge a tr"'dnsformation algorithm'/ Ideally, a transformation
`algorithm would have the following properties:
`
`•
`•
`
`•
`
`•
`
`Translation by a zero \'eCtor should be the identity
`A sequence of translations should have t.he same effect as a single, composite
`translation
`Scaling up by a factor of A > I and then sealing down by I/ A should be the identity
`transfom1ation
`Rotating by any sequence of angles totaling 360° should be the identity transformation .
`
Many workable algorithms clearly fail to satisfy any of these criteria. Weiman's algorithm
fails on all but the fourth criterion. Feibush, Levoy, and Cook's algorithm fails on the first
if a filter more than 1 pixel wide is used. Even Catmull and Smith's two-pass algorithm fails
on all four criteria.
None of this is surprising. To resample an image, we ought to reconstruct it faithfully
from its samples by convolving with a sinc filter. Thus, each new pixel ought to be a
weighted average of all the pixels in the original image. Since all the methods are sensible
enough to use filters of finite extent (for the sake of computational speed), all of them end
up blurring the images.
There are many image-transformation methods not covered here. They each have
advantages and disadvantages, mostly in the form of time-space tradeoffs. These are
described in detail in an excellent survey by Heckbert [HECK86b].
`
`
`17.6
`
`IMAGE COMPOSITING
`
In this section, we discuss compositing of images, that is, combining images to create new
images. Porter and Duff [PORT84] suggest that compositing is a good way to produce
images in general, since it is fairly easy to do, whereas rendering the individual portions of
the image may be difficult. With compositing, if one portion of the image needs alteration,
the whole image does not need to be regenerated. Even more important, if some portions of
an image are not rendered but have been optically scanned into memory instead,
compositing may be the only way to incorporate them in the image.
We describe compositing using the α channel in Section 17.6.1, compositing using
frame-buffer hardware in Section 17.6.2, the artificial generation of α values in Section
17.6.3, and an interface for image assembly in Section 17.6.4.
`
`17 .6.1 a -Channel Com positing
What sort of operations can be done in compositing? The value of each pixel in the
composited image is computed from the component images in some fashion. In an overlay,
the pixels of the foreground image must be given transparency values as well as whatever
other values they may have (typically RGB or other color information). A pixel's value in
the composited image is taken from the background image unless the foreground image has
a nontransparent value at that point, in which case the value is taken from the foreground
image. In a blending of two images, the resulting pixel value is a linear combination of the
values of the two component pixels. In this section, we describe the Porter-Duff
mechanism for compositing images using such combinations and transparency values
[PORT84].
Suppose we have two images, one of a red polygon and one of a blue polygon, each on
a transparent background. If we overlay the two images with the red polygon in front, then,
at interior points of the front polygon, only the color red is visible. At points outside the
front polygon but inside the back polygon, only blue is visible. But what about a pixel lying
on the edge of the front polygon but inside the back polygon (see Fig. 17.15)? Here, the
front polygon covers only part of the area of the pixel. If we color it red only, aliasing
artifacts will result. On the other hand, if we know that the front polygon covers 70 percent
of the pixel, we can make the composited pixel 70 percent red and 30 percent blue and get a
much more attractive result.

Fig. 17.15 Compositing operations near an edge: How do we color the pixel?
Suppose that as an image is produced, coverage information is recorded: The color
associated with each pixel in the image is given an α value representing the coverage of the
pixel. For an image that is to become the foreground element of a composited image, many
of the pixels are registered as having coverage zero (they are transparent); the remainder,
which constitute the important content of the foreground image, have larger coverage values
(usually one).
To do compositing in a reasonable fashion, we need this α information at each pixel of
the images being composited. We therefore assume that, along with the RGB values of an
image, we also have an α value encoding the coverage of each pixel. This collection of α
values is often called the α channel (see Section 17.7). Some types of renderers generate
this coverage information easily, but it may be more difficult to generate for images that
have been scanned into memory. We discuss this problem briefly in Section 17.6.3.
How do α values combine? Suppose we have a red polygon covering one-third of the
area of a pixel, and a blue polygon that, taken separately, covers one-half of the area of the
pixel. How much of the first polygon is covered by the second? As Fig. 17.16 shows, the
first polygon can be completely covered, partly covered, or not covered at all. But suppose
we know nothing more than the coverage information given. What is a reasonable guess for
the amount of the first polygon covered by the second? Let us suppose that the area covered
by the first is randomly distributed through the pixel area, and that the same is true of the
second. Then any tiny spot's chance of being in the red polygon is 1/3, and its chance of being
in the blue polygon is 1/2, so its chance of being in both is 1/6. Notice that this value is exactly
the product of the two coverage values.
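
The best-known consequence of this independence assumption is Porter and Duff's "over"
operator [PORT84], which implements the foreground-unless-covered rule described above.
A sketch (assuming, as this sketch does, that colors are stored premultiplied by their α
values) is:

typedef struct { double r, g, b, a; } RGBA;  /* color premultiplied by alpha */

/* Porter-Duff "over": composite foreground f over background b. The
   background shows through in proportion to the foreground's uncovered
   fraction (1 - f.a). Values are assumed to lie in [0, 1]. */
static RGBA Over(RGBA f, RGBA b)
{
    RGBA out;
    out.r = f.r + (1 - f.a) * b.r;
    out.g = f.g + (1 - f.a) * b.g;
    out.b = f.b + (1 - f.a) * b.b;
    out.a = f.a + (1 - f.a) * b.a;
    return out;
}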
