`
`International Bureau
`WORLD INTELLECTUAL PROPERTY ORGANIZATION
`
`
`
`INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT)
`
(51) International Patent Classification 5: G06F 15/72

(11) International Publication Number: WO 93/00650

(43) International Publication Date: 7 January 1993 (07.01.93)

(21) International Application Number: PCT/AU92/00302

(22) International Filing Date: 19 June 1992 (19.06.92)
`
`(81) Designated States: AU, CA, JP, KR, US, European patent
`(AT, BE, CH, DE, DK, ES, FR, GB, GR, IT, LU, MC,
`NL, SE).
`
`Published
`With international search report.
`
(30) Priority data:
PK 6942    28 June 1991 (28.06.91)      AU
PK 7305    19 July 1991 (19.07.91)      AU
PK 8643    1 October 1991 (01.10.91)    AU
PK 8645    1 October 1991 (01.10.91)    AU
PK 9218    30 October 1991 (30.10.91)   AU
`
(71)(72) Applicant and Inventor: LIM, Hong, Lip [SG/AU];
203/35-47 Wilson Lane, Darlington, NSW 2008 (AU).
`
(74) Agent: SPRUSON & FERGUSON; G.P.O. Box 3898, Sydney, NSW 2001 (AU).
`
`
`
`(54) Title: IMPROVEMENTS IN VISIBILITY CALCULATIONS FOR 3D COMPUTER GRAPHICS
`
`,1?’
`
`(57) Abstract
`
Disclosed is a method of reducing the complexity of hidden surface removal in 3D graphic systems. A fuzzy projection (FF) of a surface (SU) as seen from a number of viewpoints (VP) in a bounding box (BB) is stored in a buffer (FA) having elements (FE). A combination of all the patches (PT) of the surface (SU) viewed forms a fuzzy region (FR) where surfaces can be either visible, hidden, or unable to be determined with certainty as to whether or not visible/hidden. A non-fuzzy region (NF) describes those patches (PT) that are always visible.
`
`Intel Exhibit 1005 Intro-001
`
`
`
`FOR THE PURPOSES OF INFORMATION ONLY
`
`Codes used to identify States party to the PCT on the front pages of pamphlets publishing international
`applications under the PCT.
`
AT  Austria
AU  Australia
BB  Barbados
BE  Belgium
BF  Burkina Faso
BG  Bulgaria
BJ  Benin
BR  Brazil
CA  Canada
CF  Central African Republic
CG  Congo
CH  Switzerland
CI  Côte d'Ivoire
CM  Cameroon
CS  Czechoslovakia
DE  Germany
DK  Denmark
ES  Spain
FI  Finland
FR  France
GA  Gabon
GB  United Kingdom
GN  Guinea
GR  Greece
HU  Hungary
IE  Ireland
IT  Italy
JP  Japan
KP  Democratic People's Republic of Korea
KR  Republic of Korea
LI  Liechtenstein
LK  Sri Lanka
LU  Luxembourg
MC  Monaco
MG  Madagascar
ML  Mali
MN  Mongolia
MR  Mauritania
MW  Malawi
NL  Netherlands
NO  Norway
PL  Poland
RO  Romania
RU  Russian Federation
SD  Sudan
SE  Sweden
SN  Senegal
SU  Soviet Union
TD  Chad
TG  Togo
US  United States of America
`
`
`
`
IMPROVEMENTS IN VISIBILITY CALCULATIONS FOR 3D COMPUTER GRAPHICS

FIELD OF INVENTION
`
The present invention relates to computer graphics and, in particular, to the efficient determination of visible and/or invisible surfaces to thereby preferably permit improved hidden surface removal, generally in 3D systems.
`
BACKGROUND ART
`
`
Visible surface detection is one of the most basic operations in 3D graphics. It is applied to generate images of surfaces directly visible to a viewer. Recently, it has also been adopted in the radiosity calculations to compute the energy interactions between surfaces.

The standard strategy of visible surface detection is to divide surfaces into patch elements and compare the spatial relationship between these elements. Using this strategy, the visibility of surfaces cannot be determined until they have been analysed in detail. Although many techniques have been developed to address this issue, none are ideal as they either still require elaborate analysis of surfaces, or they impose various restrictions on the scene.

The limitations of the current techniques can seriously affect the speed of visible surface computation. If the scene is complicated, many surfaces may be invisible. However, the image generation is often slowed down by the need to analyse each surface element in detail.

The same limitation has also seriously affected the efficiency of the radiosity computations. Currently, these computations are very slow due to the need to elaborately compute the visibility between every pair of surface elements. Again, these computations may be substantially reduced if surface elements obviously visible or invisible to each other can be more easily computed.

The early visible surface techniques mainly applied various sorting schemes to find the occluding surface primitives. However, with the advancement in hardware technology, it is now common practice to reduce the need for sorting and comparisons by using a large amount of fast memory. This memory may be used to store the object data, such as a BSP-tree. It may also be used to store the depth and surface projection data, as with a z-buffer.
`
`
`
`
`
The z-buffer method is simple and has a very low growth rate. However, it still requires depth evaluations and comparisons at each pixel by all the patches projecting onto it, because their visibility is unknown.

The BSP-tree method has the advantage that if the scene is static, an orderly traversal of the BSP-tree is generally sufficient to establish the depth order. However, it still requires the scan-conversion of each patch. In addition, the technique needs to re-compute the BSP-tree whenever objects in the scene move.

There have been two main strategies for avoiding detailed depth analysis of totally invisible entities. One strategy applies the property that visibility changes can only occur at the contour edges of surfaces. Visibility computations of internal edges or patches can be reduced by first comparing them with these edges.

An alternative strategy is to use the invisible coherence of the scene. These techniques apply the property that an edge is likely to remain invisible in a scan line if it is invisible in the last scan line. Such an edge may therefore be treated specially to avoid unnecessary depth comparisons.

Although the above strategies can reduce some visibility computations, they have limitations. The contour-oriented techniques can operate only in environments which consist exclusively of convex or non-penetrating surfaces. Both the contour-oriented and the scan line approaches are also limited in their ability to reduce visibility computations. In the former, an edge still needs to be tested against the contour edges even if it is invisible. In the latter, all the invisible edge segments at each scan line have to be sorted. These segments also require depth comparisons with the pixels they are on. Finally, all these techniques require some sorting or searching.
SUMMARY OF THE INVENTION
`
It is an object of the present invention to substantially overcome, or ameliorate, the problems associated with the prior art, through provision of an improved method for performing visibility calculations.

In accordance with one aspect of the present invention there is disclosed a method of reducing the complexity of visibility calculations required for the production of multi-dimensional computer generated
`
`
`
`
`
images, or the reduction of multi-dimensional data to multi-dimensional data having at least one less dimension, said method comprising the steps of:
`
(1) prior to an occlusion or invisibility relationship computation (known per se) being carried out on a plurality of surfaces, selected viewpoints to be calculated are divided into groups;

(2) for selected ones of said surfaces, determining for each said group whether each said selected surface is
    (a) an always unoccluded surface, an always hidden surface, or a remaining surface; or
    (b) an always unoccluded surface, or a remaining surface; or
    (c) an always hidden surface, or a remaining surface;
wherein said remaining surface is a surface which is unable to be determined with certainty as to whether it is either unoccluded or hidden;

(3) exempting from said occlusion or invisibility relationship computation those surfaces which are either always unoccluded or always hidden;

(4) maintaining a record of said remaining surfaces; and

(5) carrying out occlusion or invisibility relationship computations on said remaining surfaces.
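Steps (1) to (5) above can be sketched in code. The following Python sketch is illustrative only: the toy depth-interval model of a surface, the three-way labels, and all function names (`classify_surface`, `reduce_visibility_work`) are assumptions introduced here, not taken from the patent.

```python
# Hypothetical sketch of steps (1)-(5): classify each surface per
# viewpoint group, exempt the certain cases, and run the expensive
# occlusion computation only on the "remaining" surfaces.

def classify_surface(surface, group):
    """Return 'unoccluded', 'hidden', or 'remaining' (uncertain)."""
    lo, hi = surface          # toy model: a surface is a depth interval
    if hi < group["near"]:    # always in front of the group's occluders
        return "unoccluded"
    if lo > group["far"]:     # always behind the group's occluders
        return "hidden"
    return "remaining"        # cannot be decided with certainty

def reduce_visibility_work(surfaces, groups, full_occlusion_test):
    remaining = []
    for group in groups:                      # step (1): grouped viewpoints
        for s in surfaces:                    # step (2): classify per group
            label = classify_surface(s, group)
            if label == "remaining":          # step (4): record them
                remaining.append((s, group))
            # step (3): 'unoccluded'/'hidden' surfaces are exempted here
    # step (5): detailed computation only on the uncertain surfaces
    return [full_occlusion_test(s, g) for s, g in remaining]
```

The saving comes from step (3): the always-certain surfaces never reach the detailed per-viewpoint computation.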
`
In accordance with another aspect of the present invention there is disclosed a method of reducing the visibility related computations in an environment consisting of three-dimensional abstract or physical surfaces, said method comprising the steps of:

(1) prior to a visibility computation (known per se) being carried out, some or all viewpoints or possible occurrences of viewpoints are classified into groups of viewpoints;

(2) defining one or more projection surfaces (known per se) for the purpose of generating simultaneous projections of surfaces or surface elements with respect to a group of viewpoints;

(3) for selected surfaces, and for each group of viewpoints, determining whether those surfaces or their surface elements are always invisible to said group by computing and comparing their simultaneous projections on said projection surfaces;
`
`
`
`
`
(4) ignoring or treating specially some or all surfaces or surface elements which are always invisible to said group of viewpoints during the actual visibility or visibility related computations for some or all viewpoints in said group.
In accordance with another aspect of the present invention there is disclosed a method of reducing the visibility, radiosity or visibility related computations in an environment consisting of three-dimensional abstract or physical surfaces, said method comprising the steps of:

(1) prior to a visibility computation (known per se) being carried out, some or all viewpoints or possible occurrences of viewpoints are classified into groups of viewpoints;

(2) defining one or more projection surfaces (known per se) for the purpose of the simultaneous projections of surfaces or surface elements with respect to a group of viewpoints;

(3) dividing each of said projection surfaces into regular or irregular grids;

(4) defining a data structure which organizes computer storage for storing of projections on said grids;

(5) for each of the selected surfaces and for each group of viewpoints, simultaneously projecting said surfaces or their surface elements onto the projection surfaces, computing grid cells which are always under the projections, storing the furthest possible distances between viewpoints of said group and said surfaces to provide surface distances corresponding to those cells in said computer storage;

(6) for each patch element of said surfaces, determining those said grid cells that said patch element might project onto, comparing the depths stored in the cells with the furthest possible depth between the patch element and said group of viewpoints to determine whether the patch element is always occluded by other surfaces;

(7) ignoring or treating specially some or all surfaces or surface elements which are always visible to said group of viewpoints during the actual visibility or visibility related computations for some or all viewpoints in said group.
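Steps (3) to (6) amount to a grid of depth cells written conservatively by occluders and queried by patches. The sketch below is a hypothetical reconstruction in Python: the square grid, the depth values, and the function names are all assumptions for illustration, not the patent's own data structure.

```python
# Illustrative sketch of steps (3)-(6): cells that are always covered by
# an occluder's projections store the furthest possible distance from
# the viewpoint group to that occluder.  A patch whose nearest possible
# depth exceeds the stored depth in EVERY cell it might project onto is
# always occluded for the whole group.

INF = float("inf")

def make_buffer(n):
    """Step (3)/(4): an n x n grid of cells, initially empty (infinite)."""
    return [[INF] * n for _ in range(n)]

def write_occluder(buf, cells, furthest_depth):
    # step (5): record the furthest possible occluder depth in the cells
    # always under the occluder's simultaneous projections
    for x, y in cells:
        buf[y][x] = min(buf[y][x], furthest_depth)

def always_occluded(buf, cells, nearest_depth):
    # step (6): hidden only if behind the stored depth in every cell the
    # patch might project onto
    return all(buf[y][x] < nearest_depth for x, y in cells)
```

A patch flagged by `always_occluded` can then be ignored in the per-viewpoint computations of step (7).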
`
BRIEF DESCRIPTION OF THE DRAWINGS

A preferred and a number of other embodiments of the present invention will now be described with reference to the drawings in which:
`
`
`
`
`
Figs. 1A and 1B show two viewpoint hierarchies;
Fig. 2 illustrates the projections at an arbitrary viewpoint;
Fig. 3 shows the relationship between projection boxes and a fuzzy projection box;
Fig. 4 shows the creation of fuzzy and non-fuzzy regions;
Figs. 5A, 5B and 5C show a front-facing surface and its corresponding fuzzy and non-fuzzy regions;
Fig. 5D shows a point that is not totally hidden to a viewpoint bounding box;
Fig. 5E shows a point that is totally hidden to a viewpoint bounding box;
Figs. 6A to 6D show the detection of totally invisible patches;
Figs. 7A and 7B show the detection of totally visible/non-hiding patches;
Figs. 8A to 8D show the scan-conversion of the non-fuzzy regions;
Figs. 9A to 9D show the scan-conversion of the fuzzy regions;
Fig. 10 illustrates the mapping between a cache and the fuzzy array;
Figs. 11A and 11B show views of a front-facing surface region;
Figs. 12A, 12B and 12C show cut-plane views of the projection of surface regions;
Figs. 13A, 13B and 13C show respectively one, two and four room models used in testing the preferred embodiment;
Fig. 14 illustrates the direct computation of the fuzzy region of an edge;
Fig. 15 shows a hemi-cube pyramid structure;
Appendix 1 describes The Detection of Patches Front-Facing to a Viewpoint Bounding Box;
Appendix 2 describes The Computation of the Fuzzy Extent of Boundary Edges and Patches;
Appendix 3 describes The Computation of the Non-Fuzzy Regions;
Appendix 4 describes The Scan-Conversion of the Fuzzy Regions;
Appendix 5 describes Using a Cache to Speed up the Access and Writing of Fuzzy Elements Within the Fuzzy Extent of a Patch;
Appendix 6 describes the direct computation of the fuzzy region of an edge;
Appendix 7 describes scan-conversion of patches on the hemi-cube pyramid; and
`
`
`
`
`
`Appendix 8 is a list of references.
BEST AND OTHER MODES FOR CARRYING OUT THE INVENTION

The preferred embodiments relate to various methods for calculating three dimensional computer graphic images and they are generally termed herein as the "fuzzy projection methods".

A Simple Fuzzy Projection Method
`
In this embodiment a method is provided to map the projections of entities at one or more viewpoints to a set of planes. This embodiment also provides a means to compute regions which contain all these projections and regions which are within each and every projection.

The spatial position, time and the optical properties described in this embodiment can be replaced by other physical or abstract variables if the replacement does not affect the mathematical relationship of entities in the graphic image.

Prior to detailed discussion of this and the other embodiments, it is useful to introduce various terms and define variables used in 3D graphics.
`
Firstly, with reference to Figs. 1A, 1B and 2, a viewer, as shown by the eyeballs in the Figs., is an abstract and dimensionless observer where the visibility of its surrounding objects is to be obtained. A point is visible to a viewer if a ray fired from the viewer to it is not blocked by any opaque object. There can be more than one viewer at any time and each viewer may move or rotate over time. A viewpoint VP is the spatial position and orientation of a viewer at an instance of time. A view vector VV is a vector from the viewpoint VP to a point PO being observed (see Fig. 2).

Several viewpoints VP can be grouped together through certain properties they share. Related groups of viewpoints may in turn form a higher level group. This merging may be repeated to create a viewpoint hierarchy. A current group is the group of viewpoints VP that are currently being considered. In Fig. 1A, viewpoints VP1, VP2, VP3 and VP4 each reside on a path PA1 and represent viewpoints at times 1, 2, 3 and 4, and can be considered as one group. Similarly for the viewpoints VP5-VP7.

Each group of viewpoints is associated with a coordinate system called the group coordinate system CS.
`
A viewpoint bounding box BB of a group of viewpoints VP is the smallest right quadrangular prism enclosing these viewpoints and whose edges are parallel to the axes of the group coordinate system. If the positions or occurrences of a group of viewpoints are not precisely defined, the associated viewpoint bounding box should be the said prism which encloses the space likely to be occupied by these viewpoints. A group of viewpoints may be lying on a plane or a point. In such cases, the viewpoint bounding box may degenerate into a plane or a point. A point PO is totally visible from the bounding box BB if it is always visible from every possible viewpoint VP in the box BB. Conversely, a point is totally invisible from the bounding box if it is hidden from the view of every possible viewpoint VP in the box BB. In Fig. 1A, viewpoints VP1-VP4 reside in a bounding box BB1, and viewpoints VP5-VP7 reside in a bounding box BB2. The boxes BB1 and BB2 can be considered as first level bounding boxes. If the boxes BB1 and BB2 are combined, a second level bounding box BB3 is formed. Rather than the bounding box being a square or rectangular prism, it can also be an ellipsoid, sphere or arbitrary volume.
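The axis-aligned viewpoint bounding box, and the merging of first-level boxes into a higher-level one, can be sketched as follows. This is a minimal illustration assuming viewpoints are given as (x, y, z) tuples in the group coordinate system; the function names are introduced here.

```python
# Minimal sketch: the viewpoint bounding box of a group is the smallest
# axis-aligned box (in the group coordinate system) enclosing its
# viewpoints.  A single viewpoint yields a degenerate box (a point).

def viewpoint_bounding_box(viewpoints):
    xs, ys, zs = zip(*viewpoints)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def combine_boxes(a, b):
    # merging two first-level boxes gives a second-level bounding box
    (alo, ahi), (blo, bhi) = a, b
    lo = tuple(min(p, q) for p, q in zip(alo, blo))
    hi = tuple(max(p, q) for p, q in zip(ahi, bhi))
    return lo, hi
```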
`
A viewpoint group may be degenerated and contain only a single viewpoint. In this manner, all the techniques applicable to a normal viewpoint group are also applicable to a single viewpoint. In this case, the viewpoint bounding box BB degenerates into a point which coincides with the viewpoint VP.
`
In Fig. 1B, multiple viewers at different times are illustrated. A path PA2 of one viewer of a group is shown extending between locations at two points in time. VP8 represents the viewpoints at time "1" and VP9 the viewpoints at time "2", with BB4 the bounding box at time "1" and BB5 the bounding box of the union of the viewpoints VP8 and VP9.

Referring now to Fig. 2, shown is a projection box PB, a right quadrangular prism with its centre on a viewpoint VP and its edges parallel to the principal axes of the current group coordinate system CS. The six facets of the box PB are called the projection faces PF. Each projection face PF is on a plane called a projection plane PP. The projection PR of a point PO1 in the direction perpendicular to a projection plane PP is the intersection between that plane PP and the view vector VV from the viewpoint VP to the point PO1. Conversely, since a straight line emanating from the viewpoint VP can only intersect
`
`
`
`
`
the projection point PR on a projection plane PP, a location on that plane also represents a unique viewing direction from the viewpoint. In this manner:

VV = PO1 - VP

The area around the visible space point PO1 is a visible patch PT, and the view vector VV passes through this to an invisible space point, occluded by the patch PT.

The projection calculations on each plane are identical to the finding of the image region of an object in the direction perpendicular to that plane. The projection of an object to a projection plane PP therefore represents its image region in the direction parallel to the normal N of that plane.
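The projection PR onto one face can be illustrated for the face perpendicular to the +z axis. This is a hedged sketch, not the patent's formulation: the half-size of the projection box (`half`) and the function name are assumed parameters introduced for illustration.

```python
# Hedged sketch: projecting a point onto the +z projection face of a
# projection box centred on viewpoint VP.  The face lies on the plane
# z = half (in viewpoint-relative coordinates); the projection PR is
# where the view vector VV = PO1 - VP crosses that plane.

def project_to_plus_z_face(vp, po1, half=1.0):
    vv = tuple(p - v for p, v in zip(po1, vp))   # view vector VV = PO1 - VP
    if vv[2] <= 0:
        return None                              # point not in front of +z face
    t = half / vv[2]                             # scale VV to reach z = half
    # projection coordinates relative to the face centre
    return (vv[0] * t, vv[1] * t)
```

Note that all points along the same ray from VP map to the same projection coordinates, which is why a location on the face represents a unique viewing direction.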
A viewpoint VP may not be omni-directional but has a limited field of view. Hence certain projection faces may be partially or totally unobservable from the viewpoint. The hidden area of a projection face PF can be obtained by intersecting it with the viewing horizon of the viewpoint VP.

Locations on the projection plane PP can be expressed by a coordinate system CS called the projection coordinates. The origin of this coordinate system is at the centre of the projection face PF. Its x, y axes are parallel to the edges and its z axis is parallel to the normal of the face.

Because they are all parallel, the projection coordinate systems of the current group of viewpoints VP have the same scale factor so that points having the same viewing direction from their viewpoints always have the same projection coordinates.

If points on two surfaces project on the same location, PR for example, on a projection face PF, an occlusion is taking place. The point on the surface further from the viewpoint VP is hidden by the point on the surface closer to the viewpoint VP unless the latter is transparent. This can be determined by the distances d between these points PR and the viewpoint VP.

Every edge of the projection boxes PB of a group is parallel to one of the principal axes (x, y or z) of the respective group coordinate system CS. Since the vectors parallel to the principal axes can have at most six directions, these faces are classified by their normal directions into six sets.
`
`
`
`
`
This is shown in Fig. 3 where viewpoints VP1, VP2 and VP3, each with respective projection boxes PB1, PB2 and PB3, reside within a bounding box BB1.

The projections on projection boxes PB1-PB3 can be correlated by another box whose edges are also parallel to the axes of the current group coordinate system CS. This box is called the fuzzy projection box FB. It is called "fuzzy" because it is the combined projection of a single surface (or object, entity, etc.) from all viewpoints, with the combined projection creating a fuzzy image of the surface. Each face of the box is called a fuzzy projection face FF. The face FF lies on a plane called the fuzzy projection plane FP. A projection coordinate system can be defined for a fuzzy projection plane FP and the fuzzy projection face FF on it. Similar to the projection coordinate systems CS of the projection planes, the origin of this coordinate system is at the centre of the fuzzy projection face FF and its axes are respectively parallel to the edges and the normal of the face FF.

By setting the scale factor of the projection coordinate system of each of the fuzzy projection planes to be the same as the scale factor of their associated projection planes PP, each set of projection planes/faces has a one-to-one mapping with the fuzzy projection plane/face that is facing the same direction. Points on a projection plane/face can therefore be linearly mapped to the associated fuzzy projection plane/face.

Points on a set of projection planes having the same projection coordinates, and hence representing the same viewing direction, map to a point on the associated fuzzy projection plane that has the same coordinates. Therefore, similar to the projection planes PP, a point on the fuzzy projection plane FP also represents a unique viewing direction. However, the point is not associated with a particular viewpoint.

Since the viewpoints VP can have a limited field of view, some areas of the projection planes may be unobservable. The hidden area of a fuzzy projection face FF is the area where the corresponding areas on all the associated projection faces are hidden. A fuzzy projection face FF is inactive if all its area is hidden.

To sample the projections on the fuzzy projection box FB, each fuzzy projection plane FP is tessellated by two sets of parallel and evenly spaced grid lines forming a fuzzy array FA. By ensuring that
`
`
`
`
`
there are always grid lines on the edges of the fuzzy projection faces FF, these faces are divided into identical rectangular or square cells. Each cell is called a fuzzy element FE. Each element FE, in addition to representing the viewing direction associated with its centre, also represents a unique solid angle A of viewing directions, as seen in Fig. 2, represented by its perimeter.
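The tessellation into fuzzy elements amounts to a simple grid lookup from projection coordinates to a cell. In the illustrative sketch below, the face is assumed to span [-1, 1] in both axes of the projection coordinate system; the grid resolution `n` and the function name are assumptions.

```python
# Illustrative sketch: tessellating a fuzzy projection face into an
# n x n fuzzy array FA and finding the fuzzy element FE that a point
# in projection coordinates falls into.  The face spans [-1, 1] in both
# axes, with the origin at the face centre.

def fuzzy_element(x, y, n):
    """Map projection coordinates (x, y) to (column, row) in an n x n grid."""
    col = min(int((x + 1.0) / 2.0 * n), n - 1)   # clamp the +1.0 edge
    row = min(int((y + 1.0) / 2.0 * n), n - 1)
    return col, row
```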
Other representation schemes, such as the quadtree subdivision, can also be applied, in which case all the z-buffer oriented operations would be changed accordingly to operate under such schemes. However, the regular subdivision of the faces is considered the most efficient embodiment of the present invention.
`
Surfaces in the viewing environment are approximated by meshes of patches PT. A surface can also be represented by a hierarchy of patches. Different levels of the hierarchy are meshes of patches at different details. Each patch PT is treated as flat and is approximated by a polygon. The projection of a surface SU to a viewpoint VP is the region combining all the projection regions of its patches. This region is called the projection region PE of the surface. Since the projection region PE of a patch PT represents its image region, the projection region PE of a surface SU also represents the image region of that surface.

The smallest right quadrangular prism just enclosing the patch and whose edges are parallel to the axes of the current group coordinate system is called the bounding volume of the patch. The shortest and longest depths of a patch are the shortest and longest depths between the current viewpoints and points on the patch. The depth corresponds to the z-magnitude in the coordinate direction perpendicular to the projection planes currently being considered. As they are usually difficult to find, they are approximated by the shortest and the longest depths between the viewpoint bounding box and the bounding volume of the patch, which can be determined by partitioning the space by the planes of the viewpoint bounding box and determining which partitions the bounding volume of the patch is in.
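On the single axis perpendicular to the projection planes, this approximation reduces to the distance between two intervals: the viewpoint bounding box's extent and the patch's bounding volume's extent. The sketch below is a hedged one-axis illustration; the interval-endpoint inputs and function name are assumptions.

```python
# Hedged sketch of the depth approximation: along the axis perpendicular
# to the projection planes, the shortest and longest depths of a patch
# are bounded by the interval distances between the viewpoint bounding
# box and the patch's bounding volume on that axis.

def depth_bounds(view_lo, view_hi, patch_lo, patch_hi):
    """Shortest and longest separation of two intervals on one axis."""
    # candidates: distances between every pair of interval endpoints
    ds = [abs(p - v) for v in (view_lo, view_hi) for p in (patch_lo, patch_hi)]
    overlap = not (patch_hi < view_lo or view_hi < patch_lo)
    shortest = 0.0 if overlap else min(ds)
    return shortest, max(ds)
```

Being a conservative bound (shorter than any true shortest depth, longer than any true longest depth), this errs on the side of caution, which is the property the later occlusion tests rely on.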
`
This is shown in Fig. 4 where a patch PT of a surface is viewed from points VP1, VP2 and VP3 in a bounding box BB1. Associated with each viewpoint VP1-VP3 is a projection region PE1-PE3 which
`
`
`
`
is seen to intersect with the corresponding projection face PF1-PF3 in a different manner for each viewpoint VP1-VP3.
The projections of the patch PT on the same set of projection faces PF1-PF3 can be mapped to the associated fuzzy projection face FF. The areas on the fuzzy projection box FB1 containing all these mappings are called the fuzzy (combined) regions FR of the patch, because logically the region is fuzzy: it cannot tell whether the patch PT actually projects on the corresponding area on each projection box PB1-PB3.

Similar to the case of patches, the fuzzy region FR of a surface SU is produced by combining the mappings of the surface projections for a number of patches. In this manner each projection region of the surface is the logical-OR of all the projections of the patches, and the fuzzy projection of each patch is the logical-OR of its projection regions. Accordingly, the fuzzy region FR of a surface SU is also the logical-OR of the fuzzy projections of its patches. This region is equivalent to the superimposition of all the images of the surface seen from the current viewpoints.

On the fuzzy projection box FB1, there can also be areas which are always mapped on by the projection regions of the surfaces from the current group of viewpoints. On these areas the logical-AND of the projection regions PE1-PE3 of the surface SU is true. Because the outcomes of the logical-AND operations are always a subset of the outcomes of the logical-OR operations, these areas are always within the fuzzy region FR of the surface SU.
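The logical-OR/logical-AND relationship can be sketched with boolean masks over the fuzzy array; the flattened one-dimensional masks below are a hypothetical representation chosen only for brevity.

```python
# Minimal sketch: each viewpoint's projection region is a boolean mask
# over the fuzzy array.  The fuzzy region FR is their cell-by-cell
# logical-OR; the non-fuzzy (umbra) region NF is their logical-AND,
# and NF is always a subset of FR.

def fuzzy_and_nonfuzzy(masks):
    fr = [any(col) for col in zip(*masks)]   # logical-OR cell by cell
    nf = [all(col) for col in zip(*masks)]   # logical-AND cell by cell
    return fr, nf
```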
`
Logically the areas are non-fuzzy as the projection status of the corresponding areas on the current projection boxes is always true. Therefore they are called the non-fuzzy region NF. An alternative term is the umbra region.

The non-fuzzy region NF of a surface SU can be obtained by mapping the projections of the surface at all the current group of viewpoints on the fuzzy projection box FB and finding the area within every projection. However, this is costly if there are many viewpoints. To reduce the computations, a series of approximations which err on the side of caution are applied.

It will be apparent to those skilled in the art that there can exist plural non-fuzzy regions for each surface.
`
`
`
`
`
Referring to Fig. 5A, the first approximation assumes that the viewpoints can be at any position in the current viewpoint bounding box BB2 and each viewpoint has a field of view containing all the possible fields of view of the current group of viewpoints. The patches PTn facing all these viewpoints are determined. They are called the front-facing patches. A method for finding these patches is described in detail in Appendix 1.

Interconnected front-facing patches PTn are grouped into surface regions called the front-facing sub-surfaces SU2. The edges at the borders of these regions are called the boundary edges BE.

Since the front-facing patches PTn, and their boundary edges BE, are facing all the possible viewpoints VP in the viewpoint bounding box BB, a front-facing sub-surface SU2 never curves back and is always facing these viewpoints. The projections of its boundary edges BE therefore always surround its projection region PR.

Similar to the patches, as shown in Fig. 5B, the projection of each boundary edge BE can be mapped on the fuzzy projection box FB1. A fuzzy region FR1 of the edge BE1 is the region on the box FB1 which contains all the projection mappings of that edge at every possible viewpoint in the current viewpoint bounding box BB2.

The fuzzy regions of all boundary edges belonging to a front-facing sub-surface can be combined into an area called the fuzzy boundary region BF. If a point is not in the fuzzy boundary region BF, the boundary edges BE do not project on all the associated points on the projection boxes. Because any change of projection status must be at the projection of the boundary edges, the point must be within every mapping of the projection regions of the front-facing sub-surface SU, or it must be outside all the mappings.

Since the fuzzy region is the combined projection of the front-facing sub-surface, any area within it must be mapped by at least one projection region of the sub-surface. Therefore, the areas inside it but outside the fuzzy boundary region BF always contain the mappings of the projection regions of the sub-surface. These areas by definition are the non-fuzzy region NF1 of the front-facing sub-surface.

As the viewpoints can have any orientation and be anywhere in the viewpoint bounding box BB, the fuzzy region FR of an edge is difficult to compute. However, the extent of this region on a fuzzy projection face
`
`
`
`
`
can be more easily obtained. The computation of this extent is described in Appendix 2. The smallest rectangle totally enclosing this extent and with boundary on the grid lines of the fuzzy projection plane is called the fuzzy extent EF of the edge on that plane.

This is seen in Fig. 5C where the same fuzzy projection face of Fig. 5B is shown, in which the fuzzy boundary region BF1 is replaced by the fuzzy extents EF of the boundary edges BE1 ... etc.

The region containing all the fuzzy extents EF of the boundary edges always encloses the fuzzy boundary region of the front-facing sub-surface. Therefore, the subtraction of it from the fuzzy region FR produces an area always within the non-fuzzy region. This area is used to approximate the non-fuzzy region NF.

If a surface contains large patches and hence large boundary edges, the use of the fuzzy extents to approximate its fuzzy boundary region may be inefficient. The fuzzy regions of its boundary edges can be directly evaluated using the method described in Appendix 6. Alternatively, each edge can be divided into sub-edges. The fuzzy region of that edge can then be replaced by the fuzzy extents of its sub-edges.
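One cautious way to obtain a grid-aligned extent of an edge's projections can be sketched as follows. This is not the Appendix 2 method itself but an illustrative reconstruction under stated assumptions: projection onto the +z face at unit distance, all points in front of that face, and a grid cell size of 0.25. It projects the edge endpoints from every corner of the viewpoint bounding box and snaps the enclosing rectangle outward to grid lines.

```python
# Hedged sketch of a fuzzy extent: project both endpoints of an edge
# from every corner of the viewpoint bounding box onto the +z face
# (plane z = 1 relative to each viewpoint), then take the smallest
# grid-aligned rectangle enclosing all the projections.  The result
# over-estimates the true fuzzy region, erring on the side of caution.

def corners(lo, hi):
    return [(x, y, z) for x in (lo[0], hi[0])
                      for y in (lo[1], hi[1])
                      for z in (lo[2], hi[2])]

def fuzzy_extent(edge, box_lo, box_hi, cell=0.25):
    pts = []
    for vp in corners(box_lo, box_hi):
        for p in edge:
            dx, dy, dz = (p[i] - vp[i] for i in range(3))
            pts.append((dx / dz, dy / dz))       # assumes dz > 0 (in front)
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    # floor the minima and ceil the maxima to the grid lines
    snap = lambda v, up: (v // cell + (1 if up and v % cell else 0)) * cell
    return (snap(min(xs), False), snap(min(ys), False),
            snap(max(xs), True), snap(max(ys), True))
```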
`
An Invisibility Fuzzy Projection Method for the Detection of Totally Invisible Patches
`
In this embodiment a method is provided to compare the mappings of entities on the fuzzy projection faces FF associated with a group of viewpoints VP. Through this operation, entities which may be treated as invisible to all these viewpoints can be detected.

Referring to Fig. 6A, a viewpoint bounding box BB3 is shown to observe three patches PTA, PTB and PTC through a mesh surface SUM. Figs. 6B, 6C and 6D show respectively the fuzzy extents EFA, EFB and EFC for the patches PTA, PTB and PTC on respective fuzzy projection faces FFA-FFC. Also shown is the non-fuzzy region NF3 of the mesh surface SUM.

The determination of totally invisible surfaces is based on a property of the non-fuzzy regions. In Fig. 5D, point PO2 is hidden by surface SU1 from viewpoint VP11 in the current viewpoint bounding box BB5. If the point is visible from another viewpoint VP10 which is also in BB5, then SU1 has to curve back and reach behind PO2. However, in such a situation part of SU1 next to a point SP would become back-facing to viewpoint VP12 in BB5. SU1 would then contain several front-facing