Topology Preserving Data Simplification with Error Bounds

Chandrajit L. Bajaj

Department of Computer Sciences and TICAM, University of Texas, Austin, TX 78712

Daniel R. Schikore

Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, P.O. Box 808, L-561, Livermore, CA 94550

Preprint submitted to Elsevier Preprint, 3 December 1997

Many approaches to simplification of triangulated terrains and surfaces have been proposed which permit bounds on the error introduced. A few algorithms additionally bound errors in auxiliary functions defined over the triangulation. We present an approach to simplification of scalar fields over unstructured grids which preserves the topology of functions defined over the triangulation, in addition to bounding the errors. The topology of a 2D scalar field is defined by its critical points (local maxima, local minima, saddle points) and by integral curves between them, which together segment the field into regions that vary monotonically. By preserving this shape description, we guarantee that isocontours of the scalar function maintain the correct topology in the simplified model. Methods for topology preserving simplification by both point insertion (refinement) and point deletion (coarsening) are presented and compared.

Keywords: Simplification, Scientific Visualization, Scalar Fields

1 Introduction

Scientific data is often sampled or computed over a dense mesh in order to capture high frequency components or achieve a desired error bound. Interactive display and navigation of such large meshes is impeded by the sheer number of triangles required to sufficiently model highly complex data. A number of simplification techniques have been developed which reduce the number of triangles to a particular desired triangle count or until a particular error
threshold is met. Given an initial triangulation M of a domain D and a function F(x) defined over the triangulation, the simplified mesh can be called M' and the resulting function F'(x). The measure of error in a simplified mesh M' is usually represented as:

E(M') = max_{x in D} |F(x) - F'(x)|        (1)

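As a rough illustration (not part of the original paper), the error measure in (1) can be approximated by evaluating both piecewise linear interpolants at a common set of sample points; the callables and sample array below are hypothetical placeholders.

import numpy as np

def max_pointwise_error(f_original, f_simplified, sample_points):
    """Approximate E(M') = max_x |F(x) - F'(x)| by sampling.

    f_original, f_simplified: callables mapping an (n, 2) array of points
    in the domain D to n interpolated function values.
    sample_points: (n, 2) array of sample locations.
    """
    diff = np.abs(f_original(sample_points) - f_simplified(sample_points))
    return float(diff.max())
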
The ability to bound the error E(M') is very important, but the error definition (1) is inherently a local measure, neglecting to consider global features of the data. We introduce new criteria for the simplification of sampled functions which preserve scalar field features in addition to bounding local errors. Two-dimensional scalar field topology is described by the critical points and arcs between them. Preserving the scalar field criticalities maintains an invariant of the connectivity and combinatorial structure (topological genus) of successively simplified isocontours.

In Section 2 we discuss related work in mesh simplification and feature detection. Section 3 introduces the definition of 2D scalar topology as it will be used in our simplification strategy. In Section 4 we introduce two algorithms for simplification with topology preserving characteristics. The first is an extension to existing coarsening techniques which iteratively delete vertices or contract edges in the mesh. The second algorithm adopts an inverse approach, iteratively introducing detail (refinement) to an initially sparse mesh, preserving the scalar topology of the fine mesh.

2 Related Work

2.1 Simplification

A wide variety of algorithms have been developed for the simplification of meshes. We present a brief overview of the classes of algorithms which have been proposed for geometry and data simplification.

Vertex insertion/deletion

A large class of data and geometry simplification algorithms are based on successive application of one or more topological mesh operators, such as edge contraction, which contracts an edge of the mesh to a point, or vertex deletion, in which a vertex and adjacent triangles are removed and replaced with a covering of the resulting hole. Point insertion and deletion approaches have
been explored by many researchers for application in Geographical Information Systems (GIS). A common technique is to extract key points of data from the originally dense set of points, and compute a Delaunay triangulation [7,11,12,33,39,41]. Silva et al. [37] use a greedy approach for inserting points into an initially sparse mesh. Schroeder et al. [36] compute reduced representations for dense triangular surface meshes such as those computed by Marching Cubes [25] or similar isosurfacing algorithms. Vertices in the dense mesh are examined and classified based on geometric features in the triangulation surrounding the vertex. If error criteria are satisfied, the vertex is deleted and the resulting hole is retriangulated. Retriangulation is guided by local edges detected in the classification stage and aspect ratios of new triangles. Several passes over the object successively remove vertices until no vertex satisfies the criteria for removal. There is no cumulative error measure, and therefore no guarantee on the amount of accumulated error in the final representation. Hamann [15] applies a similar technique in which triangles are considered for deletion based on curvature estimates at the vertices. Reduction may be driven by mesh resolution or, in the case of functional surfaces, root-mean-square (RMS) error. Ronfard et al. [34] apply successive edge contraction operations to compute a wide range of levels-of-detail for triangulated polyhedra. Edges are extracted from a priority queue based on a computed edge cost such that edges of lesser significance are removed first. Gueziec [14] introduces a tolerance volume for bounding the error resulting from successive edge contraction operations. The resulting merged vertex is positioned such that the volume remains constant. Cohen et al. [6] introduce Simplification Envelopes to guide mesh simplification with global error bounds. Envelopes are an extension of offset surfaces which serve as an extreme boundary for the desired simplified surface.

Region Merging
Hinker et al. [19] perform "geometric optimization" on triangular surface meshes by grouping faces into contiguous sets which are nearly co-planar. Points interior to a region and points along nearly linear boundaries of regions are deleted, and the resulting hole is retriangulated. Kalvin et al. [24] cluster mesh faces into superfaces, triangulating the resulting polygons for a simplified representation.

Filtering

Filtering techniques are capable of producing a large range of simplified models through application of grouping and merging rules. An attractive feature of
filtering techniques is the ability to simplify objects to a minimal representation through successive applications. Subsampling is a simple type of filtering which is easily applied to subdivision meshes for which there exists a natural remeshing when nested sets of vertices are successively deleted. The major drawback to subsampling is that there is no bound on the error which is introduced through its application. Rossignac et al. [35] use clustering and merging of features of an object based on a regular spatial subdivision. Clustering approaches have the advantage that small features which are geometrically close but not topologically connected can be grouped and merged for higher rates of simplification. In this scheme long, thin objects may collapse to an edge and small objects may collapse to a point. He et al. [16] provide more control over subsampling of regular grids by filtering the simplified mesh at each step. The regular grid corresponds to a sampling of the signed-distance function of a 3D surface. A multi-resolution triangle mesh is extracted from the resulting multi-resolution volume buffer using traditional isosurfacing techniques.

Optimization
Optimization methods define measures of energies for point sets or triangulations based on an original mesh, and use iterative optimization to minimize these energies in forming a simplified mesh. Turk [40] computes simplified polygonal surfaces at a desired number of vertices. Contrast this with the point insertion and deletion methods, which are usually driven by error computations rather than desired resolution. Given the desired number of vertices, point repulsion on the polygonal surface spreads the points out. A mutual tessellation of the original triangulation and the introduced points, followed by deletion of the original vertices, guarantees that the topology of the polygonal surface is maintained. Point repulsion is adjusted based on estimated curvature of the surface, providing an adaptive triangulation which maintains geometric features. Hoppe et al. [21] perform time-intensive mesh optimization based on the definition of an energy function which balances the need for accurate geometry with the desire for compactness in representation. The level of mesh simplification is controlled by a parameter in the energy function which penalizes meshes with large numbers of vertices, as well as a spring constant which helps guide the energy minimization to a desirable result. In [20], Hoppe applies the optimization framework to the simplification of scalar fields.

Multi-resolution analysis

Multi-resolution analysis is a structured mathematical decomposition of functions into multiple levels of representation. Through the use of wavelet
transforms [10,27], a hierarchical representation of functions can be obtained by repeatedly breaking the function into a coarser representation in addition to a set of perturbation coefficients which allow the full recovery of the original representation from the coarse representation. Generally, the wavelet basis is chosen such that the perturbation coefficients have desirable attributes, such as direct correlation with some measure of error which is introduced at a given level of representation. During reconstruction from the wavelet representation, sufficiently small wavelet coefficients can be left out, resulting in a coarser approximation to the original data, with a known bound on the amount of error [26,8,38]. Further extensions have provided similar bases for the decomposition of surfaces [9]. Muraki [31] applies wavelets in 3D to compute multi-resolution models of 3D volume data. Isosurfaces and planar cross sections of the resulting data show little change in image quality with large reductions in the amount of data representing the volume.

2.2 Feature Detection
The problem of detecting ridges and valleys in digital terrain has been considered in several papers [12]. McCormack et al. [29] consider the problem of detecting drainage patterns in geographic terrain. Interrante et al. [22] used ridge and valley detection on 3D surfaces to enhance the shape of transparently rendered surfaces. Extrema graphs were used by Itoh and Koyamada to speed isocontour extraction [23]. A graph containing extreme points and boundary points of a scalar field can be guaranteed to intersect every isocontour at least once, allowing seed points to be generated by searching only the cells contained in the extrema graph. Helman and Hesselink detect vector field topology by classifying the zeros of a vector field and performing particle tracing from saddle points [18]. The resulting partitioning consists of regions which are topologically equivalent to uniform flow. Globus et al. describe a software system for 3D vector topology and briefly note that the technique may also be applied to the gradient of a scalar field in order to identify maxima and minima [13]. Bader examines the gradient field of the charge density in a molecular system [1]. The topology of this scalar field represents the bonds linking together the atoms of the molecule. Bader goes on to show how higher level structures in the topology represent chains, rings, and cages in the molecule.

2.3 Our Approach

Simplification techniques have advanced to the point at which it is useful to now consider preserving global mesh and data features. In the following
section we present a definition and method of computation of scalar field topology, which describes scalar field structure in terms of criticalities and relationships between them and captures the topological connectivity of isocontours. Based on these definitions, we develop two approaches for simplification which are motivated by the preservation of the scalar topology, and hence the preservation of the topology of approximate isocontours. The first approach uses a bottom-up simplification approach, consisting of a small set of consistency checks which guarantee scalar topology consistency for the case of iterative simplification by point deletion or edge contraction. The second approach inverts the order of operation, beginning with the scalar topology computation, which serves as a sparse base mesh which is refined to meet error-bound criteria.

3 Scalar Field Topology
Scalar field structure can be characterized by the critical points of the scalar field and higher order relationships between them [4]. A point x is a critical point of the scalar function F(x) if all first-order partial derivatives of F evaluated at x are zero [30]. In degenerate cases, a critical point may be part of a larger critical curve or critical region. We will restrict our attention to critical points, because our motivating work is not greatly affected by the treatment of these special cases.

For our purpose of feature detection and application to simplification, the topology of a scalar field F defined over a domain D consists of the following:

(i) The local maxima of F
(ii) The local minima of F
(iii) The saddle points of F
(iv) Selected integral curves joining each of the above

Integral curves are defined as curves which are everywhere tangent to the gradient field of F. In vector field topology, the curves advected in the flow field segment the field into regions which are topologically equivalent to uniform flow [17]. In the case of scalar topology, the analogous property holds that integral curves partition the field into regions in which the gradient flow is uniform, or in other words, the scalar function is monotonic. Another way of understanding this property is that within each region, the isocontours of the scalar field will consist of a single connected component.

Figure 1 displays several isocontours of a scalar field and the corresponding scalar field topology. Figure 2 displays a surface representation of the same field.

Fig. 1. Isocontours (dashed) of a scalar field along with the critical points and integral curves.

Fig. 2. A surface representation of the function in Figure 1. A change of contour topology occurs at the saddle point.

The procedure for computation of scalar topology can be broken down into three stages:

(i) Detect critical points of F.
(ii) Classify critical points.
(iii) Compute selected integral curves in the gradient field.

In the following subsections, we describe the above components under the assumption that the scalar field F has continuous first derivatives. In Section 3.4 we will remove this restriction and discuss scalar topology computation for the case of a piecewise C^0 function defined over a triangulation.

3.1 Computing Critical Points
Critical points of a scalar function are defined as points at which the gradient vanishes [28]. Given a d-dimensional scalar field F, the critical points correspond to solutions of the system of equations:

∂F/∂x_1 = 0
   ...
∂F/∂x_d = 0

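As an illustrative sketch (not taken from the paper), a critical point of a smooth 2D field can be located numerically by running Newton's method on the gradient, using the Hessian as the Jacobian. The test field F(x, y) = sin(x) sin(y) and the starting point are hypothetical.

import numpy as np

def find_critical_point(grad, hess, x0, tol=1e-10, max_iter=50):
    """Newton iteration for grad F(x) = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)   # Newton step: x <- x - H^{-1} grad F
    return x

# Hypothetical test field F(x, y) = sin(x) sin(y).
grad = lambda p: np.array([np.cos(p[0]) * np.sin(p[1]),
                           np.sin(p[0]) * np.cos(p[1])])
hess = lambda p: np.array([[-np.sin(p[0]) * np.sin(p[1]),  np.cos(p[0]) * np.cos(p[1])],
                           [ np.cos(p[0]) * np.cos(p[1]), -np.sin(p[0]) * np.sin(p[1])]])

print(find_critical_point(grad, hess, x0=[1.4, 1.7]))   # converges near (pi/2, pi/2)
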
3.2 Classification of Critical Points

Qualitative information about the behavior of the gradient field near a critical point is obtained by analysis of the Hessian of F (in 2D):

H = [ ∂²F/∂x²    ∂²F/∂x∂y
      ∂²F/∂y∂x   ∂²F/∂y²  ]

The eigenvalues (e_1, e_2) and eigenvectors (v_1, v_2) of the above matrix relate to the behavior of the gradient field, and hence the scalar field, near the critical point, much the same as for the behavior of a general vector field [5,18]. One difference to note is that for a gradient field, the matrix of derivatives is symmetric (∂²F/∂x∂y = ∂²F/∂y∂x), and therefore the eigenvalues are real. This is intuitively expected, as imaginary eigenvalues indicate rotation about the critical point, and a gradient field is an irrotational vector field. This observation allows us to simplify the classification of critical points to the cases depicted in Figure 3 (additional degenerate criticalities exist when e_1 = 0 or e_2 = 0).

Fig. 3. Scalar critical point classifications: maximum (e_1, e_2 < 0), minimum (e_1, e_2 > 0), regular saddle (e_1 · e_2 < 0).

A positive eigenvalue corresponds to gradient flow away from the critical point, while a negative eigenvalue indicates gradient flow toward the critical point. In the case of a saddle point, there is gradient flow toward and away from the critical point, distinguishing it from the field behavior near other critical points. In this case, the eigenvectors corresponding to the positive and negative eigenvalues define the separatrices of the saddle in the directions of flow away from and toward the critical point, respectively. It is this property that is used in the next section to compute integral curves in the gradient field.

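As a brief sketch (not from the paper), this classification can be computed directly from the eigenvalues of the symmetric Hessian; the example matrix is a hypothetical Hessian of F(x, y) = x² - y² at the origin.

import numpy as np

def classify_critical_point(hessian, tol=1e-12):
    """Classify a 2D critical point from its (symmetric) Hessian.

    The eigenvalues are real because the Hessian is symmetric; their
    signs determine the type, as in Figure 3.
    """
    e1, e2 = np.linalg.eigvalsh(np.asarray(hessian, dtype=float))
    if abs(e1) < tol or abs(e2) < tol:
        return "degenerate"          # e1 = 0 or e2 = 0: not handled here
    if e1 < 0 and e2 < 0:
        return "maximum"
    if e1 > 0 and e2 > 0:
        return "minimum"
    return "saddle"                  # e1 * e2 < 0

print(classify_critical_point([[2.0, 0.0], [0.0, -2.0]]))   # -> "saddle"
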
Fig. 4. Scalar topology of a simple scalar field. In this example only interior criticalities are computed. Additional criticalities occur at the boundaries.

3.3 Computing Integral Curves
Having computed and classified the critical points, the final step for describing the scalar topology is the tracing of selected integral curves between the detected points. Saddle points have the property that the eigenvectors of the Hessian are the separatrices of the saddle. The behavior of the scalar field near the saddle point can be determined based on the behavior of the function in the direction of the eigenvectors. Integral curves are defined as curves which are everywhere tangent to the gradient field of F. For a 2D scalar field, four integral curves meet at a saddle, dividing the domain into regions. Two curves follow the (forward and backward) direction corresponding to the positive eigenvalue, and two follow the direction corresponding to the negative eigenvalue.

Integral curves can be computed efficiently using a fourth-order adaptive-step Runge-Kutta integration in the gradient field [32]. The initial position for the iterative stepping is placed a small distance from the saddle point along the appropriate eigenvector. Computation of the integral curve ends when we reach the vicinity of another critical point within a certain ε, in which case the curve terminates at that point. Other curves may end at the boundaries of the mesh. The scalar topology for a simple function is shown in Figure 4.

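As a minimal sketch (the paper uses an adaptive-step integrator; a fixed step is used here for brevity), an integral curve can be traced with classical Runge-Kutta steps in the gradient field, starting a small distance from a saddle along an eigenvector. The gradient callable, step size, and termination radius are hypothetical parameters.

import numpy as np

def trace_integral_curve(grad, start, critical_points, h=0.01,
                         eps=1e-3, max_steps=10000, direction=+1.0):
    """Trace a curve that is everywhere tangent to the gradient field.

    direction=+1 follows the gradient (uphill, toward a maximum);
    direction=-1 follows the negative gradient (downhill, toward a minimum).
    Tracing stops on entering an eps-neighborhood of a known critical point.
    """
    x = np.asarray(start, dtype=float)
    curve = [x.copy()]
    for _ in range(max_steps):
        # Classical 4th-order Runge-Kutta step in the (signed) gradient field.
        k1 = direction * grad(x)
        k2 = direction * grad(x + 0.5 * h * k1)
        k3 = direction * grad(x + 0.5 * h * k2)
        k4 = direction * grad(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        curve.append(x.copy())
        if any(np.linalg.norm(x - c) < eps for c in critical_points):
            break   # reached the vicinity of another critical point
    return np.array(curve)
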
3.4 Piecewise Linear Topology

Critical point analysis for functions with continuous derivatives provides a fundamental theory for describing the structure of scalar fields. Extending these definitions for functions which are not smooth requires relaxation of
some of our definitions. We address the case of a function which is sampled at the vertices of a 2D triangle mesh and linearly interpolated within each triangle.

For the case of a piecewise linear function, we must revise our definition of "critical point" due to the fact that the gradient is not everywhere defined. The feature that we want to capture is the change in the topology of the isocontours, as was illustrated in Figure 2. It is clear that the contour topology does not change within a triangular cell, and so we confine our attention to the classification of vertices with respect to the local neighborhood. In the general case at a vertex, the gradient is undefined, and may be considered to take on a range of values based on the normals of the adjacent triangles, as illustrated in Figure 5. As a result, critical points are defined as vertices at which the normal space of the adjacent triangles includes the vector (0, 0, 1). Figure 6 illustrates several of the cases which may occur.

Fig. 5. Normal space (left) for a cluster of 3 triangles incident on a vertex (right).

Fig. 6. A subset of the pseudo critical points for a piecewise linear scalar field: vertex configurations for a saddle point, a maximum, and a minimum.

Computation of integral curves requires a similar modification, as the gradient is undefined along the edges of the triangulation as well. Given a saddle point in a piecewise linear triangulation, the local topology is determined by examining the adjacent vertices in a clockwise or counter-clockwise order. Each local maximum or local minimum in this 1D ordering corresponds to a "ridge" or "channel" of the scalar field. Integral curves can be computed by traversing the edges of the triangulation corresponding to ridges and channels. Figure 7 illustrates the topology of a piecewise linear function similar to that presented in Figure 1.

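As a sketch of one common way to carry out this vertex classification (assuming distinct function values and an interior vertex; not necessarily the authors' exact implementation), the sign of F(neighbor) - F(vertex) is examined around the cyclically ordered one-ring and the number of sign changes determines the type.

def classify_vertex(f_v, ring_values):
    """Classify a vertex of a piecewise linear field from the signs of
    F(neighbor) - F(vertex) around the one-ring, taken in cyclic order.

    0 sign changes: maximum (all neighbors lower) or minimum (all higher).
    2 sign changes: regular (non-critical) vertex.
    4+ sign changes: saddle (possibly a multiple saddle).
    """
    signs = [1 if f > f_v else -1 for f in ring_values]
    changes = sum(1 for i in range(len(signs))
                  if signs[i] != signs[(i + 1) % len(signs)])
    if changes == 0:
        return "maximum" if signs[0] == -1 else "minimum"
    return "regular" if changes == 2 else "saddle"

# Hypothetical one-ring: neighbor values alternate above/below the center
# value 0.0, giving four sign changes -> saddle.
print(classify_vertex(0.0, [1.0, -1.0, 2.0, -2.0]))   # -> "saddle"
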
Fig. 7. Topology of a piecewise linear scalar field.

4 Topology Preserving Simplification

We describe two approaches to the simplification of functional meshes while maintaining the structure, as defined by the scalar topology. We assume that the domain is discretized by a triangular mesh M of n_t triangles T_i and n_v vertices v_j. The function F(x) is defined for x = v_j and is linearly interpolated over each triangle. F(T_i) is used to represent the local interpolant over a triangle.

4.1 Bottom-Up Simplification

In this section we derive necessary and sufficient conditions which avoid the change of scalar topology in the framework of existing point-deletion or edge-contraction simplification techniques.

Fig. 8. General simplification operations applied to a local region of the mesh in (a). In (b), the resulting triangulation is formed by collapsing the deleted point to an adjacent point. In (c), the retriangulation is general and may be chosen to minimize the error introduced.

The basic framework of our approach is an error-bounded removal of vertices based on an ordering criterion [2,3]. We restrict our simplification to use the original vertices and to local operations such as vertex deletion and vertex collapse, as illustrated in Figure 8. The new mesh which results from the kth step of simplification is called M^k, and its triangles are denoted T_i^k. We further define F^k(x) to represent the piecewise defined height function associated with the mesh M^k. Note that for all triangles not involved in the operation at step
k, T_i^k = T_i^{k-1} and thus F^k(T_i^k) = F^{k-1}(T_i^{k-1}). Initially, we have M^0 = M and T_i^0 = T_i.

Error bounds are computed using a two phase process: (i) measurement of error introduced by a point deletion, and (ii) approximation of the accumulated error due to all point deletions in an overlapping region. In the following sections we describe the approach for bounding errors and derive conditions which preserve the topology of our given function.

4.1.1 Local measurement of error

The first step in maintaining an error-bound on the simplification is to measure the error introduced by deleting a vertex or collapsing an edge. Computing the introduced error is particularly simple due to the limited local influence of the operation.

Fig. 9. Measurement of introduced errors at black vertices and the fragmentation into linear error regions. The introduced error at the white vertices and along the boundary is zero.

As illustrated in Figure 9, a local retriangulation due to removal of a vertex replaces the triangles T_l^{k-1} with triangles T_i^k. The error incurred at any point x in the local domain is quantified by the difference between the interpolated function values in the two triangulations, defined as E^k(x) = F^{k-1}(x) - F^k(x). As the difference of two linear functions is again linear, the introduced error is defined as a piecewise linear function over the decomposition implied by the intersections of the triangulations, as illustrated above. The regions of the decomposition are called R_j^k, and the (continuous) range of introduced error across a region is [ε^-(R_j^k), ε^+(R_j^k)]. Note that the ε^{+/-}(R_j^k) represent signed error values, and that the minimum and maximum errors occur at the extreme points of the convex regions, and thus can be determined by computing the errors at only a few points in the domain.

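As a short sketch of this measurement (the paper evaluates the error exactly at the corners of the convex overlay regions; here the extremes are only approximated by sampling, and the triangle lists, point array, and sample set are hypothetical inputs):

import numpy as np

def eval_pwl(tris, pts, vals, x):
    """Evaluate a piecewise linear interpolant at point x via barycentric
    coordinates; returns None if x lies in no listed triangle."""
    for a, b, c in tris:
        T = np.column_stack((pts[b] - pts[a], pts[c] - pts[a]))
        try:
            l1, l2 = np.linalg.solve(T, x - pts[a])
        except np.linalg.LinAlgError:
            continue                      # degenerate triangle
        l0 = 1.0 - l1 - l2
        if min(l0, l1, l2) >= -1e-12:     # x inside (or on the boundary of) the triangle
            return l0 * vals[a] + l1 * vals[b] + l2 * vals[c]
    return None

def introduced_error_range(old_tris, new_tris, pts, vals, samples):
    """Approximate the signed range of E^k(x) = F^{k-1}(x) - F^k(x)."""
    lo, hi = np.inf, -np.inf
    for x in samples:
        f_old = eval_pwl(old_tris, pts, vals, x)
        f_new = eval_pwl(new_tris, pts, vals, x)
        if f_old is None or f_new is None:
            continue
        e = f_old - f_new
        lo, hi = min(lo, e), max(hi, e)
    return lo, hi
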
4.1.2 Global errors from local errors

Equally important as the measurement of error introduced through one simplification operation is the measurement or approximation of errors which accumulate through successive operations. Features which are desirable in, or even required for, an error approximation strategy include:

- The estimation of error must be strict (must not underestimate the actual error).
- The estimation should be tight, such that the estimate provides a useful measure of the actual error.

At the same time, our desire for scalability and efficiency for extremely large scientific datasets requires that the strategy for bounding errors be simple and not impose a significant cost in addition to computing the introduced errors. We have designed a simple error representation and accumulation scheme which fulfills these criteria.

Error representation

We associate with each triangle two floating-point errors ε^-(T_i^k) and ε^+(T_i^k) (illustrated in Figure 10), such that for x ∈ T_i^k,

ε^-(T_i^k) ≤ F(x) - F^k(x) ≤ ε^+(T_i^k)        (2)

Initially, ε^-(T_i^0) = ε^+(T_i^0) = 0, indicating that the original representation is exact. Note that one can easily incorporate input data with known source error bounds into this scheme by initializing the error bounds appropriately.

Fig. 10. 1D illustration of the accumulated error bound representation.

With such a representation in hand, we examine the problem of computing the errors ε^{+/-}(T_i^k) given ε^{+/-}(T_i^{k-1}) such that inequality (2) is maintained as an invariant. We noted previously that the local domain is segmented into regions in which the introduced error varies linearly. Each introduced triangle T_i^k maps to a set of regions {R^k_{i,0}, ..., R^k_{i,r}}. For each region R^k_{i,j} we determine the minimal sufficient error range which aggregates the existing error with the
accumulated error. Each region R^k_{i,j} corresponds to a portion of a triangle T_l^{k-1} with associated error interval [ε^-(T_l^{k-1}), ε^+(T_l^{k-1})], and by interval arithmetic we arrive at the following inequality, which bounds the error over the region R^k_{i,j}:

ε^-(T_l^{k-1}) + ε^-(R^k_{i,j}) ≤ F(x) - F^k(x) ≤ ε^+(T_l^{k-1}) + ε^+(R^k_{i,j})

which leads to the accumulated error interval:

[ε^-(T_l^{k-1}) + ε^-(R^k_{i,j}),  ε^+(T_l^{k-1}) + ε^+(R^k_{i,j})]

For a minimal bound on the triangle T_i^k it is sufficient to take the maximum of the ε^{+/-}(R^k_{i,j}) over all regions R^k_{i,j} which contribute to the triangle T_i^k. This process is illustrated geometrically for a simple 1D example in Figure 11.

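The aggregation step can be sketched in a few lines (not the authors' code; the tuple layout is a hypothetical convenience, and "maximum" is interpreted as the widest interval that contains every per-region interval):

def accumulate_error_interval(region_contribs):
    """Accumulated error bounds [eps_minus, eps_plus] for a new triangle
    from the regions that cover it.

    region_contribs: list of tuples
        (old_eps_minus, old_eps_plus, intro_eps_minus, intro_eps_plus),
    where the first pair is the accumulated interval of the old triangle a
    region came from and the second pair is the signed introduced error
    range over that region.  Interval addition per region, then the union
    of the per-region intervals, gives a valid bound for the triangle.
    """
    eps_minus = min(om + im for om, op, im, ip in region_contribs)
    eps_plus = max(op + ip for om, op, im, ip in region_contribs)
    return eps_minus, eps_plus

# Hypothetical example with two contributing regions.
print(accumulate_error_interval([(-0.1, 0.2, -0.05, 0.0),
                                 (-0.05, 0.1, 0.0, 0.3)]))   # -> (-0.15, 0.4)
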
Fig. 11. Geometric interpretation of the accumulation of error bounds. The introduced error ε_j contributes to the error interval as illustrated.

Fig. 12. Illustration of propagation of error bounds. As each successive point is deleted, the error bounds on each segment are updated to reflect the minimum bound between the new segment and the original polyline.

4.1.3 Preserving Local Topology

Topology preservation is easily integrated into a vertex-delete and edge-contract simplification framework by considering the topology of the contours in the neighborhood of each candidate edge being considered for contraction. Figure 13 illustrates edge contraction operations which preserve the original topology of the field as well as edge contraction operations which modify the original topology.

Fig. 13. Illustration of topology preserving and topology modifying edge contraction operations: the original triangulation, a valid collapse, and an invalid collapse.

Topology preservation can be enforced by comparing the classification of the vertices in the local neighborhood before and after the edge contraction. If the collection of classifications changes, criticalities have either been introduced or destroyed. Note that criticalities may move from one vertex to a neighboring vertex without modifying the field topology. If this is undesirable, one can enforce that only regular points be contracted to a neighbor. Note that topology preserving simplification of the scalar field in general also simplifies the scalar topology diagram (the piecewise linear arcs between the critical points).

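A minimal sketch of this acceptance test (assuming the piecewise linear vertex classification described in Section 3.4; the neighborhood dictionaries are hypothetical data structures, not the authors' API):

def ring_sign_changes(center_value, ring_values):
    """Count sign changes of F(neighbor) - F(center) around a one-ring."""
    signs = [1 if v > center_value else -1 for v in ring_values]
    return sum(1 for i in range(len(signs))
               if signs[i] != signs[(i + 1) % len(signs)])

def vertex_class(center_value, ring_values):
    c = ring_sign_changes(center_value, ring_values)
    if c == 0:
        return "maximum" if ring_values[0] < center_value else "minimum"
    return "regular" if c == 2 else "saddle"

def contraction_preserves_topology(neighborhood_before, neighborhood_after):
    """Accept an edge contraction only if the multiset of vertex
    classifications in the local neighborhood is unchanged, i.e. no
    criticalities are created or destroyed.

    Each argument maps a vertex id to (value, cyclically ordered ring values).
    """
    before = sorted(vertex_class(v, ring) for v, ring in neighborhood_before.values())
    after = sorted(vertex_class(v, ring) for v, ring in neighborhood_after.values())
    return before == after
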
4.1.4 Ordering of operations

Critical to the success of an error approximation approach is the order in which vertices are selected for removal. Vertex selection driven by a priority queue has permitted simplification which spans wide ranges of resolution [34].

We apply a priority queue driven approach which initially measures the error introduced by removal of each vertex. At each simplification step, the vertex which introduces the minimum error is extracted and the simplification operation is applied. After each operation, neighboring vertices which are affected by the result are reprioritized based on the accumulated error which results from their removal.

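A sketch of this loop using a binary heap (the mesh object and its methods are hypothetical placeholders for the error measurement and topology checks described above; stale queue entries are skipped lazily):

import heapq

def simplify(mesh, error_tolerance):
    removed = set()
    heap = [(mesh.removal_error(v), v) for v in mesh.vertices()]
    heapq.heapify(heap)
    while heap:
        err, v = heapq.heappop(heap)
        if v in removed:
            continue
        current = mesh.removal_error(v)
        if err != current:                    # stale entry: reinsert with fresh priority
            heapq.heappush(heap, (current, v))
            continue
        if err > error_tolerance:
            break                             # cheapest candidate already exceeds the bound
        if not mesh.can_remove(v):            # accumulated-error or topology check fails
            continue
        removed.add(v)
        for w in mesh.remove(v):              # reprioritize the affected neighbors
            heapq.heappush(heap, (mesh.removal_error(w), w))
    return mesh
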
4.1.5 Examples

Figure 14 illustrates the features of a topology preserving mesh simplification scheme. Using the same input triangulations, two simplification strategies are applied. The mesh in Figure 14(b) is the result when topology preserving
criteria are enforced. The mesh in Figure 14(c) is the result when only local error bounds are used to guide the simplification.

Fig. 14. (a) Close-up of input triangulation of MRI data of a heart, with contour in white; (b) a topology preserving simplified mesh; (c) simplification which violates the original topology; (d-f) the associated triangulations.

4.2 Top-Down Simplification

Coarsening techniques are desirable for their simplicity. Another class of simplification works in the opposite order, adding detail to an initially coarse triangulation. In this section we consider the application of scalar field topology in guiding the insertion and triangulation. The scalar field topology diagram consists of a subset of the edges of a triangular mesh. Top-down simplification proceeds beginning with a base mesh consisting of the critical points in addition to edges connecting those critical points which are connected by an arc in the scalar topology diagram. Figure 15 illustrates the base mesh construction from an initial dense mesh based on the scalar topology diagram.

Having constructed a base mesh, iterative refinement operations can be
applied, in order to reconstruct both the scalar field and the scalar topology diagram to a desired level of detail.

Fig. 15. Generating a base mesh which incorporates the scalar field topology, from the original triangulation to the initial coarse triangulation.

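As a rough sketch of such a refinement loop (not the authors' algorithm: scipy's unconstrained Delaunay triangulation is used here for brevity, whereas a faithful implementation would keep the topology-diagram edges constrained in the base mesh; the function name and inputs are hypothetical):

import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def refine(points, values, base_idx, tolerance):
    """Greedy top-down refinement: repeatedly insert the original vertex
    with the largest interpolation error until the bound is met.

    points, values: vertices of the original dense mesh and their data.
    base_idx: indices of the initial coarse mesh (e.g. the critical points,
    assumed to include at least three non-collinear vertices).
    """
    selected = list(base_idx)
    while True:
        interp = LinearNDInterpolator(points[selected], values[selected])
        approx = interp(points)
        errors = np.where(np.isnan(approx), np.inf, np.abs(values - approx))
        errors[selected] = 0.0                 # already represented exactly
        worst = int(np.argmax(errors))
        if errors[worst] <= tolerance:
            break
        selected.append(worst)                 # insert the worst-fit vertex
    return selected, Delaunay(points[selected])
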
5 Conclusions

Error-bounded data simplification techniques have progressed rapidly in recent years. We have described a definition of scalar field structure which makes it possible to preserve global features in addition to bounding local errors. The techniques described can be applied to functions defined over any 2D triangulation.
