Exhibit 9

PROCEEDINGS OF SPIE
SPIEDigitalLibrary.org/conference-proceedings-of-spie

Localized feature selection to maximize discrimination

Duell, Kenneth; Freeman, Mark

Kenneth A. Duell, Mark O. Freeman, "Localized feature selection to maximize discrimination," Proc. SPIE 1564, Optical Information Processing Systems and Architectures III, (1 November 1991); doi: 10.1117/12.49693
Event: San Diego '91, 1991, San Diego, CA, United States

Localized feature selection to maximize discrimination

Kenneth A. Duell and Mark O. Freeman

Department of Electrical and Computer Engineering and Optoelectronic Computing Systems Center, Campus Box 425, University of Colorado, Boulder, Colorado 80309-0425
Abstract

We present an automatic method of designing correlation filters for pattern recognition that are composed of select local features (i.e., small parts of a reference object). The local features are selected for their ability to discriminate between the reference object and other known objects or patterns. In the basic localized feature selection problem, we design a correlation filter from a single optimal local feature. In the general localized feature selection problem, we design a correlation filter composed of several local features. We show that the discrimination ability of a correlation filter designed from properly selected local features is actually greater than that of a traditional matched filter.
1 Introduction

If we have a number of similar objects, say faces in a crowd, what features, i.e., nose, eyes, etc., do we use to recognize a particular individual? A traditional matched filter correlation system's ability to distinguish one object from many similar objects is poor. Correlation filters based upon distinguishing features improve our ability to locate the desired object. This paper introduces an automatic way of designing correlation filters based upon select local features that will maximize our ability to discriminate between similar objects.

The simplest correlation system is the matched filter bank [1][2]. Each matched filter in the bank is designed to recognize a particular known image. A common notation for representing matched filters is the inner product of two vectors, whose scalar value represents the maximum height of the correlation peak between the matched filter and the input image. Using this notation, we want to decide if a sampled image represented by the vector $y$ is really one of the known images represented by the vectors $s_n$, $n = 1, \ldots, M$. If the images $s_n$ all have equal energy ($s_n^T s_n$ is constant $\forall n$) and are equally likely to occur in the image $y$, then the Bayes correlator decides that $y$ is the image $s_m$ if [1]

$$s_m^T y > s_n^T y \quad \forall\, n \neq m.$$
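
For concreteness, a minimal NumPy sketch of this decision rule; the function name and the flattened-vector representation are our own illustration, not from the paper:

    import numpy as np

    def bayes_correlator(y, templates):
        # y: sampled image flattened to a 1-D vector.
        # templates: known images s_n, each flattened to a 1-D vector,
        # assumed equal-energy and equally likely, per the text.
        scores = [s @ y for s in templates]  # inner products s_n^T y
        return int(np.argmax(scores))        # decide y is image s_m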

If a particular sampled image $y = s_m$ and all the known $s_n$'s are very similar images, then the difference between $s_m^T y = s_m^T s_m$ and $s_n^T y = s_n^T s_m$ is small for all $n$. Thus for similar images, the potential error in the decision process is great. A useful performance metric for such correlation systems is the relative discrimination (or distance) between $s_m^T s_m$ and $s_m^T s_n$, given by

$$D_{mn} = \frac{s_m^T s_m - s_m^T s_n}{s_m^T s_m}. \qquad (1)$$

The average of $D_{mn}$ over all $n$ provides an indication of how well we can recognize $s_m$ in a sampled image versus other $s_n$'s. We use this average relative discrimination metric as the optimization criterion in designing correlation filters based on localized features.
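
As an illustration, a short sketch of metric (1) and its average over $n \neq m$; the helper names are hypothetical, and images are assumed flattened to NumPy vectors:

    import numpy as np

    def relative_discrimination(s_m, s_n):
        # Eq. (1): D_mn = (s_m^T s_m - s_m^T s_n) / (s_m^T s_m)
        auto = s_m @ s_m
        return (auto - s_m @ s_n) / auto

    def average_discrimination(templates, m):
        # Average D_mn over all n != m: the optimization criterion.
        return np.mean([relative_discrimination(templates[m], s)
                        for n, s in enumerate(templates) if n != m])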

The use of localized features for rapid pattern recognition has been studied within a model-based multiresolution framework [3][4]. This pattern recognition technique begins by decomposing a model object image into a multiresolution pyramid structure. Then square K x K pixel local features are manually selected by the system designer at each resolution level. In [3] K = 16 and in [4] K = 8. The goal is to select localized features that maximize the system's ability to discriminate between model objects. To recognize a model object in a sampled image, the sampled image is first decomposed into the same multiresolution pyramid structure. Then a hypothesis of the model object's location is formed by using the lowest resolution level of the image pyramid and the respective lowest resolution local feature. Once the initial location of the object has been estimated, more detailed searches are performed at that location (using the manually selected local features at finer and finer resolutions of the pyramid) to confirm the identity of the object. Such coarse-to-fine search detection strategies have considerable speed advantages. In addition, using localized features improves the probability of detecting occluded objects. Neither [3] nor [4] discusses how to select the localized features or attempts to quantify the performance of such a coarse-to-fine search detection strategy. This paper is motivated by the need for an automatic way of selecting a K x K pixel local feature and then quantifying the performance of the selected local feature. In section 2, we formulate and solve this basic problem of localized feature selection, namely: given some K x K window, design a correlation filter based on the local feature of that window size and shape that maximizes average relative discrimination.
We show that using a correlator designed from localized features actually increases the discrimination ability of the correlator over traditional matched filter correlation techniques. Insight into the reason behind this increased discrimination ability is found by looking at the localized feature selection problem as a space domain version of Shannon's water pouring arguments [5] or rank reduction signal processing techniques [6]. We use the insights of Shannon and those of [6], in section 3, to formulate and study the general problem of localized feature selection, namely: design a correlation filter based on a combination of local features that maximizes average relative discrimination.

The following notation is used throughout this paper. We represent images as two dimensional matrices. If $A$ is a two dimensional matrix, then $[A]_{k,l}$ represents the $k,l$ element (pixel) of the matrix (image) $A$ and $[A]_{*,l}$ represents the $l$th column of the matrix. We let $1_N$ represent an $N \times 1$ vector with all elements equal to 1, $0_N$ represent an $N \times 1$ vector with all elements equal to 0, and $I_N$ represent an $N \times N$ identity matrix.

2 Basic Localized Feature Selection

In this section we consider the basic localized feature selection problem: given some K x K window, design a correlation filter using a local feature from the deterministic image $S_m$, of that window size and shape, that maximizes the average relative discrimination.

We consider solutions to the basic localized feature selection problem under the following assumptions. Assume that $S_m$ is N x N pixels, and is one of M images, $S_k$ for $k = 1, \ldots, M$, in the "training" set. Assume that all the $S_k$'s are equally likely to appear in any sampled image we will process. Also assume that we want our correlation filter to maximize discrimination against $M_{out}$ of the images, which belong to the set $S_{out}$ of "out of class" images, and to minimize discrimination against $M_{in}$ of the images, which belong to the set $S_{in}$ of "in class" images. In addition, assume that the images belong either to the set $S_{det}$ of deterministic images or to the set $S_{stat}$ of statistical images. By a statistical image (or statistical pattern), we mean an image described by its first and second order statistics. The inclusion of statistical images in the training set allows us to build statistical pattern discrimination into the feature selection process.

2.1 Local discrimination

The initial step in the localized feature selection process is to normalize the energy of all the $S_k$'s in the training set to 1. This is done to account for initial differences in object illumination when the training set was obtained. If we do not normalize the energies, the selected local features will be biased by the energy differences.
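
A one-line normalization sketch (our own illustration; energy taken as the sum of squared pixel values):

    import numpy as np

    def normalize_energy(images):
        # Scale each image S_k so that its energy, sum of [S_k]_{k,l}^2, is 1.
        return [S / np.sqrt(np.sum(S ** 2)) for S in images]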

In order to quantify how well a particular local feature can discriminate, we must first break up the image $S_m$ into a set of local features. We define K x K windowed portions, $W(i,j)_m$, of the image $S_m \in S_{det}, S_{in}$ at the position indexed by $i,j$ as (element by element)

$$[W(i,j)_m]_{k,l} = [S_m]_{k+i,\,l+j}, \qquad 1 \le k,l \le K, \quad 1 \le i,j \le (N - K).$$

Think of the K x K matrix $W(i,j)_m$ as the K x K windowed piece of $S_m$ with upper left hand corner at pixel position $i,j$.
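
In NumPy terms (0-based indexing, hypothetical helper name, $S_m$ assumed a 2-D array), extracting a local feature is a simple slice:

    def window(S_m, i, j, K):
        # K x K windowed piece W(i,j)_m with upper left corner at pixel (i, j).
        return S_m[i:i + K, j:j + K]

    # All (N - K)^2 candidate local features of an N x N image S_m:
    # [window(S_m, i, j, K) for i in range(N - K) for j in range(N - K)]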

Next we test the relative discrimination ability of each local feature $W(i,j)_m$ against all other $S_n$, $n \neq m$, in the training image set using a modified form of (1). We place the results into the $(N-K) \times (N-K)$ matrices $D_{mn}$, $\forall n$ such that $1 \le n \le M$, with elements

$$[D_{mn}]_{i,j} = \frac{\max\{\mu(i,j)_{mm}\} - \max\{\mu(i,j)_{mn} + c_n\,\sigma(i,j)_{mn}\}}{\max\{\mu(i,j)_{mm}\}} \qquad (2)$$

where the matrix $\mu(i,j)_{mn} = E[W(i,j)_m * S_n]$, the matrix $\sigma(i,j)_{mn} = \sqrt{E[(W(i,j)_m * S_n)^2] - (\mu(i,j)_{mn})^2}$, $*$ represents 2-D discrete spatial correlation [7], $E$ is the expectation operator, and $\max\{\cdot\}$ is the maximum over all the matrix elements. Think of $D_{mn}$ as holding the discrimination metrics of all the local features of the image $S_m$ against the image $S_n$, with each matrix element $i,j$ representing a local feature with location $i,j$. When $S_n$ is a deterministic image, i.e. $S_n \in S_{det}$, then $\sigma(i,j)_{mn}$ is zero and (2) reduces to the intuitive relative discrimination metric

$$[D_{mn}]_{i,j} = \frac{\max\{W(i,j)_m * S_m\} - \max\{W(i,j)_m * S_n\}}{\max\{W(i,j)_m * S_m\}}. \qquad (3)$$

The more general form of (2) allows us to test the discrimination ability of a deterministic local feature, on average, against a noise or background clutter image. For a statistical image, $\mu(i,j)_{mn} + c_n \sigma(i,j)_{mn}$ is the mean "cross" correlation image plus $c_n$ times the standard deviation "cross" correlation image. By appropriate choice of $c_n$, the quantity $\max\{\mu(i,j)_{mn} + c_n \sigma(i,j)_{mn}\}$ sets a noise or background clutter floor that we want the correlation peak of $\mu(i,j)_{mm}$ to be above. If the correlation peak is not above this floor, then $[D_{mn}]_{i,j}$ will be negative. Typically $c_n$ is chosen to achieve a certain probability of detection or probability of false alarm [1].
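
For the deterministic case (3), a direct and deliberately unoptimized sketch using SciPy's 2-D correlation; the names are our own, and the statistical terms of (2) are omitted:

    import numpy as np
    from scipy.signal import correlate2d

    def local_discrimination(S_m, S_n, K):
        # Build [D_mn]_{i,j} of eq. (3): test every K x K local feature of
        # S_m against S_n.  This runs (N-K)^2 full correlations, so it is
        # slow for large N; it is meant only to make the definition concrete.
        N = S_m.shape[0]
        D = np.zeros((N - K, N - K))
        for i in range(N - K):
            for j in range(N - K):
                W = S_m[i:i + K, j:j + K]
                auto = correlate2d(W, S_m).max()   # "autocorrelation" peak
                cross = correlate2d(W, S_n).max()  # "crosscorrelation" peak
                D[i, j] = (auto - cross) / auto
        return D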

We now organize the matrices $D_{mn}$ into an $M_{out} \times (N-K)^2$ out of class discrimination matrix, $G'_m$, given by

$$G'_m = \begin{bmatrix} (\mathrm{vec}\, D_{mn_1})^T \\ (\mathrm{vec}\, D_{mn_2})^T \\ \vdots \\ (\mathrm{vec}\, D_{mn_{M_{out}}})^T \end{bmatrix}, \qquad n_p \in S_{out}$$

and an $M_{in} \times (N-K)^2$ in class discrimination matrix, $H'_m$, given by

$$H'_m = \begin{bmatrix} (\mathrm{vec}\, D_{mn_1})^T \\ (\mathrm{vec}\, D_{mn_2})^T \\ \vdots \\ (\mathrm{vec}\, D_{mn_{M_{in}}})^T \end{bmatrix}, \qquad n_p \in S_{in}$$

where $\mathrm{vec}\, A$ is the operation of stacking the columns of the $p \times q$ matrix $A$ into a single $pq \times 1$ vector [8]. The rows of $G'_m$ and $H'_m$ contain the discrimination of all the local features of $S_m$, i.e. all possible window positions, against a particular $S_n$. The columns of $G'_m$ and $H'_m$ contain the discrimination of a particular local feature of $S_m$ against $S_n$ for all $n \neq m$.
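
Since vec stacks columns, the row-stacking of the $\mathrm{vec}\, D_{mn}$'s can be sketched as follows (hypothetical helper; NumPy's order='F' flattening is column-major, matching vec):

    import numpy as np

    def discrimination_matrix(D_list):
        # Rows are (vec D_mn)^T for each S_n in the class;
        # result has shape len(D_list) x (N-K)^2.
        return np.stack([D.flatten(order='F') for D in D_list])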

Because a correlator designed from the local feature $W(i,j)_m$ is useful only if the relative discrimination metric is strictly positive for all out of class objects (i.e. the "autocorrelation" peak of $W(i,j)_m * S_m$ is higher than the "crosscorrelation" peak of $W(i,j)_m * S_n$), we zero out window positions (columns) with negative discrimination, giving the matrices

$$[G_m]_{*,k} = \begin{cases} [G'_m]_{*,k} & [G'_m]_{l,k} > 0 \;\; \forall l \\ 0_{M_{out}} & \text{otherwise} \end{cases} \qquad [H_m]_{*,k} = \begin{cases} [H'_m]_{*,k} & [G'_m]_{l,k} > 0 \;\; \forall l \\ 0_{M_{in}} & \text{otherwise} \end{cases} \qquad (4)$$

The operation (4) also removes from consideration those local features that do not have peak correlations above the noise or background clutter floor set in (2). Note that we do not require the in class discrimination metrics in $H_m$ to have strictly positive elements. In fact, we desire that the in class discrimination metrics in $H_m$ be minimal, or even better negative, and we only zero out columns of $H'_m$ if the corresponding columns in $G'_m$ are zeroed.

In a detection problem, we do not know a priori which set of objects will be contained in a scene. So when we choose a particular local feature, we would like it to have high discrimination on average against all out of class objects and low discrimination on average against all in class objects. This amounts to averaging the rows of $G_m$ into a vector and similarly averaging the rows of $H_m$ into another vector. We therefore let

$$g_m^T = \frac{1}{M_{out}}\, 1_{M_{out}}^T G_m \qquad (5)$$

and

$$h_m^T = \frac{1}{M_{in}}\, 1_{M_{in}}^T H_m \qquad (6)$$

where $g_m$ and $h_m$ are both $(N-K)^2 \times 1$ vectors.
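
A compact sketch of the zeroing (4) and row-averaging (5)-(6) steps (our own helper name; G_prime stands for $G'_m$ and H_prime for $H'_m$):

    import numpy as np

    def averaged_metrics(G_prime, H_prime):
        # Eq. (4): zero out columns of G'_m that are not strictly positive
        # in every row; zero the same columns of H'_m.
        keep = (G_prime > 0).all(axis=0)
        G = np.where(keep, G_prime, 0.0)
        H = np.where(keep, H_prime, 0.0)
        # Eqs. (5) and (6): average over out of class and in class rows.
        return G.mean(axis=0), H.mean(axis=0)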

2.2 Solution

To solve the basic local feature selection problem, we want to find the element (equivalently, local feature) that is maximal in $g_m$ and is simultaneously minimal in $h_m$ (equivalently, maximal in $-h_m$). This leads to one obvious simultaneous solution: find the element of

$$d_m = g_m - h_m$$

that is maximal. In other words, just perform a direct search. Large values of $d_m$ imply that, on average, the in class correlation peaks are large and are well separated in height from the out of class correlation peaks.

It appears that to achieve the true maximum, we want the denominator of (2) to be zero, i.e. we should select $W(i,j)_m$ to be a matrix with all elements equal to zero or $K = 0$. However, as the energy of $W(i,j)_m$ tends toward zero, the numerator of (2) also tends toward zero. Furthermore, we require that $W(i,j)_m$ have enough energy so that its correlation peak is above the noise floor (4), which prevents us from selecting a local feature $W(i,j)_m$ that is zero.
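
The direct search itself is then a single argmax over $d_m = g_m - h_m$; a sketch with hypothetical names, consistent with the column-major vec used above:

    import numpy as np

    def best_local_feature(g, h, N, K):
        d = g - h
        idx = int(np.argmax(d))
        # Undo the column-stacking vec to recover the window position.
        i_max, j_max = np.unravel_index(idx, (N - K, N - K), order='F')
        return int(i_max), int(j_max), float(d[idx])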

Another slightly more sophisticated way of looking at the problem is to assign a relative weight to each local feature according to its discrimination ability and then select a subset of local features that have large weight values. The goal of finding such a relative weighting is to somehow combine the information contained in $d_m$ to create a more general correlation filter composed of several local features. In mathematical terms, we pose this as wanting to find a relative $(N-K)^2 \times 1$ weight vector $x_m$ that maximizes $g_m^T x_m$ and simultaneously minimizes $h_m^T x_m$ (equivalently, maximizes $-h_m^T x_m$). We can pose this simultaneous problem as

$$\begin{array}{ll} \text{maximize} & d_m^T x_m \\ \text{subject to} & 1_{(N-K)^2}^T x_m = 1 \\ & x_m \ge 0_{(N-K)^2} \end{array} \qquad (7)$$

where the constraints $1_{(N-K)^2}^T x_m = 1$ and $x_m \ge 0_{(N-K)^2}$ provide a relative positive weighting of the window positions.
If we let $c = -d_m$, for a specific $m$, then our problem statement in (7) can clearly be recast into the standard form of a linear programming problem [9], namely

$$\begin{array}{ll} \text{minimize} & c^T x \\ \text{subject to} & Ax = b \\ & x \ge 0_q \end{array} \qquad (8)$$

where in general $c$ and $x$ are $q \times 1$ vectors, $b$ is a $p \times 1$ vector, and $A$ is a $p \times q$ matrix. To solve (8), we first recall the following definition.

Definition [9]. Given the set of $p$ simultaneous linear equations in $q$ unknowns $Ax = b$, let $B$ be any nonsingular $p \times p$ submatrix made up of columns of $A$. Then, if all $q - p$ components of $x$ not associated with columns of $B$ are set equal to zero, the solution to the resulting set of equations is said to be a basic solution to $Ax = b$.

In our case the constraint $Ax = b$ is simple, namely $1_q^T x = 1$. So by definition the only possible basic solutions $x$ are the standard basis vectors,

$$x \in \{\, e_i : [e_i]_j = \delta_{ij},\ 1 \le i \le q \,\}.$$

The optimum solution is now found with the aid of the following theorem.

Fundamental theorem of linear programming [9]. Given a linear program in standard form (8) where $A$ is a $p \times q$ matrix of rank $p$,

1. if there is a feasible solution, there is a basic feasible solution,
2. if there is an optimal feasible solution, there is an optimal basic feasible solution.

The task of finding the optimal solution to the linear programming problem is thus simply the task of searching over basic feasible solutions and finding which one is optimal. For our problem, this simply amounts to finding the element (equivalently, local feature) in the vector $d_m$ that is maximal. This is the same solution as we stated in the opening paragraph of this subsection. We see that we cannot weight the discrimination ability of the local features, as in (7), and use those weights to select subsets of local features. In fact, when two features with large average discrimination values (i.e. the respective values of elements in $d_m$) are combined into a correlation filter, the discrimination of the resulting correlation filter will often be less than the discrimination of each feature individually. If we want to solve the more general problem of creating a correlation filter based on a selection of local features, we must find other methods.

If we use the indices $i_{max}$ and $j_{max}$ to represent the location of the local feature that has maximal value in $d_m$, then $W(i_{max}, j_{max})_m$ is the corresponding optimal correlation filter based on a K x K square local feature. The value of $d_m$ associated with $i_{max}, j_{max}$ gives a quantitative performance measure for the optimal correlation filter $W(i_{max}, j_{max})_m$. In solving this problem we have found the local feature that maximizes $d_m = g_m - h_m$, which effectively gives us the local feature that maximizes the average out of class discrimination metric in $g_m$ and minimizes the average in class discrimination metric in $h_m$. However, for brevity, we will just say that we have maximized the average relative discrimination.

[Figure 1: panels (a)-(e). See Examples 1, 5, and 6.]
We now illustrate the basic localized feature selection process with several examples.

Example 1 (Basic local feature selection for M = 2) In this example the training set consists of the 64 x 64 pixel images in Figs. 1-(a) and 1-(b). These images are reduced versions of illustrations digitized from [10] and placed on a zero level (black) background. We let $S_1 \in S_{det}$ and $S_2 \in S_{out}, S_{det}$ be unit energy normalized versions of Figs. 1-(a) and 1-(b), respectively. We want to design a correlation filter from a 16 x 16 pixel local feature of $S_1$ that maximizes the relative discrimination. The computed optimal correlation filter is shown in Fig. 1-(c). The following table compares the performance of the local feature correlation filter and a matched filter. The quantity $D_{12}$ is given by (1) for the matched filter and by (2) ($[D_{12}]_{i_{max},j_{max}}$) for the local feature correlation filter.

Filter        | D12   | Average Discrimination
------------- | ----- | ----------------------
Local Feature | 0.214 | 0.214
Matched       | 0.146 | 0.146

Note the improvement factor of 1.5 over that of a matched filter. □

Example 2 (Basic local feature selection for M = 4, binary data set) In this example, the training set consists of the 64 x 64 pixel binary images in Figs. 2-(a) through 2-(d). These images are reduced versions of illustrations digitized from [11] and are contrast reversed so the background is zero. The airplanes in Figs. 2-(a) and 2-(b) are commercial jetliners. The airplanes in Figs. 2-(c) and 2-(d) are military transport aircraft. The task is to discriminate between commercial and military aircraft. Because the binary images are edge maps, which are ideally obtained independent of object illumination, we do not normalize their energies. We let $S_1 \in S_{in}, S_{det}$ and $S_2 \in S_{in}, S_{det}$ be the images in Figs. 2-(a) and 2-(b), respectively. We let $S_3 \in S_{out}, S_{det}$ and $S_4 \in S_{out}, S_{det}$ be the images in Figs. 2-(c) and 2-(d), respectively. We want to design a correlation filter from a 16 x 16 pixel local feature of $S_1$. The computed optimal correlation filter is shown in Fig. 2-(e). The following table compares the performance of the local feature correlation filter and a matched filter.

Filter        | D12   | D13   | D14   | Avg. out of class discr. | Avg. in class discr. | Average Discrimination
------------- | ----- | ----- | ----- | ------------------------ | -------------------- | ----------------------
Local Feature | 0.000 | 0.211 | 0.342 | 0.276                    | 0.000                | 0.276
Matched       | 0.303 | 0.398 | 0.514 | 0.456                    | 0.303                | 0.153

[Figure 2: panels (a)-(e). See Example 2.]

Note the improvement factor of 1.8 over that of a matched filter. Also note that the optimal local feature has been chosen so that the in class discrimination metric, $D_{12}$, is actually reduced to zero for this example (i.e. the algorithm selected a 16 x 16 local feature of $S_1$ that has an identical counterpart in $S_2$). □

If we repeat the basic localized feature selection procedure for each level in a pyramid structure as used in [3] or [4], we will arrive at an optimal local feature and have a quantitative discrimination measure for each resolution level in the pyramid. In addition, the quantitative value at the lowest resolution gives an indication of how well this coarse-to-fine strategy (which relies on the lowest resolution for initial detection) will perform. We also note that if we intend to use a local correlation filter in an optical correlation system [12], we may compute the correlations needed in (3) using the optical system itself, to account for the imperfections of the optics in the discrimination metric.

2.3 Computational cost

When looking for an optimal K x K pixel local feature in the N x N pixel image $S_m$ with a training set of M images, we will need to compute $(M-1)(N-K)^2$ correlations. Clearly $(M-1)(N-K)^2$ may be large, so is this algorithm computationally feasible? The answer is yes, for many images, as we illustrate with the following example.

Example 3 (Computation Time) Consider the case where the training set $S_m$, $m = 1, \ldots, M$, are all smaller than 256 x 256 pixels, the local feature window is 16 x 16 pixels, and we have a data set of M = 45 images. Finding $G_m$ and $H_m$ will require 1 day of computing time using a video rate (30 frames/sec) 512 x 512 image convolver. □
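
A quick back-of-the-envelope check of Example 3's figure, assuming one correlation per frame at video rate:

    M, N, K = 45, 256, 16
    correlations = (M - 1) * (N - K) ** 2   # (M-1)(N-K)^2 = 2,534,400
    hours = correlations / 30 / 3600        # 30 correlations per second
    print(f"{correlations:,} correlations, about {hours:.1f} hours")  # ~23.5 h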

From Example 3 we see that the design process, in general, cannot be performed in "real" time. However, it is acceptable to spend a large amount of computation time in the design process for a filter that will be used over and over again in a real time correlation system.

3 General Localized Feature Selection

In this section we consider the general localized feature selection problem: design a correlation filter from a set of local features in the deterministic image $S_m$, of arbitrary size and shape, that maximizes the average relative discrimination. We will use the same assumptions as were stated in the beginning of section 2.

3.1 An optimal solution

In the general problem, a local feature may be as small as a pixel. So we want to find which pixels increase the discrimination metric and which pixels decrease the discrimination metric of the correlation filter. A direct method of finding the optimal solution to this problem is to test the discrimination ability of all correlation filters consisting of a single pixel held fixed and the remaining pixels zeroed out in $S_m$. Then test the discrimination ability of all correlation filters consisting of permutations of 2 pixels held fixed and the remaining pixels zeroed out in $S_m$. This test is then continued for all permutations of $3, 4, \ldots, N^2$ pixels held fixed and the rest zeroed out in $S_m$ to determine the maximum. This direct search will require

$$(M-1) \sum_{k=1}^{N^2} \binom{N^2}{k} = (M-1)\left(2^{N^2} - 1\right)$$

correlations and is clearly not computationally feasible. We are thus motivated to find other methods of solving this problem.
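
To see just how infeasible, a quick check for an illustrative 64 x 64 image and M = 2 (the numbers are our own, not from the paper):

    M, N = 2, 64
    count = (M - 1) * (2 ** (N * N) - 1)    # (M-1)(2^{N^2} - 1) correlations
    print(f"about 10^{len(str(count)) - 1} correlations")  # ~10^1233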

3.2 Computable solution

We now present a suboptimal, but computationally feasible, solution to the general localized feature selection problem. In this solution, instead of the minimum local feature size being a pixel, we let our minimum local feature size be a K x K windowed set of pixels. This method finds an N x N matrix $W_m$ (the correlation filter) that consists of square K x K windowed portions of $S_m$.

We begin by finding a single K x K window $W(i_{max}, j_{max})_m$ from the algorithm in section 2. We let $W_m$ be the correlation filter with $W(i_{max}, j_{max})_m$ at the appropriate spatial location given by $i_{max}, j_{max}$ and the rest of its entries zero. With this one window fixed within $W_m$, we begin testing the discrimination ability of the union of $W_m$ and a second windowed portion of $S_m$. We call the resulting N x N matrix with two windowed pieces of $S_m$, one window at $i_{max}, j_{max}$ and another at $i,j$, the matrix $W'(i,j)_m$. We then test the discrimination ability of $W'(i,j)_m$ for all $i,j$, $1 \le i,j \le (N-K)$, using (2), except we replace $W(i,j)_m$ in the computations of $\mu(i,j)_{mn}$ and $\sigma(i,j)_{mn}$ with $W'(i,j)_m$. This will give us a $W'(i_{max}, j_{max})_m$. We then recursively redefine $W_m$ to be $W'(i_{max}, j_{max})_m$, which is now a correlation filter with two windowed portions of $S_m$. Then, in a similar manner, we use this new $W_m$ having two fixed windows, and search for a third window. The algorithm continues by finding and fixing a 4th, 5th, ... window in $W_m$ until at some step a previously chosen window is selected and the algorithm stops. Note that the windows are allowed to overlap so that local features of very unusual shapes can be created.

A pseudo code outline of the algorithm is

    Find a K x K W(imax, jmax)m for Sm using section 2.
    Initialize: r = 0,
                Wm = 0_{N x N},
                [Wm]_{k+imax, l+jmax} = [W(imax, jmax)m]_{k,l},   1 <= k,l <= K.
    Iterate:
        r = r + 1.
        Let: W'(i,j)m = Wm,
             [W'(i,j)m]_{k+i, l+j} = [Sm]_{k+i, l+j},   1 <= k,l <= K.
        Test W'(i,j)m for all i,j, 1 <= i,j <= (N - K), using (2) to find W'(imax, jmax)m.
        Let: Wm = W'(imax, jmax)m.
    Until Wm does not change or r = (N - K)^2.
The algorithm is guaranteed to converge in less than $(N-K)^2$ steps. This is because we have a total of $(N-K)^2$ window positions and at each step we must either choose a new window position (up to the total number of window positions) or repeat a window position (the algorithm has converged). Thus this algorithm for the general feature selection problem will require at most $(M-1)(N-K)^4$ correlation computations. However, for the examples we have run, we have found that this algorithm converges in considerably fewer steps.
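
A NumPy sketch of this greedy loop (our own rendering; the assumed helper `score` returns the average relative discrimination of a candidate filter via (2), and stopping when no window improves the score corresponds to the paper's rule of re-selecting a previous window):

    import numpy as np

    def grow_filter(S_m, K, i0, j0, score):
        # Start from the single best window (i0, j0) found in section 2, then
        # repeatedly add the K x K window of S_m that most improves score(W).
        N = S_m.shape[0]
        W = np.zeros_like(S_m)
        W[i0:i0 + K, j0:j0 + K] = S_m[i0:i0 + K, j0:j0 + K]
        for _ in range((N - K) ** 2):        # convergence bound from the text
            best, best_ij = score(W), None
            for i in range(N - K):
                for j in range(N - K):
                    W_try = W.copy()
                    W_try[i:i + K, j:j + K] = S_m[i:i + K, j:j + K]
                    s = score(W_try)
                    if s > best:
                        best, best_ij = s, (i, j)
            if best_ij is None:              # no window improves: converged
                break
            i, j = best_ij                   # fix the new window in W
            W[i:i + K, j:j + K] = S_m[i:i + K, j:j + K]
        return W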

Although the result from this algorithm is suboptimal, the algorithm has some noteworthy properties. First, the change in discrimination at each step of the algorithm is monotonically increasing. Thus the resulting correlation filter is guaranteed to perform at least as well as the single window (local feature) chosen in section 2. Second, the algorithm converges to a solution in a finite number of steps. If the algorithm does take all $(N-K)^2$ steps, we will have just reproduced the matched filter for $S_m$.
We now illustrate the general localized feature selection process with several examples.

Example 4 (General local feature selection problem for M = 2) In this example we continue from the basic correlation filter we designed in Fig. 1-(c) to solve the general local feature selection problem for the data set in Example 1. Fig. 1-(d) shows the resulting correlation filter, made up of 12 different 16 x 16 windowed pieces of $S_1$, that our general local feature selection algorithm converged upon. The following table compares the performance of the local feature correlation filter and a matched filter.

Filter        | Average Discrimination
------------- | ----------------------
Local Feature | 0.345
Matched       | 0.146

Note the improvement factor of 2.4 over that of a matched filter and the improvement factor of 1.6 over that of the single local feature correlation filter of Example 1. □

Example 5 (General local feature selection problem for M = 3) In this example, we use the same data set as in Example 1 but add a third image, $S_3 \in S_{out}, S_{stat}$, consisting of zero mean white Gaussian noise with variance $\sigma^2 = 0.01$. In computing the discrimination metric we have let $c_n = 2$. Fig. 1-(e) shows the resulting correlation filter, made up of 17 different 16 x 16 windowed pieces of $S_1$, that our general local feature selection algorithm converged upon. The following table compares the performance of the local feature correlation filter and a matched filter.

Filter        | D12   | D13   | Avg. out of class discr. | Avg. in class discr. | Average Discrimination
------------- | ----- | ----- | ------------------------ | -------------------- | ----------------------
Local Feature | 0.204 | 0.771 | 0.488                    | 0.000                | 0.488
Matched       | 0.146 | 0.800 | 0.473                    | 0.000                | 0.473

We note two things here. First, the locations of the windows have changed from those of Example 4, as is expected when the training set is changed. Specifically, a considerable portion of the plane is included compared to Example 4, in order to increase the noise discrimination in the metric $D_{13}$ for the local feature filter. Second, the overall performance improvement over a matched filter is small. Only a small improvement occurs because the matched filter has optimal discrimination against the white noise background image, and the average out of class discrimination is weighted by the matched filter's optimal value for $D_{13}$.

3.3 Interpretation and tradeoffs

The dramatic performance improvement in Example 4 can be understood by looking at the localized feature selection problem as a space domain version of Shannon's water pouring arguments [5] or rank reduction signal processing techniques [6]. In Shannon's case, he considered how to shape the power spectrum of a signal with fixed energy to maximize the capacity of a communication channel. Shannon's solution was to keep portions of the power spectrum that are above the water level (i.e. the noise floor level), because those frequencies will increase channel capacity, and to remove all portions of the power spectrum that are below the noise floor, because they reduce channel capacity. A similar idea is used in rank reduction techniques. The performance criterion for rank reduction is mean squared error, and we keep portions of a signal's spectral decomposition (eigenvalue decomposition) that have eigenvalues above the water level (again the noise floor level), because those components reduce mean squared error, and remove (rank reduce) components whose eigenvalues are below the water level, because those components increase mean squared error.

In our problem, the criterion is average relative discrimination, and we design a correlation filter from windowed portions (in the space domain) of the image that increase discrimination and zero out the remaining parts that decrease discrimination.
