F. S. Rovati et al.: Spatial-Temporal Motion Estimation for Image Reconstruction and Mouse Functionality with Optical or Capacitive Sensors
711
Spatial-Temporal Motion Estimation for Image Reconstruction
and Mouse Functionality with Optical or Capacitive Sensors
Fabrizio S. Rovati, Pierluigi Gardella, Paolo Zambotti and Danilo Pau*

To reconstruct a complete image accurately from such small snapshots, or to estimate the motion imposed on the sensor, we need a reconstruction algorithm. Simply stacking the input images will not produce satisfactory scanning results, and will give no information about navigation, as fig. 2 demonstrates.

Fig. 2: Example of fingerprint scanning. Simply stacking the sensor output images (on the right) gives an unacceptable result.

II. STATE OF THE ART REVIEW
This paper describes an innovative application that uses a spatial-temporal block-matching motion estimator to reconstruct bigger images from the output of a small optical or capacitive sensor, and to provide mouse functionality. Exactly the same algorithm achieves both functions. Applications range from optical mice to user identification for small PDAs or mobile phones. This double, integrated functionality is novel in the art, as we target applications that are, where they exist at all today, implemented without a unified approach. An example for the optical mouse can be found in [1]. We will therefore review the state of the art of the two single applications, keeping in mind that we also have the advantage of offering an integrated approach.

Abstract — This paper describes an innovative solution to perform mouse functionality (“navigation”) and image reconstruction (“scanning”). The application uses a recursive block-matching motion estimation algorithm and an optical or capacitive sensor. Results show that this solution provides integrated mouse and scanner functionality at a very low cost.
Index Terms — Capacitive finger sensor, image reconstruction, motion estimation, optical mouse, scanner.

I. INTRODUCTION
Today, an optical CMOS sensor with a resolution of tens of pixels in each dimension can be integrated on a chip with processing logic at a very low cost. A capacitive sensor, used to scan fingerprints, is also affordable if we use a ‘stripe’ version able to capture a small part of the finger image. The sensor can be as small as 256 pixels by 2 lines; this compactness makes it ideal for applications where power consumption, component dimensions and cost must be minimized.
Ideal target applications are navigation/mouse and/or a portable scanner. We can also use the fingerprint sensor for non-critical security features; for example, to gain access to a consumer device, or simply to recall preferred personal settings. The user would move the sensor on a surface (see fig. 1) and, by processing the input images, we are able to extract the movement that occurred and/or to reconstruct the underlying global image.

Fig. 1: Application example. The user moves a finger over the capacitive sensor, or slides the optical sensor on an image or surface. (Panel labels: typical movement for scanning; typical movement for navigation.)
`
` All authors are with STMicroelectronics Advanced System technology
`Labs, Agrate Brianza, Italy
`
` *
`
`Manuscript received June 10, 2003 0098 3063/00 $10.00 © 2003 IEEE
`
Authorized licensed use limited to: UNIV OF SOUTH ALABAMA. Downloaded on December 19,2022 at 19:49:48 UTC from IEEE Xplore. Restrictions apply.
CPC Ex. 2037 – Page 001
ASSA ABLOY AB v. CPC Patent Technologies Pty Ltd.
IPR2022-01006

A. Optical mouse state of the art
Nowadays, optical mouse navigation is based mainly on edge detection and/or phase correlation, algorithms much more complex than the proposed one. The complexity of the processing involved in finding the motion limits the frame rate to around 1000 frames per second, especially in wireless mice, where power consumption is very important. Having a high number of pictures per unit time is beneficial for following high-speed motion, for example when the mouse is used to play games. It also helps when scanning a document or a picture, as being able to scan more quickly makes the process less tedious.
The correlation between high-speed motion detection and high frame rate is simply explained: there is an intrinsic threshold to respect in order to estimate motion between two frames, whatever the technology used. This limit is the maximum displacement occurring between the frames, e.g. P pixels in each direction. Let us assume this is a constant. To find the maximum detectable speed, we multiply this displacement by the number of times we are able to measure it in a second, which is given by the frame rate F. A higher frame rate F means covering a wider displacement per unit time, P*F, which grants the ability to follow faster motion.
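The P*F relationship can be made concrete with a few lines of code. The numbers below (P, F, and the pixel pitch) are illustrative assumptions of ours, not figures quoted for the sensors described here:

```python
# Maximum trackable speed for a motion-estimating sensor:
# P = maximum detectable displacement per frame (pixels),
# F = frame rate (frames per second).
def max_speed_pixels_per_s(p_pixels, f_fps):
    """Fastest motion that can still be followed, in pixels/second."""
    return p_pixels * f_fps

# Illustrative numbers: +/-4 pixels per frame at 1000 fps.
p, f = 4, 1000
v = max_speed_pixels_per_s(p, f)   # 4000 pixels/second
# With an assumed 50 um pixel pitch, convert to a physical speed.
pitch_m = 50e-6
print(v, v * pitch_m)              # 4000 px/s, i.e. 0.2 m/s
```

Doubling either P or F doubles the fastest trackable motion, which is why the low complexity of the proposed estimator (allowing a higher F) pays off directly.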

B. Fingerprint sensor state of the art
Capacitive sensors are composed of a bi-dimensional array of capacitors laid out on a silicon surface so as to capture the finger's ridges and valleys. Ridges are the parts of the skin in relief; valleys are the rest. The vicinity of the finger's skin modifies the capacitance of each of these devices. The varied capacitance can be sensed as the different voltage resulting from applying a fixed electrical charge. After A/D conversion of this physical measure, the detection gives an array of values (one for each capacitor) that can be seen as a picture of the finger just scanned. Each pixel is monochromatically encoded into 8 bits: 0 if totally on a ridge, 255 if totally on a valley. We therefore have a picture composed of 8-bit pixels, the same output as a monochromatic optical sensor.
State-of-the-art capacitive fingerprint sensors are ‘complete image’ sensors, with a large cost in terms of silicon area. A typical array [2] is composed of 256x360 sensing elements, with a pixel pitch of 50 µm. This geometry leads to an area of around one square inch, enough to capture the entire fingerprint at once. Such a device has some distinctive drawbacks: cost (the silicon area is huge), power consumption and physical size. The latter is important if the sensor has to be used in miniaturized terminals, like a Personal Digital Assistant (PDA) or a mobile phone, where area as well as cost and power consumption are key factors.
IEEE Transactions on Consumer Electronics, Vol. 49, No. 3, AUGUST 2003

To overcome these disadvantages, instead of capturing a whole fingerprint at once, in our approach we scan it by letting the finger slip over a ‘stripe’ sensor. The difference in this case is the sensor size: the horizontal dimension stays the same, but the sensor is only a few lines high, ideally only two. The sensor area is therefore reduced by a factor of (360/2) = 180, with evident benefits in terms of area and power consumption. To date, this is a very high line-reduction ratio; references [3]-[6] describe other devices that reduce the size by factors of 11 at most, i.e. using 32 lines. Using a stripe sensor, of course, we have to introduce the step of reconstructing the whole fingerprint from the partial images taken. The algorithm we propose performs this task. Once this function is present in the device, it can also be used as mouse output, simply by reusing the displacement information and discarding the reconstructed image.

III. MOTION ESTIMATION
The algorithm we present is based on spatial/temporal recursive motion estimation. Motion estimation is a well-known technique in the digital video-processing field, where it is used to exploit temporal redundancy (i.e. similarities among different pictures) during video compression. Another use is the interpolation of missing information, for example for frame-rate up-conversion.
Various types of motion estimation have been proposed in the literature [7], [8]. The most successful have been pixel-by-pixel motion estimation (where each single pixel is estimated on its own) and block-matching motion estimation, where a bi-dimensional array of pixels is estimated at once to decrease computational complexity.

A. Block-matching motion estimation
The basic principle of block-matching motion estimation (see fig. 3) is to use two images A and B. We select an area C inside the first picture A that needs to be ‘motion estimated’ on the other, and we look for an area D inside picture B that is as similar as possible to area C. The position of each area C, D is identified by the position of its upper-left pixel. The difference in position of these two pixels in the respective pictures is called the motion vector (E), and it is the expression of the motion that area C underwent between the two images. The motion

Fig. 3: Block-matching motion estimation technique. Frame n-1 is the predictor frame; frame n is the current frame under prediction. (The figure shows area C in frame A, its projection C’ and the best match D in frame B, and the motion vector E.)

vector can also be imagined as the vector joining the projection of the upper-left C pixel into B (C’) and the upper-left D
an estimation of picture N over N-1, and then of picture N+1 over N, the results will be highly correlated, i.e. the vectors will be similar. Spatially means that, in case we have more than one area C to estimate in picture A, the results for areas C1, C2, …, Cn are usually similar, at least if the blocks are neighbors. We will call motion vectors taken from estimations of different parts of the same picture “spatial vectors”; vectors taken from estimations of parts of different pictures will be referred to as “temporal vectors”.
Algorithms following this approach exploit the correlation among motion vectors to decrease the number of matches and increase the consistency of the results. It has to be noted that these algorithms were developed for the digital compression of TV sequences (one PAL image is 720x576 pixels). In that application motion estimation is performed on 16x16-pixel blocks, so there are plenty of vectors to be taken as spatial-temporal references (1620 for each frame). The algorithm tests a certain number of such temporal and spatial vectors as the starting point of the estimation. A second ‘refine’ phase then tests small variations of the first-phase winner to see if a better match is available in some neighboring position (see fig. 4).
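The two-phase candidate-then-refine scheme can be sketched as follows. This is a minimal illustration, not the paper's implementation: the match_cost function and the one-pixel refinement neighborhood are assumptions of ours.

```python
# Phase 1: evaluate a small set of spatial/temporal candidate vectors.
# Phase 2: refine the winner by repeatedly testing its immediate neighbors.
def estimate_vector(candidates, match_cost):
    """candidates: iterable of (dx, dy); match_cost: (dx, dy) -> float."""
    best = min(candidates, key=match_cost)          # phase 1: seed selection
    while True:                                     # phase 2: local refinement
        neighbours = [(best[0] + ddx, best[1] + ddy)
                      for ddx in (-1, 0, 1) for ddy in (-1, 0, 1)]
        improved = min(neighbours, key=match_cost)
        if match_cost(improved) >= match_cost(best):
            return best                             # no neighbor is better: done
        best = improved

# Toy cost surface with a single minimum at (3, -2): the refinement
# walks there from the nearest candidate seed.
cost = lambda v: (v[0] - 3) ** 2 + (v[1] + 2) ** 2
print(estimate_vector([(0, 0), (2, -1), (-5, 4)], cost))  # (3, -2)
```

The point of the scheme is that only a handful of candidates plus a short refinement walk are evaluated, instead of every possible displacement.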

Fig. 5: Recursive motion estimation applied to capacitive stripe (or optical) sensor output: we have only one vector per frame (MVN, MVN+1, MVN+2 between frames N, N+1, N+2), but a high frame rate.

IV. PROPOSED ALGORITHM
In our application, however, there is no such abundance of vectors among which to select the best starting point for new estimations. The pictures used for estimation are very limited in size: 2 lines by 256 columns for the capacitive ‘stripe’ sensor, or 20 by 20 pixels for the optical one. We chose this dimension for the optical sensor as it yields a similar number of pixels per frame as the capacitive one, even if the aspect ratio is different. Another constraint comes from the application target, which is to estimate a single global motion, not a plurality of local motions. The net result of these constraints is that we can inherit only one vector per previous
pixel. The area D is often called the predictor, as we are trying to ‘predict’ area C from D.
Ideally, we could test all the D areas that can be extracted from frame B, to provide a match as accurate as possible. This approach is called Full Search. It generally provides very good estimations, but at the expense of great computational complexity. For example, to estimate an X by Y area within an N by M frame, we would need to test (M-X+1)(N-Y+1) areas, each X by Y pixels wide. Moreover, matches normally need sub-pixel accuracy, i.e. movements of ½ or ¼ of a pixel or even less need to be tested; the count above would then be multiplied by a factor of 4 or 16, respectively. Each test usually consists in computing the Mean Absolute Error (MAE) function:

MAE = (1 / (X*Y)) * Σ |p(i,j) – q(i,j)|                (1)

with i = 0 to X-1 and j = 0 to Y-1; p(i,j) is the pixel at position (i,j) of area C, and q(i,j) is the pixel in the same position of area D. If the areas are perfectly identical, MAE = 0. In general, the lower the MAE, the more similar we can assume areas C and D to be. The summation in (1) is often called the Sum of Absolute Differences (SAD); one MAE is then the SAD averaged over the X*Y pixels.

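As a concrete, deliberately brute-force illustration, a Full Search matcher using the MAE of (1) can be written as below. The tiny test frames and the helper names are ours; sub-pixel refinement is omitted for brevity:

```python
def mae(frame_a, frame_b, ax, ay, bx, by, w, h):
    """Mean absolute error, eq. (1): compare the w-by-h area of frame_a
    at (ax, ay) with the area of frame_b at (bx, by)."""
    sad = sum(abs(frame_a[ay + j][ax + i] - frame_b[by + j][bx + i])
              for j in range(h) for i in range(w))
    return sad / (w * h)

def full_search(frame_a, frame_b, ax, ay, w, h):
    """Test every w-by-h area of frame_b ((M-X+1)(N-Y+1) candidates)
    and return the (dx, dy) displacement of the best match."""
    rows, cols = len(frame_b), len(frame_b[0])
    best = min(((bx, by) for by in range(rows - h + 1)
                         for bx in range(cols - w + 1)),
               key=lambda p: mae(frame_a, frame_b, ax, ay, p[0], p[1], w, h))
    return best[0] - ax, best[1] - ay

# A 2x2 block shifted by (1, 1) between two 4x4 frames.
a = [[0]*4, [0, 9, 8, 0], [0, 7, 6, 0], [0]*4]
b = [[0]*4, [0]*4, [0, 0, 9, 8], [0, 0, 7, 6]]
print(full_search(a, b, 1, 1, 2, 2))  # (1, 1)
```

Even on these toy frames, the candidate count is (4-2+1)*(4-2+1) = 9 MAE evaluations; on real frames it grows quadratically, which is exactly the cost the reduced-complexity approaches below avoid.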
B. Reduced-complexity block-matching motion estimation
In the literature [7]-[13] there are other approaches that try to decrease the computational complexity by focusing on the selection of candidate predictors. One of the most successful is the spatial-temporal approach [14]-[16]. It exploits the principle that, when estimating a sequence of video images, the results of successive estimations are not independent; they instead tend to be strongly correlated. This is true both spatially and temporally. Temporally means that if we perform

Fig. 4: Spatial/temporal recursive block-matching motion estimation. Left: E1, E2, E3 are spatial/temporal candidate vectors tested on frame B around C’. Right: E3 is the best spatial/temporal vector; a few vectors around that position (E3.1, E3.2) are tested to refine the estimation.
picture, and no vectors from previous estimations of the current picture, as shown in fig. 5. It is a theoretical challenge to prove that a recursive algorithm can also work in this ‘depleted’ environment (1 temporal and no spatial vectors per frame).

Obviously, the correlation between previous motion vectors and the current estimation to be performed is inversely proportional to the temporal distance between the current frame and the one from which the vectors are taken. The high frame rate, however, suggests that we can use temporal vectors from more pictures in the past, not only the preceding one.
We modified the core of the spatial-temporal approach to better suit the application's conditions, turning it into a temporal-only approach. We added the testing of linear combinations of previous frames' temporal vectors as potential ‘seeds’, to overcome the scarcity of candidates. In particular, considering the underlying physics of the application, ‘physically sound’ candidates are generated, such as constant speed, constant acceleration, and so on. After these candidates have been tested, an ‘update’ phase of the algorithm refines the best among the vectors tested so far, to see if the prediction can be bettered. A patent application for this technique has been filed [17].
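The ‘physically sound’ seeds can be illustrated as simple linear combinations of the last few temporal vectors. The exact combinations used in the patented technique [17] are not spelled out in the text, so the set below (zero motion, constant speed, constant acceleration) is an assumed example:

```python
def seed_candidates(history):
    """history: motion vectors of previous frames, most recent last.
    Returns candidate seeds derived from plausible finger/mouse physics."""
    seeds = [(0, 0)]                    # no motion
    if len(history) >= 1:
        vx, vy = history[-1]
        seeds.append((vx, vy))          # constant speed: repeat last vector
    if len(history) >= 2:
        px, py = history[-2]
        # constant acceleration: v + (v - v_prev), linearly extrapolated
        seeds.append((2 * vx - px, 2 * vy - py))
    return seeds

# A finger accelerating downwards: the last two vectors were (0, 2), (0, 3).
print(seed_candidates([(0, 2), (0, 3)]))  # [(0, 0), (0, 3), (0, 4)]
```

Each seed would then be scored (e.g. by MAE) and the winner refined in the ‘update’ phase.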
Note that in this case the goal of the estimation is not to generate a ‘prediction’ for the current image, but rather to compute the shift between the two images so as to stitch them together appropriately. In some respects, this application is more difficult than prediction in video encoding. Even a small percentage of errors would be very visible in the reconstructed image or movement, whereas a single sub-optimal block prediction is not critical in video compression, as it is concealed by encoding the block as intra, i.e. without prediction.


V. SIMULATIVE RESULTS
We developed a bit-accurate, fixed-point algorithmic model and created a simulation test bench to assess its performance. The verification environment comprised two different tests, one for the optical sensor and another for the capacitive one. In both cases, the inputs of the complete chain were full pictures taken with high-resolution sensors. We used fingerprint samples captured with a conventional 256 by 360 sensor, and optical images taken with a megapixel CMOS sensor.
These images formed the input to a ‘strip images generator’. This program takes as input the big image and a motion trajectory, and extracts from the full picture the equivalent stream of samples that a small optical or stripe capacitive sensor would generate if passed over the big image with the specified motion. Trajectories are defined as a sequence of points, or by a mathematical formula. In particular, thanks to an internal interpolation feature, movements can be programmed to be of any fractional amount, for example 0.12 pixels per frame, to better test fast and slow motion on the sensor.
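The strip-images generator can be approximated as below. For brevity this sketch handles integer positions only, whereas the real generator also interpolates fractional (e.g. 0.12 pixel/frame) movements; the 4x4 ‘full picture’ is invented for illustration:

```python
def generate_strips(image, trajectory, strip_w, strip_h):
    """Cut a stream of strip_w-by-strip_h snapshots out of a full image,
    one per (x, y) position along the given trajectory."""
    return [[row[x:x + strip_w] for row in image[y:y + strip_h]]
            for x, y in trajectory]

# A 4x4 full picture scanned by a 4x1 stripe moving down one line per frame.
img = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
strips = generate_strips(img, [(0, 0), (0, 1), (0, 2)], 4, 1)
print(strips[1])  # [[5, 6, 7, 8]]
```

Feeding such a stream to the estimator, together with the known trajectory, is what makes the percentage of correct estimations directly measurable.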
These images were then given as input to the reconstruction motion estimator, which worked on them, computed the relative movement between successive frames and then stitched them one over the other according to the results of the

Fig. 6: Method for stitching sensor output images together to obtain the reconstructed picture (frames 0-3 are offset by the estimated vectors MV0, MV1, MV2 to build the overall image).

estimation, as shown in fig. 6. We could then compare the estimated movement with the original movement given as input to the strip image generator, and compute the percentage of correct or wrong estimations.

Fig. 7: Result of fingerprint reconstruction for 0.37 pixels/frame vertical motion, no horizontal motion. The starting image is particularly dark. The white central lines show where the picture would stop if reconstruction were perfect; the surrounding lines show +/-10% stretching-error marks.
The results have demonstrated the validity of the algorithm, with both optical and capacitive sensors, for the mouse and scanner applications. Reconstructed fingerprints match the height and width of the original ones very accurately, and the trajectory of the estimated motion follows the original one very closely. Figs. 7 through 9 show several fingerprints with different characteristics (bright, dark, noisy, smeared) scanned at a range of different speeds in the horizontal and vertical directions.

image already acquired, to better center them. This is a topic for future research. Another obvious solution is to increase the sensor resolution, reducing the number of input images necessary to cover one full picture.

Fig. 8: Result of reconstruction for 0.6 pixels/frame vertical and 0.37 pixels/frame horizontal motion, starting from a smeared image. The grey area on the right is due to padding by the sequence generator, needed to generate horizontally moving images.

Avoiding stretching or compressing the fingerprint is a particularly important feature for correctly recognizing the owner of the fingerprint via automatic matching algorithms.
Results are also good when the optical sensor is used. As for the scanning function, even starting from very small (20 by 20 pixels) input images, the results are acceptable, as shown in fig. 10. A small drift is visible at scanning intersections because estimation errors add up, so even a very small percentage of errors can lead to an imperfect crossing. This problem appears because we need thousands (2704 theoretical minimum) of sensor images to obtain a 1024x1024 resolution image. Over all these estimations, even a very small percentage of errors leads to a visible artifact. Applying additional matches can solve the problem when crossing parts of the reconstructed

Fig. 9: Result of reconstruction for 0.89 pixels/frame vertical motion on a dark, noisy fingerprint image.

Fig. 10: Results of a simulation of scanning a printed paper with the optical sensor and combining the pictures with the proposed algorithm.
Fig. 13: Simulation results for the optical sensor: the background image is the back of a mouse pad.

TABLE I
Surface type          Percentage of correct estimations
Black plastic desk    99.983%
Mouse pad back        99.983%
Printed tissue        99.983%
Rubber mat            99.975%
Black metal desk      99.971%
Glossy book cover     99.967%
Printed paper         99.942%


Fig. 11: Simulation results for the optical sensor. The black line is the estimated trajectory; the white one is the reference trajectory (drawn thicker for readability). The two lines are very well superimposed, meaning the estimation produces correct results. In the background is the test image fed to the sequence generator, in this case a printed paper.

Figures 11 through 14 show the simulation results for the optical sensor moving on various surfaces. For each case, the surface tested is shown as the background of the picture. Two tracks are plotted: a light-gray one, the input movement that we want to estimate, and a dark-gray one, the result of the estimations. If the estimation were perfect, the second line would be completely superimposed on the first. As the pictures show, tracking is good on dark, bright, detail-rich, repeated-pattern, and uniform images. The original tracks are logged movements of a ball mouse, fed to the sequence generator presented above. Overall, estimations of movements are correct in more than 99.9% of cases. Table I shows the percentages of correct estimations for the surfaces tested.

VI. IMPLEMENTATION COMPLEXITY
We have shown that the algorithm works very well in terms of image reconstruction and mouse pointing, for both sensor inputs. We will now describe the complexity involved
Fig. 12: Simulation results for the optical sensor: the background image is a piece of printed tissue.

Fig. 14: Simulation results for the optical sensor: the background image is a black desk.
put them via the PCI bus into the main memory of a PC. The algorithm, running in real time in software on the PC, then processed the images. The results were used to move the PC screen pointer and/or to show the reconstructed global image. The goal of this prototype was not to demonstrate low complexity or low power consumption, already evident from the arithmetic in the paragraph above, but rather to demonstrate functional real-time performance and possibly to tune the algorithm. This is why we kept the algorithm running in software on the PC, and not in RTL hardware. Fig. 16 shows the optical mouse demonstrator.
The results in both applications were optimal; we were able to use the optical sensor as ‘the’ mouse of the PC on which the algorithm was running, without any problem of incorrect tracking or drift. With the capacitive sensor setup, we were also able to capture finger images, feed them to finger recognition software and get correct identification outputs. Correct means both getting an identification when comparing two fingerprints of the same finger, and denying identification when comparing fingerprints of different fingers.

Fig. 16: Real-time optical mouse prototype.

VIII. CONCLUSION
This paper described an algorithm that enables mouse functionality and image reconstruction starting from a stream of frames coming from a low-resolution sensor, optical or capacitive. We proposed a modification of the spatial/temporal approach to fit applications where only one temporal vector is available per frame, and we extended the vector prediction to linear combinations of temporal vectors. We showed that this algorithm produces high-quality reconstructed images at a very low computational complexity. Mouse functionality is also optimal. This new combined technology can generate new consumer devices. One example is an integrated wireless PC mouse plus portable scanner. Another promising application is a single capacitive stripe sensor mounted on a mobile phone or a PDA. The sensor can provide identification functionality
in a silicon implementation. To estimate the movement of each frame, we need a grand total of 2000 Sums of Absolute Differences (SAD), summing up all the contributions from temporal and refine vectors. The pixels from the previous frame might need to be interpolated in case we are testing a fractional-pixel movement. Each interpolation is made of 4 multiplications and 4 additions; so, 2000*4 = 8000 multiplications and additions per frame. At, for example, 1000 frames per second, we need to compute 2 million SADs per second, 8 million sums and 8 million multiplications. A ‘processing engine’ composed of one SAD unit, one 8 by 4 bit multiplier and an adder working at 8 MHz could do the job; 16 MHz would allow us to estimate 2000 frames per second, and so on. Alternatively, we could replicate the hardware to achieve more frames per second while keeping the overall frequency low. This keeps other operations (e.g. sensor management, memory accesses) at a lower frequency, and hence less power consuming. An array of 4 ‘processing engines’ at 32 MHz would compute estimations for 16,000 frames per second. This frame rate is enough to guarantee that no human movement will be missed.
To complete the implementation, we also need two frame buffers, less than 1 kB in total, and some very simple control

Fig. 15: Proposed architecture of the integrated mouse and scanner: the sensor feeds two frame buffers (frame 0, frame 1) into an array of ‘processing engines’, each composed of a multiplier, an adder and a SAD unit, producing the resulting MV.

logic to issue the vectors to test and to compare the MAE values. The proposed architecture is depicted in fig. 15 (control and comparison logic not shown).
If image reconstruction must be added, the memory area where the incoming images are stitched can reside in the host PC or PDA. The dimension of this area depends on the maximum resolution we want to achieve as output.

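The operation budget above can be checked with a few lines. The 2000 SADs per frame and 4 multiply-adds per interpolated pixel are the figures quoted in the text; the rest is straightforward arithmetic:

```python
# Operation budget for the motion estimator, per the figures in the text.
SADS_PER_FRAME = 2000          # temporal + refine candidate tests per frame
MULS_PER_SAD = 4               # sub-pixel interpolation: 4 mul + 4 add

def budget(frames_per_second):
    """Returns (SADs, multiplications, additions) required per second."""
    sads = SADS_PER_FRAME * frames_per_second
    muls = adds = sads * MULS_PER_SAD
    return sads, muls, adds

print(budget(1000))   # (2000000, 8000000, 8000000): one engine at 8 MHz
print(budget(16000))  # 16x the load: e.g. four engines at 32 MHz each
```

This is why the text can trade clock frequency against the number of processing engines: the product of engines and frequency only has to cover the multiply-add rate.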
VII. REAL-TIME PROTOTYPES
To further assess the validity of the application, we built two prototypes, based on available optical1 and capacitive2 sensors. We constrained the output of these sensors to the resolution we intended to test: 256 by 2 pixels for the capacitive sensor, 20 by 20 pixels for the optical one. We then programmed a custom FPGA interface to be able to grab the real-time acquired images and

1 STMicroelectronics’ VVL350a
2 STMicroelectronics’ TCS3A
(e.g. to gain access to the device, or to select the user profile) and be a very compact and precise mouse pad.

ACKNOWLEDGMENT
The authors wish to thank all the colleagues on the sensor side who helped with this work, through their understanding of the underlying physics, the provision of sensor samples and the development of the real-time prototypes. We thank Giovanni Corradini, Giovanni Gozzini, and Marco Filauro for the capacitive sensor, and Carl Dennis, Jeff Raynor, and Jean-Luc Jaffard for the optical one.

REFERENCES

[1] http://literature.agilent.com/litweb/pdf/5988-4554EN.pdf
[2] http://www.planetanalog.com/features/OEG20020528S0055
[3] http://www.eetimes.com/story/OEG20020312S0041
[4] http://www.fma.fujitsu.com/pdf/mbf300_fs.pdf
[5] http://www.authentec.com/products/docs/2242_10_Glossy_FingerLoc_AFS8500.pdf
[6] http://www.idex.no/files/Smartfinger.pdf
[7] B. Furht, J. Greenberg, R. Westwater, "Motion estimation algorithms for video compression", Kluwer Academic Publishers, 1997.
[8] P. Kuhn, "Algorithms, complexity analysis and VLSI architectures for MPEG-4 motion estimation", Kluwer Academic Publishers.
[9] B. Natarajan, V. Bhaskaran, and K. Konstantinides, "Low-complexity block-based motion estimation via one-bit transforms," IEEE Trans. Circuits Syst. Video Technol., vol. 7, pp. 702-706, Aug. 1997.
[10] J. Chalidabhongse and C. C. Kuo, "Fast motion vector estimation using multiresolution spatio-temporal correlations," IEEE Trans. Circuits Syst. Video Technol., vol. 7, pp. 477-488, June 1997.
[11] F. Kossentini, Y. W. Lee, M. J. T. Smith, and R. K. Ward, "Predictive RD optimized motion estimation for very low bit-rate video coding," IEEE J. Select. Areas Commun., vol. 15, pp. 1752-1763, Dec. 1997.
[12] K. Lengwehasatit and A. Ortega, "A novel computationally scalable algorithm for motion estimation," in Proc. VCIP '98, 1998.
[13] M. Brünig and B. Menser, "Fast full search block matching using sub-blocks and successive approximation of the error measure," Proc. SPIE, vol. 3974, pp. 235-244, Jan. 2000.
[14] G. de Haan, P. Biezen, H. Huijgen, O. Ojo, "True motion estimation with 3-D recursive search block matching," IEEE Trans. Circuits Syst. Video Technol., vol. 3, no. 5, October 1993.
[15] F. Rovati, D. Pau, E. Piccinelli, L. Pezzoni, J.-M. Bard, "An innovative, high-quality and search-window-independent motion estimation algorithm and architecture for MPEG-2 encoding," IEEE Trans. Consumer Electron., vol. 46, no. 3, August 2000.
[16] D. Alfonso, F. Rovati, D. Pau, L. Celetto, "An innovative, programmable architecture for ultra-low power motion estimation in reduced memory MPEG-4 encoder," IEEE Trans. Consumer Electron., vol. 48, no. 3, August 2002.
[17] European Patent Application EPA 02425219.9.


Fabrizio Simone Rovati was born in Monza, Italy, in 1971. He received his electronic engineering degree from the Milan Polytechnic University, Italy, in 1996. He joined STMicroelectronics Ltd., Bristol, UK (formerly INMOS Ltd.), where he contributed to the development of an MPEG-2 transport demultiplexer co-processor. He then joined STMicroelectronics' Advanced System Technology group in 1998, where he worked on an MPEG-2 motion estimation co-processor and on MPEG video encoder system architectures. He has authored or co-authored 8 granted patents and 6 publications. He was a contract professor at Pavia Polytechnic University during 2001/2002. His current interests are in digital video signal processing and robust delivery through IP-based networks, and related system-level and processor architectures.

Pierluigi Gardella was born in Caracas, Venezuela, in 1973. He received his telecommunications engineering degree from the Milan Polytechnic University, Italy, in 2002. He joined STMicroelectronics, Agrate, Italy, where he contributed to the development of an optical mouse based on motion estimation techniques. His main interests are in 2D and 3D low-power graphics.

Paolo Sergio Zambotti was born in Milan, Italy, in 1974. In February 1999 he received his electronic engineering degree from the Milan Polytechnic University, Italy. He has been working for STMicroelectronics since 1998. He worked at the Terminals Strategic Unit laboratory based in Agrate Brianza on a “wireless pointer for interactive television”. In 1999 he joined the Advanced System Technology – Digital Video Technology group as a system engineer, working on emulation platforms for fast prototyping. Presently, he is involved in the “3G Integrated Multimedia Terminal” and “VLIW on FPGA” projects.
