IEEE Conference on
INTELLIGENT
TRANSPORTATION SYSTEMS

Boston Park Plaza Hotel
Boston, Massachusetts
November 9-12, 1997
IEEE Catalog Number: 97TH8331
ISBN: 0-7803-4269-0 (Softbound)
ISBN: 0-7803-4271-2 (Microfiche)
ISBN: 0-7803-4271-2 (CD-ROM)
Library of Congress: 97-80147
Copyright and Reprint Permission

Abstracting is permitted with credit to the source. Libraries are permitted to photocopy beyond the limit of U.S. copyright law for private use of patrons those articles in this volume that carry a code at the bottom of the first page, provided the per-copy fee indicated in the code is paid through Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923. For other copying, reprint or republication permission, write to IEEE Copyrights Manager, IEEE Service Center, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855-1331. All rights reserved. Copyright ©1997 by the Institute of Electrical and Electronics Engineers, Inc.

Welcome from the Conference General Chair

Welcome to the 1997 IEEE Conference on Intelligent Transportation Systems (ITSC'97).

Hosted by 17 IEEE societies and supported by a number of international organizations, ITSC'97 is the first of a new series focusing on the broad technology aspects of Intelligent Transportation Systems (ITS). ITSC'97 incorporates the very successful Vehicle Navigation and Information Systems (VNIS) and Intelligent Vehicle Systems (IVS) conferences that have been held by individual IEEE societies in past years. In the technical program on the following pages, you will find that ITSC'97 includes a broad range of important and timely technical topics now active in various ITS projects.

The Conference Committee and the many additional individuals who have arranged ITSC'97 are excited about the technical papers that will be presented. All papers received in response to the 1996 Call for Papers have been peer-reviewed. We believe you will agree their quality is excellent. These papers, plus several sessions arranged around invited papers, have been organized into five parallel tracks of sessions running the full three days of the conference. One highlight is a series of sessions devoted to the rapidly evolving area of Applied Computer Vision.

The conference also features three plenary sessions, one each on the three days of the technical program. In the opening plenary on Monday, Mr. Richard Weiland and Mr. Peter Zuk, well-recognized leaders in current ITS programs, provide some of their perspectives on progress in developing critically needed ITS standards. On the second day, four internationally known speakers will provide their perspectives on four key ITS technologies: navigation and position location, communication, automated vehicle control, and in-vehicle information systems. In the closing plenary on the third day, we invited Dr. Kan Chen, one of the early IVHS leaders, to provide his views on the past, present and future of ITS. Our banquet speaker, Mr. Hans-Georg Metzler, Vice President of Research for Daimler-Benz, comes to us from Stuttgart, Germany and promises an informative presentation on evolving intelligent vehicle technology and related issues. We hope you won't miss any of these excellent presentations.

Once again, welcome. We trust ITSC'97 will be a stimulating and technically rewarding conference for you.
Lyle Saxton
Conference General Chair
EYE-TRACKING FOR DETECTION OF DRIVER FATIGUE

Martin Eriksson and Nikolaos P. Papanikolopoulos
Artificial Intelligence, Robotics, and Vision Laboratory
Department of Computer Science, University of Minnesota, Minneapolis, MN 55455
E-mail: {eriksson, npapas}@cs.umn.edu
Keywords: driver fatigue, eye-tracking, template matching.

Abstract

In this paper, we describe a system that locates and tracks the eyes of a driver. The purpose of such a system is to perform detection of driver fatigue. By mounting a small camera inside the car, we can monitor the face of the driver and look for eye-movements which indicate that the driver is no longer in condition to drive. In such a case, a warning signal should be issued. This paper describes how to find and track the eyes. We also describe a method that can determine if the eyes are open or closed. The primary criterion for the successful implementation of this system is that it must be highly non-intrusive. The system should start when the ignition is turned on, without having the driver initiate the system. Nor should the driver be responsible for providing any feedback to the system. The system must also operate regardless of the texture and the color of the face. It must also be able to handle diverse conditions, such as changes in light, shadows, reflections, etc.
INTRODUCTION

Driver fatigue is an important factor in a large number of accidents. Lowering the number of fatigue-related accidents would not only save society a significant amount financially, but also reduce personal suffering. We believe that by monitoring the eyes, the symptoms of driver fatigue in our proposed system can be detected early enough to avoid several of these accidents. Detection of fatigue involves a sequence of images of a face, and observation of eye-movements and blink patterns.

The analysis of face images is a popular research area with applications such as face recognition, virtual tools and handicap aids [9,14], and human identification and database retrieval [3]. There are also many real-time systems being developed in order to track face features [15,13,17]. These kinds of real-time systems generally consist of three components:
a) Localization of the eyes (in the first frame),
b) Tracking the eyes in the subsequent frames,
c) Detection of failure in tracking.

Localization of the eyes involves looking at the entire image of the face and determining the eye-envelopes (the areas around the eyes). During tracking in subsequent frames, the search-space is reduced to the area corresponding to the eye-envelopes in the current frame. This tracking can be done at relatively low computational effort, since the search-space is significantly reduced. In order to detect failure in the tracking, general constraints such as the distance between the eyes and the horizontal alignment of the two eyes can be used.

This paper is organized as follows: In the next section, we describe some of the previous work in this area. Afterwards, we describe the experimental setup and how the system operates. Then, we proceed to the description of the algorithm for the detection of fatigue. Finally, we present results and future work.
PREVIOUS WORK

Many methods have been proposed for localizing facial features in images [2,4,5,6,8,10,11,12,19]. These methods can be roughly divided into two categories: template-based matching and feature-based matching. These techniques are compared by Poggio and Brunelli [1]. One popular template-matching technique for the extraction of face features is to use deformable templates [5,16], which are similar to the active snakes introduced by Kass [7], in the sense that they apply energy minimization based on the computation of image-forces. In feature-based matching, the system uses knowledge about some geometrical constraints. For example, a face has two eyes, one mouth and one nose in specific relative locations.
One interesting application for face recognition was developed by Stringa [12]. He used the observation that the eyes are regions of rapidly changing intensity. We use a similar approach on a reduced version of the image. Another approach, developed by Stiefelhagen et al. [13], uses connected regions in order to extract the dark disks corresponding to the pupils. Rather than looking for the pupils, we used the fact that the entire eye-regions are darker than their surroundings, again allowing us to use the reduced image in order to extract these rough regions at a reduced computational cost. The systems described in [15] and [13] use color information in order to extract the head from the background. In order to avoid dependence on a fairly colorless background, we decided to again use the reduced image and localize the symmetry axis [18]. Since the driver will be looking almost straight ahead, there will be a well-defined vertical symmetry line between the eyes.

Many different templates have been described for finding the shape of an eye. Xie et al. [16] developed a deformable template consisting of 10 cost equations, based on image intensity, image gradient and internal forces of the template. Since we are greatly concerned about computational speed, we decided to use only the two cost equations dealing with image intensity. Once the eyes are found, the search-space in the subsequent frames is limited to the area surrounding the found eye-regions. In the system by Stiefelhagen et al., the darkest pixel (which is likely to be a pixel inside the pupil) is used for tracking, allowing high computational speed. Another approach [5] is to perform edge-detection on the region of interest and then track the region with a high concentration of edges.
THE SYSTEM

When the system starts, frames are continuously fed from the camera to the computer. We use the initial frame in order to localize the eye-positions. Once the eyes are localized, we start the tracking process by using information in previous frames in order to achieve localization in subsequent frames. During tracking, error-detection is performed in order to recover from possible tracking failure. When a tracking failure is detected, the eyes are relocalized. During tracking, we also perform the detection of fatigue. At this point, we count consecutive frames during which the eyes are closed. If this number gets too large, we issue a warning signal.
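To make the control flow concrete, here is a minimal Python sketch of this localize/track/recover loop. It is our reconstruction, not the authors' code: the four helper callables and the default threshold C are assumptions standing in for the components described above and detailed in the following sections.

def run_system(frames, localize, track, constraints_ok, eyes_closed, C=12):
    # frames: iterable of gray-scale images; localize(frame) finds both eyes
    # from scratch; track(frame, eyes) updates them near the previous
    # eye-envelopes; constraints_ok(eyes) checks inter-eye distance and
    # horizontal alignment; eyes_closed(frame, eyes) tests a single frame.
    eyes = None
    closed_run = 0
    for frame in frames:
        if eyes is None:
            eyes = localize(frame)              # full-image localization
        else:
            eyes = track(frame, eyes)           # cheap search near last position
            if not constraints_ok(eyes):
                eyes = None                     # tracking failure: relocalize
                continue
        closed_run = closed_run + 1 if eyes_closed(frame, eyes) else 0
        if closed_run >= C:
            print("WARNING: eyes closed too long")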
Experimental setup

The final system will consist of a camera pointing at the driver. The camera is to be mounted on the dashboard inside the vehicle. For the system we are developing, the camera is stationary and will not adjust its position or zoom during operation. For experimentation, we are using a JVC color video camera, sending the frames to a Silicon Graphics Indigo. The grabbed frames are represented in RGB-space with 8-bit pixels (256 colors). We do not use any specialized hardware for image processing.

Localization of the eyes

We localize the eyes in a top-down manner, reducing the search-space at each step. The steps are:
1. Localization of the face.
2. Computation of the vertical location of the eyes.
3. Computation of the exact location of the eyes.
4. Estimation of the position of the iris.

Localization of the face. Since the face of a driver is symmetric, we use a symmetry-based approach, similar to [18]. We found that in order for this method to work, it is enough to use a subsampled, gray-scale version of the image. A symmetry-value is then computed for every pixel-column in the reduced image. If the image is represented as I(x,y), then the symmetry-value for a pixel-column is given by

S(x) = Σ_{y=1..ysize} Σ_{w=1..k} |I(x−w, y) − I(x+w, y)|,

where k is the maximum distance from the pixel-column at which symmetry is measured, and xsize is the width of the image; S(x) is computed for x ∈ [k, xsize−k]. The x corresponding to the lowest value of S(x) is the center of the face. The result from this process is shown in Figure 1. The search-space is now limited to the area around this line, which reduces the probability of having distracting features in the background.

Figure 1. The symmetry histogram.
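As a concrete illustration of this step, the following NumPy sketch implements the S(x) search directly from the formula above. The function name, the toy frame and the choice of k are ours; a real system would first subsample the camera image and convert it to gray-scale, as the paper describes.

import numpy as np

def face_center_x(img, k):
    # S(x) = sum over y and w=1..k of |I(x-w,y) - I(x+w,y)|;
    # the column with the lowest S(x) is taken as the face center.
    I = img.astype(np.int64)                 # avoid uint8 wrap-around
    ysize, xsize = I.shape
    S = np.full(xsize, np.inf)               # columns outside [k, xsize-k]
    for x in range(k, xsize - k):            # can never win the argmin
        S[x] = sum(np.abs(I[:, x - w] - I[:, x + w]).sum()
                   for w in range(1, k + 1))
    return int(np.argmin(S))

frame = np.random.randint(0, 256, (60, 80), dtype=np.uint8)  # toy image
center_column = face_center_x(frame, k=10)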
Computation of the vertical location of the eyes. As suggested by Stringa [12], we use the observation that eye-regions correspond to regions of high spatial frequency. Again we are working with the reduced image. We create the gradient-map, G(x,y), by applying an edge-detection algorithm on the reduced image. Any edge-detection method could be used. We choose to use a very simple and fast method called pixel-differentiation, which assigns G(x,y) = I(x,y) − I(x−1,y). We selected this method since it does not involve any convolution. G(x,y) will now reveal areas of high spatial frequency. By projecting G(x,y) onto its vertical axis, we get a histogram H(y):

H(y) = Σ_{x=1..xsize} G(x, y).

Since both eyes are likely to be positioned at the same row, H(y) will have a strong peak on that row. However, in order to reduce the risk of error, we consider the best three peaks in H(y) for further search rather than just the maximum. This process is illustrated in Figure 2.

Figure 2. The original image, the edges and the histogram of projected edges.
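A short sketch of this projection, under two assumptions of our own: we take the absolute value of the pixel-difference so that edges of either sign contribute to H(y), and we select the three largest values of H(y) directly rather than performing true local-peak detection.

import numpy as np

def candidate_eye_rows(img, n_peaks=3):
    I = img.astype(np.int64)
    G = np.abs(np.diff(I, axis=1))        # pixel-differentiation I(x,y)-I(x-1,y)
    H = G.sum(axis=1)                     # project edges onto the vertical axis
    return list(np.argsort(H)[::-1][:n_peaks])   # rows of the strongest peaks

frame = np.random.randint(0, 256, (60, 80), dtype=np.uint8)
rows = candidate_eye_rows(frame)          # best three peaks of H(y)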
Find the exact location of the eyes. In order to find the eye-regions given the preceding processing, we rely on the fact that the eyes correspond to intensity-valleys in the image. Given that, we can threshold the image and then extract the connected regions. We used a raster-scan algorithm on the reduced image in order to extract these regions. In general, our raster-scan algorithm found 4-5 regions. In order to resolve which of these regions correspond to the eyes, we use the information in H(y). We try to find a peak corresponding to a row in the image with two connected regions on it. The three best peaks in H(y) are considered. We also use general constraints, such that both eyes must be located "fairly close" to the center of the face.

The difficulty with this method is to find a threshold that will generate the correct eye-regions. We used a method called adaptive thresholding [13] that starts out with a low threshold. If two good eye-regions are found, that threshold is stored and used the next time the eyes have to be localized. If no good eye-regions are found, the system automatically attempts a higher threshold, until the regions are found.
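A sketch of this adaptive thresholding loop, using SciPy's connected-component labelling in place of the paper's raster-scan algorithm. The threshold schedule, the eye-size bounds and the "exactly two regions" acceptance test are our simplifications of the peak and row constraints described above.

import numpy as np
from scipy import ndimage

def find_eye_regions(img, start=40, step=10, stop=200):
    for t in range(start, stop, step):
        labels, n = ndimage.label(img < t)        # eyes are intensity-valleys
        boxes = ndimage.find_objects(labels)      # bounding slices per region
        eyes = [b for b in boxes
                if 2 <= b[0].stop - b[0].start <= 15      # plausible height
                and 3 <= b[1].stop - b[1].start <= 25]    # plausible width
        if len(eyes) == 2:
            return eyes, t    # store t and reuse it at the next relocalization
    return None, None         # caller falls back to full relocalization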
Estimation of the position of the iris. Once the eye-regions are localized, we can apply a very simple template in order to localize the iris. We constructed a template consisting of two circles, one inside the other. A good match would result in many dark pixels in the area inside the inner circle, and many bright pixels in the area between the two circles. The template is shown in Figure 3. This match occurs when the inner circle is centered on the iris and the outside circle covers the sclera.

Figure 3. The eye-template.

The match M(a1, a2) is computed as

M(a1, a2) = Σ_{(p,q)∈a1} I(p, q) − Σ_{(p,q)∈a2} I(p, q).

A low value for M(a1, a2) corresponds to a good match. The template is matched across the predicted eye-region, and the best match is reported.
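The following sketch evaluates this two-circle template at one candidate position. The radii are illustrative only, and scanning the template across the predicted eye-region is left to the caller.

import numpy as np

def iris_match(img, cx, cy, r_in=4, r_out=8):
    # M(a1,a2): sum over the inner circle (should be dark) minus the sum
    # over the ring between the circles (should be bright); lower is better.
    ys, xs = np.ogrid[:img.shape[0], :img.shape[1]]
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2
    inner = d2 <= r_in ** 2
    ring = (d2 > r_in ** 2) & (d2 <= r_out ** 2)
    I = img.astype(np.int64)
    return I[inner].sum() - I[ring].sum()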
Tracking the eyes

We track the eye by looking for the darkest pixel in the predicted region [13]. In order to recover from tracking errors, we make sure that none of the geometrical constraints are violated. If they are, we relocalize the eyes in the next frame. To find the best match for the eye-template, we initially center it at the darkest pixel, and then perform a gradient descent in order to find a local minimum. In Figure 4, we show a few snapshots during tracking.

Figure 4. Snapshots from the system during tracking. Note that in the second image, the system missed tracking of one eye.
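A sketch of this tracking step under our own assumptions: we seed the template at the darkest pixel of the predicted eye-envelope and walk downhill over the 8-neighbourhood until no neighbour improves the score. The match argument can be an iris template score such as the iris_match sketch above; image-border handling is omitted for brevity.

import numpy as np

def track_iris(img, y0, y1, x0, x1, match):
    win = img[y0:y1, x0:x1]                          # predicted eye-envelope
    dy, dx = np.unravel_index(np.argmin(win), win.shape)
    cy, cx = y0 + int(dy), x0 + int(dx)              # start at darkest pixel
    best = match(img, cx, cy)
    improved = True
    while improved:                                  # simple gradient descent
        improved = False
        for ny in (cy - 1, cy, cy + 1):
            for nx in (cx - 1, cx, cx + 1):
                score = match(img, nx, ny)
                if score < best:
                    best, cy, cx, improved = score, ny, nx, True
    return cy, cx, best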
DETECTION OF FATIGUE

As the driver becomes more fatigued, we expect the eye-blinks to last longer. We count the number of consecutive frames during which the eyes are closed in order to decide the condition of the driver. For this, we need a robust way to determine if the eyes are open or closed, so we developed a method that looks at the horizontal histogram across the pupil.

During initialization (the first frames after the driver has settled down), an average match over a number of frames is calculated. When the match in a frame is "significantly" lower than the average, we call that frame a closed frame. If the match is close to the average, we call that an open frame. After C consecutive closed frames, we issue a warning signal, where C is the number of frames corresponding to approximately 2 to 2.5 seconds (the time when the eyes have been closed for too long).
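The paper leaves "significantly" and the exact value of C unspecified; the sketch below fixes them with assumed values (a factor of 1.5 over the initialization average, and C derived from 2.5 seconds at the 5 frames/second reported in the results). The direction of the comparison assumes a per-frame match score where a larger value means a worse, flatter, closed-eye profile.

import numpy as np

def closed_frame_flags(matches, n_init=30, factor=1.5):
    # Average the match over the first n_init frames (driver settled,
    # eyes open), then flag frames whose match is significantly worse.
    baseline = np.mean(matches[:n_init])
    return [m > factor * baseline for m in matches[n_init:]]

def should_warn(flags, fps=5, seconds=2.5):
    # C consecutive closed frames corresponds to 2-2.5 s of closed eyes.
    C = int(fps * seconds)
    run = 0
    for closed in flags:
        run = run + 1 if closed else 0
        if run >= C:
            return True
    return False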
Horizontal histogram across the pupil

We use the characteristic curve generated by plotting the image-intensities along the line going through the pupil from left to right, as shown in Figure 5. The pupil is always the darkest point. Surrounding the pupil, we have the iris, which is also very dark. To the right and left of the iris is the white sclera. In Figure 5 we show two curves, one corresponding to an open eye, and one corresponding to a closed eye. Note that the curve corresponding to the closed eye is very flat.

Figure 5. Histograms corresponding to an open and a closed eye, respectively.

We compute the matching function M(x,y) as

M(x, y) = I(x, y) / min{ I(x − r, y), I(x + r, y) },

where (x,y) is the computed center of the pupil and r is the radius of the iris; I(x,y) is the image intensity at (x,y). When the eye is open, the valley in the intensity-curve corresponding to the pupil will be surrounded by two large peaks corresponding to the sclera. When the eye is closed, this curve is usually very flat in the center. However, in the latter case there is no pupil to center the curve on, which can lead to a very unpredictable shape. In order to minimize the risk of having one big peak nearby (due to noise), we always use the minimum peak at the distance r from the pupil. This will lead to a good match when the eye is open, and very likely to a bad match when the eye is closed.
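A direct transcription of M(x,y) into NumPy, with image indexing as I[row, column]; the small guard against division by zero is our addition.

import numpy as np

def pupil_match(img, x, y, r):
    # Open eye: dark pupil between two bright sclera peaks -> small ratio.
    # Closed eye: flat intensity profile -> ratio near 1.
    I = img.astype(np.float64)
    side = min(I[y, x - r], I[y, x + r])   # minimum peak at distance r
    return float(I[y, x] / max(side, 1e-6))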
RESULTS AND FUTURE WORK

We simulated three "test-drives" where we measured the accuracy of the detection of opened/closed eyes. In each test-drive, we simulated 10 long eye-blinks, and recorded how many were detected by the system. In each test-drive, the driver had the head turned at a different angle. The results are shown in Table I.

Table I. Results from the tests.

For this test, we did not allow for any rapid head movements, since we wanted to simulate the situation when the driver is tired. For small head-movements, the system rarely loses track of the eyes, as we can see from the results. We can also see that when the head is turned too much sideways, we had some false alarms. However, in the case where the head is tilted forward (which is the most likely posture when the driver is tired), the system operated perfectly.

When we perform the detection of driver fatigue, we operate on frames of size 640 by 320. This frame-size allows us to operate at approximately 5 frames per second. In order to track the eyes without detecting fatigue, it is enough to use frames of size 320 by 160, which allows a frame-rate of approximately 15 frames per second.

At this point, the system has problems localizing the eyes when the person is wearing glasses, or has a large amount of facial hair. We believe that by using a small set of face templates, similar to [15], we will be able to avoid this problem without losing anything in performance. Also, we are not using any color-information in the image. By using techniques described in [13], we can further enhance robustness.

Currently, we do not adjust zoom or direction of the camera during operation. Future work may be to automatically zoom in on the eyes, once they are localized. This would avoid the trade-off between having a wide field of view in order to locate the eyes, and a narrow field of view in order to detect fatigue.

We are only looking at the number of consecutive frames during which the eyes are closed. At that point, it may be too late to issue the signal. By studying the eye-movement patterns, we are hoping to find a method to generate the alert signal at an earlier stage.
CONCLUSIONS

We have developed a system that localizes and tracks the eyes of a driver in order to detect fatigue. The system uses a combination of template-based matching and feature-based matching in order to localize the eyes. During tracking, the system is able to decide if the eyes are open or closed. When the eyes have been closed for too long, a warning signal is issued. Several experimental results are presented.
ACKNOWLEDGMENTS

This work has been supported by the ITS Institute at the University of Minnesota, the Minnesota Department of Transportation through Contracts #71789-72983-169 and #71789-72447-159, the National Science Foundation through Contracts #IRI-9410003 and #IRI-9502245, and the Center for Transportation Studies.
REFERENCES

[1] R. Brunelli and T. Poggio, "Face Recognition: Features versus Templates," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 10, pp. 1042-1052, 1993.
[2] G. Chow and X. Li, "Towards a System for Automatic Feature Detection," Pattern Recognition, Vol. 26, No. 12, pp. 1739-1755, 1993.
[3] I.J. Cox, J. Ghosn and P.N. Yianilos, "Feature-Based Recognition Using Mixture-Distance," NEC Research Institute, Technical Report 95-09, 1995.
[4] I. Craw, H. Ellis and J.R. Lishman, "Automatic Extraction of Face-Features," Pattern Recognition Letters, 5, pp. 183-187, 1987.
[5] L.C. De Silva, K. Aizawa and M. Hatori, "Detection and Tracking of Facial Features by Using Edge Pixel Counting and Deformable Circular Template Matching," IEICE Transactions on Information and Systems, Vol. E78-D, No. 9, pp. 1195-1207, September 1995.
[6] C. Huang and C. Chen, "Human Facial Feature Extraction for Face Interpretation and Recognition," Pattern Recognition, Vol. 25, No. 12, pp. 1435-1444, 1992.
[7] M. Kass, A. Witkin and D. Terzopoulos, "Snakes: Active Contour Models," International Journal of Computer Vision, pp. 321-331, 1988.
[8] X. Li and N. Roeder, "Face Contour Extraction from Front-View Images," Pattern Recognition, Vol. 28, No. 8, pp. 1167-1179, 1995.
[9] K.P. White, T.E. Hutchinson and J.M. Carley, "Spatially Dynamic Calibration of an Eye-Tracking System," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 23, No. 4, pp. 1162-1168, 1993.
[10] N. Roeder and Xiaobo Li, "Accuracy Analysis for Facial Feature Detection," Pattern Recognition, Vol. 29, No. 1, pp. 143-157, 1996.
[11] Y. Segawa, H. Sakai, T. Endoh, K. Murakami, T. Toriu and H. Koshimizu, "Face Recognition Through Hough Transform for Irises Extraction and Projection Procedures for Parts Localization," Pacific Rim International Conference on Artificial Intelligence, pp. 625-636, 1996.
[12] L. Stringa, "Eyes Detection for Face Recognition," Applied Artificial Intelligence, No. 7, pp. 365-382, 1993.
[13] R. Stiefelhagen, J. Yang and A. Waibel, "A Model-Based Gaze-Tracking System," IEEE International Joint Symposia on Intelligence and Systems, pp. 304-310, 1996.
[14] E.R. Tello, "Between Man and Machine," BYTE, September, pp. 288-293, 1983.
[15] D. Tock and I. Craw, "Tracking and Measuring Drivers' Eyes," Real-Time Computer Vision, pp. 71-89, 1995.
[16] X. Xie, R. Sudhakar and H. Zhuang, "On Improving Eye Features Extraction Using Deformable Templates," Pattern Recognition, Vol. 27, No. 6, pp. 791-799, 1994.
[17] X. Xie, R. Sudhakar and H. Zhuang, "Real-Time Eye Feature Tracking from a Video Image Sequence Using Kalman Filter," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 25, No. 12, pp. 1568-1577, 1995.
[18] T. Yoo and I. Oh, "Extraction of Face Region and Features Based on Chromatic Properties of Human Faces," Pacific Rim International Conference on Artificial Intelligence, pp. 637-645, 1996.
[19] H. Wu, Q. Chen and M. Yachida, "Facial Feature Extraction and Face Verification," International Conference on Pattern Recognition, pp. 484-488, 1996.
