`
`SAMSUNG EXHIBIT 1005
`Samsung v. Image Processing Techs.
`
`
`
IEEE Conference on
`
`INTELLIGENT
`TRANSPORTATION SYSTEMS
`
`
`Boston Park Plaza Hotel
`Boston, Massachusetts
`November 9-12, 1997
`
` 005
`
`Page 2 of 10
`
`SAMSUNG EXHIBIT 1005
`Page 2 of 10
`
`
`
`
`
`
`
IEEE Catalog Number: 97TH8331
ISBN: 0-7803-4269-0 (Softbound)
ISBN: 0-7803-4271-2 (Microfiche)
ISBN: 0-7803-4271-2 (CD-ROM)
Library of Congress: 97-80147
`
Copyright and Reprint Permission

Abstracting is permitted with credit to the source. Libraries are permitted to photocopy beyond the limit of U.S. copyright law for private use of patrons those articles in this volume that carry a code at the bottom of the first page, provided the per-copy fee indicated in the code is paid through Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923. For other copying, reprint or republication permission, write to IEEE Copyrights Manager, IEEE Service Center, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855-1331. All rights reserved. Copyright ©1997 by the Institute of Electrical and Electronics Engineers, Inc.

IEEE Catalog Number: 97TH8331
Library of Congress: 97-80147
ISBN: 0-7803-4269-0
`
`
`
`
Welcome from the Conference General Chair
IEEE Conference on Intelligent Transportation Systems

Welcome to the 1997 IEEE Conference on Intelligent Transportation Systems (ITSC'97).

Hosted by 17 IEEE societies and supported by a number of international organizations, ITSC'97 is the first of a new series focusing on the broad technology aspects of Intelligent Transportation Systems (ITS). ITSC'97 incorporates the very successful Vehicle Navigation and Information System (VNIS) and Intelligent Vehicle Systems (IVS) conferences that have been held by individual IEEE societies in past years. In the technical program on the following pages, you will find that ITSC'97 includes a broad range of important and timely technical topics now active in various ITS projects.

The Conference Committee and the many additional individuals who have arranged ITSC'97 are excited about the technical papers that will be presented. All papers received in response to the 1996 Call for Papers have been peer-reviewed. We believe you will agree their quality is excellent. These papers, plus several sessions arranged around invited papers, have been organized into five parallel tracks of sessions running the full three days of the conference. One highlight is a series of sessions devoted to the rapidly evolving area of Applied Computer Vision.

The conference also features three plenary sessions, one each on the three days of the technical program. In the opening plenary on Monday, Mr. Richard Weiland and Mr. Peter Zak, well-recognized leaders in current ITS programs, provide some of their perspectives on progress in developing critically needed ITS standards. On the second day, four internationally known speakers will provide their perspectives on four key ITS technologies: navigation and position location, communication, automated vehicle control, and in-vehicle information systems. In the closing plenary on the third day, we invited Dr. Kan Chen, one of the early ITS leaders, to provide his views on the past, present and future of ITS. Our banquet speaker, Mr. Hans-Georg Metzler, Vice President of Research for Daimler-Benz, comes to us from Stuttgart, Germany and promises an informative presentation on evolving intelligent vehicle technology and related issues. We hope you won't miss any of these excellent presentations.

Once again, welcome. We trust ITSC'97 will be a stimulating and technically rewarding conference for you.

Lyle Saxton
Conference General Chair
`
`
`
EYE-TRACKING FOR DETECTION OF DRIVER FATIGUE

Martin Eriksson
Nikolaos P. Papanikolopoulos

Artificial Intelligence, Robotics, and Vision Laboratory
Department of Computer Science, University of Minnesota,
Minneapolis, MN 55455
E-mail: {eriksson, npapas}@cs.umn.edu

Keywords: driver fatigue, eye-tracking, template matching.
`
`Abstract
`
In this paper, we describe a system that locates and tracks the eyes of a driver. The purpose of such a system is to perform detection of driver fatigue. By mounting a small camera inside the car, we can monitor the face of the driver and look for eye movements which indicate that the driver is no longer in condition to drive. In such a case, a warning signal should be issued. This paper describes how to find and track the eyes. We also describe a method that can determine if the eyes are open or closed. The primary criterion for the successful implementation of this system is that it must be highly non-intrusive. The system should start when the ignition is turned on without having the driver initiate the system. Nor should the driver be responsible for providing any feedback to the system. The system must also operate regardless of the texture and the color of the face. It must also be able to handle diverse conditions, such as changes in light, shadows, reflections, etc.
`
`INTRODUCTION
`
Driver fatigue is an important factor in a large number of accidents. Lowering the number of fatigue-related accidents would not only save society a significant amount financially, but also reduce personal suffering. We believe that by monitoring the eyes, the symptoms of driver fatigue in our proposed system can be detected early enough to avoid several of these accidents. Detection of fatigue involves a sequence of images of a face, and observation of eye movements and blink patterns.
`
The analysis of face images is a popular research area with applications such as face recognition, virtual tools and handicap aids [9,14], and human identification and database retrieval [3]. There are also many real-time systems being developed in order to track face features [15,13,17]. These kinds of real-time systems generally consist of three components:

a) Localization of the eyes (in the first frame),
b) Tracking the eyes in the subsequent frames,
c) Detection of failure in tracking.
`
Localization of the eyes involves looking at the entire image of the face and determining the eye-envelopes (the areas around the eyes). During tracking in subsequent frames, the search-space is reduced to the area corresponding to the eye-envelopes in the current frame. This tracking can be done at relatively low computational effort, since the search-space is significantly reduced. In order to detect failure in the tracking, general constraints such as distance between the eyes and horizontal alignment of the two eyes can be used.
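A minimal sketch of such a failure check, assuming pixel coordinates for the two eye centers; the function name and the threshold values are illustrative assumptions, not from the paper:

```python
def tracking_failed(left_eye, right_eye,
                    min_dist=20, max_dist=120, max_row_skew=10):
    """Flag a tracking failure using the two general constraints
    mentioned above: a plausible distance between the eyes and rough
    horizontal alignment. Thresholds are illustrative (in pixels)."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    dist = ((lx - rx) ** 2 + (ly - ry) ** 2) ** 0.5
    if not (min_dist <= dist <= max_dist):
        return True   # eyes implausibly close together or far apart
    if abs(ly - ry) > max_row_skew:
        return True   # eyes not horizontally aligned
    return False
```

When the check fires, the system would fall back to full relocalization rather than continuing to track a bad estimate.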
`
This paper is organized as follows: In the next section, we describe some of the previous work in this area. Afterwards, we describe the experimental setup, and how the system operates. Then, we proceed to the description of the algorithm for the detection of fatigue. Finally, we present results and future work.
`
PREVIOUS WORK

Many methods have been proposed for localizing facial features in images [2,4,5,6,8,10,11,12,19]. These methods can roughly be divided into two categories: template-based matching and feature-based matching. These techniques are compared by Poggio and Brunelli [1]. One popular template-matching technique for extraction of face features is to use deformable templates [16], which are similar to the active snakes introduced by Kass [7], in the sense that they apply energy minimization based on the computation of image-forces.
`
`
`
`314
`
`SAMSUNG EXHIBIT 1005
`
`0-7303-4263mamnflo i993
`
`SAMSUNG EXHIBIT 1005
`Page 5 of 10
`
`
`
During tracking, we also perform the detection of fatigue. At this point, we count consecutive frames during which the eyes are closed. If this number gets too large, we issue a warning signal.
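This frame-counting scheme can be sketched as follows; the threshold of 10 consecutive frames (roughly two seconds at the 5 fps reported later) is an assumed value, not one given in the paper:

```python
def fatigue_monitor(eye_closed_per_frame, max_closed_frames=10):
    """Count consecutive frames with closed eyes, as described above,
    and return the frame index at which a warning signal would be
    issued, or None if the run of closed frames never gets long
    enough. The threshold is an illustrative assumption."""
    consecutive = 0
    for i, closed in enumerate(eye_closed_per_frame):
        consecutive = consecutive + 1 if closed else 0
        if consecutive >= max_closed_frames:
            return i  # frame where the warning signal fires
    return None
```

Normal blinks (a few frames) reset the counter and never trigger the warning; only a sustained closure does.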
`
Experimental setup
The final system will consist of a camera pointing at the driver. The camera is to be mounted on the dashboard inside the vehicle. For the system we are developing, the camera is stationary and will not adjust its position or zoom during operation. For experimentation, we are using a JVC color video camera, sending the frames to a Silicon Graphics Indigo. The grabbed frames are represented in RGB-space with 8-bit pixels (256 colors). We do not use any specialized hardware for image processing.
`
Localization of the eyes
We localize the eyes in a top-down manner, reducing the search-space at each step. The steps are:

1. Localization of the face.
2. Computation of the vertical location of the eyes.
3. Computation of the exact location of the eyes.
4. Estimation of the position of the iris.
`
Localization of the face. Since the face of a driver is symmetric, we use a symmetry-based approach, similar to [18]. We found that in order for this method to work, it is enough to use a subsampled, gray-scale version of the image. A symmetry-value is then computed for every pixel-column in the reduced image.
`
`
`
Figure 1. The symmetry histogram.
`
In feature-based matching, the system uses knowledge about some geometrical constraints. For example, a face has two eyes, one mouth and one nose in specific relative locations.

One interesting application for face recognition was developed by Stringa [12]. He used the observation that the eyes are regions of rapidly changing intensity. We use a similar approach on a reduced version of the image. Another approach, developed by Stiefelhagen et al. [13], uses connected regions in order to extract the dark disks corresponding to the pupils. Rather than looking for the pupils, we used the fact that the entire eye-regions are darker than their surroundings, again allowing us to use the reduced image in order to extract these rough regions at a reduced computational cost. The systems described in [15] and [13] use color information in order to extract the head from the background. In order to avoid dependence on a fairly colorless background, we decided to again use the reduced image and localize the symmetry axis [18]. Since the driver will be looking almost straight ahead, there will be a well-defined vertical symmetry line between the eyes.
`
Many different templates have been described for finding the shape of an eye. Xie et al. [16] developed a deformable template consisting of 10 cost equations, based on image intensity, image gradient and internal forces of the template. Since we are greatly concerned about computational speed, we decided to use only the two cost equations dealing with image intensity. Once the eyes are found, the search-space in the subsequent frames is limited to the area surrounding the found eye-regions. In the system by Stiefelhagen et al., the darkest pixel (which is likely to be a pixel inside the pupil) is used for tracking, allowing high computational speed. Another approach [5] is to perform edge-detection on the region of interest and then track the region with a high concentration of edges.
`
THE SYSTEM

When the system starts, frames are continuously fed from the camera to the computer. We use the initial frame in order to localize the eye-positions. Once the eyes are localized, we start the tracking process by using information in previous frames in order to achieve localization in subsequent frames. During tracking, error-detection is performed in order to recover from possible tracking failure. When a tracking failure is detected, the eyes are relocalized.
`
`
`
`
Figure 2. The original image, the edges and the histogram of projected edges.
`
If the image is represented as I(x,y), then the symmetry-value for a pixel-column is given by

    S(x) = sum_{y=1..ysize} sum_{w=1..k} |I(x-w, y) - I(x+w, y)|
Find the exact location of the eyes. In order to find the eye-regions given the preceding processing, we rely on the fact that the eyes correspond to intensity-valleys in the image. Given that, we can threshold the image and then extract the connected regions. We used a raster-scan algorithm on the reduced image in order to extract these regions. In general, our raster-scan algorithm found 4-5 regions. In order to resolve which of these regions correspond to the eyes, we use the information in H(y). We try to find a peak corresponding to a row in the image with two connected regions on it. The three best peaks in H(y) are considered. We also use general constraints, such that both eyes must be located "fairly close" to the center of the face.
`
The difficulty with this method is to find a threshold that will generate the correct eye-regions. We used a method called adaptive thresholding [13] that starts out with a low threshold. If two good eye-regions are found, that threshold is stored, and used the next time the eyes have to be localized. If no good eye-regions are found, the system automatically attempts with a higher threshold, until the regions are found.
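A rough sketch of this adaptive-thresholding loop, assuming the image is given as a dict mapping (x, y) to gray-level. A simple BFS stands in for the paper's raster-scan region extraction, and `good_regions`, the start value, and the step size are all illustrative assumptions:

```python
from collections import deque

def connected_regions(pixels):
    """4-connected components over a set of (x, y) pixels (a BFS
    stand-in for the paper's raster-scan algorithm)."""
    remaining, regions = set(pixels), []
    while remaining:
        seed = remaining.pop()
        comp, queue = {seed}, deque([seed])
        while queue:
            x, y = queue.popleft()
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in remaining:
                    remaining.remove(n)
                    comp.add(n)
                    queue.append(n)
        regions.append(comp)
    return regions

def adaptive_threshold(image, good_regions, start=40, step=10, limit=255):
    """Adaptive thresholding as described above: start low and raise
    the threshold until the connected dark regions pass the eye-region
    checks, then return the successful threshold (to be reused the
    next time the eyes must be localized) and the regions."""
    threshold = start
    while threshold <= limit:
        dark = [p for p, v in image.items() if v < threshold]
        regions = connected_regions(dark)
        if good_regions(regions):
            return threshold, regions
        threshold += step
    return None, []
```

Here `good_regions` is a placeholder for the paper's checks (two regions on the same row, close to the face center).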
`
Estimation of the position of the iris. Once the eye-regions are localized, we can apply a very simple template in order to localize the iris.

Figure 3. The eye-template.

We constructed a template consisting of two circles, one inside the other. A good match
`
`5(x) is computed for x E [k,xsize—k] where k
`is the maximum distance from the pixel-column
`that symmetry is measured, and xsize
`is
`the
`width of the image. The x corresponding to the
`lowest value of S(.r) is the center of the face. The
`result from this process is shown in Figure 1.
`The search-space is now limited to the area
`around this line, which reduces the probability
`of having distracting features in the background.
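The computation of S(x) and the face center can be sketched as follows, assuming the reduced grayscale image is given as a list of rows:

```python
def face_center(image, k):
    """Compute the symmetry-value S(x) for every admissible
    pixel-column of a reduced grayscale image (list of rows) and
    return the x with the lowest S(x), i.e. the estimated center of
    the face. k is the maximum symmetry distance from the column."""
    ysize, xsize = len(image), len(image[0])

    def S(x):
        # Sum of absolute differences between pixels mirrored about
        # column x, over all rows and offsets w = 1..k.
        return sum(abs(image[y][x - w] - image[y][x + w])
                   for y in range(ysize) for w in range(1, k + 1))

    return min(range(k, xsize - k), key=S)
```

A perfectly symmetric column yields S(x) = 0, so the minimum picks out the vertical symmetry axis of the face.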
`
Computation of the vertical location of the eyes. As suggested by Stringa [12], we use the observation that eye-regions correspond to regions of high spatial frequency. Again we are working with the reduced image. We create the gradient-map, G(x,y), by applying an edge detection algorithm on the reduced image. Any edge-detection method could be used. We choose to use a very simple and fast method called pixel-differentiation, that assigns G(x,y) = I(x,y) - I(x-1,y). We selected this method since it does not involve any convolution. G(x,y) will now reveal areas of high spatial frequency. By projecting G(x,y) onto its vertical axis, we get a histogram H(y):

    H(y) = sum_x G(x,y)

Since both eyes are likely to be positioned at the same row, H(y) will have a strong peak on that row. However, in order to reduce the risk of error, we consider the best three peaks in H(y) for further search rather than just the maximum. This process is illustrated in Figure 2.
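A sketch of the gradient projection, again assuming a list-of-rows image. Absolute differences are summed here so that opposite-signed edges do not cancel in the projection (an implementation choice the paper does not spell out):

```python
def vertical_eye_candidates(image, n_peaks=3):
    """Pixel-differentiation gradient G(x,y) = I(x,y) - I(x-1,y),
    projected onto the vertical axis to form the histogram H(y), as
    described above. Returns the rows of the best n_peaks values of
    H(y), mirroring the paper's use of the three best peaks."""
    ysize, xsize = len(image), len(image[0])
    H = [sum(abs(image[y][x] - image[y][x - 1]) for x in range(1, xsize))
         for y in range(ysize)]
    return sorted(range(ysize), key=lambda y: H[y], reverse=True)[:n_peaks]
```

Rows crossing the eyes accumulate many strong differences and therefore dominate the returned candidates.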
`
`316
`
`SAMSUNG EXHIBIT 1005
`
`Page 7 of 10
`
`SAMSUNG EXHIBIT 1005
`Page 7 of 10
`
`
`
`SAMSUNG EXHIBIT 1005
`Page 8 of 10
`
`
`
`10
`
`
`
`False Alerts
`
`Lost trac
`
`-—-
`-'--'--
`'—
`u e
`. Resu ts
`rum tie tests.
`
`
`
Horizontal histogram across the pupil
We use the characteristic curve generated by plotting the image-intensities along the line going through the pupil from left to right, as shown in Figure 5. The pupil is always the darkest point. Surrounding the pupil, we have the iris, which is also very dark. To the right and left of the iris is the white sclera. In Figure 5 we show two curves, one corresponding to an open eye, and one corresponding to a closed eye. Note that the curve corresponding to the closed eye is very flat.
`
We compute the matching function M(x,y) as

    M(x,y) = I(x,y) / min{ I(x-r, y), I(x+r, y) }

where (x,y) is the computed center of the pupil and r is the radius of the iris. I(x,y) is the image intensity at (x,y). When the eye is open, the valley in the intensity-curve corresponding to the pupil will be surrounded by two large peaks corresponding to the sclera. When the eye is closed, this curve is usually very flat in the center. However, in the latter case there is no pupil to center the curve on, which can lead to a very unpredictable shape. In order to minimize the risk of having one big peak nearby (due to noise), we always use the minimum peak at the distance r from the pupil. This will lead to a good match when the eye is open, and very likely to a bad match when the eye is closed.
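A sketch of this open/closed test on the horizontal intensity curve through the pupil; the 0.5 cutoff on M is an illustrative assumption, since the paper does not state a numeric decision threshold:

```python
def eye_open(intensity_row, x, r, threshold=0.5):
    """Evaluate the matching function M from the text on the
    horizontal intensity curve through the pupil: the pupil value
    divided by the darker of the two candidate sclera values at
    distance r. A small ratio (dark pupil between bright sclera)
    indicates an open eye; the cutoff is an assumed value."""
    m = intensity_row[x] / min(intensity_row[x - r], intensity_row[x + r])
    return m < threshold
```

For a closed eye the curve is nearly flat, so M stays close to 1 and the test reports a bad match.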
`
RESULTS AND FUTURE WORK

Figure 5. Histograms corresponding to an open and a closed eye, respectively.

We simulated three "test-drives" where we
`
measured the accuracy of the detection of opened/closed eyes. In each test-drive, we simulated 10 long eye-blinks, and recorded how many were computed by the system. In each test-drive, the driver had the head turned at a different angle. The results are shown in Table 1.
`
For this test, we did not allow for any rapid head movements, since we wanted to simulate the situation when the driver is tired. For small head-movements, the system rarely loses track of the eyes, as we can see from the results. We can also see that when the head is turned too much sideways, we had some false alarms. However, in the case where the head is tilted forward (which is the most likely posture when the driver is tired), the system operated perfectly.
`
When we perform the detection of driver fatigue, we operate on frames of size 640 by 320. This frame-size allows us to operate at approximately 5 frames per second. In order to track the eyes without detecting fatigue, it is enough to use frames of size 320 by 160, which allows a frame-rate of approximately 15 frames per second.
`
At this point, the system has problems localizing eyes when the person is wearing glasses, or has a large amount of facial hair. We believe that by using a small set of face templates, similar to [15], we will be able to avoid this problem, without losing anything in performance. Also, we are not using any color-information in the image. By using techniques described in [13], we can further enhance robustness.
`
Currently, we do not adjust zoom or direction of the camera during operation. Future work may be to automatically zoom in on the eyes, once they are localized. This would avoid the trade-off between having a wide field of view in order to locate the eyes, and a narrow field of view in order to detect fatigue.
`
We are only looking at the number of consecutive frames where the eyes are closed. At that point, it may be too late to issue the signal. By studying the eye-movement patterns, we are hoping to find a method to generate the alert signal at an earlier stage.
`
`318
`
`SAMSUNG EXHIBIT 1005
`
`Page 9 of 10
`
`
`
`
`
`
`SAMSUNG EXHIBIT 1005
`Page 9 of 10
`
`
`
`
`
CONCLUSIONS
We have developed a system that localizes and tracks the eyes of a driver in order to detect fatigue. The system uses a combination of template-based matching and feature-based matching in order to localize the eyes. During tracking, the system is able to decide if the eyes are open or closed. When the eyes have been closed too long, a warning signal is issued. Several experimental results are presented.
`
ACKNOWLEDGMENTS
This work has been supported by the ITS Institute at the University of Minnesota, the Minnesota Department of Transportation through Contracts #71289-22983-169 and #71789-22447-159, the National Science Foundation through Contracts #IRI-9410003 and #IRI-9502245, and the Center for Transportation Studies.
`
REFERENCES

[1] R. Brunelli and T. Poggio, "Face Recognition: Features versus Templates," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 10, pp. 1042-1052, 1993.
[2] G. Chow and X. Li, "Towards a System for Automatic Facial Feature Detection," Pattern Recognition, Vol. 26, No. 12, pp. 1739-1755, 1993.
[3] I. Cox, J. Ghosn and P.N. Yianilos, "Feature-Based Face Recognition Using Mixture-Distance," NEC Research Institute, Technical Report 95-09, 1995.
[4] I. Craw, H. Ellis and J.R. Lishman, "Automatic Extraction of Face-Features," Pattern Recognition Letters, 5, pp. 183-187, 1987.
[5] L.C. De Silva, K. Aizawa and M. Hatori, "Detection and Tracking of Facial Features by Using Edge Pixel Counting and Deformable Circular Template Matching," IEICE Transactions on Information and Systems, Vol. E78-D, No. 9, pp. 1195-1202, September 1995.
[6] C. Huang and C. Chen, "Human Facial Feature Extraction for Face Interpretation and Recognition," Pattern Recognition, Vol. 25, No. 12, pp. 1435-1444, 1992.
[7] M. Kass, A. Witkin and D. Terzopoulos, "Snakes: Active Contour Models," International Journal of Computer Vision, pp. 321-331, 1988.
[8] X. Li and N. Roeder, "Face Contour Extraction from Front-View Images," Pattern Recognition, Vol. 28, No. 8, pp. 1167-1179, 1995.
[9] K.P. White, T.E. Hutchinson and J.M. Carley, "Spatially Dynamic Calibration of an Eye-Tracking System," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 23, No. 4, pp. 1162-1168, 1993.
[10] N. Roeder and Xiaobo Li, "Accuracy Analysis for Facial Feature Detection," Pattern Recognition, Vol. 29, No. 1, pp. 143-157, 1996.
[11] Y. Segawa, H. Sakai, T. Endoh, K. Murakami, T. Toriu and H. Koshimizu, "Face Recognition Through Hough Transform for Irises Extraction and Projection Procedures for Parts Localization," Pacific Rim International Conference on Artificial Intelligence, pp. 625-636, 1996.
[12] L. Stringa, "Eyes Detection for Face Recognition," Applied Artificial Intelligence, No. 7, pp. 365-382, 1993.
[13] R. Stiefelhagen, J. Yang and A. Waibel, "A Model-Based Gaze-Tracking System," IEEE International Joint Symposia on Intelligence and Systems, pp. 304-310, 1996.
[14] E.R. Tello, "Between Man and Machine," BYTE, September, pp. 288-293, 1983.
[15] D. Tock and I. Craw, "Tracking and Measuring Drivers' Eyes," Real-Time Computer Vision, pp. 71-89, 1995.
[16] X. Xie, R. Sudhakar and H. Zhuang, "On Improving Eye Features Extraction Using Deformable Templates," Pattern Recognition, Vol. 27, No. 6, pp. 791-799, 1994.
[17] X. Xie, R. Sudhakar and H. Zhuang, "Real-Time Eye Feature Tracking from a Video Image Sequence Using Kalman Filter," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 25, No. 12, pp. 1568-1577, 1995.
[18] T. Yoo and I. Oh, "Extraction of Face Region and Features Based on Chromatic Properties of Human Faces," Pacific Rim International Conference on Artificial Intelligence, pp. 632-645, 1996.
[19] H. Wu, Q. Chen and M. Yachida, "Facial Feature Extraction and Face Verification," International Conference on Pattern Recognition, pp. 484-488, 1996.
`
`
`