SAMSUNG EXHIBIT 1006
Samsung v. Image Processing Techs.
APPLIED ARTIFICIAL INTELLIGENCE
AN INTERNATIONAL JOURNAL

EDITOR-IN-CHIEF:
Robert Trappl, Austrian Research Institute for Artificial Intelligence and University of Vienna

EDITORIAL ASSISTANT:
Gerda Helscher
ASSOCIATE EDITORS:
Howard Austin, Concord, MA, USA
Ronald Brachman, AT&T Bell Laboratories, Murray Hill, NJ, USA
Stefano Cerri, Mario Negri Institute, Milan, Italy
Larry R. Harris, Artificial Intelligence Corporation, Waltham, MA, USA
Makoto Nagao, Kyoto University, Japan
Germogen S. Pospelov, Academy of Sciences, Moscow, Russia
Wolfgang Wahlster, University of the Saarland, Saarbruecken, Germany
William A. Woods, Applied Expert Systems, Inc., Cambridge, MA, USA
EDITORIAL BOARD:
Luigia Carlucci Aiello, University of Rome, Italy; Leonard Bolc, University of Warsaw, Poland; Ernst Buchberger, University of Vienna, Austria; Jaime Carbonell, Carnegie-Mellon University, Pittsburgh, PA, USA; Marie-Odile Cordier, IRISA, University of Rennes, France; Helder Coelho, LNEC, Lisbon, Portugal; Hervé Gallaire, ECRC, Munich, Germany; Tatsuya Hayashi, Fujitsu Laboratories Ltd., Kawasaki, Japan; Werner Horn, University of Vienna, Austria; Margaret King, Geneva University, Switzerland; Dana S. Nau, University of Maryland, College Park, MD, USA; Setsuo Ohsuga, University of Tokyo, Japan; Tim O'Shea, Open University, Milton Keynes, UK; Ivan Plander, Slovak Academy of Sciences, Bratislava, Czechoslovakia; Johannes Retti, Siemens A.G. Oesterreich, Vienna, Austria; Erik Sandewall, Linkoping University, Sweden; Luc Steels, Free University of Brussels, Belgium; Oliviero Stock, IRST, Trento, Italy; Harald Trost, University of Vienna, Austria; Bernard Zeigler, University of Arizona, Tucson, AZ, USA.
AIMS AND SCOPE: Applied Artificial Intelligence addresses concerns in applied research and applications of artificial intelligence (AI). The journal acts as a medium for exchanging ideas and thoughts about impacts of AI research. Papers should highlight advances in uses of expert systems for solving tasks in management, industry, engineering, administration, and education; evaluations of existing AI systems and tools, emphasizing comparative studies and user experiences; and/or economic, social, and cultural impacts of AI. Information on key applications, highlighting methods, time schedules, labor, and other relevant material is welcome.
Abstracted and/or Indexed in: Engineering Information, Inc. and by INSPEC.
Editorial Office: Robert Trappl, Austrian Research Institute for Artificial Intelligence, Schottengasse 3, A-1010 Vienna, Austria; and University of Vienna.
Publishing, Advertising, and Production Offices: Taylor & Francis, 1101 Vermont Ave., Suite 200, Washington, DC 20005, phone (202)289-2174, fax (202)289-3665, Susan Connell, Production Editor; or Taylor & Francis Ltd., Rankine Rd., Basingstoke, Hampshire RG24 0PR, UK. Subscriptions Office: 1900 Frost Rd., Suite 101, Bristol, PA 19007, phone (215)785-5800, fax (215)785-5515; or Taylor & Francis Ltd., Rankine Rd., Basingstoke, Hampshire RG24 0PR, UK, phone +44-256-840366, fax +44-256-479433.
Applied Artificial Intelligence (ISSN 0883-9514) is published quarterly by Taylor & Francis Ltd., 4 John Street, London WC1N 2ET, UK. Annual 1993 institutional subscription UK £111, US $190. Personal subscription rate UK £56, US $91; available to home address only and must be paid for by personal check or credit card. Second-class postage paid at Jamaica, NY 11431. U.S. Postmaster: Send address changes to Applied Artificial Intelligence, Publications Expediting Inc., 200 Meacham Avenue, Elmont, NY 11003.
Dollar rates apply to subscribers in all countries except the UK and the Republic of Ireland, where the sterling price applies. All subscriptions are payable in advance and all rates include postage. Subscriptions are entered on an annual basis, i.e., January to December. Payment may be made by sterling check, dollar check, international money order, National Giro, or credit card (AMEX, VISA, Mastercard/Access).
Orders originating in the following territories should be sent directly to: Australia: R. Hill & Son, Ltd., Suite 2, Gardenvale Road, Gardenvale, Victoria 3185. India: Universal Subscription Agency Pvt. Ltd., 101-102 Community Centre, Malviya Nagar Extn., Post Bag No. 8, Saket, New Delhi. Japan: Kinokuniya Company, Ltd., Journal Department, P.O. Box 55, Chitose, Tokyo 156. New Zealand: R. Hill & Son, Ltd., Private Bag, Newmarket, Auckland 1. USA, Canada, and Mexico: Taylor & Francis, 1900 Frost Road, Suite 101, Bristol, PA 19007, USA. UK and all other territories: Taylor & Francis Ltd., Rankine Road, Basingstoke, Hampshire RG24 0PR, England.
Copyright © 1993 by Taylor & Francis. All rights reserved. Printed in the United States of America. Authorization to photocopy items for internal or personal use, or the internal or personal use of specific clients, is granted by Taylor & Francis for libraries and other users registered with the Copyright Clearance Center (CCC) Transactional Reporting Service, provided that the base fee of $10.00 per copy, plus .03 per page, is paid directly to CCC, 27 Congress St., Salem, MA 01970, USA.
The publisher assumes no responsibility for any statements of fact or opinion expressed in the published papers or in the advertisements. Applied Artificial Intelligence is owned by Taylor & Francis.

Printed on acid-free paper, effective with Volume 7, Number 1, 1991.
APPLIED ARTIFICIAL INTELLIGENCE
AN INTERNATIONAL JOURNAL
1993

Special Issue
Artificial Intelligence: Future, Impacts, Challenges
Part 3

CONTENTS

iii  SPECIAL ISSUE ON ARTIFICIAL INTELLIGENCE: FUTURE, IMPACTS, CHALLENGES (PART 3) □ Robert Trappl
SOME SEMIOTIC REFLECTIONS ON THE FUTURE OF ARTIFICIAL INTELLIGENCE □ Julian Hilton
15  MODELING OF DEEDS IN ARTIFICIAL INTELLIGENCE SYSTEMS □ Dimitri A. Pospelov
29  THE HIDDEN TREASURE □ Ernst Buchberger
39  A DEEPER UNITY: SOME FEYERABENDIAN THEMES IN NEUROCOMPUTATIONAL FORM □ Paul M. Churchland
59  HOW CONNECTIONISM CAN CHANGE AI AND THE WAY WE THINK ABOUT OURSELVES □ Georg Dorffner
87  THE FUTURE MERGING OF SCIENCE, ART, AND PSYCHOLOGY □ Marvin Minsky
109  APPLIED ARTIFICIAL INTELLIGENCE CALENDAR
EYES DETECTION FOR FACE RECOGNITION

LUIGI STRINGA
Istituto per la Ricerca Scientifica e Tecnologica, 38100 Trento, Italy

The author's e-mail address is stringa@irst.it

Applied Artificial Intelligence, 7:365-382, 1993
Copyright © 1993 Taylor & Francis
0883-9514/93 $10.00 + .00
A correlation-based approach to automatic face recognition requires adequate normalization techniques. If the positioning of the face in the image is accurate, the need for shifting to obtain the best matching between the unknown subject and a template is drastically reduced, with considerable advantages in computing costs. In this paper, a novel technique is presented based on a very efficient eyes localization algorithm. The technique has been implemented as part of the "electronic librarian" of MAIA, the experimental platform of the integrated AI project under development at IRST. Preliminary experimental results on a set of 220 facial images of 55 people disclose excellent recognition rates and processing speed.
INTRODUCTION

There is a growing interest in face-processing problems (Young and Ellis, 1989). The recognition of human faces is in fact a specific instance of 3D object recognition (possibly the most important visual task) and provides a most interesting example of how a 3D structure can be learned from a small set of 2D perspective views. Moreover, among several practical reasons for developing automatic systems capable of recognizing human faces, faces provide a natural and reliable means for identifying a person.
The first examples of computer-aided techniques for face recognition date back to the early 1970s and were based on the computation of a set of geometrical features from the picture of a face (Goldstein et al., 1971, 1972; Harmon, 1973). More recently the topic has undergone a revival (Samal and Iyengar, 1992), and different applications have been developed based on various techniques, such as template matching (Baron, 1981; Yuille, 1991), isodensity maps (Nakamura et al., 1991; Sakaguchi et al., 1989), or feature extraction by neural and Hopfield-type networks (Abdi, 1988; Cottrell and Fleming, 1990; O'Toole and Abdi, 1989). At present it is still rather difficult to assess the state of the art. However, a first significant evaluation is reported in Brunelli and Poggio (1991), where a comparison of different techniques is performed on a common database; the best results were obtained with a template-matching technique.
Following a correlation-based approach, excellent results have also been obtained with a procedure recently developed for the "electronic librarian" of MAIA, the experimental platform of the integrated AI project under development at IRST (Poggio and Stringa, 1992; Stringa, 1991a). The procedure is based on the analysis of filtered edges and grey-level distributions to allow a comparison of the directional derivatives of the entire image (Stringa, 1991d). On a set of 220 frontal facial images of 55 people, a recognition rate of 100% was obtained, at a processing speed of about 1.25 sec per face on an HP 350 workstation. A second set of experiments, using binary derivatives, disclosed excellent improvements in computing time: with a two-layer S-Net [see Stringa (1990)] the processing speed was reduced to less than 0.05 sec per face (Stringa, 1991e).

Such performance provides evidence for the validity of the approach. Moreover, the procedure proved very efficient with respect to the task of rejecting "unknown" faces, i.e., faces of subjects that are not included in the database. Apart from high recognition rates, low processing costs, and good flexibility under variable conditions, this is another important feature for a real (i.e., industrially applicable) face recognition system.
It must also be stressed, however, that such performance depends on the use of very effective normalization, registering, and rectification techniques. This is in fact a general requirement for any correlation-based approach to face recognition (and more generally to 3D object recognition), particularly when the image to be recognized is freshly captured with a video camera rather than scanned from a standardized photograph. In general, it is rather natural to expect the user to look straight into the camera, for even in human interaction people tend to turn their heads so as to look each other in the eyes. However, a certain flexibility must be tolerated concerning such variable factors as the distance and position of the user's face from the camera. Hence, some adjustment and normalization is necessary before the system can proceed to the recognition step by comparing the input image with the available set of prototypes.
In our procedure, the normalization of the image to be recognized is obtained by first locating the position of the eyes and then rotating the image so as to align them horizontally. As a result, the need for shifting to obtain the best matching between the unknown face and a template is drastically reduced, with considerable advantages in computing time. In particular, the eyes localization algorithm developed for this purpose (Stringa, 1991c) proves very effective, allowing very precise positioning of both pupils for each facial image included in the database.
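For illustration, this normalization step can be sketched as follows. This is not the IRST implementation; the function names, the fixed target inter-pupil distance, and the use of SciPy's image routines are assumptions, and the sign convention of the rotation may need to be flipped depending on the coordinate system of the input image.

```python
# Minimal sketch of eye-based normalization (illustrative only).
import numpy as np
from scipy import ndimage


def normalize_by_eyes(image, left_pupil, right_pupil, target_distance=64):
    """Rotate the image so the pupils lie on a horizontal line, then rescale
    it so that their distance equals `target_distance` pixels.

    `left_pupil` and `right_pupil` are (x, y) pixel coordinates.
    """
    (xl, yl), (xr, yr) = left_pupil, right_pupil

    # Angle of the eye-connecting line with respect to the horizontal axis
    # (sign convention depends on whether y grows downwards).
    angle = np.degrees(np.arctan2(yr - yl, xr - xl))

    # Rotate about the image centre to align the eyes horizontally.
    rotated = ndimage.rotate(image, angle, reshape=False, order=1)

    # Scale so that the inter-pupil distance becomes the standard value.
    distance = np.hypot(xr - xl, yr - yl)
    normalized = ndimage.zoom(rotated, target_distance / distance, order=1)
    return normalized
```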
The purpose of this paper is to illustrate this algorithm in detail. To better emphasize its crucial role in correlation-based face recognition tasks, a brief outline of the system developed at IRST is first given, along with the experimental scenario that led to its formulation. The eyes localization algorithm is then fully described. The final sections report the current experimental results and offer some general remarks on the algorithm's performance.
OUTLINE OF THE SYSTEM

General Background: The MAIA Electronic Librarian
As already mentioned, the reported work is part of a more general AI project (labeled MAIA, an acronym for "Modello Avanzato di Intelligenza Artificiale") presently under development at IRST. Schematically, the goal of the project is to develop an integrated experimental platform whose main "tentacles" include mobile robots capable of navigating in the corridors of IRST, an automatic "concierge" answering visitors' questions about the Institute, and an electronic "librarian" capable of managing book loans and returns and/or locating volumes requested by the user (or indicating in which office they may be found).
In this context, a system for automatic face recognition is required specifically with respect to the librarian's first task, i.e., managing loans and returns (a similar system will later be implemented in the automatic concierge). The electronic librarian must in fact be capable of identifying any user that might wish to borrow or to return a book, so as to ensure that only registered personnel can have access to the IRST library. For this purpose, the user is simply expected to stand in front of the system and look into a video camera. (In fact, our project is to use both face and speaker recognition techniques, so as to further improve the system's reliability. In the following, however, the focus will be exclusively on the vision component.)
The experimental scenario is therefore very unconstrained. No particular effort is required to ensure perfectly frontal images, and the distance of the subject from the camera, as well as the location of his/her face in the image, are only approximately fixed. This means that the system must be highly tolerant of variations in head size and orientation. Moreover, background and illumination are not assumed to be constant: artificial light is used to illuminate the user's face from the front, but the experimental environment is also exposed to sunlight through numerous windows.

FIGURE 1. Functional diagram of the MAIA system and some of its tasks (bibliographic consultancy, catalogue updating, loans and returns, book recognition, and person identification by speaker and face recognition). Shaded blocks (connected by black arrows) indicate the contextual background of the application described in the paper.
Face Detection and Eyes Localization

As is clear from the above, a most important feature of the system is that it must be capable of recognizing dynamic facial images, i.e., "live" images acquired by the librarian through a video camera. This is a general requirement of the MAIA project and a compelling prerequisite for most industrial applications.
To detect the user's face from the background, the system makes use of a motion detection algorithm originally introduced in Stringa (1991b) and refined in Messelodi (1991). This is based on the general fact that a basic stimulus in the analysis of a dynamic scene lies in detecting "differences" between successive images; the algorithm proceeds by comparing pairs of sequential images captured by the camera and segments from the background those objects that determine a significant variation in the images' matrices. Despite its simplicity, it performs well, allowing detection of faces in almost real time (about 3 images/sec) and showing a remarkable independence from background and illumination conditions.
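A minimal sketch of this frame-differencing idea is given below. It only illustrates the principle, not the algorithm of Stringa (1991b) or Messelodi (1991); the threshold value and the helper names are assumptions.

```python
# Illustrative frame-differencing sketch for detecting a moving face.
import numpy as np


def moving_object_mask(previous_frame, current_frame, threshold=15):
    """Binary mask of pixels that changed significantly between two
    successive grey-level frames (arrays of equal shape)."""
    diff = np.abs(current_frame.astype(np.int16) - previous_frame.astype(np.int16))
    return diff > threshold


def bounding_box(mask):
    """Bounding box (top, bottom, left, right) of the changed region,
    or None if nothing moved."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.min(), ys.max(), xs.min(), xs.max()
```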
FIGURE 2. Face detection: the algorithm detects the "differences" between successive input images (left) to extract the edges of the face (right).

The image of the face, detected from the background, is then adjusted and normalized before the system can proceed to the recognition algorithm. As we mentioned, in fact, this follows a template-matching strategy and is formalized as a distance-based comparison between the directional derivatives of the input image (the face to be recognized) and those memorized in a database of templates or prototypes covering each subject known to the system. It is therefore important, as in any correlation-based approach, that the face be accurately positioned in the image. Otherwise the need for shifting required to obtain the best matching between the subject and a template could increase considerably, with obvious disadvantages in terms of computing costs.

The solution adopted in the present application is based on a technique that localizes the position of the eyes and then aligns them horizontally. Eyes are in fact a most prominent facial feature and can be detected with fair accuracy. Their localization is then used to register, rectify, and normalize the image with respect to the distance of the pupils, yielding a "standard" matrix that can easily be compared with the templates.
Various approaches can be used for the purpose of locating the position of the eyes. For instance, in Baron (1981) a procedure is used whereby a certain number of eye templates are correlated against suitable subimages of the input image; a correlation value greater than a fixed threshold is then taken to indicate that an eye has been successfully located. Our approach is different. It does not proceed by eye template matching. Rather, an algorithm is used based on the exploitation of (a priori) anthropometric information combined with the analysis of suitable grey-level distributions, allowing direct localization of both eyes.
On the one hand, there exists a sort of "grammar" of facial structures that provides some very basic a priori information used in the recognition of faces. Every human face presents a reasonable symmetry, and knowledge of the relative position of the main facial features (nose between eyes and over mouth, etc.) proves very useful to discriminate among various hypotheses. These guidelines can be derived from anthropometric data corresponding to an average face and refined through the analysis of real faces. Some typical examples, based on studies on face animation and reported in Brunelli (1990), are listed below (a small illustrative encoding follows the list):
- the eyes are located halfway between the top of the head and the bottom of the chin;
- the eyes are about one eye width apart;
- the bottom of the nose is halfway between the eyebrows and the chin;
- the mouth is typically located one third of the way from the bottom of the nose to the bottom of the chin.
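As an illustration, these guidelines can be stored as simple proportions of an "average" face. The sketch below merely restates the list; the names and the way such proportions would be combined with measured data are assumptions.

```python
# Anthropometric proportions of an average face, taken directly from the
# guidelines listed above (illustrative encoding only).
FACE_PROPORTIONS = {
    # Eyes sit halfway between the top of the head and the bottom of the chin.
    "eye_line_from_top": 0.5,
    # The two eyes are about one eye width apart.
    "inter_eye_gap_in_eye_widths": 1.0,
    # The bottom of the nose is halfway between the eyebrows and the chin.
    "nose_bottom_between_brows_and_chin": 0.5,
    # The mouth is about one third of the way from the nose bottom to the chin.
    "mouth_from_nose_to_chin": 1.0 / 3.0,
}


def expected_eye_row(head_top, chin_bottom):
    """Rough vertical position of the eyes from the head-top and chin rows."""
    return head_top + FACE_PROPORTIONS["eye_line_from_top"] * (chin_bottom - head_top)
```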
On the other hand, the algorithm exploits the discriminating power of the distribution of the image's edges (specifically leading edges, i.e., transitions from dark to bright) upon adequate filtering. The primary motivation is that edge densities convey most of the information required to identify those facial zones that are characterized by abrupt changes in image brightness [see, e.g., Kanade (1973)]. In particular, eyes are typically the most structured area of a face, and their location in the image is therefore characterized by a high number of edges. These determine a prominent peak in the edges' vertical projection that can be indicative of the rough localization of the eyes.
The approach is schematically illustrated in Figure 3, where Rsy and Rsx are the vertical and horizontal resolution (number of lines and columns in the image), respectively.

The localization of the eyes proceeds from the preliminary approximate localization of the eyes-connecting line, centered on the maximum (Z) of the filtered vertical histogram, and of the face's main traits, specifically the face's side limits (Xs and Xd) and the nose axis (Xn). These are not searched for in the entire image but only in those areas that correspond to appropriate "expectation zones". On this basis, the search areas for the two eyes can be estimated with reasonable accuracy. Their exact localization is then obtained by computing the horizontal and vertical coordinates of a pixel belonging to the corresponding pupils.
EYES LOCALIZATION: DETAILED DESCRIPTION

Preliminaries

A detailed description of the eyes localization algorithm is given in the following paragraphs. For convenience, we first introduce some general notions that will be used throughout.
FIGURE 3. The eyes localization algorithm exploits the discriminating power of filtered edges and grey-level distributions.

FIGURE 4. The search areas for the eyes (indicated by two rectangular regions) are based on the approximate location of the eye-connecting line, of the face sides, and of the nose axis.
Let I(x,y) be the input image, digitized at 256 grey levels into a matrix of size Rsx (wide) by Rsy (high) pixels. The binary matrix E(x,y) describing the horizontal leading edges of I(x,y) is computed using a thresholded directional derivative. This is defined as

E(x,y) = \begin{cases} 1 & \text{if } |I(x,y) - I(x-1,y)| > I_m / C \\ 0 & \text{otherwise} \end{cases}    (1)

where I_m is the average value of I(x,y) and C is a constant parameter. In general, the exact value of C will depend on the resolution of I(x,y).
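A direct sketch of Eq. (1) follows. The threshold I_m/C reflects the reconstruction given above, and the default value of C (which the paper ties to the image resolution) is an assumption.

```python
# Sketch of Eq. (1): horizontal edge map from a thresholded directional derivative.
import numpy as np


def leading_edges(image, c=4.0):
    """Binary matrix E(x, y) of horizontal edges of a grey-level image.

    Note: Eq. (1) as reconstructed uses the absolute difference; restricting
    to leading (dark-to-bright) edges, as the text describes, would use the
    signed difference I(x, y) - I(x - 1, y) instead.
    """
    img = image.astype(np.float64)
    i_m = img.mean()                       # average grey level of the image
    diff = np.abs(np.diff(img, axis=1))    # |I(x, y) - I(x - 1, y)| along x
    edges = np.zeros_like(img, dtype=np.uint8)
    edges[:, 1:] = (diff > i_m / c).astype(np.uint8)
    return edges
```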
FIGURE 5. Once the eyes have been exactly localized, the image is parametrically registered, rectified, and normalized with respect to the distance of the pupils to produce a "standard" matrix.
The projection of the horizontal leading edges along the vertical axis defines the vertical histogram H(y). This is computed from E(x,y) by taking the value of H(y) at any point y on the vertical axis as the sum of all the leading edges of the corresponding horizontal line:

H(y) = \sum_x E(x,y)    (2)

Finally, given a function f(x), the following filtered functions are defined:

f_j(x) = \frac{1}{2j+1} \sum_{i=-j}^{j} f(x-i)    (3)

f_{nm}(x) = f_n(x) - f_m(x)    (4)

Here j, m, and n are constant parameters whose specific values depend on what f is meant to extract: f_j(x) is the result of filtering f(x) over 2j+1 samples, and f_{nm}(x) is the result of purging f(x) with a pass-band filter based on f_n(x) and f_m(x).
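Eqs. (2)-(4) translate directly into a few lines of code. The sketch below, with a simple moving-average convolution standing in for the filtering over 2j+1 samples, is illustrative only.

```python
# Sketches of Eqs. (2)-(4): vertical edge projection and the filters used later.
import numpy as np


def vertical_histogram(edges):
    """H(y): number of leading-edge pixels in each horizontal line (Eq. 2)."""
    return edges.sum(axis=1)


def smooth(f, j):
    """f_j(x): moving average of f over 2*j + 1 samples (Eq. 3)."""
    kernel = np.ones(2 * j + 1) / (2 * j + 1)
    return np.convolve(f, kernel, mode="same")


def band_pass(f, n, m):
    """f_nm(x) = f_n(x) - f_m(x) (Eq. 4): the difference of two smoothed
    versions removes high-frequency noise and slowly varying trends."""
    return smooth(f, n) - smooth(f, m)
```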
Rough Localization of the Line Connecting the Eyes' Pupils

Using the above definitions, the first step of the algorithm is the rough localization of the line connecting the eyes' pupils, which allows the construction of an approximate model of the face using anthropometric standards.

The technique used for this purpose proceeds from the idea that the analysis of the vertical projection of the horizontal edges identifies the location of significant, highly structured features. The higher the peak, the more structured the feature. Eyes are the most structured part of a person's face. Hence, their location determines a most prominent peak in the grey-level projection. Moreover, they are expected to be located somewhat below the top of the head, as the anthropometric guidelines reported in Brunelli (1990) suggest. The head top, Ya, can be computed directly from the edge of the face given by the face detection algorithm, while the upper bound of the search area for the eyes is an anthropometric parameter A relative to the size of the face.
As a first rough approximation, the eye-connecting line can therefore be identified with the y-coordinate of the highest projection peak relative to this expectation zone. More exactly, this is done by searching for the Zth horizontal line of the matrix, where Z is the ordinate of the maximum of the filtered vertical histogram in the search area:

H_{nm}(Z) = \max_{Y_a + A \,\le\, y \,\le\, R_{sy} - 1} H_{nm}(y)    (5)
Note that Z is defined relative to the filtered histogram H_{nm}(y). The analysis of H(y) could already be very useful in determining the position of the eye-connecting line. However, some care is required to minimize misleading interferences such as high-frequency noise or extensive areas with a high number of constant or quasi-constant brightness transitions. Accordingly, a band-pass filter is applied to purge the histogram of such disturbances, and the filtered histogram H_{nm}(y) is used instead of H(y) (for suitable values of m and n).

In case there exists some other value of H_{nm}(y) that is significantly high, i.e., if there exists some point Z' such that

H_{nm}(Z') > K \cdot H_{nm}(Z) \qquad (K < 1 \text{ constant})    (6)

the eye-connecting line is identified with the value Z or Z' that is most plausible, relative to its distance from the head top Ya, on the basis of the expected position of the eyes in a "standard face."

FIGURE 6. As a first approximation, the eye-connecting line is defined as the ordinate (Z) of the maximum of the filtered vertical histogram.
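The rough eye-line search of Eqs. (5)-(6) can be sketched as follows. The filter orders n and m, the constant K, the bound A, and the handling of the competing peak Z' (returned to the caller rather than resolved by the plausibility test) are assumptions.

```python
# Sketch of the rough eye-line localization (Eqs. 5-6).
import numpy as np


def _smooth(f, j):
    kernel = np.ones(2 * j + 1) / (2 * j + 1)
    return np.convolve(f, kernel, mode="same")


def rough_eye_line(hist, head_top, a, n=2, m=10, k=0.8):
    """Return the ordinate Z of the eye-line candidate and, if present,
    a competing peak Z' to be resolved by the anthropometric test."""
    h = hist.astype(np.float64)
    h_nm = _smooth(h, n) - _smooth(h, m)        # band-pass filter, as in Eq. (4)

    search = h_nm.copy()
    search[: head_top + a] = -np.inf            # restrict to the expectation zone
    z = int(np.argmax(search))                  # Eq. (5)

    search[z] = -np.inf
    z_prime = int(np.argmax(search))
    if np.isfinite(search[z_prime]) and search[z_prime] > k * h_nm[z]:   # Eq. (6)
        return z, z_prime
    return z, None
```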
Rough Localization of the Face's Side Limits

This and the next step are aimed at characterizing the relative "expectation zones" of the two eyes: the search area for the left eye will be restricted to a region between the left limit of the face and the nose, while the search area for the right eye will be restricted to a region between the nose and the right limit.

It is clear that for this purpose it is not necessary to perform the search of the side limits on the entire image. Since the eyes' expectation zones will be centered on the approximate eye-connecting line (calculated with the method indicated above), what is needed is the position of the side limits of the face relative to a region centered on this line. In some cases these may not coincide with the "true" limits: the outermost extremities of a face could easily be located near the head top or, perhaps, far below, at the mouth level (as in people with pronounced jaws). However, for the present purposes such facial features need not be considered, and the algorithm's performance can be improved by restricting the search area to the indicated region.
Let this region be defined by the interval Z - Oo : Z + Uo, where Oo and Uo are fixed parameters (defining the search area Over and Under the eyes). Relative to this region, the rough localization of the side limits is determined with reference to the face detection algorithm described in Face Detection and Eyes Localization. The detection algorithm extracts the edges of the face (see Fig. 2), and the abscissae of the leftmost and rightmost edge points in the interval Z - Oo : Z + Uo, which we denote by Xs and Xd, can be taken to define the abscissae of the left and right sides of the face, respectively.

It is worth observing that this approach does not require any constraint on the experimental set-up. An alternative method, based on the assumption that the experiments be performed on a light background, has also been investigated, though it has not been used for the MAIA librarian. This alternative approach moves from the remark that, on a light background, the side limits of the face are typically localized in those areas of the image that are characterized by abrupt changes in brightness, corresponding to the transition from background to object. To locate them approximately, the following operations can therefore be performed. First, the image's horizontal average density A(x) is computed relative to the search region. This is obtained by taking the value of A(x) at any point x on the horizontal axis as the normalized sum of all the values of the corresponding vertical line in the interval Z - Oo : Z + Uo:

A(x) = \frac{1}{U_o + O_o + 1} \sum_{y = Z - O_o}^{Z + U_o} I(x,y)    (7)
Second, A(x) is filtered on 2p + 1 samples (p a fixed parameter) to produce the filtered density A_p(x) as defined in Eq. (3). A glimpse at the example in Fig. 7 will show the typical behavior of this function; on a light background, the face limits are expected to coincide with the leftmost and rightmost significant brightness changes in the image, and these correspond to the lowest values of the filtered average density A_p(x). On this basis, the horizontal coordinates of the side limits can easily be determined. Assuming the face to be roughly centered in the image, we can locate the search area for these points within a certain interval centered on Rsx/2. Let A_l and A_r be the left and right extrema of this interval, respectively. The left side of the face can then be defined as the abscissa Xs of the minimum of A_p(x) in the left portion of the interval:
A_p(X_s) = \min_{A_l \,\le\, x \,\le\, R_{sx}/2} A_p(x)    (8)

while the right side can be defined as the abscissa Xd of the minimum of A_p(x) in the right portion:

A_p(X_d) = \min_{R_{sx}/2 \,\le\, x \,\le\, A_r} A_p(x)    (9)

FIGURE 7. Approximate localization of the left (Xs) and right (Xd) face sides.
As we mentioned, this alternative procedure has not been implemented in the MAIA librarian due to the limiting assumption of the light background. However, our experiments have shown that, when this assumption is satisfied, the procedure yields essentially the same results as the one based directly on the face detection algorithm.
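For completeness, the light-background variant of Eqs. (7)-(9) can be sketched as below; the smoothing parameter p and the width of the interval [A_l, A_r] are assumptions.

```python
# Sketch of the light-background side-limit search (Eqs. 7-9).
import numpy as np


def face_side_limits(image, z, o_o, u_o, p=5, margin=None):
    """Return (Xs, Xd), the abscissae of the left and right face limits."""
    img = image.astype(np.float64)
    rsy, rsx = img.shape

    # A(x): mean grey level of each column in the band Z - Oo .. Z + Uo (Eq. 7).
    band = img[max(0, z - o_o): min(rsy, z + u_o + 1), :]
    a = band.mean(axis=0)

    # A_p(x): smoothing over 2p + 1 samples, as in Eq. (3).
    kernel = np.ones(2 * p + 1) / (2 * p + 1)
    a_p = np.convolve(a, kernel, mode="same")

    # Search interval [A_l, A_r] centred on Rsx / 2 (here a fixed fraction).
    if margin is None:
        margin = rsx // 4
    a_l, a_r = rsx // 2 - margin, rsx // 2 + margin

    x_s = a_l + int(np.argmin(a_p[a_l: rsx // 2]))          # Eq. (8)
    x_d = rsx // 2 + int(np.argmin(a_p[rsx // 2: a_r]))     # Eq. (9)
    return x_s, x_d
```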
Rough Localization of the Nose's Axis

This step is similar to the previous one. With reference to our figures, note that whereas the face limits are usually darker in the image, the nose is normally lighter than the left and right regions of the facial image and determines a prominent peak in A_p(x). Accordingly, our approach is to base the search for the nose's axis on the maximum value of A_p(x).

Considering that the nose is expected to be located somewhere halfway between the face sides, the search area for its axis can safely be restricted to a central region comprised between the two vertical lines Xs and Xd calculated as above. Moreover, since the input image is assumed to provide a frontal view of the face (albeit not a perfectly frontal one), it is not necessary to consider the entire interval Xs : Xd. The search area can be further restricted to a region localized at a certain distance Dn from Xs and Xd. Based on standard anthropometric guidelines, and considering that the eyes are usually one eye width apart (Brunelli, 1990), this distance can roughly be assessed at one fourth of the above-mentioned interval:

D_n = \frac{X_d - X_s}{4}    (10)

Using Eq. (10), the nose vertical axis Xn is then defined as

A_p(X_n) = \max_{X_s + D_n \,\le\, x \,\le\, X_d - D_n} A_p(x)    (11)

i.e., as the abscissa of the maximum of the filtered density A_p(x) in the interval Xs + Dn : Xd - Dn.

FIGURE 8. Approximate location of the nose's axis (Xn).
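Eqs. (10)-(11) reduce to a few lines of code; the sketch below assumes the filtered density A_p(x) has already been computed as in the previous step.

```python
# Sketch of the nose-axis search (Eqs. 10-11).
import numpy as np


def nose_axis(a_p, x_s, x_d):
    """Abscissa Xn of the nose axis from the filtered density profile A_p(x)."""
    d_n = (x_d - x_s) // 4                        # Eq. (10): one fourth of the interval
    lo, hi = x_s + d_n, x_d - d_n
    return lo + int(np.argmax(a_p[lo: hi + 1]))   # Eq. (11)
```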
Detection of the Pupil's Coordinates

This is the final step. Using the approximate location of the eye-connecting line, of the face sides, and of the nose axis, the expectation zones of the two eyes can be estimated with reasonable accuracy. Their exact localization is then obtained by computing the horizontal and vertical coordinates of a pixel belonging to the corresponding pupils.
Left Pupil

As shown in Fig. 9, the left eye is expected to be located within a region centered on the approximate eye-connecting line and comprised between the left limit of the face and the nose axis. More precisely, the search area is restricted to the rectangular region defined by the intervals Z - L1 : Z + L2 (high) and Xs - L3 : Xn - L4 (wide), where each Li is a suitable parameter.

Relative to this area, the search for the pupil is based on the analysis of the horizontal grey-level distribution. For each line y, the corresponding horizontal density G^y(x) is calculated and a band-pass filter is applied to eliminate misleading interferences and high-frequency noise. This is obtained as in Eq. (4):

G^y_{nm}(x) = G^y_n(x) - G^y_m(x)    (12)
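Only the filtering step of Eq. (12) appears in the excerpt above, so the sketch below stops there as well. The interpretation of G^y(x) as the grey-level profile of line y restricted to the search area, and the filter orders, are assumptions.

```python
# Sketch of the per-line density filtering used in the pupil search (Eq. 12).
import numpy as np


def _smooth(f, j):
    kernel = np.ones(2 * j + 1) / (2 * j + 1)
    return np.convolve(f, kernel, mode="same")


def filtered_line_densities(image, y_range, x_range, n=2, m=8):
    """Band-pass-filtered horizontal densities G^y_nm(x) for each line y of the
    rectangular search area defined by y_range and x_range (Python ranges)."""
    img = image.astype(np.float64)
    profiles = {}
    for y in y_range:
        g = img[y, x_range.start: x_range.stop]        # G^y(x) on the search area
        profiles[y] = _smooth(g, n) - _smooth(g, m)    # Eq. (12), as in Eq. (4)
    return profiles
```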
