`
`Date: October 8, 2021
`
`To whom it may concern:
`
`This is to certify that the attached translation is an accurate representation of the documents
`received by this office. The translation was completed from:
`
`•
`
`Japanese
`
`To:
`
`•
`
`English
`
`The documents are designated as:
`JPH07282227A (Nobuyoshi)
`•
`
`Taylor Liff, Project Manager in this company, attests to the following:
`
`“To the best of my knowledge, the aforementioned documents are a true, full and accurate
`translation of the specified documents.”
`
`Signature of Taylor Liff
`
`CERT-07, 2020-JUN-26, V7
`
`
`
`
`(19) Japan Patent Office (JP)
`
(12) Japanese Unexamined Patent Application Publication (A)
`
`
(11) Japanese Unexamined Patent Application Publication Number H7-282227
`
`
`
`(43) Publication date October 27, 1995
`
(51) Int. Cl.6        Identification codes   JPO file numbers   FI                  Technical indications
     G06T 1/00
          7/00
     H04N 7/18        D                      7459-5L
                                                                G06F 15/62 380
                                                                     15/70 320
`
Request for examination: Not yet requested; Number of claims: 6 (Total of 11 pages)
`
(21) Application number     H6-70851
(22) Date of application    April 8, 1994
(71) Applicant              003078
                            TOSHIBA CORP.
                            72 HORIKAWA-MACHI, SAIWAI-KU, KAWASAKI
                            CITY, KANAGAWA PREFECTURE
(72) Inventor               ENOMOTO, NOBUYOSHI
                            ℅ Toshiba Corp., Yanagi-machi Factory
                            70 Yanagi-machi, Saiwai-ku, Kawasaki City,
                            Kanagawa Prefecture
(74) Agent                  Patent attorney SUZUE, TAKEHIKO
`
`
`(54) [TITLE OF THE INVENTION]
`Human Face Region Detecting Device
`(57) [ABSTRACT]
`[OBJECT]
`To provide a human face region detecting device able to
`detect, accurately and quickly, the face region of a person.
[STRUCTURE]
In a face region detecting portion 13, for a gradation
image of a person captured by an image accumulating
portion 11, a y-direction projection is measured, and the
projections are compared sequentially at intervals of given
numbers of pixels to detect maximum points while,
simultaneously, the projection differentiation values at those
maximum points are calculated. A plurality of face region
dividing position candidates is detected from the state of
distribution of the maximum points, to divide the image of
the person into a plurality of regions, where a statistic for
the projection differentiation values for the maximum points
is compared with the projection differentiation values for
the maximum points within each of the regions to detect a
head top position and a face bottom position. Additionally,
x-direction projections of the gradation image are measured
within a projection measurement range on the x axis, and
the first differentiation values of those projections are
compared to a statistic value thereof to detect the left and
right edges of the face, and this information is outputted as
a face region detection controlling signal to an operation
controlling portion.
`
`
`
`
`1: ITV Camera
`10: A/D Converter
`[RIGHT OF 10] Input Image
`11: Image Accumulating Portion
`[LOWER LEFT OF 11] Accumulation Controlling Signal
`[UNDER 11] Accumulated Image
`[RIGHT OF 13] Accumulated Image
`12: Camera Controlling Portion
`13: Face Region Detecting Portion
`4: Displaying Device
`[UNDER 12] Input Region Signal
`[LOWER LEFT OF 13] Face Region Detection Controlling Signal
`[LOWER RIGHT OF 13] Face Region Detection Result Signal
`[RIGHT OF 14] Detection Result Display Controlling Signal
`14: Operation Controlling Portion
`6: Image Recording Portion
`[LOWER LEFT OF 14] Warning Controlling Signal
`[LOWER RIGHT OF 14] Time Signal
`[LOWER LEFT OF 6] Image Recording Controlling Signal
`15: Identification Result Information Recording Portion
`5: Warning Device
`
`
`
`
`[PATENT CLAIMS]
`[CLAIM 1]
`A human face region detecting device for detecting a
`face region from a gradation image that includes a face
`region of a person, comprising:
`imaging inputting means for inputting a gradation image
`that includes a face region of a person;
`first projection measuring means for measuring y-
`direction projections of the gradation image inputted by the
`image inputting means;
`dividing means for dividing, into a plurality of regions,
`the gradation image inputted by the image inputting means,
`based on the projections measured by the first projection
`measuring means;
`first detecting means for detecting a y-direction face
`region dividing position through evaluating a distinctive
`feature within each region divided by the dividing means;
`second projection measuring means for measuring x-
`direction projections of the image inputted by the image
`inputting means; and
`second detecting means for detecting an x-direction face
`region dividing position based on the projections measured
`by the second projection measuring means.
`[CLAIM 2]
`A human face region detecting device for detecting a
`face region from a gradation image that includes a face
`region of a person, comprising:
`imaging inputting means for inputting a gradation image
`that includes a face region of a person;
`first projection measuring means for measuring y-
`direction projections of the gradation image inputted by the
`image inputting means;
`maximum point detecting means for detecting maximum
`points of the projections measured by the first projection
`measuring means;
`dividing means for dividing, into a plurality of regions,
`the gradation image inputted by the image inputting means,
`based on the maximum points detected by the maximum
`point detecting means;
`first detecting means for detecting a head top position by
`comparing a statistic of differentiation value at maximum
`points detected by the maximum point detecting means and
`a maximum point of the boundary vicinity at the top of a
`region divided by the dividing means, among the maximum
points detected by the maximum point detecting means;
`second projection measuring means for measuring x-
`direction projections of the gradation image inputted by the
image inputting means; and
`second detecting means for detecting an x-direction face
`region dividing position based on a differentiation value of
`the projections measured by
`the second projection
`measuring means.
`[CLAIM 3]
`A human face region detecting device for detecting a
`face region from a gradation image that includes a face
`region of a person, comprising:
`imaging inputting means for inputting a gradation image
`that includes a face region of a person;
`first projection measuring means for measuring y-
`direction projections of the gradation image inputted by the
image inputting means;
`maximum point detecting means for detecting maximum
`points from the projections measured by the first projection
`measuring means;
`dividing means for dividing, into a plurality of regions,
`the gradation image inputted by the image inputting means,
`based on the maximum points detected by the maximum
`point detecting means;
`first detecting means for detecting a face bottom position
`based on projections of the maximum points in the vicinity
`of a boundary region at the bottom of a region divided by
`the dividing means, of the maximum points detected by the
`maximum point detecting means;
`second projection measuring means for measuring x-
`direction projections of the gradation image inputted by the
`image inputting means; and
`second detecting means for detecting a face region
`dividing position based on a differentiation value of the
`projections measured by the second projection measuring
`means.
`[CLAIM 4]
`A human face region detecting device for detecting a
`face region from a gradation image that includes a face
`region of a person, comprising:
`imaging inputting means for inputting a gradation image
`that includes a face region of a person;
`first projection measuring means for measuring y-
`direction projections of the image inputted by the image
`inputting means;
`maximum point detecting means for detecting maximum
`points from the projections measured by the first projection
`measuring means;
`dividing means for dividing, into a plurality of regions,
`the gradation image inputted by the image inputting means,
`based on the maximum points detected by the maximum
`point detecting means;
`first detecting means for detecting a y-direction face
`region dividing position through evaluating a distinctive
`feature within each region divided by the dividing means;
`second detecting means for estimating a position of eyes
`wherein a change feature is strong, based on a y-direction
`face region dividing position, detected by the first detecting
`means, to detect, in that region, an x-direction projection
`measuring region;
`second projection measuring means for measuring x-
`direction projections of the image inputted by the image
`inputting means, in the x-direction projection measuring
`region detected by the second detecting means; and
`third detecting means for detecting an x-direction face
`region dividing position based on a differentiation value of
`the projections measured by
`the second projection
`measuring means.
`[CLAIM 5]
A human face region detecting device that uses a system
wherein a distribution of maximum point spacings of
projections on an axis is used in order to carry out division
into regions that have different periods of gradation change
from region to region, such as a face part and a non-face
part, in the y-direction, by performing threshold value
processing thereon.
[CLAIM 6]
`A human face region detecting device for detecting a
`face region from a gradation image that includes a face
`region of a person, comprising:
`imaging inputting means for inputting a gradation image
`that includes a face region of a person;
`first projection measuring means for measuring y-
`direction projections of the gradation image inputted by the
`image inputting means;
`maximum point detecting means for performing a
`plurality of samplings of projections while changing the
`sampling spacing on projections measured by the first
`projection measuring means to detect maximum points in
`each case;
`dividing means for selecting maximum points of an
`optimal sampling spacing, based on a state of distribution
`of the maximum points in each case, detected by the
`maximum point detecting means, to divide, based on that
`maximum point, the gradation image inputted by the image
`inputting means into a plurality of regions;
`first detecting means for detecting a y-direction face
`region dividing position through evaluating a distinctive
`feature within each region divided by the dividing means;
`second projection measuring means for measuring x-
`direction projections of the image inputted by the image
`inputting means; and
`second detecting means for detecting an x-direction face
`region dividing position based on a differentiation value of
`the projections measured by
`the second projection
`measuring means.
`[DETAILED EXPLANATION OF THE INVENTION]
`[0001]
`[FIELD OF APPLICATION IN INDUSTRY]
`The present invention relates to a human face region
`detecting device for detecting a face region from an image
`of a person that, for example, enters into a monitoring
`region.
`[0002]
`[PRIOR ART]
`Face regions are detected through, for example, the
`following method in this type of human face region
`detecting device. Specifically, there is a method wherein,
`after a person who has entered into a monitoring region has
`been imaged by an ITV camera, for example, and the image
`that has been captured has been subjected to binarization by
`gradations, the face region is detected by detecting
connected regions.
`[0003]
`Moreover, as another face region detecting method there
`is a method wherein an image captured by an ITV camera
`is binarized, by the gradations thereof, and projections for
`the vertical and horizontal directions are calculated and
`division into regions in the pattern thereof is carried out
`through threshold values of the pattern values. As yet
`another face region detecting method there is a method
`wherein a color image captured by an ITV camera is used
`and region division
`is carried out after performing
`binarization.
`[0004]
`[PROBLEM SOLVED BY THE PRESENT INVENTION]
`In the three methods set forth above, after an image that
`has been captured is binarized it is, for example, stored in a
frame buffer, and the frame buffer is accessed and the
`binarized image is handled as a two-dimensional array to
`carry out the process of dividing into regions, so there is a
`problem in that it is difficult to carry out, by simple
`processing means, reliable and high-speed detection of the
`desired region. Given this, the object of the present
`invention is to provide a human face region detecting
`device able to detect, reliably and quickly, the face region
`of a person.
`[0005]
`[MEANS FOR SOLVING THE PROBLEM]
`A human face region detecting device according to the
`present invention, for detecting a face region from a
`gradation image that includes a face region of a person,
`comprises:
imaging inputting means for inputting a gradation image
that includes a face region of a person;
`first projection measuring means for measuring y-direction
`projections of the gradation image inputted by the image
`inputting means; dividing means for dividing, into a
`plurality of regions, the gradation image inputted by the
`image inputting means, based on the projections measured
`by the first projection measuring means; first detecting
`means for detecting a y-direction face region dividing
`position through evaluating a distinctive feature within
`each region divided by the dividing means; second
`projection measuring means for measuring x-direction
`projections of the image inputted by the image inputting
`means; and second detecting means for detecting an x-
`direction face region dividing position based on the
`projections measured by the second projection measuring
`means.
`[0006]
`A human face region detecting device according to the
`present invention, for detecting a face region from a
`gradation image that includes a face region of a person,
`comprises:
imaging inputting means for inputting a gradation image
that includes a face region of a person;
`first projection measuring means for measuring y-direction
`projections of the gradation image inputted by the image
`inputting means; maximum point detecting means for
`detecting maximum points of the projections measured by
`the first projection measuring means; dividing means for
`dividing, into a plurality of regions, the gradation image
`inputted by the image inputting means, based on the
`maximum points detected by the maximum point detecting
`means; first detecting means for detecting a head top
`position by comparing a statistic of differentiation value at
`maximum points detected by the maximum point detecting
`means and a maximum point of the boundary vicinity at the
`top of a region divided by the dividing means, among the
maximum points detected by the maximum point detecting
`means; second projection measuring means for measuring
`x-direction projections of the gradation image inputted by
the image inputting means; and second detecting means for
`detecting an x-direction face region dividing position based
`on a differentiation value of the projections measured by
`the second projection measuring means.
`[0007]
`A human face region detecting device according to the
`present invention, for detecting a face region from a
`gradation image that includes a face region of a person,
`comprises:
imaging inputting means for inputting a gradation image
that includes a face region of a person;
`first projection measuring means for measuring y-direction
`projections of the gradation image inputted by the image
`inputting means; maximum point detecting means for
`detecting maximum points from the projections measured
`by the first projection measuring means; dividing means for
`dividing, into a plurality of regions, the gradation image
`inputted by the image inputting means, based on the
`maximum points detected by the maximum point detecting
`means; first detecting means for detecting a face bottom
`position based on projections of the maximum points in the
`vicinity of a boundary region at the bottom of a region
`divided by the dividing means, of the maximum points
`detected by the maximum point detecting means; second
`projection measuring means for measuring x-direction
`projections of the gradation image inputted by the image
`inputting means; and second detecting means for detecting
`a face region dividing position based on a differentiation
`value of the projections measured by the second projection
`measuring means.
`[0008]
`A human face region detecting device according to the
`present invention, for detecting a face region from a
`gradation image that includes a face region of a person,
`comprises:
imaging inputting means for inputting a gradation image
that includes a face region of a person;
`first projection measuring means for measuring y-direction
`projections of the image inputted by the image inputting
`means; maximum point detecting means for detecting
`maximum points from the projections measured by the first
`projection measuring means; dividing means for dividing,
`into a plurality of regions, the gradation image inputted by
`the image inputting means, based on the maximum points
`detected by the maximum point detecting means; first
`detecting means for detecting a y-direction face region
`dividing position through evaluating a distinctive feature
`within each region divided by the dividing means; second
`detecting means for estimating a position of eyes wherein a
`change feature is strong, based on a y-direction face region
`dividing position, detected by the first detecting means, to
`detect, in that region, an x-direction projection measuring
`region; second projection measuring means for measuring
`x-direction projections of the image inputted by the image
`inputting means, in the x-direction projection measuring
`region detected by the second detecting means; and third
`detecting means for detecting an x-direction face region
`dividing position based on a differentiation value of the
`projections measured by the second projection measuring
`means.
`[0009]
A human face region detecting device according to the
present invention uses a system wherein a distribution of
maximum point spacings of projections on an axis is used
in order to carry out division into regions that have
different periods of gradation change from region to region,
such as a face part and a non-face part, in the y-direction,
by performing threshold value processing thereon.
`[0010]
`A human face region detecting device according to the
`present invention, for detecting a face region from a
`gradation image that includes a face region of a person,
`comprises:
imaging inputting means for inputting a gradation image
that includes a face region of a person;
`first projection measuring means for measuring y-direction
`projections of the gradation image inputted by the image
`inputting means; maximum point detecting means for
`performing a plurality of samplings of projections while
`changing the sampling spacing on projections measured by
`the first projection measuring means to detect maximum
`points in each case; dividing means for selecting maximum
`points of an optimal sampling spacing, based on a state of
`distribution of the maximum points in each case, detected
`by the maximum point detecting means, to divide, based on
`that maximum point, the gradation image inputted by the
`image inputting means into a plurality of regions; first
`detecting means for detecting a y-direction face region
`dividing position through evaluating a distinctive feature
`within each region divided by the dividing means; second
`projection measuring means for measuring x-direction
`projections of the image inputted by the image inputting
`means; and second detecting means for detecting an x-
`direction face region dividing position based on a
`differentiation value of the projections measured by the
`second projection measuring means.
`[0011]
`[OPERATION]
`A face region of a person can be detected accurately and
`quickly through measuring y-direction projections of a
`gradation image of a person; detecting maximum points of
`those projections; dividing into a plurality of regions from
`the state of distribution of the maximum points; comparing
`a statistic of the differential values of the maximum points
`with differential values of projections at the maximum
`points in the vicinities of the boundaries at the tops of the
`regions to detect a head top position; using the shapes of
`the projections at the maximum points in the vicinities of
`the boundaries of the bottoms of the regions to detect a face
bottom position; detecting an x-direction projection
`measuring region based on the detected head top position
`and face bottom position and measuring the x-direction
`projections of the gradation image within this region; and
`detecting x-direction face region dividing positions based
`on the differentiation values of the projections and on a
`statistic value thereof.
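
The sequence of operations described in this paragraph can be pictured with a short sketch. The following Python code is only a minimal illustration under assumed conventions (a NumPy grayscale array as input, row and column sums as the projections, the mean of the differentiation values as the statistic, and illustrative names such as detect_face_region and step, none of which appear in the publication).

import numpy as np

def detect_face_region(gray, step=4):
    """Illustrative sketch only; names, the sampling step, and the use of the
    mean as the statistic are assumptions, not values from the publication."""
    # y-direction projection: sum the gradation values of each row.
    proj_y = gray.sum(axis=1).astype(float)

    # Compare the projection at intervals of `step` pixels to find maximum
    # points, keeping the projection differentiation value at each maximum.
    sampled = proj_y[::step]
    maxima, diffs = [], []
    for i in range(1, len(sampled) - 1):
        if sampled[i] >= sampled[i - 1] and sampled[i] >= sampled[i + 1]:
            maxima.append(i * step)
            diffs.append(abs(sampled[i + 1] - sampled[i - 1]) / (2 * step))
    if not maxima:
        return None

    # Statistic of the differentiation values (here, simply the mean).
    ref = float(np.mean(diffs))

    # Head top: first maximum whose differentiation value exceeds the statistic.
    head_top = next((y for y, d in zip(maxima, diffs) if d > ref), maxima[0])
    # Face bottom: lowest maximum below the head top with a strong differentiation value.
    below = [(y, d) for y, d in zip(maxima, diffs) if y > head_top]
    face_bottom = max((y for y, d in below if d > ref), default=maxima[-1])

    # x-direction projections measured only within the detected y range.
    band = gray[head_top:face_bottom + 1, :]
    proj_x = band.sum(axis=0).astype(float)
    dproj_x = np.abs(np.diff(proj_x))
    strong = np.flatnonzero(dproj_x > dproj_x.mean())
    if strong.size == 0:
        return None
    left, right = int(strong[0]), int(strong[-1])

    return head_top, face_bottom, left, right

In the actual device these steps are carried out by the face region detecting portion 13 on the image held in the image accumulating portion 11.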
`[0012]
`[EMBODIMENT]
`An embodiment according to the present invention will
`be explained below in reference to the drawings. FIG. 1
`depicts schematically a human face region processing
`device according to the present embodiment. That is, an
ITV camera 1 captures, in black and white, an image within
`a monitoring region 7, to convert it into an electric signal.
`The image signal captured by the ITV camera 1 is sent, by
`a transmission path 2, to a processing device 3 and a
`displaying device 4. The displaying device 4 displays the
`image that was captured by the ITV camera 1, and the
`processing device 3 continuously reads in the image
`captured by the ITV camera 1 to carry out image
`processing and evaluations, etc., in order to detect the
`region of a face of a person from an image that includes a
`person. When the result is that a face region is detected, a
`face region detected indicator is shown on the screen of the
`displaying device 4, or an alarm tone is emitted by a
warning device 5, and the image at that time is recorded
using an image recording device 6, such as a VCR or the
like.
`[0013]
`FIG. 2 depicts schematically the critical portions of the
`human face region processing device described above. Note
`that for the explanation identical reference symbols are
`assigned to identical parts as in FIG. 1. In FIG. 2, the
`processing device 3 is structured from an A/D converter 10,
`an image accumulating portion 11, a camera controlling
`portion 12, a face region detecting portion 13, an operation
controlling portion 14, and an identification result
information recording portion 15.
`[0014]
`A black-and-white image signal from the ITV camera 1
`is converted into a digital signal, at given time intervals, by
`the A/D converter 10, and stored in an image accumulating
`portion 11 as eight-bit image data at intervals of given
`numbers of frames.
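
As a rough illustration of this acquisition step (the interval value and names below are assumptions, not taken from the publication):

import numpy as np

FRAME_INTERVAL = 5  # assumed interval: keep every fifth digitized frame

def accumulate_frames(digitized_frames):
    """Sketch of the image accumulating portion 11: store every
    FRAME_INTERVAL-th frame as eight-bit gradation image data."""
    return [np.asarray(frame, dtype=np.uint8)
            for i, frame in enumerate(digitized_frames)
            if i % FRAME_INTERVAL == 0]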
`[0015]
`A camera controlling portion 12 controls the operation of
`the ITV camera 1, to adjust the imaging region of the ITV
`camera 1, to set the image inputting range. In the face
`region detecting portion 13, the face region of a person in
`the image is detected based on the image captured by the
`image accumulating portion 11. When a face region of a
`person is detected in the image, a face region detected
`result signal is outputted.
`[0016]
In the operation controlling portion 14, various types of
`controlling signals are outputted, to control the operations
`of the various portions, based on the face region detected
`result signal outputted by the face region detecting portion
`13. That is, a detection result display controlling signal is
`outputted to the displaying device 4; a warning controlling
`signal is outputted to the warning device 5; an image
`recording controlling signal is outputted to the image
`recording device 6; an accumulation controlling signal is
`outputted to the image accumulating portion 11; an input
`region signal is outputted to the camera controlling portion
`12; a face region detection controlling signal is outputted to
`the face region detecting portion 13; and a time signal is
`outputted to the identification result information recording
`portion 15.
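
The fan-out of control signals listed in this paragraph could be pictured as a simple dispatcher; the class and method names below are hypothetical stand-ins for the hardware portions, not an interface defined in the publication.

class OperationController:
    """Illustrative stand-in for the operation controlling portion 14."""

    def __init__(self, display, warning, recorder, accumulator, camera,
                 detector, id_recorder):
        self.display = display          # displaying device 4
        self.warning = warning          # warning device 5
        self.recorder = recorder        # image recording device 6
        self.accumulator = accumulator  # image accumulating portion 11
        self.camera = camera            # camera controlling portion 12
        self.detector = detector        # face region detecting portion 13
        self.id_recorder = id_recorder  # identification result information recording portion 15

    def on_face_region_detected(self, result, timestamp):
        # One control signal per destination, as enumerated in [0016];
        # every method call is a hypothetical placeholder.
        self.display.show_detection(result)      # detection result display controlling signal
        self.warning.sound_alarm()               # warning controlling signal
        self.recorder.record_image(result)       # image recording controlling signal
        self.accumulator.read_next_frame()       # accumulation controlling signal
        self.camera.set_input_region(result)     # input region signal
        self.detector.configure(result)          # face region detection controlling signal
        self.id_recorder.log(result, timestamp)  # time signal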
`[0017]
`The displaying device 4 displays the image captured by
`the image accumulating portion 11, and when a face region
`of a person has been detected in the image by the face
`region detecting portion 13, the detection result is displayed
`based on the detection result controlling signal from the
`operation controlling portion 14.
`[0018]
`When a face region of a person has been detected in the
`image by the face region detecting portion 13, the image
`recording device 6 records the image wherein the face
`region of the person has been detected, based on the image
`recording controlling signal from the operation controlling
`portion 14.
`[0019]
`The warning device 5 issues an alarm tone, based on the
`warning controlling signal from the operation controlling
portion 14, when the face region of a person has been
`detected in the image by the face region detecting portion
`13.
`[0020]
`The image accumulating portion 11 is controlled so as to
`perform an operation for reading in image data from the
`A/D converter 10, based on the accumulation controlling
`signal from the operation controlling portion 14, when the
`face region of a person has been detected in the image by
`the face region detecting portion 13.
`[0021]
`The camera controlling portion 12 performs a zooming
`operation of the ITV camera 1 based on the input region
`signal from the operation controlling portion 14 when a
`face region of a person has been detected in the image by
`the face region detecting portion 13.
`[0022]
`The face region detecting portion 13 is configured so as
`to control the operation for detecting the face region based
`on the face region detection controlling signal from the
`operation controlling portion 14 when a face region of a
`person has been detected in the image by the face region
`detecting portion 13.
`[0023]
`The identification result information recording portion
`15 records the face region detected result signal from the
`face region detecting portion 13 and information on the
`face region that has been detected, together with the time,
`based on a time signal from the operation controlling
`portion 14, when a face region of a person has been
`detected in the image by the face region detecting portion
`13.
`[0024]
`The processing operation by the human face region
`detecting device depicted in FIG. 1 will be explained next
`in reference to the flowchart depicted in FIG. 3. Processing
first advances to Step S1, where the operation controlling portion
`14 sets the image inputting range to the entire imaging
`region of the ITV camera 1, and processing advances to
`Step S2.
`[0025]
`In Step S2, the image captured by the ITV camera 1 is
`inputted into the processing device 3 and is converted into a
`digital signal by the A/D converter 10. In Step S3, the
`image data that has been converted into a digital signal is
`read into the image accumulating portion 11. Additionally,
`the image that has been read into the image accumulating
`portion 11 is displayed on the displaying device 4.
`[0026]
`In Step S4, a detection process for the face region of a
`person that has been detected is carried out by the face
`region detecting portion 13 based on the image read in by
`the image accumulating portion 11. Note that the method
`for detecting a person from the image read in by the image
`accumulating portion 11 may be a method such as, for
`example, one wherein a region wherein there is a change is
`extracted by carrying out a difference calculating and
`binarization calculating process on a plurality of images
`read in, in a time sequence, into the image accumulating
`portion 11, to find a rectangle that surrounds the region of
change, where, if the size of this rectangle is greater than a
given value (a pixel count threshold value), it is determined
that a person has been detected; however, because this is not
the point of the present invention, the explanation thereof is
omitted.
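
As a minimal sketch of the example method just mentioned (frame differencing, binarization, and a bounding-rectangle test; both threshold values and the function name are illustrative assumptions):

import numpy as np

def person_detected(prev_frame, curr_frame, diff_threshold=30, min_pixels=500):
    """Sketch of the example method described above; thresholds are assumptions."""
    # Difference calculation between two images read in, in a time sequence,
    # followed by binarization of the difference.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = diff > diff_threshold

    ys, xs = np.nonzero(changed)
    if ys.size == 0:
        return False, None

    # Rectangle that surrounds the region of change.
    top, bottom = int(ys.min()), int(ys.max())
    left, right = int(xs.min()), int(xs.max())

    # A person is judged to have been detected when the rectangle exceeds a
    # given pixel-count threshold value.
    size = (bottom - top + 1) * (right - left + 1)
    return size > min_pixels, (top, bottom, left, right)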
`[0027]
`Processing advances to Step S5, and if a face region of a
`person was detected in Step S4, processing advances to
`Step S6; but if not detected, processing returns to Step S1,
`and the same process as described above is performed.
`[0028]
`In Step S6, when the operation controlling portion 14 has
`received a face region detected result signal from the face
`region detecting portion 13, the operation controlling
`portion 14 outputs an image recording controlling signal to
`the image recording device 6, and the image wherein the
face region was detected is recorded in the image recording
device 6, and processing advances to Step S7.
`[0029]
`In Step S7, when the operation controlling portion 14
`receives a face region detected result signal from the face
`region detecting portion 13, it outputs a warning controlling
`signal to the warning device 5, and when the warning
`controlling signal is received in the warning device 5, an
`alarm tone is emitted.
`[0030]
`In Step S8, the operation controlling portion 14, based on
`the face region detected result signal from the face region
`detecting portion 13, outputs an input region signal to the
camera controlling portion 12 so that the camera controlling
portion 12 performs zooming control on the
`ITV camera 1 so as to center the image inputting range on
`the face region of the detected person.
`[0031]
`In Step S9, an enlarged image, captured by the ITV
`camera 1, is inputted into the processing device 3 and
`converted by the A/D converter 10 into a digital signal. In
`Step S10, the image data that has been converted into the
`digital signal is read into the image accumulating portion
`11, and processing advances to Step S11.
`[0032]
`In Step S11, the enlarged image that has been read into
`the image accumulating portion 11 and centered on the face
`region of the detected person is displayed on the displaying
`device 4. In Step S12, the enlarged image that is centered
`on the face region of the detected person is recorded in the
`image recording device 6.
`[0033]
In Step S13, the time of recording onto the image
recording device 6, and the face region detection processing
result, which includes an intermediate result of the
calculation for the face region detecting process carried out
in Step S4, are recorded in the identification
`result information recording means 8 based on a time signal
`from the operation controlling portion 14 and the face
`region detected result signal from the face region detecting
`portion 13.
`[0034]
`When the face region detecting process has been
`completed in this way, processing returns to Step S1, and
`the same processing operation as described above is
`executed again. The face region detecting process executed
`in Step S4 will be explained next.
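
Before turning to that explanation, the overall flow of Steps S1 through S13 can be summarized in the following sketch; every object and method name here is a hypothetical placeholder for the corresponding portion in FIG. 2, not an interface given in the publication.

def monitoring_loop(camera_ctrl, adc, accumulator, detector,
                    display, recorder, warning, id_recorder):
    """Illustrative summary of the flowchart of FIG. 3 (Steps S1-S13)."""
    while True:
        camera_ctrl.set_input_region("entire imaging region")  # S1
        frame = adc.digitize(camera_ctrl.capture())             # S2: A/D conversion
        accumulator.store(frame)                                 # S3: store the frame
        display.show(frame)                                      # S3: display the frame
        result = detector.detect_face_region(frame)              # S4: face region detection
        if result is None:                                       # S5: nothing found, start over
            continue
        recorder.record(frame)                                   # S6: record the image
        warning.sound_alarm()                                    # S7: alarm tone
        camera_ctrl.set_input_region(result)                     # S8: zoom onto the face region
        enlarged = adc.digitize(camera_ctrl.capture())           # S9: capture the enlarged image
        accumulator.store(enlarged)                              # S10
        display.show(enlarged)                                   # S11
        recorder.record(enlarged)                                # S12
        id_recorder.log(result)                                  # S13: time and detection result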
`[0035]
`
`Japanese Unexamined Patent Application Publication H7-282227
`(6)
`
`
The principle behind the face region detecting method in
the present embodiment will be explained briefly first. The
image of the detection subject is a black-and-white
gradation image (grayscale image) captured by the image
accumulating portion 11. In this case, in those parts that
characterize the face region that is to be detected within the
image, the changes in image gradation that depend on the
parts within the face (the eyes, nose, mouth, etc.) will be
narrow when compared to parts other than the face.
`[0036]
Given this, gradation projections on the x axis and on the
y axis are taken for the gradation image, and, in each of the
projections, the regions wherein changes that are narrow
when compared to other positions are found are defined as
regions corresponding to the face.
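
As a minimal sketch of these projections (the axis convention and names are assumptions): summing a grayscale image along each axis gives the two gradation projections, and the band of the y-direction projection whose fluctuations have a noticeably shorter period than elsewhere is the candidate face region.

import numpy as np

def gradation_projections(gray):
    """Sum the gradation values of a 2-D grayscale image along each axis
    (sketch of the x- and y-direction projections described above)."""
    proj_y = gray.sum(axis=1)  # one value per row (y direction)
    proj_x = gray.sum(axis=0)  # one value per column (x direction)
    return proj_x, proj_y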
`[0037]
`Moreover,