SAMSUNG EXHIBIT 1005
Samsung v. Image Processing Techs.

FOR THE PURPOSES OF INFORMATION ONLY

Codes used to identify States party to the PCT on the front pages of pamphlets publishing international applications under the PCT.
AL Albania
AM Armenia
AT Austria
AU Australia
AZ Azerbaijan
BA Bosnia and Herzegovina
BB Barbados
BE Belgium
BF Burkina Faso
BG Bulgaria
BJ Benin
BR Brazil
BY Belarus
CA Canada
CF Central African Republic
CG Congo
CH Switzerland
CI Cote d'Ivoire
CM Cameroon
CN China
CU Cuba
CZ Czech Republic
DE Germany
DK Denmark
EE Estonia
ES Spain
FI Finland
FR France
GA Gabon
GB United Kingdom
GE Georgia
GH Ghana
GN Guinea
GR Greece
HU Hungary
IE Ireland
IL Israel
IS Iceland
IT Italy
JP Japan
KE Kenya
KG Kyrgyzstan
KP Democratic People's Republic of Korea
KR Republic of Korea
KZ Kazakstan
LC Saint Lucia
LI Liechtenstein
LK Sri Lanka
LR Liberia
LS Lesotho
LT Lithuania
LU Luxembourg
LV Latvia
MC Monaco
MD Republic of Moldova
MG Madagascar
MK The former Yugoslav Republic of Macedonia
ML Mali
MN Mongolia
MR Mauritania
MW Malawi
MX Mexico
NE Niger
NL Netherlands
NO Norway
NZ New Zealand
PL Poland
PT Portugal
RO Romania
RU Russian Federation
SD Sudan
SE Sweden
SG Singapore
SI Slovenia
SK Slovakia
SN Senegal
SZ Swaziland
TD Chad
TG Togo
TJ Tajikistan
TM Turkmenistan
TR Turkey
TT Trinidad and Tobago
UA Ukraine
UG Uganda
US United States of America
UZ Uzbekistan
VN Viet Nam
YU Yugoslavia
ZW Zimbabwe
`
`
`
`
`
WO 99/36893
`
`PCT/EP99/00300
`
`METHOD AND APPARATUS FOR DETECTION OF DROWSINESS
`
`Binford
`
`BACKGROUND OF THE INVENTION
`
`1.
`
`Field of the Invention.
`
`The present invention relates generally to an image processing system, and
`
`more particularly to the use of a generic image processing system to detect drowsiness.
`
`2.
`
`Description of the Related Art.
`
`It is well known that a significant number of highway accidents result from
`
`drivers becoming drowsy or falling asleep, which results in many deaths and injuries.
`
`Drowsiness is also a problem in other fields, such as for airline pilots and power plant
`
`operators, in which great damage may result from failure to stay alert.
`
`A number of different physical criteria may be used to establish when a person
`
`is drowsy, including a change in the duration and interval of eye blinking. Normally, the
`
`duration of blinking is about 100 to 200 ms when awake and about 500 to 800 ms when
`
`drowsy. The time interval between successive blinks is generally constant while awake, but
`
`varies within a relatively broad range when drowsy.
`
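The duration criterion above can be turned into a toy classifier. The sketch below is illustrative only: the function names, the majority rule, and the cutoffs derived from the quoted ranges (100 to 200 ms awake, 500 to 800 ms drowsy) are assumptions, not part of the disclosed apparatus.

```python
# Illustrative sketch: classify alertness from measured blink durations (ms),
# using the duration ranges quoted in the text. All names and the majority
# rule are assumptions for illustration.

def classify_blink(duration_ms: float) -> str:
    """Label a single blink by its duration."""
    if duration_ms <= 300:          # awake blinks run about 100-200 ms
        return "awake"
    if duration_ms >= 500:          # drowsy blinks run about 500-800 ms
        return "drowsy"
    return "uncertain"              # gap between the two quoted ranges

def driver_state(durations_ms: list) -> str:
    """Call the driver drowsy if most recent blinks are long."""
    labels = [classify_blink(d) for d in durations_ms]
    return "drowsy" if labels.count("drowsy") > len(labels) / 2 else "awake"

print(driver_state([150, 180, 160]))       # short blinks -> awake
print(driver_state([550, 700, 620, 180]))  # mostly long  -> drowsy
```

A real detector would also use the blink-interval variability the text describes; this sketch uses duration alone.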
`Numerous devices have been proposed to detect drowsiness of drivers. Such
`
`devices are shown, for example, in U.S. Patent Nos. 5,841,354; 5,813,99;
`
5,689,241; 5,684,461; 5,682,144; 5,469,143; 5,402,109; 5,353,013; 5,195,606; 4,928,090;
`
`4,555,697; 4,485,375; and 4,259,665.
`
`In general, these devices fall into three categories: i)
`
`devices that detect movement of the head of the driver, e.g., tilting; ii) devices that detect a
`
`physiological change in the driver, e.g., altered heartbeat or breathing, and iii) devices that
`
CONFIRMATION COPY
`
`
`
`
`
`detect a physical result of the driver falling asleep, e.g., a reduced grip on the steering wheel.
`
`None of these devices is believed to have met with commercial success.
`
`Commonly-owned PCT Application Serial Nos. PCT/FR97/01354 and
`
PCT/EP98/05383 disclose a generic image processing system that operates to localize objects
`
`in relative movement in an image and to determine the speed and direction of the objects in
`
`real-time. Each pixel of an image is smoothed using its own time constant. A binary value
`
`corresponding to the existence of a significant variation in the amplitude of the smoothed
`
`pixel from the prior frame, and the amplitude of the variation, are determined, and the time
`
`constant for the pixel is updated. For each particular pixel, two matrices are formed that
`
`include a subset of the pixels spatially related to the particular pixel. The first matrix contains
`
`the binary values of the subset of pixels. The second matrix contains the amplitude of the
`
`variation of the subset of pixels. In the first matrix, it is determined whether the pixels along
`
`an oriented direction relative to the particular pixel have binary values representative of
`
`significant variation, and, for such pixels, it is determined in the second matrix whether the
`
`amplitude of these pixels varies in a known manner indicating movement in the oriented
`
`direction.
`
`In domains that include luminance, hue, saturation, speed, oriented direction, time
`
`constant, and x and y position, a histogram is formed of the values in the first and second
`
`matrices falling in user selected combinations of such domains. Using the histograms, it is
`
`determined whether there is an area having the characteristics of the selected combinations of
`
`domains.
`
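The domain-selection step described above can be sketched in miniature. This is illustrative only; the pixel records, the two chosen domains (a luminance band and a speed band), and all names are invented for the example, not taken from the referenced applications.

```python
# Illustrative sketch: histogram the x positions of pixels that fall inside a
# user-selected combination of domains (here: a luminance band and a speed
# band). The pixel records and domain bounds are invented for illustration.

def histogram_in_domains(pixels, lum_range, speed_range):
    """pixels: iterable of dicts with 'x', 'lum', 'speed' keys.
    Returns {x: count} over pixels whose luminance and speed are in range."""
    hist = {}
    for p in pixels:
        if lum_range[0] <= p["lum"] <= lum_range[1] and \
           speed_range[0] <= p["speed"] <= speed_range[1]:
            hist[p["x"]] = hist.get(p["x"], 0) + 1
    return hist

pixels = [
    {"x": 4, "lum": 20, "speed": 3},
    {"x": 4, "lum": 25, "speed": 2},
    {"x": 9, "lum": 200, "speed": 2},   # too bright: excluded
]
print(histogram_in_domains(pixels, lum_range=(0, 60), speed_range=(1, 5)))
# {4: 2}
```

A peak in such a histogram is what the text means by "an area having the characteristics of the selected combinations of domains."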
It would be desirable to apply such a generic image processing system to detect the drowsiness of a person.
`
`
`
`
`
`SUMMARY OF THE INVENTION
`
`The present invention is a process of detecting a driver falling asleep in which
`
`an image of the face of the driver is acquired. Pixels of the image having characteristics
`
corresponding to characteristics of at least one eye of the driver are selected and a histogram
`
`is formed of the selected pixels. The histogram is analyzed over time to identify each
`
opening and closing of the eye, and from the eye opening and closing information,
`
`characteristics indicative of a driver falling asleep are determined.
`
`In one embodiment, a sub-area of the image comprising the eye is determined
`
`prior to the step of selecting pixels of the image having characteristics corresponding to
`
`characteristics of an eye. In this embodiment, the step of selecting pixels of the image having
`
`characteristics of an eye involves selecting pixels within the sub-area of the image. The step
`
`of identifying a sub-area of the image preferably involves identifying the head of the driver,
`
`or a facial characteristic of the driver, such as the driver's nostrils, and then identifying the
`
`sub-area of the image using an anthropomorphic model. The head of the driver may be
`
`identified by selecting pixels of the image having characteristics corresponding to edges of
`
`the head of the driver. Histograms of the selected pixels of the edges of the driver's head are
`
`projected onto orthogonal axes. These histograms are then analyzed to identify the edges of
`
`the driver's head.
`
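The projection step described above can be sketched as follows. This is an illustrative model only: the edge mask, the count threshold, and the function names are assumptions; the patent's histogram formation units do this in hardware.

```python
# Illustrative sketch: project edge pixels onto the horizontal axis and take
# the head edges as the first and last columns with enough edge pixels. The
# edge list, threshold, and helper names are assumptions for illustration.

def project_columns(edge_pixels, width):
    """edge_pixels: iterable of (x, y). Returns per-column edge counts."""
    cols = [0] * width
    for x, _y in edge_pixels:
        cols[x] += 1
    return cols

def head_edges(edge_pixels, width, min_count=2):
    """Return (left, right) columns bounding the head, or None."""
    cols = project_columns(edge_pixels, width)
    hits = [x for x, c in enumerate(cols) if c >= min_count]
    return (hits[0], hits[-1]) if hits else None

edges = [(3, 0), (3, 1), (4, 5), (7, 2), (7, 3), (7, 4)]
print(head_edges(edges, width=10))  # (3, 7)
```

The same projection onto the vertical axis would bound the top of the head.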
`The facial characteristic of the driver may be identified by selecting pixels of
`
`the image having characteristics corresponding to the facial characteristic. Histograms of the
`
`selected pixels of the facial characteristic are projected onto orthogonal axes. These
`
`histograms are then analyzed to identify the facial characteristic. If desired, the step of
`
`identifying the facial characteristic in the image involves searching sub-images of the image
`
`until the facial characteristic is found. In the case in which the facial characteristic is the
`
`
`
`
`
`nostrils of the driver, a histogram is formed of pixels having low luminance levels to detect
`
`the nostrils. To confirm detection of the nostrils, the histograms of the nostril pixels may be
`
`analyzed to determine whether the spacing between the nostrils is within a desired range and
`
`whether the dimensions of the nostrils fall within a desired range. In order to confirm the
`
`identification of the facial characteristic, an anthropomorphic model and the location of the
`
`facial characteristic are used to select a sub-area of the image containing a second facial
`
`characteristic. Pixels of the image having characteristics corresponding to the second facial
`
characteristic are selected and histograms of the selected pixels of the second facial
`
`characteristic are analyzed to confirm the identification of the first facial characteristic.
`
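The nostril test above can be sketched in a few lines. Everything here is an assumption for illustration: the toy image, the luminance cutoff, and the spacing bounds stand in for the desired ranges the text leaves unspecified.

```python
# Illustrative sketch: collect low-luminance (nostril-candidate) pixels and
# accept the pair only if their spacing falls in a plausible range. The image,
# the cutoff, and the spacing bounds are all invented for illustration.

def dark_pixels(image, cutoff=40):
    """image: 2-D list of luminance values. Returns (x, y) of dark pixels."""
    return [(x, y) for y, row in enumerate(image)
                   for x, v in enumerate(row) if v < cutoff]

def nostril_spacing_ok(centers, lo=2, hi=6):
    """centers: x coordinates of the two candidate nostrils."""
    if len(centers) != 2:
        return False
    return lo <= abs(centers[0] - centers[1]) <= hi

image = [
    [200, 200, 200, 200, 200, 200, 200],
    [200,  10, 200, 200, 200,  12, 200],  # two dark spots, 4 columns apart
    [200, 200, 200, 200, 200, 200, 200],
]
spots = dark_pixels(image)
print(spots)                                      # [(1, 1), (5, 1)]
print(nostril_spacing_ok([x for x, _ in spots]))  # True
```

A dimension check (nostril width and height within a desired range) would follow the same pattern on the projected histograms.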
`In order to determine openings and closings of the eyes of the driver, the step
`
`of selecting pixels of the image having characteristics corresponding to characteristics of an
`
`eye of the driver involves selecting pixels having low luminance levels corresponding to
`
shadowing of the eye. In this embodiment, the step of analyzing the histogram over time to
`
`identify each opening and closing of the eye involves analyzing the shape of the eye
`
`shadowing to determine openings and closings of the eye. The histograms of shadowed
`
`pixels are preferably projected onto orthogonal axes, and the step of analyzing the shape of
`
`the eye shadowing involves analyzing the width and height of the shadowing.
`
`An alternative method of determining openings and closings of the eyes of the
`
`driver involves selecting pixels of the image having characteristics of movement
`
corresponding to blinking. In this embodiment, the step of analyzing the histogram over time to
`
`identify each opening and closing of the eye involves analyzing the number of pixels in
`
`movement corresponding to blinking over time. The characteristics of a blinking eye are
`
preferably selected from the group consisting of i) DP=1, ii) CO indicative of a blinking
`
`
`
`
`
`eyelid, iii) velocity indicative of a blinking eyelid, and iv) up and down movement indicative
`
of a blinking eyelid.
`
`An apparatus for detecting a driver falling asleep includes a sensor for
`
`acquiring an image of the face of the driver, a controller, and a histogram formation unit for
`
`forming a histogram on pixels having selected characteristics. The controller controls the
`
`histogram formation unit to select pixels of the image having characteristics corresponding to
`
`characteristics of at least one eye of the driver and to form a histogram of the selected pixels.
`
`The controller analyzes the histogram over time to identify each opening and closing of the
`
`eye, and determines from the opening and closing information on the eye, characteristics
`
`indicative of the driver falling asleep.
`
`In one embodiment, the controller interacts with the histogram formation unit
`
`to identify a sub-area of the image comprising the eye, and the controller controls the
`
histogram formation unit to select pixels of the image having characteristics corresponding to
`
`characteristics of the eye only within the sub-area of the image. In order to select the sub-area
`
`of the image, the controller interacts with the histogram formation unit to identify the head of
`
`the driver in the image, or a facial characteristic of the driver, such as the driver's nostrils.
`
`The controller then identifies the sub-area of the image using an anthropomorphic model. To
`
`identify the head of the driver, the histogram formation unit selects pixels of the image having
`
`characteristics corresponding to edges of the head of the driver and forms histograms of the
`
`selected pixels projected onto orthogonal axes. To identify a facial characteristic of the
`
`driver, the histogram formation unit selects pixels of the image having characteristics
`
`corresponding to the facial characteristic and forms histograms of the selected pixels
`
`projected onto orthogonal axes. The controller then analyzes the histograms of the selected
`
`pixels to identify the edges of the head of the driver or the facial characteristic, as the case
`
`
`
`
`
`may be. If the facial characteristic is the nostrils of the driver, the histogram formation unit
`
`selects pixels of the image having low luminance levels corresponding to the luminance level
`
`of the nostrils. The controller may also analyze the histograms of the nostril pixels to
`
`determine whether the spacing between the nostrils is within a desired range and whether
`
`dimensions of the nostrils fall within a desired range. If desired, the controller may interact
`
`with the histogram formation unit to search sub-images of the image to identify the facial
`
`characteristic.
`
`In order to verify identification of the facial characteristic, the controller uses
`
`an anthropomorphic model and the location of the facial characteristic to cause the histogram
`
`formation unit to select a sub-area of the image containing a second facial characteristic. The
`
`histogram formation unit selects pixels of the image in the sub-area having characteristics
`
`corresponding to the second facial characteristic and forms a histogram of such pixels. The
`
`controller then analyzes the histogram of the selected pixels corresponding to the second
`
`facial characteristic to identify the second facial characteristic and to thereby confirm the
`
`identification of the first facial characteristic.
`
`In one embodiment, the histogram formation unit selects pixels of the image
`
`having low luminance levels corresponding to shadowing of the eyes, and the controller then
`
`analyzes the shape of the eye shadowing to identify shapes corresponding to openings and
`
`closings of the eye. The histogram formation unit preferably forms histograms of the
`
`shadowed pixels of the eye projected onto orthogonal axes, and the controller analyzes the
`
`width and height of the shadowing to determine openings and closings of the eye.
`
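The width-and-height analysis above can be sketched as a simple shape test. The aspect-ratio rule and its threshold are assumptions for illustration; the patent does not disclose a specific ratio.

```python
# Illustrative sketch: decide eye open vs. closed from the width and height of
# the shadowed-pixel histograms projected onto the two axes. The aspect-ratio
# rule and the 0.4 threshold are assumptions for illustration.

def eye_state(width: int, height: int) -> str:
    """An open eye casts a taller shadow; a closed eye a short, flat one."""
    if width == 0:
        return "no eye found"
    return "open" if height / width > 0.4 else "closed"

print(eye_state(width=30, height=18))  # open   (ratio 0.6)
print(eye_state(width=30, height=6))   # closed (ratio 0.2)
```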
`In an alternative embodiment, the histogram formation unit selects pixels of
`
`the image in movement corresponding to blinking and the controller analyzes the number of
`
`pixels in movement over time to determine openings and closings of the eye. The
`
`
`
`
`
`characteristics of movement corresponding to blinking are preferably selected from the group
`
`consisting of i) DP=1, ii) CO indicative of a blinking eyelid, iii) velocity indicative of a
`
`blinking eyelid, and iv) up and down movement indicative of a blinking eyelid.
`
If desired, the sensor may be integrally constructed with the controller and the
`
`histogram formation unit. The apparatus may comprise an alarm, which the controller
`
`operates upon detection of the driver falling asleep, and may comprise an illumination source,
`
`such as a source of IR radiation, with the sensor being adapted to view the driver when
`
`illuminated by the illumination source.
`
`A rear-view mirror assembly comprises a rear-view mirror and the described
`
`apparatus for detecting driver drowsiness mounted to the rear-view mirror. In one
`
`embodiment, a bracket attaches the apparatus to the rear-view mirror. In an alternative
`
`embodiment, the rear-view mirror comprises a housing having an open side and an interior.
`
`The rear-view mirror is mounted to the open side of the housing, and is see-through from the
`
`interior of the housing to the exterior of the housing. The drowsiness detection apparatus is
`
`mounted interior to the housing with the sensor directed toward the rear-view mirror. If
`
desired, a joint attaches the apparatus to the rear-view mirror assembly, with the joint being
`
`adapted to maintain the apparatus in a position facing the driver during adjustment of the
`
`mirror assembly by the driver. The rear-view mirror assembly may include a source of
`
`illumination directed toward the driver, with the sensor adapted to view the driver when
`
`illuminated by the source of illumination. The rear-view mirror assembly may also include
`
`an alarm, with the controller operating the alarm upon detection of the driver falling asleep.
`
`Also disclosed is a vehicle comprising the drowsiness detection device.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`Fig. 1 is a diagrammatic illustration of the system according to the invention.
`
`
`
`
`
Fig. 2 is a block diagram of the temporal and spatial processing units of the invention.

Fig. 3 is a block diagram of the temporal processing unit of the invention.

Fig. 4 is a block diagram of the spatial processing unit of the invention.

Fig. 5 is a diagram showing the processing of pixels in accordance with the invention.

Fig. 6 illustrates the numerical values of the Freeman code used to determine movement direction in accordance with the invention.

Fig. 7 illustrates nested matrices as processed by the temporal processing unit.

Fig. 8 illustrates hexagonal matrices as processed by the temporal processing unit.

Fig. 9 illustrates reverse-L matrices as processed by the temporal processing unit.

Fig. 10 illustrates angular sector shaped matrices as processed by the temporal processing unit.
`
`Fig. 11 is a block diagram showing the relationship between the temporal and
`
`spatial processing units, and the histogram formation units.
`
`Fig. 12 is a block diagram showing the interrelationship between the various
`
`histogram formation units.
`
`Fig. 13 shows the formation of a two-dimensional histogram of a moving area
`
`from two one-dimensional histograms.
`
`Fig. 14 is a block diagram of an individual histogram formation unit.
`
`Figs. 15A and 15B illustrate the use of a histogram formation unit to find the
`
`orientation of a line relative to an analysis axis.
`
`
`
`
`
Fig. 16 illustrates a one-dimensional histogram.
`
Fig. 17 illustrates the use of semi-graphic sub-matrices to select desired
`
`areas of an image.
`
`Fig. 18 is a side view illustrating a rear view mirror in combination with the
`
`drowsiness detection system of the invention.
`
`Fig. 19 is a top view illustrating operation of a rear view mirror.
`
`Fig. 20 is a schematic illustrating operation of a rear view mirror.
`
`Fig. 21 is a cross-sectional top view illustrating a rear view mirror assembly
`
`incorporating the drowsiness detection system of the invention.
`
`Fig. 22 is a partial cross-sectional top view illustrating a joint supporting the
`
`drowsiness detection system of the invention in the mirror assembly of Fig. 21.
`
`Fig. 23 is a top view illustrating the relationship between the rear view mirror
`
`assembly of Fig. 21 and a driver.
`
`Fig. 24 illustrates detection of the edges of the head of a person using the
`
`system of the invention.
`
`Fig. 25 illustrates masking outside of the edges of the head of a person.
`
`Fig. 26 illustrates masking outside of the eyes of a person.
`
`Fig. 27 illustrates detection of the eyes of a person using the system of the
`
`invention.
`
`Fig. 28 illustrates successive blinks in a three-dimensional orthogonal
`
`coordinate system.
`
`Figs. 29A and 29B illustrate conversion of peaks and valleys of eye movement
`
`histograms to information indicative of blinking.
`
`
`
`
`
`Fig. 30 is a flow diagram illustrating the use of the system of the invention to
`
`detect drowsiness.
`
`Fig. 31 illustrates the use of sub-images to search a complete image.
`
`Fig. 32 illustrates the use of the system of the invention to detect nostrils and
`
`to track eye movement.
`
`Fig. 33 illustrates the use of the system of the invention to detect an open eye.
`
`Fig. 34 illustrates the use of the system of the invention to detect a closed eye.
`
`Fig. 35 is a flow diagram of an alternative method of detecting drowsiness.
`
`Fig. 36 illustrates use of the system to detect a pupil.
`
`DETAILED DESCRIPTION OF THE INVENTION
`
`The present invention discloses an application of the generic image processing
`
`system disclosed in commonly-owned PCT Application Serial Nos. PCT/FR97/01354 and
`
PCT/EP98/05383, the contents of which are incorporated herein by reference, for detection of
`
`various criteria associated with the human eye, and especially to detection that a driver is
`
falling asleep while driving a vehicle.
`
`The apparatus of the invention is similar to that described in the
`
`aforementioned PCT Application Serial Nos. PCT/FR97/01354 and PCT/EP98/05383, which
`
`will be described herein for purposes of clarity. Referring to Figs. 1 and 10, the generic
`
`image processing system 22 includes a spatial and temporal processing unit 11 in
`
`combination with a histogram formation unit 22a. Spatial and temporal processing unit 11
`
`includes an input 12 that receives a digital video signal S originating from a video camera or
`
`other imaging device 13 which monitors a scene 13a. Imaging device 13 is preferably a
`
`conventional CMOS-type CCD camera, which for purposes of the presently-described
`
`invention is mounted on a vehicle facing the driver. It will be appreciated that when used in
`
`
`
`
`
non-vehicular applications, the camera may be mounted in any desired fashion to detect the
`
`specific criteria of interest. It is also foreseen that any other appropriate sensor, e.g.,
`
`ultrasound, IR, Radar, etc., may be used as the imaging device. Imaging device 13 may have
`
a direct digital output, or an analog output that is converted by an A/D convertor into digital
`
`signal S. Imaging device 13 may also be integral with generic image processing system 22, if
`
`desired.
`
While signal S may be a progressive signal, it is preferably composed of a succession of pairs of interlaced frames, TR1 and TR'1, and TR2 and TR'2, each consisting of a succession of horizontal scanned lines, e.g., l1.1, l1.2, ..., l1.17 in TR1, and l2.1 in TR2. Each line consists of a succession of pixels or image-points PI, e.g., a1.1, a1.2 and a1.3 for line l1.1; a17.1 and a17.2 for line l1.17; a2.1 and a2.2 for line l2.1. Signal S(PI) represents signal S composed of pixels PI.
`
`S(PI) includes a frame synchronization signal (ST) at the beginning of each
`
`frame, a line synchronization signal (SL) at the beginning of each line, and a blanking signal
`
(BL). Thus, S(PI) includes a succession of frames, which are representative of the time domain,
`
`and within each frame, a series of lines and pixels, which are representative of the spatial
`
`domain.
`
In the time domain, "successive frames" shall refer to successive frames of the same type (i.e., odd frames such as TR1 or even frames such as TR'1), and "successive pixels in the same position" shall denote successive values of the pixels (PI) in the same location in successive frames of the same type, e.g., a1.1 of l1.1 in frame TR1 and a1.1 of l1.1 in the next corresponding frame TR2.
`
`Spatial and temporal processing unit 11 generates outputs ZH and SR 14 to a
`
`data bus 23 (Fig. 11), which are preferably digital signals. Complex signal ZH comprises a
`
`
`
`
`
number of output signals generated by the system, preferably including signals indicating the
`
`existence and localization of an area or object in motion, and the speed V and the oriented
`
`direction of displacement D1 of each pixel of the image. Also preferably output from the
`
system is input digital video signal S, which is delayed (SR) to make it synchronous with the
`
output ZH for the frame, taking into account the calculation time for the data in composite
`
`signal ZH (one frame). The delayed signal SR is used to display the image received by
`
`camera 13 on a monitor or television screen 10, which may also be used to display the
`
`information contained in composite signal ZH. Composite signal ZH may also be transmitted
`
`to a separate processing assembly 10a in which further processing of the signal may be
`
`accomplished.
`
`Referring to Fig. 2, spatial and temporal processing unit 11 includes a first
`
assembly 11a, which consists of a temporal processing unit 15 having an associated memory
`
`16, a spatial processing unit 17 having a delay unit 18 and sequencing unit 19, and a pixel
`
`clock 20, which generates a clock signal HP, and which serves as a clock for temporal
`
`processing unit 15 and sequencing unit 19. Clock pulses HP are generated by clock 20 at the
`
pixel rate of the image, which is preferably 13.5 MHz.
`
`Fig. 3 shows the operation of temporal processing unit 15, the function of
`
`which is to smooth the video signal and generate a number of outputs that are utilized by
`
`spatial processing unit 17. During processing, temporal processing unit 15 retrieves from
`
`memory 16 the smoothed pixel values LI of the digital video signal from the immediately
`
`prior frame, and the values of a smoothing time constant CI for each pixel. As used herein,
`
`L0 and CO shall be used to denote the pixel values (L) and time constants (C) stored in
`
`memory 16 from temporal processing unit 15, and LI and CI shall denote the pixel values (L)
`
`and time constants (C) respectively for such values retrieved from memory 16 for use by
`
`
`
`
`
`temporal processing unit 15. Temporal processing unit 15 generates a binary output signal
`
`DP for each pixel, which identifies whether the pixel has undergone significant variation, and
`
a digital signal CO, which represents the updated calculated value of time constant C.
`
Referring to Fig. 3, temporal processing unit 15 includes a first block 15a
`
`which receives the pixels PI of input video signal S. For each pixel PI, the temporal
`
processing unit retrieves from memory 16 a smoothed value LI of this pixel from the
`
`immediately preceding corresponding frame, which was calculated by temporal processing
`
`unit 15 during processing of the immediately prior frame and stored in memory 16 as LO.
`
Temporal processing unit 15 calculates the absolute value AB of the difference between each pixel value PI and LI for the same pixel position (for example a1.1 of l1.1 in TR1 and of l1.1 in TR2):

AB = |PI - LI|
`
`Temporal processing unit 15 is controlled by clock signal HP from clock 20 in
`
`order to maintain synchronization with the incoming pixel stream. Test block 15b of
`
`temporal processing unit 15 receives signal AB and a threshold value SE. Threshold SE may
`
`be constant, but preferably varies based upon the pixel value PI, and more preferably varies
`
`with the pixel value so as to form a gamma correction. Known means of varying SE to form
`
`a gamma correction is represented by the optional block 15e shown in dashed lines. Test
`
`block 15b compares, on a pixel—by-pixel basis, digital signals AB and SE in order to
`
`determine a binary signal DP. If AB exceeds threshold SE, which indicates that pixel value
`
PI has undergone significant variation as compared to the smoothed value LI of the same pixel in the prior frame, DP is set to "1" for the pixel under consideration. Otherwise, DP is
`
`set to "0" for such pixel.
`
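The DP test can be sketched in a few lines. This is an illustrative model only: a constant SE stands in for the gamma-varying threshold of optional block 15e, and the function name is an assumption.

```python
# Illustrative sketch of test block 15b: compare AB = |PI - LI| against a
# threshold SE and emit the binary signal DP. A constant SE stands in for the
# optional gamma-corrected threshold of block 15e.

def dp_test(pi: int, li: float, se: int) -> int:
    """Return DP = 1 if the pixel changed significantly, else 0."""
    ab = abs(pi - li)        # AB = |PI - LI|
    return 1 if ab > se else 0

print(dp_test(pi=120, li=118, se=10))  # small change -> 0
print(dp_test(pi=120, li=80,  se=10))  # large change -> 1
```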
`
`
`
`
`When DP = 1, the difference between the pixel value PI and smoothed value
`
`LI of the same pixel in the prior frame is considered too great, and temporal processing unit
`
`15 attempts to reduce this difference in subsequent frames by reducing the smoothing time
`
constant C for that pixel. Conversely, if DP = 0, temporal processing unit 15 attempts to increase this difference in subsequent frames by increasing the smoothing time constant C for that pixel. These adjustments to time constant C as a function of the value of DP are made by block 15c. If DP = 1, block 15c reduces the time constant by a unit value U so that the new value of the time constant CO equals the old value of the constant CI minus unit value U:

CO = CI - U

If DP = 0, block 15c increases the time constant by a unit value U so that the new value of the time constant CO equals the old value of the constant CI plus unit value U:

CO = CI + U
`
`Thus, for each pixel, block 15c receives the binary signal DP from test unit
`
`15b and time constant C1 from memory 16, adjusts CI up or down by unit value U, and
`
`generates a new time constant CO which is stored in memory 16 to replace time constant CI.
`
In a preferred embodiment, time constant C is in the form 2^p, where p is incremented or decremented by unit value U, which preferably equals 1, in block 15c. Thus, if DP = 1, block 15c subtracts one (for the case where U = 1) from p in the time constant 2^p, which becomes 2^(p-1). If DP = 0, block 15c adds one to p in time constant 2^p, which becomes 2^(p+1). The choice of a time constant of the form 2^p facilitates calculations and thus simplifies the structure of block 15c.
`
`Block 15c includes several tests to ensure proper operation of the system.
`
`First, CO must remain within defined limits. In a preferred embodiment, CO must not
`
`become negative (CO _>_- 0) and it must not exceed a limit N (CO 5 N), which is preferably
`
`
`
`
`
`seven.
`
In the instance in which CI and CO are in the form 2^p, the upper limit N is the
`
`maximum value for p.
`
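The exponent update of block 15c, with its limits, can be sketched as follows. Illustrative only: the function name is an assumption, and the clamp on p realizes the CO limits described above for the case U = 1, N = 7.

```python
# Illustrative sketch of block 15c: the time constant is held as the exponent
# p of CO = 2**p. DP = 1 shortens the constant, DP = 0 lengthens it, and p is
# clamped to [0, N] (N = 7 in the preferred embodiment). Names are assumptions.

N = 7  # preferred upper limit for p

def update_exponent(p: int, dp: int) -> int:
    """Return the new exponent po after the DP-driven adjustment."""
    p = p - 1 if dp == 1 else p + 1
    return max(0, min(N, p))   # keep CO within its defined limits

print(update_exponent(3, dp=1))  # significant change: 3 -> 2
print(update_exponent(7, dp=0))  # already at limit N: stays 7
print(update_exponent(0, dp=1))  # already at floor:   stays 0
```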
`The upper limit N may be constant, but is preferably variable. An optional
`
input unit 15f includes a register or memory that enables the user, or controller 42, to vary N.
`
`The consequence of increasing N is to increase the sensitivity of the system to detecting
`
`displacement of pixels, whereas reducing N improves detection of high speeds. N may be
`
made to depend on PI (N may vary on a pixel-by-pixel basis, if desired) in order to regulate the variation of LO as a function of the level of PI, i.e., Nij = f(PIij), the calculation of which
`
`is done in block 15f, which in this case would receive the value of PI from video camera 13.
`
`Finally, a calculation block 15d receives, for each pixel, the new time constant
`
`CO generated in block 15c, the pixel values PI of the incoming video signal S, and the
`
`smoothed pixel value LI of the pixel in the previous frame from memory 16. Calculation
`
`block 15d then calculates a new smoothed pixel value LO for the pixel as follows:
`
LO = LI + (PI - LI)/CO

If CO = 2^po, then

LO = LI + (PI - LI)/2^po

where "po" is the new value of p calculated in unit 15c, which replaces the previous value "pi" in memory 16.
`
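The complete per-pixel pass of temporal processing unit 15 can be sketched end to end. Illustrative only: a constant threshold SE again stands in for the optional gamma correction, and all names are assumptions.

```python
# Illustrative sketch of one pass of temporal processing unit 15 for a single
# pixel: compute DP, adjust the exponent p of the time constant CO = 2**p,
# then smooth with LO = LI + (PI - LI) / CO. SE is constant here; the patent
# prefers a threshold that varies with PI (gamma correction).

SE, U, N = 10, 1, 7   # threshold, unit step, upper limit for p (assumed SE)

def temporal_update(pi: int, li: float, p: int):
    """Return (dp, p_new, lo) for one pixel."""
    dp = 1 if abs(pi - li) > SE else 0
    p_new = max(0, min(N, p - U if dp else p + U))
    lo = li + (pi - li) / 2 ** p_new          # LO = LI + (PI - LI)/CO
    return dp, p_new, lo

dp, p_new, lo = temporal_update(pi=200, li=100, p=3)
print(dp, p_new, lo)   # 1 2 125.0  (big jump: DP=1, faster constant)
```

In the apparatus, LO and the new p replace LI and the old p in memory 16 for the next frame.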
`The purpose of the smoothing operation is to normalize variations in the value
`
of each pixel PI of the incoming video signal in order to reduce the variation differences. For each
`
`pixel of the frame, temporal processing unit 15 retrieves LI and CI from memory 16, and
`
`generates new values LO (new smoothed pixel value) and CO (new time constant) that are
`
stored in memory 16 to replace LI and CI respectively. As shown in Fig. 2, temporal
`
`
`
`
`
`processing unit 15 transmits the CO and DP values for each pixel to spatial processing unit 17
`
`through the delay unit 18.
`
The capacity of memory 16, assuming that there are R pixels in a frame and
`
`therefore 2R pixels per complete image, must be at least 2R(e+f) bits, where e is the number
`
`of bits required to store a single pixel value LI (preferably eight bits), and f is the number of
`bits required to store a single time constant CI (preferably 3 bits). If each video image is
`
`composed of a single frame (progressive image), it is sufficient to use R(e+f) bits rather than
`
`2R(e+f) bits.
`
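As a worked example of the 2R(e+f) bound: the frame size below is an assumption for illustration (the patent does not fix one); e and f are the preferred values from the text.

```python
# Worked example of the 2R(e+f) memory bound. The 384x288 frame size is an
# assumption for illustration; e and f are the preferred values from the text.

R = 384 * 288   # pixels per frame (assumed)
e, f = 8, 3     # bits per smoothed value LI, bits per time constant CI

interlaced_bits = 2 * R * (e + f)   # two interlaced frames per image
progressive_bits = R * (e + f)      # single-frame (progressive) image

print(interlaced_bits)   # 2433024
print(progressive_bits)  # 1216512
```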
`Spatial processing unit 17 is used to identify an area in relative movement in
`
`the images from camera 13 and to determine the speed and oriented direction of the
`
`movement. Spatial processing unit 17, in conjunction with delay unit 18, cooperates with a
`
`control unit 19 that is controlled by clock 20, which generates clock pulse HP at the pixel
`
frequency. Spatial processing unit 17 receives signals DPij and COij (where i and j correspond
`
`to the x and y coordinates of the pixel) from temporal processing unit 15 and processes these
`
`signals as discussed below. Whereas temporal processing unit 15 processes pixels within
`
`each frame, spatial processing unit 17 processes groupings of pixels within the frames.
`
`Fig. 5 diagrammatically shows the temporal processing of successive
`
corresponding frame sequences TR1, TR2, TR3 and the spatial processing in these frames
`
of a pixel PI with coordinates x, y, at times t1, t2, and t3. A plane in Fig. 5 corresponds to the
`
`spatial processing of a frame, whereas the superposition of frames corresponds to the
`
`temporal processing of successive frames.
`
Signals DPij and COij from temporal processing unit 15 are distributed by
`
`spatial processing unit 17 into a first matrix 21 containing a number of rows and columns
`
`much sma