United States Patent [19]
Bolle et al.

US005546475A

[11] Patent Number: 5,546,475
[45] Date of Patent: Aug. 13, 1996

[54] PRODUCE RECOGNITION SYSTEM

[75] Inventors: Rudolf M. Bolle, Bedford Hills; Jonathan H. Connell, Cortlandt Manor; Norman Haas, Mount Kisco, all of N.Y.; Rakesh Mohan, Stamford, Conn.; Gabriel Taubin, Hartsdale, N.Y.

[73] Assignee: International Business Machines Corporation, Armonk, N.Y.

[21] Appl. No.: 235,834

[22] Filed: Apr. 29, 1994

[51] Int. Cl.6 .......................... G06K 9/46; G06K 9/66
[52] U.S. Cl. .......................... 382/190; 382/110; 382/164; 382/165; 382/170; 382/173
[58] Field of Search .................. 382/110, 164, 165, 170, 173, 190, 199, 181

[56] References Cited

U.S. PATENT DOCUMENTS
3,770,111  11/1973  Greenwood et al. .............. 250/227
4,106,628   8/1978  Warkentin et al. .............. 209/74
4,515,275   5/1985  Mills et al. .................. 209/585
4,534,470   8/1985  Mills ......................... 209/558
4,574,393   3/1986  Blackwell et al. .............. 364/526
4,718,089   1/1988  Hayashi et al. ................ 382/191
4,735,323   5/1988  Okada et al. .................. 209/582
5,020,675   6/1991  Cowlin et al. ................. 209/538
5,060,290  10/1991  Kelly et al. .................. 382/110
5,085,325   2/1992  Jones et al. .................. 209/580
5,164,795  11/1992  Conway ........................ 356/407
5,253,302  10/1993  Massen ........................ 382/165
FOREIGN PATENT DOCUMENTS

3044268   2/1991  Japan .
5063968   3/1993  Japan .
OTHER PUBLICATIONS

M. J. Swain & D. H. Ballard, "Color Indexing," Int. Journal of Computer Vision, vol. 7, No. 1, pp. 11-32, 1991.
M. Miyahara & Y. Yoshida, "Mathematical Transform of (R,G,B) color data to Munsell (H,V,C) color data," SPIE vol. 1001 Visual Communications and Image Processing, 1988, pp. 650-657.
L. van Gool, P. Dewaele, & A. Oosterlinck, "Texture Analysis anno 1983," Computer Vision, Graphics, and Image Processing, vol. 29, 1985, pp. 336-357.
T. Pavlidis, "A Review of Algorithms for Shape Analysis," Computer Graphics and Image Processing, vol. 7, 1978, pp. 243-258.
S. Marshall, "Review of Shape Coding Techniques," Image and Vision Computing, vol. 7, No. 4, Nov. 1989, pp. 281-294.
S. Mersch, "Polarized Lighting for Machine Vision Applications," Proc. of RI/SME Third Annual Applied Machine Vision Conf., Schaumburg, Feb. 1984, pp. 40-54.
B. G. Batchelor, D. A. Hill & D. C. Hodgson, "Automated Visual Inspection," IFS (Publications) Ltd. UK, North-Holland (A div. of Elsevier Science Publishers BV), 1985, pp. 39-178.
Primary Examiner-Leo Boudreau
Assistant Examiner-Phuoc Tran
Attorney, Agent, or Firm-Louis J. Percello
[57] ABSTRACT

The present system and apparatus uses image processing to recognize objects within a scene. The system includes an illumination source for illuminating the scene. By controlling the illumination source, an image processing system can take a first digitized image of the scene with the object illuminated at a higher level and a second digitized image with the object illuminated at a lower level. Using an algorithm, the object(s) image is segmented from a background image of the scene by a comparison of the two digitized images taken. A processed image (that can be used to characterize features) of the object(s) is then compared to stored reference images. The object is recognized when a match occurs. The system can recognize objects independent of size and number and can be trained to recognize objects that it was not originally programmed to recognize.

32 Claims, 16 Drawing Sheets
[Front-page drawing: FIG. 1 — Camera 100, Light source 110, object 131, Algorithms 200, Weighing device 170, Interactive output device 160]

Petitioner LG Ex-1019, 0001
[Sheet 1 of 16 — FIG. 1: block diagram of one preferred embodiment — Camera 100, Light source 110, Framegrabber 142, Computer 140, Memory storage 144, Algorithms 200, Human decision making, Weighing device 170, Interactive output device 160, Training 162]
[Sheet 2 of 16 — FIG. 2: flow chart 200 — Imaging a target object 210; Segmenting the target object image 220; Computing one or more target object features 230; Characterizing the target object feature(s) 240; Normalizing the target object characterizations 250; Comparing the normalized target object characterization to a reference to recognize 260; Storage 270]
[Sheet 3 of 16 — FIG. 3a, FIG. 3b: first and second images of the scene (object 130/135, background 311) under the two illumination levels]
[Sheet 4 of 16 — FIG. 4: segmentation apparatus — Background 312, lights 403/405/407, Control 450, Frame grabber 142, Computer 140 with Algorithms 200/220, Output device 160]
[Sheet 5 of 16 — FIG. 5: segmentation flow chart — Acquire light Image 1 (510), Acquire dark Image 2 (520), Compare on pixel-to-pixel basis 530, yielding Object image 131 and Background image 311; "Pixel bright?" test 552 separating Translucent image 554 from Opaque image 556; Segmenting 220]
[Sheet 6 of 16 — FIG. 6: Segmenting 220, feature F1 (e.g., Hue) 230, Histogramming 640, resulting feature histogram 650 (R, G, B values on one axis, Frequency on the other)]
[Sheet 7 of 16 — FIG. 7: two image paths each through Segmenting 220, F1 230, Histogramming 640 (histograms 740, 745) and Normalization 750 (normalized histograms 760, 770)]
[Sheet 8 of 16 — FIG. 8: Comparing 840 (260) a target histogram 810 against reference histograms 831-837 (820) from Storage 270]
[Sheet 9 of 16 — FIG. 9: Training 910 flow chart — Image 920, Segmenting 220, F1 230, Histogramming 640, Normalization 750; if the Normalized Histogram is not Recognized and Meets storage criteria, Store 930]
[Sheet 10 of 16 — FIG. 10: Segmenting 220 feeding multiple features F1 230, F2 1010, ..., FN 1020, each through Histogramming 640 and Normalization 750 (1050); Comparing 840 against Memory storage 144; output device 160]
[Sheet 11 of 16 — FIG. 11: image 1120 (210), Segmenting 220, Texture computation 1140, Histogramming 1150, Normalization 1160, texture histogram 1170]
[Sheet 12 of 16 — FIG. 12: image 210, Segmenting 220, Boundary extraction 1210, Boundary shape computation 1220, Histogramming 1230, Length normalization 1235, shape histogram 1240. FIG. 13: Weighing device 170 connected to Computer 140]
[Sheet 13 of 16 — FIG. 14: image 405 in which the segmented object has two distinct regions (1410, 1430, 1450, 1455)]
[Sheet 14 of 16 — FIG. 15: human interface 160 presenting an ordered ranking (1520, 1530, 1540, 164) of the most likely identities of the imaged produce]
[Sheet 15 of 16 — FIG. 16: browsing interface 1600 — category keys 1610: Red 1612, Green 1613, Yellow 1614, Brown 1615, Round 1616, Straight 1617, Leafy 1618, Apples 1619, Citrus Fruits 1620, Peppers 1621, Potatoes 1622; icon row 1630: Red Del. Apple 1631, Gala Apple 1632, and further icons 1633-1641]
[Sheet 16 of 16 — FIG. 17: Weighing device 170, Computer 140, Memory storage 144, Price output 1710]
PRODUCE RECOGNITION SYSTEM

FIELD OF THE INVENTION

This invention relates to the field of recognizing (i.e., identifying, classifying, grading, and verifying) objects using computerized optical scanning devices. More specifically, the invention is a trainable system and method relating to recognizing bulk items using image processing.
BACKGROUND OF THE INVENTION

Image processing systems exist in the prior art for recognizing objects. Often these systems use histograms to perform this recognition. One common histogram method either develops a gray scale histogram or a color histogram from a (color) image containing an object. These histograms are then compared directly to histograms of reference images. Alternatively, features of the histograms are extracted and compared to features extracted from histograms of images containing reference objects.

The reference histograms or features of these histograms are typically stored in computer memory. The prior art often performs these methods to verify that the target object in the image is indeed the object that is expected, and, possibly, to grade/classify the object according to the quality of its appearance relative to the reference histogram. An alternative purpose could be to identify the target object by comparing the target image object histogram to the histograms of a number of reference images of objects.
In this description, identifying is defined as determining, given a set of reference objects or classes, which reference object the target object is or which reference class the target object belongs to. Classifying or grading is defined as determining that the target object is known to be a certain object and/or that the quality of the object is some quantitative value. Here, one of the classes can be a "reject" class, meaning that either the quality of the object is too poor, or the object is not a member of the known class. Verifying, on the other hand, is defined as determining that the target is known to be a certain object or class and simply verifying this to be true or false. Recognizing is defined as identifying, classifying, grading, and/or verifying.

Bulk items include any item that is sold in bulk in supermarkets, grocery stores, retail stores or hardware stores. Examples include produce (fruits and vegetables), sugar, coffee beans, candy, nails, nuts, bolts, general hardware, parts, and packaged goods.
In image processing, a digital image is an analog image from a camera that is converted to a discrete representation by dividing the picture into a fixed number of locations called picture elements and quantizing the value of the image at those picture elements into a fixed number of values. The resulting digital image can be processed by a computer algorithm to develop other images. These images can be stored in memory and/or used to determine information about the imaged object. A pixel is a picture element of a digital image.

Image processing and computer vision is the processing by a computer of a digital image to modify the image or to obtain from the image properties of the imaged objects such as object identity, location, etc.

A scene contains one or more objects that are of interest and the surroundings which also get imaged along with the objects. These surroundings are called the background. The
background is usually further away from the camera than the object(s) of interest.

Segmenting (also called figure/ground separation) is separating a scene image into separate object and background images. Segmenting refers to identifying those image pixels that are contained in the image of the object versus those that belong to the image of the background. The segmented object image is then the collection of pixels that comprises the object in the original image of the complete scene. The area of a segmented object image is the number of pixels in the object image.
Illumination is the light that illuminates the scene and objects in it. Illumination of the whole scene directly determines the illumination of individual objects in the scene and therefore the reflected light of the objects received by imaging apparatus such as a video camera.

Ambient illumination is illumination from any light source except the special lights used specifically for imaging an object. For example, ambient illumination is the illumination due to light sources occurring in the environment, such as the sun outdoors and room lights indoors.

Glare or specular reflection is the high amount of light reflected off a shiny (specular, exhibiting mirror-like, possibly locally, properties) object. The color of the glare is mostly that of the illuminating light (as opposed to the natural color of the object).
A feature of an image is defined as any property of the image which can be computationally extracted. Features typically have numerical values that can lie in a certain range, say, R0-R1. In prior art, histograms are computed over a whole image or windows (sub-images) in an image. A histogram of a feature of an image is a numerical representation of the distribution of feature values over the image or window. A histogram of a feature is developed by dividing the feature range, R0-R1, into M intervals (bins) and computing the feature for each image pixel. Simply counting how many image or window pixels fall in each bin gives the feature histogram.
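The binning procedure just described can be sketched as follows (the function and variable names are illustrative, not from the patent):

```python
def feature_histogram(values, r0, r1, m):
    """Histogram a feature over an image: divide the feature range
    [r0, r1] into m bins and count how many pixel feature values
    fall in each bin."""
    hist = [0] * m
    width = (r1 - r0) / m
    for v in values:
        # Clamp to the feature range, then locate the bin;
        # the top edge of the range falls into the last bin.
        b = int((min(max(v, r0), r1) - r0) / width)
        hist[min(b, m - 1)] += 1
    return hist
```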
Image features include, but are not limited to, color and texture. Color is a two-dimensional property, for example Hue and Saturation or other color descriptions (explained below) of a pixel, but often disguised as a three-dimensional property, i.e., the amount of Red, Green, and Blue (RGB). Various color descriptions are used in the prior art, including (1) the RGB space; (2) the opponent color space; (3) the Munsell (H,V,C) color space; and, (4) the Hue, Saturation, and Intensity (H,S,I) space. For the latter, similar to the Munsell space, Hue refers to the color of the pixel (from red, to green, to blue), Saturation is the "deepness" of the color (e.g., from greenish to deep saturated green), and Intensity is the brightness, or what the pixel would look like in a gray scale image.
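The patent does not prescribe formulas for the (H,S,I) description; one widely used textbook conversion, sketched here with R, G, B in [0, 1], is:

```python
import math

def rgb_to_hsi(r, g, b):
    """Classic RGB -> (Hue, Saturation, Intensity) conversion
    (a textbook formulation, shown only as an illustration)."""
    i = (r + g + b) / 3.0                    # Intensity: mean brightness
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i  # Saturation: "deepness"
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                              # gray pixel: hue undefined
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:                            # lower half of the color circle
            h = 360.0 - h
    return h, s, i
```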
Texture, on the other hand, is a visual image feature that is much more difficult to capture computationally and is a feature that cannot be attributed to a single pixel but is attributed to a patch of image data. The texture of an image patch is a description of the spatial brightness variation in that patch. This can be a repetitive pattern (of texels), as the pattern on an artichoke or pineapple, or can be more random, like the pattern of the leaves of parsley. These are called structural textures and statistical textures, respectively. There exists a wide range of textures, ranging from the purely deterministic arrangement of a texel on some tessellation of the two-dimensional plane, to "salt and pepper" white noise. Research on image texture has been going on for over thirty years, and computational measures have
the object is in the image and not obscured by other objects), (3) there is little difference in illumination of the scene of which the images (reference and target images) are taken from which the reference object histograms and target object histograms are developed, and (4) the object can be easily segmented out from the background or there is relatively little distraction in the background. Under these conditions, comparing a target object image histogram with reference object image histograms has been achieved in numerous ways in the prior art.

STATEMENT OF PROBLEMS WITH THE PRIOR ART
been developed that are one-dimensional or higher-dimensional. However, in prior art, histograms of texture features are not known to the inventors.

Shape of some boundary in an image is a feature of multiple boundary pixels. Boundary shape refers to local features, such as curvature. An apple will have a roughly constant curvature boundary, while a cucumber has a piece of low curvature, a piece of low negative curvature, and two pieces of high curvature (the end points). Other boundary shape measures can be used.

Some prior art uses color histograms to identify objects. Given an (R,G,B) color image of the target object, the color representation used for the histograms is the opponent colors: rg=R-G, by=2*B-R-G, and wb=R+G+B. The wb axis is divided into 8 sections, while the rg and by axes are divided into 16 sections. This results in a three-dimensional histogram of 2048 bins. This system matches target image histograms to 66 pre-stored reference image histograms. The set of 66 pre-stored reference image histograms is fixed, and therefore it is not a trainable system, i.e., unrecognized target images in one instance will not be recognized in a later instance.
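The opponent-color binning described above (8 wb sections x 16 rg sections x 16 by sections = 2048 bins) might be sketched like this; the exact bin-edge conventions are an assumption, since the text does not give them:

```python
def opponent_histogram(pixels):
    """Build the 2048-bin opponent-color histogram described in the
    text: rg = R-G (16 bins), by = 2B-R-G (16 bins), wb = R+G+B
    (8 bins), for 8-bit (R, G, B) pixel tuples."""
    def bin_of(v, lo, hi, n):
        # Map v in [lo, hi] to a bin index in [0, n-1].
        return min(int((v - lo) * n / (hi - lo)), n - 1)

    hist = [0] * (16 * 16 * 8)
    for r, g, b in pixels:
        i = bin_of(r - g, -255, 255, 16)          # rg axis
        j = bin_of(2 * b - r - g, -510, 510, 16)  # by axis
        k = bin_of(r + g + b, 0, 765, 8)          # wb axis
        hist[(i * 16 + j) * 8 + k] += 1
    return hist
```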
U.S. Pat. No. 5,060,290 to Kelly and Klein discloses the grading of almonds based on gray scale histograms. Falling almonds are furnished with uniform light and pass by a linear camera. A gray histogram, quantized into 16 levels, of the image of the almond is developed. The histogram is normalized by dividing all bin counts by 1700, where 1700 pixels is the size of the largest almond expected. Five features are extracted from this histogram: (1) gray value of the peak; (2) range of the histogram; (3) number of pixels at peak; (4) number of pixels in bin to the right of peak; and, (5) number of pixels in bin 4. Through lookup tables, an eight digit code is developed and if this code is in a library, the almond is accepted. The system is not trainable. The appearances of almonds of acceptable quality are hard-coded in the algorithm and the system cannot be trained to grade almonds differently by showing new instances of almonds.
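The five-feature extraction described for the '290 patent can be sketched as follows (indexing conventions, e.g. which bin counts as "bin 4", are assumptions made for illustration):

```python
def almond_features(gray_hist):
    """Five features from a 16-level gray histogram whose bin counts
    are first normalized by 1700 (the largest expected almond size,
    in pixels). Assumes at least one nonzero bin."""
    norm = [c / 1700.0 for c in gray_hist]
    nonzero = [i for i, c in enumerate(norm) if c > 0]
    peak = max(range(16), key=lambda i: norm[i])
    return (peak,                                  # (1) gray value of the peak
            nonzero[-1] - nonzero[0],              # (2) range of the histogram
            norm[peak],                            # (3) pixels at the peak
            norm[peak + 1] if peak < 15 else 0.0,  # (4) pixels right of the peak
            norm[4])                               # (5) pixels in bin 4
```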
U.S. Pat. No. 4,735,323 to Okada et al. discloses a mechanism for aligning and transporting an object to be inspected. The system more specifically relates to grading of oranges. The transported oranges are illuminated with a light within a predetermined wavelength range. The light reflected is received and converted into an electronic signal. A level histogram divided into 64 bins is developed, where
Some prior art matching systems and methods claim to be robust to distractions in the background, variation in viewpoint, occlusion, and varying image resolution. However, in some of this prior art, lighting conditions are not controlled. The systems fail when the color of the illumination for obtaining the reference object histograms is different from the color of the illumination when obtaining the target object image histogram. The RGB values of an image point in an image are very dependent on the color of the illumination (even though humans have little difficulty naming the color given the whole image). Consequently the color histogram of an image can change dramatically when the color of the illumination (light frequency distribution) changes. Furthermore, in these prior art systems the objects are not segmented from the background, and, therefore, the histograms of the images are not area normalized. This means the objects in target images have to be the same size as the objects in the reference images for accurate recognition, because variations of the object size with respect to the pixel size can significantly change the color histogram. It also means that the parts of the image that correspond to the background have to be achromatic (e.g., black), or, at least, of a coloring not present in the object, or they will significantly perturb the derived image color histogram.
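Area normalization, which the criticized systems lack, is simple once the object has been segmented: divide every bin count by the segmented object's area (its pixel count), so the histogram becomes independent of object size. A minimal sketch:

```python
def area_normalize(hist):
    """Normalize a feature histogram of a segmented object by the
    object's area (the total pixel count across all bins)."""
    area = sum(hist)
    return [c / area for c in hist] if area else hist
```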
Prior art such as that disclosed in U.S. Pat. No. 5,060,290 fails if the size of the almonds in the image is drastically different than expected. Again, this is because the system does not explicitly separate the object from its background. This system is used only for grading almonds: it cannot distinguish an almond from (say) a peanut.
Similarly, prior art such as that disclosed in U.S. Pat. No. 4,735,323 only recognizes different grades of oranges. A reddish grapefruit might very well be deemed a very large orange. The system is not designed to operate with more than one class of fruit at a time and thus can make do with weak features such as the ratio of green to white reflectivity.
In summary, much of the prior art in the agricultural arena, typified by U.S. Pat. Nos. 4,735,323 and 5,060,290, is concerned with classifying/grading produce items. This prior art can only classify/identify objects/products/produce if they pass a scanner one object at a time. It is also required that the range of sizes (from smallest to largest possible object size) of the object/product/produce be known beforehand. These systems will fail if more than one item is scanned at the same time, or, to be more precise, if more than one object appears at a scanning position at the same time.

Further, the prior art often requires a carefully engineered and expensive mechanical environment with carefully controlled lighting conditions where the items are transported to predefined spatial locations. These apparatuses are designed specifically for one type of shaped object (round, oval, etc.) and are impossible or, at least, not easily modified to deal
Level=(the intensity of totally reflected light)/(the intensity of green light reflected by an orange)

The median, N, of this histogram is determined and is considered as representing the color of an orange. Based on N, the orange coloring can be classified into four grades of "excellent," "good," "fair," and "poor," or can be graded finer. The system is not trainable, in that the appearance of the different grades of oranges is hard-coded into the algorithms.
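The median computation over the 64-bin Level histogram can be sketched as below; the subsequent grading would compare N against hard-coded cut-offs, which is exactly what makes such a system untrainable:

```python
def median_level(level_hist):
    """Median bin N of a 64-bin Level histogram, where
    Level = (intensity of totally reflected light) /
            (intensity of green light reflected by the orange)."""
    total = sum(level_hist)
    running = 0
    for n, count in enumerate(level_hist):
        running += count
        if 2 * running >= total:   # first bin reaching half the mass
            return n
    return len(level_hist) - 1
```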
The use of gray scale and color histograms is a very effective method for grading or verifying objects in an image. The main reason for this is that a histogram is a very compact representation of a reference object that does not depend on the location or orientation of the object in the image.

However, for image histogram-based recognition to work, certain conditions have to be satisfied. It is required that: (1) the size of the object in the image is roughly known, (2) there is relatively little occlusion of the object (i.e., most of
level. Using an algorithm, the object(s) image is novelly segmented from a background image of the scene by a comparison of the two digitized images taken. A processed image (that can be used to characterize features) of the object(s) is then compared to stored reference images. The object is recognized when a match occurs.
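A minimal sketch of the two-image comparison, with images as row-major lists of gray values and an assumed brightness threshold `delta` (the patent's actual pixel test is part of the detailed specification):

```python
def segment(light_img, dark_img, delta=30):
    """Segment the object from the background by comparing, pixel by
    pixel, a digitized image taken under high illumination with one
    taken under low illumination: nearby object pixels brighten
    significantly, distant background pixels barely change."""
    return [[1 if l - d > delta else 0
             for l, d in zip(light_row, dark_row)]
            for light_row, dark_row in zip(light_img, dark_img)]
```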
Processed images of an unrecognized object can be labeled with the identity of the object and stored in memory, based on certain criteria, so that the unrecognized object will be recognized when it is imaged in the future. In this novel way, the invention is taught to recognize previously unknown objects.

Recognition of the object is independent of the size or number of the objects because the object image is novelly normalized before it is compared to the reference images.

Optionally, user interfaces and apparatus that determine other features of the object (like weight) can be used with the system.
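Once target and reference characterizations are both normalized, the comparison step can be sketched as a nearest-neighbor search; the L1 distance used here is an illustrative choice, not the measure prescribed by the patent:

```python
def recognize(target, references):
    """Match a normalized target histogram against stored reference
    histograms and return the label of the closest one."""
    def l1_distance(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(references, key=lambda name: l1_distance(target, references[name]))
```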
BRIEF DESCRIPTION OF THE DRAWINGS
with other object types. The shape of the objects inspires the means of object transportation, and it is impossible or difficult for the transport means to transport different object types. This is especially true for oddly shaped objects like broccoli or ginger. This, and the use of features that are specifically selected for the particular objects, does not allow for the prior art to distinguish between types of produce.

Additionally, none of the prior art are trainable systems where, through human or computer intervention, new items are learned or old items discarded. That is, the systems cannot be taught to recognize objects that were not originally programmed in the system or to stop recognizing objects that were originally programmed in the system.
One area where the prior art has failed to be effective is in produce check-out. The current means and methods for checking out produce pose problems. Affixing PLU (price lookup) labels to fresh produce is disliked by customers and produce retailers/wholesalers. Pre-packaged produce items are disliked because of the increased cost of packaging, disposal (solid waste), and the inability to inspect produce quality in pre-packaged form.

The process of produce check-out has not changed much since the first appearance of grocery stores. At the point of sale (POS), the cashier has to recognize the produce item, weigh or count the item(s), and determine the price. Currently, in most stores the latter is achieved by manually entering the non-mnemonic PLU code that is associated with the produce. These codes are available at the POS in the form of a printed list or in a booklet with pictures.
Multiple problems arise from this process of produce check-out:

(1) Losses incurred by the store (shrinkage). First, a cashier may inadvertently enter the wrong code number. If this is to the advantage of the customer, the customer will be less motivated to bring this to the attention of the cashier. Second, for friends and relatives, the cashier may purposely enter the code of a lower-priced produce item (sweethearting).

(2) Produce check-out tends to slow down the check-out process because of produce identification problems.

(3) Every new cashier has to be trained on produce names, produce appearances, and PLU codes.
FIG. 1 is a block diagram of one preferred embodiment of the present system.
FIG. 2 is a flow chart showing one preferred embodiment of the present method for recognizing objects.
FIG. 3 illustrates segmenting a scene into an object image and a background image.
FIG. 4 is a block diagram of a preferred embodiment of apparatus for segmenting images and recognizing objects in images.
FIG. 5 is a flow chart of a preferred method for segmenting target object images.
FIG. 6 is a flow chart showing a preferred method of characterizing reference or target object feature(s).
FIG. 7 is a flow chart showing a preferred method for (area/length) normalization of object feature(s) characterization.
FIG. 8 illustrates the comparison of an area/length normalized target object characterization to one or more area normalized reference object characterizations.
FIG. 9 is a flow chart showing a preferred (algorithmic) method of training the present apparatus to recognize new images.
FIG. 10 is a block diagram showing multiple features of an object being extracted.
FIG. 11 is a flow chart showing the histogramming and normalizing of the feature of texture.
FIG. 12 is a flow chart showing the histogramming and normalizing of the feature of boundary shape.
FIG. 13 is a block diagram showing a weighing device.
FIG. 14 shows an image where the segmented object has two distinct regions determined by segmenting the object image and where these regions are incorporated in recognition algorithms.
FIG. 15 shows a human interface to the present apparatus which presents an ordered ranking of the most likely identities of the produce being imaged.
FIG. 16 shows a means for human determination of the identity of object(s) by browsing through subset(s) of all the previously installed stored icon images, and the means by which the subsets are selected.
FIG. 17 is a preferred embodiment of the present invention using object weight to price object(s).
OBJECTS OF THE INVENTION

An object of this invention is an improved apparatus and method for recognizing objects such as produce.

An object of this invention is an improved trainable apparatus and method for recognizing objects such as produce.

Another object of this invention is an improved apparatus and method for recognizing and pricing objects such as produce at the point of sale or in the produce department.

A further object of this invention is an improved means and method of user interface for automated produce identification.
SUMMARY OF THE INVENTION

The present invention is a system and apparatus that uses image processing to recognize objects within a scene. The system includes an illumination source for illuminating the scene. By controlling the illumination source, an image processing system can take a first digitized image of the scene with the object illuminated at a higher level and a second digitized image with the object illuminated at a lower
DETAILED DESCRIPTION OF THE INVENTION
The apparatus 100 shown in FIG. 1 is one preferred embodiment of the present invention that uses image processing to automatically recognize one or more objects 131.

A light source 110 with a light frequency distribution that is constant over time illuminates the object 131. The light is non-monochromatic and may include infra-red or ultra-violet frequencies. Light being non-monochromatic and of a constant frequency distribution ensures that the color appearance of the objects 131 does not change due to light variations between different images taken and that stored images of a given object can be matched to images taken of that object at a later time. The preferred lights are flash tubes Mouser U-4425, or two GE cool-white fluorescent bulbs (22 Watts and 30 Watts), GE FE8T9-CW and GE FC12T9-CW, respectively. Such light sources are well known.
A video input device 120 is used to convert the reflected light rays into an image. Typically this image is two dimensional. A preferred video input device is a color camera, but any device that converts light rays into an image can be used. These cameras would include CCD cameras and CID cameras. The color camera output can be RGB, HSI, YC, or any other representation of color. One preferred camera is a Sony card-camera CCB-C35YC or Sony XC-999. Video input devices like this 120 are well known.
Color images are the preferred sensory modality in this invention. However, other sensor modalities are possible, e.g., infra-red and ultra-violet images, smell/odor (measurable, e.g., with a mass spectrometer), thermal decay properties, ultra-sound and magnetic resonance images, DNA, fundamental
