US005546475A

United States Patent [19]
Bolle et al.

[11] Patent Number: 5,546,475
[45] Date of Patent: Aug. 13, 1996
[54] PRODUCE RECOGNITION SYSTEM

[75] Inventors: Rudolf M. Bolle, Bedford Hills; Jonathan H. Connell, Cortlandt-Manor; Norman Haas, Mount Kisco, all of N.Y.; Rakesh Mohan, Stamford, Conn.; Gabriel Taubin, Hartsdale, N.Y.

[73] Assignee: International Business Machines Corporation, Armonk, N.Y.

[21] Appl. No.: 235,834

[22] Filed: Apr. 29, 1994

[51] Int. Cl.6 .............. G06K 9/46; G06K 9/66

[52] U.S. Cl. .............. 382/190; 382/110; 382/164; 382/165; 382/170; 382/173

[58] Field of Search .............. 382/110, 164, 165, 170, 173, 190, 199, 181

[56] References Cited

U.S. PATENT DOCUMENTS
3,770,111   11/1973   Greenwood et al. .......... 250/227
4,106,628    8/1978   Warkentin et al. .......... 209/74
4,515,275    5/1985   Mills et al. .............. 209/585
4,534,470    8/1985   Mills ..................... 209/558
4,574,393    3/1986   Blackwell et al. .......... 364/526
4,718,089    1/1988   Hayashi et al. ............ 382/191
4,735,323    5/1988   Okada et al. .............. 209/582
5,020,675    6/1991   Cowlin et al. ............. 209/538
5,060,290   10/1991   Kelly et al. .............. 382/110
5,085,325    2/1992   Jones et al. .............. 209/580
5,164,795   11/1992   Conway .................... 356/407
5,253,302   10/1993   Massen .................... 382/165
FOREIGN PATENT DOCUMENTS

3044268   2/1991   Japan
5063968   3/1993   Japan
OTHER PUBLICATIONS

M. J. Swain & D. H. Ballard, "Color Indexing," Int. Journal of Computer Vision, vol. 7, No. 1, pp. 11-32, 1991.

M. Miyahara & Y. Yoshida, "Mathematical Transform of (R,G,B) Color Data to Munsell (H,V,C) Color Data," SPIE vol. 1001, Visual Communications and Image Processing, 1988, pp. 650-657.

L. van Gool, P. Dewaele & A. Oosterlinck, "Texture Analysis Anno 1983," Computer Vision, Graphics, and Image Processing, vol. 29, 1985, pp. 336-357.

T. Pavlidis, "A Review of Algorithms for Shape Analysis," Computer Graphics and Image Processing, vol. 7, 1978, pp. 243-258.

S. Marshall, "Review of Shape Coding Techniques," Image and Vision Computing, vol. 7, No. 4, Nov. 1989, pp. 281-294.

S. Mersch, "Polarized Lighting for Machine Vision Applications," Proc. of RI/SME Third Annual Applied Machine Vision Conf., Feb. 1984, Schaumburg, pp. 40-54.

B. G. Batchelor, D. A. Hill & D. C. Hodgson, "Automated Visual Inspection," IFS (Publications) Ltd., UK / North-Holland (a div. of Elsevier Science Publishers BV), 1985, pp. 39-178.
Primary Examiner—Leo Boudreau
Assistant Examiner—Phuoc Tran
Attorney, Agent, or Firm—Louis J. Percello
[57] ABSTRACT

The present system and apparatus uses image processing to recognize objects within a scene. The system includes an illumination source for illuminating the scene. By controlling the illumination source, an image processing system can take a first digitized image of the scene with the object illuminated at a higher level and a second digitized image with the object illuminated at a lower level. Using an algorithm, the object(s) image is segmented from a background image of the scene by a comparison of the two digitized images taken. A processed image (that can be used to characterize features) of the object(s) is then compared to stored reference images. The object is recognized when a match occurs. The system can recognize objects independent of size and number and can be trained to recognize objects that it was not originally programmed to recognize.

32 Claims, 16 Drawing Sheets
[Drawing sheets 1-16: only the figure numbers and reference labels are recoverable from the scanned images.]

FIG. 1 (Sheet 1): block diagram of the system 100. Labels: light source, camera, frame grabber 142, computer 140, memory storage 144, algorithms 200, interactive output device 160, training 162, human decision making, weighing device 170.

FIG. 2 (Sheet 2): flow chart of the recognition method. Labels: imaging a target object 210; segmenting the target object image 220; computing one or more target object features 230; characterizing the target object feature(s); normalizing the target object characterizations 250; storage criteria 255; comparing the normalized target object characterization to a reference to recognize.

FIGS. 3a and 3b (Sheet 3): first and second images of a scene.

FIG. 4 (Sheet 4): segmentation apparatus 400. Labels: background 312, computer 140, output device, segmenting 220, algorithm 200.

FIG. 5 (Sheet 5): segmenting flow chart 220. Labels: acquire light image 1 (510); acquire dark image 2 (520); compare on a pixel basis (530); pixel brighter? (540); background image 311; translucent image and opaque image.

FIG. 6 (Sheet 6): characterizing a feature. Labels: segmenting 220; computing feature F1 (e.g., Hue) from R, G, B (230); histogramming 640; histogram 650.

FIG. 7 (Sheet 7): histogramming 640 followed by normalization 750; outputs 760 and 770.

FIG. 8 (Sheet 8): comparing 840 a target characterization (260, 270, 810) with reference characterizations (820, 831-837).

FIG. 9 (Sheet 9): training 910. Labels: image 920, segmenting 220, feature computation 230, histogramming 640, normalization 750, storage criteria 255, store 930.

FIG. 10 (Sheet 10): multiple features of an object being extracted.

FIG. 11 (Sheet 11): texture feature. Labels: imaging 210, segmenting 220, texture computation 1140, histogramming 1150, normalization 1160, output 1170.

FIG. 12 (Sheet 12): boundary shape feature. Labels: imaging 210, segmenting 220, boundary extraction 1210, boundary shape computation 1220, histogramming 1230, length normalization 1235, output 1240; a weighing device and the computer 140 are also shown.

FIG. 14 (Sheet 13): segmented object image with distinct regions. Labels: 1405, 1430, 1450, 1455.

FIG. 15 (Sheet 14): human interface.

FIG. 16 (Sheet 15): browsing interface 1600. Labels: category keys 1610 (Red 1612, Green 1613, Yellow 1614, Brown 1615, Round 1616, Straight 1617, Leafy 1618, Apples 1619, Peppers 1621, Potatoes 1622) and an icon list (RED DEL APPLE, GALA APPLE; 1632, 1633, 1634).

FIG. 17 (Sheet 16): weighing device and storage.
PRODUCE RECOGNITION SYSTEM

FIELD OF THE INVENTION

This invention relates to the field of recognizing (i.e., identifying, classifying, grading, and verifying) objects using computerized optical scanning devices. More specifically, the invention is a trainable system and method relating to recognizing bulk items using image processing.

BACKGROUND OF THE INVENTION

Image processing systems exist in the prior art for recognizing objects. Often these systems use histograms to perform this recognition. One common histogram method either develops a gray scale histogram or a color histogram from a (color) image containing an object. These histograms are then compared directly to histograms of reference images. Alternatively, features of the histograms are extracted and compared to features extracted from histograms of images containing reference objects.

The reference histograms or features of these histograms are typically stored in computer memory. The prior art often performs these methods to verify that the target object in the image is indeed the object that is expected, and, possibly, to grade/classify the object according to the quality of its appearance relative to the reference histogram. An alternative purpose could be to identify the target object by comparing the target image object histogram to the histograms of a number of reference images of objects.

In this description, identifying is defined as determining, given a set of reference objects or classes, which reference object the target object is or which reference class the target object belongs to. Classifying or grading is defined as determining that the target object is known to be a certain object and/or that the quality of the object is some quantitative value. Here, one of the classes can be a "reject" class, meaning that either the quality of the object is too poor, or the object is not a member of the known class. Verifying, on the other hand, is defined as determining that the target is known to be a certain object or class and simply verifying this to be true or false. Recognizing is defined as identifying, classifying, grading, and/or verifying.

Bulk items include any item that is sold in bulk in supermarkets, grocery stores, retail stores or hardware stores. Examples include produce (fruits and vegetables), sugar, coffee beans, candy, nails, nuts, bolts, general hardware, parts, and package goods.

In image processing, a digital image is an analog image from a camera that is converted to a discrete representation by dividing the picture into a fixed number of locations called picture elements and quantizing the value of the image at those picture elements into a fixed number of values. The resulting digital image can be processed by a computer algorithm to develop other images. These images can be stored in memory and/or used to determine information about the imaged object. A pixel is a picture element of a digital image.

Image processing and computer vision is the processing by a computer of a digital image to modify the image or to obtain from the image properties of the imaged objects such as object identity, location, etc.

A scene contains one or more objects that are of interest and the surroundings which also get imaged along with the objects. These surroundings are called the background. The background is usually further away from the camera than the object(s) of interest.

Segmenting (also called figure/ground separation) is separating a scene image into separate object and background images. Segmenting refers to identifying those image pixels that are contained in the image of the object versus those that belong to the image of the background. The segmented object image is then the collection of pixels that comprises the object in the original image of the complete scene. The area of a segmented object image is the number of pixels in the object image.

Illumination is the light that illuminates the scene and objects in it. Illumination of the whole scene directly determines the illumination of individual objects in the scene and therefore the reflected light of the objects received by imaging apparatus such as a video camera.

Ambient illumination is illumination from any light source except the special lights used specifically for imaging an object. For example, ambient illumination is the illumination due to light sources occurring in the environment, such as the sun outdoors and room lights indoors.

Glare or specular reflection is the high amount of light reflected off a shiny (specular, exhibiting mirror-like, possibly locally, properties) object. The color of the glare is mostly that of the illuminating light (as opposed to the natural color of the object).

A feature of an image is defined as any property of the image which can be computationally extracted. Features typically have numerical values that can lie in a certain range, say, R0-R1. In prior art, histograms are computed over a whole image or windows (sub-images) in an image. A histogram of a feature of an image is a numerical representation of the distribution of feature values over the image or window. A histogram of a feature is developed by dividing the feature range, R0-R1, into M intervals (bins) and computing the feature for each image pixel. Simply counting how many image or window pixels fall in each bin gives the feature histogram.
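The bin-counting procedure described above can be written very compactly. The sketch below is only a minimal illustration, not code from the patent; the feature array, the range R0-R1, and the bin count M are hypothetical inputs.

```python
import numpy as np

def feature_histogram(feature_values, r0, r1, m):
    """Histogram a per-pixel feature over M bins spanning the range [r0, r1].

    feature_values: array of the feature computed at each pixel of the
    image (or window).  Returns an array of M bin counts.
    """
    counts = np.zeros(m, dtype=int)
    # Map each feature value to a bin index and count it.
    for value in np.ravel(feature_values):
        index = int((value - r0) / (r1 - r0) * m)
        index = min(max(index, 0), m - 1)   # clamp values at the range ends
        counts[index] += 1
    return counts

# Example: histogram a hypothetical 8-bit per-pixel feature into 16 bins.
# hist = feature_histogram(hue_image, 0, 255, 16)
```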
Image features include, but are not limited to, color and texture. Color is a two-dimensional property, for example Hue and Saturation or other color descriptions (explained below) of a pixel, but often disguised as a three-dimensional property, i.e., the amount of Red, Green, and Blue (RGB). Various color descriptions are used in the prior art, including (1) the RGB space; (2) the opponent color space; (3) the Munsell (H,V,C) color space; and, (4) the Hue, Saturation, and Intensity (H,S,I) space. For the latter, similar to the Munsell space, Hue refers to the color of the pixel (from red, to green, to blue), Saturation is the "deepness" of the color (e.g., from greenish to deep saturated green), and Intensity is the brightness, or what the pixel would look like in a gray scale image.
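As an aside, a rough per-pixel hue/saturation/brightness decomposition in the spirit of the H,S,I description above can be obtained with standard conversions. The sketch below uses Python's built-in HSV conversion as a stand-in; HSV is related to, but not identical to, the H,S,I and Munsell spaces the patent lists, and the pixel values shown are hypothetical.

```python
import colorsys

def pixel_hsv(r, g, b):
    """Convert one RGB pixel (0-255 per channel) to hue, saturation, value.

    Hue is returned in degrees (0-360); saturation and value are in 0-1.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v

# Example: a strongly saturated green pixel has a hue near 120 degrees.
# hue, sat, val = pixel_hsv(30, 200, 40)
```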
Texture, on the other hand, is a visual image feature that is much more difficult to capture computationally and is a feature that cannot be attributed to a single pixel but is attributed to a patch of image data. The texture of an image patch is a description of the spatial brightness variation in that patch. This can be a repetitive pattern (of texels), as the pattern on an artichoke or pineapple, or can be more random, like the pattern of the leaves of parsley. These are called structural textures and statistical textures, respectively. There exists a wide range of textures, ranging from the purely deterministic arrangement of a texel on some tesselation of the two-dimensional plane, to "salt and pepper" white noise. Research on image texture has been going on for over thirty years, and computational measures have been developed that are one-dimensional or higher-dimensional. However, in prior art, histograms of texture features are not known to the inventors.

Shape of some boundary in an image is a feature of multiple boundary pixels. Boundary shape refers to local features, such as curvature. An apple will have a roughly constant curvature boundary, while a cucumber has a piece of low curvature, a piece of low negative curvature, and two pieces of high curvature (the end points). Other boundary shape measures can be used.

STATEMENT OF PROBLEMS WITH THE PRIOR ART

Some prior art uses color histograms to identify objects. Given an (R,G,B) color image of the target object, the color representation used for the histograms are the opponent colors: rg = R - G, by = 2*B - R - G, and wb = R + G + B. The wb axis is divided into 8 sections, while the rg and by axes are divided into 16 sections. This results in a three-dimensional histogram of 2048 bins. This system matches target image histograms to 66 pre-stored reference image histograms. The set of 66 pre-stored reference image histograms is fixed, and therefore it is not a trainable system, i.e., unrecognized target images in one instance will not be recognized in a later instance.
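A minimal sketch of this kind of opponent-color histogramming and matching is given below. It is an illustration of the technique the paragraph describes, not code from the patent or from the cited prior art; the image layout, the 8 x 16 x 16 = 2048 bin arrangement, and the intersection-style match score are assumptions chosen to mirror the description.

```python
import numpy as np

def opponent_color_histogram(rgb_image):
    """Build a 2048-bin (8 x 16 x 16) histogram over the wb, rg, by opponent colors.

    rgb_image: H x W x 3 array with 8-bit R, G, B channels.
    """
    r, g, b = (rgb_image[..., i].astype(int) for i in range(3))
    rg = r - g                 # roughly -255 .. 255
    by = 2 * b - r - g         # roughly -510 .. 510
    wb = r + g + b             # 0 .. 765
    hist, _ = np.histogramdd(
        np.stack([wb.ravel(), rg.ravel(), by.ravel()], axis=1),
        bins=(8, 16, 16),
        range=((0, 765), (-255, 255), (-510, 510)),
    )
    return hist.ravel()        # 2048 bins

def match_score(target_hist, reference_hist):
    """Histogram intersection, one common way of comparing two histograms."""
    return np.minimum(target_hist, reference_hist).sum() / reference_hist.sum()

# best = max(references, key=lambda ref: match_score(target_hist, ref["hist"]))
```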
`image histogram. The RGB values of an image point in an
`U.S. Pat, No. 5,060,290 to Kelly and Klein discloses the
`image are very dependent on the color of the illumination
`grading of ajmonds based on gray scale histograms. Falling
`(even though humans have little difficulty naming the color
`almonds are furnished Wm] uniform jigm and pass by a 25 given the whole image). Consequently the color histogram
`linear camera. A gray histogram, quantized into 16 levels, of
`Of an image ‘33" change dramatically when 1h° 00101“ or the
`the image of the almond is developed. The histogram is
`illumination (light frequency distribution) changes. Further-
`normalized by dividing all bin counts by 1700, where l700
`more,
`in these prior art 53’5“:th the objects are not 338'
`pixels is the size of the largest almond expected. Five
`mentcd from the background, and, therefore, the histograms
`features are extracted from this histogram: (l) gray value of 30 0f the images are h0l area normalized. This means 1h9
`the peak; (2) range of the histogram; (3) number of pixels at
`objects in target images have to be the same size as the
`peak; (4) number of pixels in bin to the right of peak; and,
`objects in the reference images for accurate recognition
`(5) number of pixels in bin 4. Through lookup tables, an
`because variations of the object size with respect to the pixel
`eight digit code is developed and if this code is in a library.
`size can significantly change the color histogram it also
`the almond is accepted. The system is not trainable. The 35 means that the parts 0f the image that correspond ‘0 the
`appearances of almonds of acceptable quality are hard—
`background have to be achromatic (cg. black), or, at least,
`coded in the algorithm and the system cannot be trained to
`or a coloring “01 PTCSChl in the object, 01’ they Will signifi—
`gradc almonds differently by showing new instances of
`cantly perturb lhc dChVCd image “3101' histogram.
`almonds.
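The five histogram features named above are simple to compute once the 16-level gray histogram exists. The sketch below only illustrates that feature extraction, assuming a 16-bin histogram and the 1700-pixel normalization constant from the description; the lookup-table step that turns these features into an eight-digit code is not reproduced.

```python
import numpy as np

def almond_histogram_features(gray_hist_16):
    """Extract the five features described for U.S. Pat. No. 5,060,290.

    gray_hist_16: 16 bin counts of an almond image's gray-level histogram.
    """
    hist = np.asarray(gray_hist_16, dtype=float) / 1700.0    # normalize by the largest expected almond
    peak = int(np.argmax(hist))                               # (1) gray value (bin index) of the peak
    nonzero = np.nonzero(hist)[0]
    hist_range = int(nonzero[-1] - nonzero[0]) if nonzero.size else 0   # (2) range of the histogram
    at_peak = hist[peak]                                      # (3) pixels at the peak
    right_of_peak = hist[peak + 1] if peak + 1 < 16 else 0.0  # (4) pixels in the bin right of the peak
    bin_4 = hist[4]                                           # (5) pixels in bin 4
    return peak, hist_range, at_peak, right_of_peak, bin_4
```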
U.S. Pat. No. 4,735,323 to Okada et al. discloses a mechanism for aligning and transporting an object to be inspected. The system more specifically relates to grading of oranges. The transported oranges are illuminated with a light within a predetermined wavelength range. The light reflected is received and converted into an electronic signal. A level histogram divided into 64 bins is developed, where

    level = (the intensity of totally reflected light) / (the intensity of green light reflected by an orange).

The median, N, of this histogram is determined and is considered as representing the color of an orange. Based on N, the orange coloring can be classified into four grades of "excellent," "good," "fair" and "poor," or can be graded finer. The system is not trainable, in that the appearance of the different grades of oranges is hard-coded into the algorithms.
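As a sketch only: the grading rule described above reduces to taking the median bin of a 64-bin level histogram and mapping it through fixed thresholds. The thresholds and grade boundaries below are hypothetical placeholders (they are not given in this summary of the Okada et al. patent), so this illustrates the structure of the method, not its actual parameters.

```python
import numpy as np

# Hypothetical grade boundaries on the 64-bin median; the real values are not given here.
GRADE_THRESHOLDS = [(48, "excellent"), (40, "good"), (32, "fair")]

def grade_orange(level_values):
    """Histogram the per-pixel 'level' ratio into 64 bins and grade by the median bin N."""
    hist, _ = np.histogram(level_values, bins=64)
    cumulative = np.cumsum(hist)
    # Median bin N: the bin at which the cumulative count passes half the pixels.
    n = int(np.searchsorted(cumulative, cumulative[-1] / 2.0))
    for threshold, grade in GRADE_THRESHOLDS:
        if n >= threshold:
            return n, grade
    return n, "poor"
```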
The use of gray scale and color histograms is a very effective method for grading or verifying objects in an image. The main reason for this is that a histogram is a very compact representation of a reference object that does not depend on the location or orientation of the object in the image.

However, for image histogram-based recognition to work, certain conditions have to be satisfied. It is required that: (1) the size of the object in the image is roughly known, (2) there is relatively little occlusion of the object (i.e., most of
the object is in the image and not obscured by other objects), (3) there is little difference in illumination of the scene of which the images (reference and target images) are taken from which the reference object histograms and target object histograms are developed, and (4) the object can be easily segmented out from the background or there is relatively little distraction in the background. Under these conditions, comparing a target object image histogram with reference object image histograms has been achieved in numerous ways in the prior art.

Some prior art matching systems and methods claim to be robust to distractions in the background, variation in viewpoint, occlusion, and varying image resolution. However, in some of this prior art, lighting conditions are not controlled. The systems fail when the color of the illumination for obtaining the reference object histograms is different from the color of the illumination when obtaining the target object image histogram. The RGB values of an image point in an image are very dependent on the color of the illumination (even though humans have little difficulty naming the color given the whole image). Consequently the color histogram of an image can change dramatically when the color of the illumination (light frequency distribution) changes. Furthermore, in these prior art systems, the objects are not segmented from the background, and, therefore, the histograms of the images are not area normalized. This means the objects in target images have to be the same size as the objects in the reference images for accurate recognition, because variations of the object size with respect to the pixel size can significantly change the color histogram. It also means that the parts of the image that correspond to the background have to be achromatic (e.g., black), or, at least, of a coloring not present in the object, or they will significantly perturb the derived image color histogram.

Prior art such as that disclosed in U.S. Pat. No. 5,060,290 fails if the size of the almonds in the image is drastically different than expected. Again, this is because the system does not explicitly separate the object from its background. The system is used only for grading almonds; it can not distinguish an almond from (say) a peanut.

Similarly, prior art such as that disclosed in U.S. Pat. No. 4,735,323 only recognizes different grades of oranges. A reddish grapefruit might very well be deemed a very large orange. The system is not designed to operate with more than one class of fruit at a time and thus can make do with weak features such as the ratio of green to white reflectivity.

In summary: much of the prior art in the agricultural arena, typified by U.S. Pat. Nos. 4,735,323 and 5,060,290, is concerned with classifying/grading produce items. This prior art can only classify/identify objects/products/produce if they pass a scanner one object at a time. It is also required that the range of sizes (from smallest to largest possible object size) of the object/product/produce be known beforehand. These systems will fail if more than one item is scanned at the same time, or, to be more precise, if more than one object appears at a scanning position at the same time.

Further, the prior art often requires a carefully engineered and expensive mechanical environment with carefully controlled lighting conditions where the items are transported to predefined spatial locations. These apparatuses are designed specifically for one type of shaped object (round, oval, etc.) and are impossible or, at least, not easily modified to deal
with other object types. The shape of the objects inspires the means of object transportation and it is impossible or difficult for the transport means to transport different object types. This is especially true for oddly shaped objects like broccoli or ginger. This, and the use of features that are specifically selected for the particular objects, does not allow the prior art to distinguish between types of produce.

Additionally, none of the prior art are trainable systems where, through human or computer intervention, new items are learned or old items discarded. That is, the systems can not be taught to recognize objects that were not originally programmed in the system or to stop recognizing objects that were originally programmed in the system.

One area where the prior art has failed to be effective is in produce check-out. The current means and methods for checking out produce pose problems. Affixing (PLU—price lookup) labels to fresh produce is disliked by customers and produce retailers/wholesalers. Pre-packaged produce items are disliked, because of increased cost of packaging, disposal (solid waste), and inability to inspect produce quality in pre-packaged form.

The process of produce check-out has not changed much since the first appearance of grocery stores. At the point of sale (POS), the cashier has to recognize the produce item, weigh or count the item(s), and determine the price. Currently, in most stores the latter is achieved by manually entering the non-mnemonic PLU code that is associated with the produce. These codes are available at the POS in the form of a printed list or in a booklet with pictures.

Multiple problems arise from this process of produce check-out:

(1) Losses incurred by the store (shrinkage). First, a cashier may inadvertently enter the wrong code number. If this is to the advantage of the customer, the customer will be less motivated to bring this to the attention of the cashier. Second, for friends and relatives, the cashier may purposely enter the code of a lower-priced produce item (sweethearting).

(2) Produce check-out tends to slow down the check-out process because of produce identification problems.

(3) Every new cashier has to be trained on produce names, produce appearances, and PLU codes.
OBJECTS OF THE INVENTION

An object of this invention is an improved apparatus and method for recognizing objects such as produce.

An object of this invention is an improved trainable apparatus and method for recognizing objects such as produce.

Another object of this invention is an improved apparatus and method for recognizing and pricing objects such as produce at the point of sale or in the produce department.

A further object of this invention is an improved means and method of user interface for automated identification of objects such as produce.
SUMMARY OF THE INVENTION

The present invention is a system and apparatus that uses image processing to recognize objects within a scene. The system includes an illumination source for illuminating the scene. By controlling the illumination source, an image processing system can take a first digitized image of the scene with the object illuminated at a higher level and a second digitized image with the object illuminated at a lower level. Using an algorithm, the object(s) image is novelly segmented from a background image of the scene by a comparison of the two digitized images taken. A processed image (that can be used to characterize features) of the object(s) is then compared to stored reference images. The object is recognized when a match occurs.
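A minimal sketch of the controlled-illumination segmentation idea summarized above is given below. It assumes the two digitized images are already registered grayscale arrays; the margin parameter and function names are hypothetical, and the full method (FIG. 5, including the translucent/opaque split) involves more than this single comparison.

```python
import numpy as np

def segment_by_illumination(light_image, dark_image, margin=10):
    """Return boolean object/background masks from two digitized images of one scene.

    light_image: image taken with the controlled light at the higher level.
    dark_image:  image taken with the controlled light off or at the lower level.
    Pixels that get noticeably brighter under the controlled light are close to
    the light (the object); pixels that barely change belong to the background.
    """
    difference = light_image.astype(int) - dark_image.astype(int)
    object_mask = difference > margin      # brighter under the light -> object
    background_mask = ~object_mask         # roughly unchanged -> background
    return object_mask, background_mask

# object_pixels = light_image[object_mask]   # e.g., feed these into histogramming
```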
Processed images of an unrecognized object can be labeled with the identity of the object and stored in memory, based on certain criteria, so that the unrecognized object will be recognized when it is imaged in the future. In this novel way, the invention is taught to recognize previously unknown objects.

Recognition of the object is independent of the size or number of the objects because the object image is novelly normalized before it is compared to the reference images.

Optionally, user interfaces and apparatus that determine other features of the object (like weight) can be used with the system.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of one preferred embodiment of the present system.

FIG. 2 is a flow chart showing one preferred embodiment of the present method for recognizing objects.

FIG. 3 illustrates segmenting a scene into an object image and a background image.

FIG. 4 is a block diagram of a preferred embodiment of apparatus for segmenting images and recognizing objects in images.

FIG. 5 is a flow chart of a preferred method for segmenting target object images.

FIG. 6 is a flow chart showing a preferred method of characterizing reference or target object feature(s).

FIG. 7 is a flow chart showing a preferred method for (area/length) normalization of object feature(s) characterization.

FIG. 8 illustrates the comparison of an area/length normalized target object characterization to one or more area normalized reference object characterizations.

FIG. 9 is a flow chart showing a preferred (algorithmic) method of training the present apparatus to recognize new images.

FIG. 10 is a block diagram showing multiple features of an object being extracted.

FIG. 11 is a flow chart showing the histogramming and normalizing of the feature of texture.

FIG. 12 is a flow chart showing the histogramming and normalizing of the feature of boundary shape.

FIG. 13 is a block diagram showing a weighing device.

FIG. 14 shows an image where the segmented object has two distinct regions determined by segmenting the object image and where these regions are incorporated in recognition algorithms.

FIG. 15 shows a human interface to the present apparatus which presents an ordered ranking of the most likely identities of the produce being imaged.

FIG. 16 shows a means for human determination of the identity of object(s) by browsing through subset(s) of all the previously installed stored icon images, and the means by which the subsets are selected.

FIG. 17 is a preferred embodiment of the present invention using object weight to price object(s).
DETAILED DESCRIPTION OF THE INVENTION

The apparatus 100 shown in FIG. 1 is one preferred embodiment of the present invention that uses image processing to automatically recognize one or more objects 131.

A light source 110 with a light frequency distribution that is constant over time illuminates the object 131. The light is non-monochromatic and may include infra-red or ultra-violet frequencies. Light being non-monochromatic and of a constant frequency distribution ensures that the color appearance of the objects 131 does not change due to light variations between different images taken and that stored images of a given object can be matched to images taken of that object at a later time. The preferred lights are flash tubes Mouser U-4425, or two GE cool-white fluorescent bulbs (22 Watts and 30 Watts), GE FC8T9-CW and GE FC12T9-CW, respectively. Such light sources are well known.

A video input device 120 is used to convert the reflected light rays into an image. Typically this image is two dimensional. A preferred video input device is a color camera, but any device that converts light rays into an image can be used. These cameras would include CCD cameras and CID cameras. The color camera output can be RGB, HSI, YC, or any other representation of color. One preferred camera is a Sony card-camera CCB-C35YC or Sony XC-999. Video input devices like this 120 are well known.

Color images are the preferred sensory modality in this ...

... disclosure, one skilled in the art could develop other equivalent calculating devices 140 and frame grabbers 142.

An optional interactive output device 160 can be connected to the calculating device 140 for interfacing with a user, like a cashier. The output device 160 can include screens that assist the user in decision making 164 and can also provide mechanisms to train 162 the system 100 to recognize new objects. An optional weighing device 170 can also provide an input to the calculating device 140 about the weight (or density) of the object 131. See description below (FIG. 13).

FIG. 2 is a flow chart of the algorithm 200 run by the calculating device, or computer 140. In step 210, a target object to be recognized is imaged by camera 120. Imaging like this is well known. The image of target object 131 is then novelly segmented 220 from its background. The purpose of step 220 is to separate the target object 131 from the background so that the system 100 can compute characteristics of separated object 131 image pixels independently of the background of the scene. In step 230, one or more features of the object 131 can be computed, preferably pixel by pixel, from the segmented object image. In step 240, characterizations of these pixel-by-pixel computed feature sets are developed. Normalizing, in step 250, ensures that these characterizations do not depend on the actual area, length, size, or characteristics related to area/length/size that the object(s) 131 occupy in the ...
