(12) United States Patent
Neubauer et al.

(10) Patent No.: US 6,553,131 B1
(45) Date of Patent: Apr. 22, 2003

(54) LICENSE PLATE RECOGNITION WITH AN INTELLIGENT CAMERA

(75) Inventors: Claus Neubauer, Monmouth Junction, NJ (US); Jenn-Kwei Tyan, Princeton, NJ (US)

(73) Assignee: Siemens Corporate Research, Inc., Princeton, NJ (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.

(21) Appl. No.: 09/396,950

(22) Filed: Sep. 15, 1999

(51) Int. Cl.7 .......................... G06K 9/00
(52) U.S. Cl. ........................... 382/105; 382/170
(58) Field of Search .................. 382/105, 104, 170, 169, 173; 340/907; 701/117

(56) References Cited

U.S. PATENT DOCUMENTS

4,567,609 A *  1/1986  Metcalf ................ 382/105
5,651,075 A    7/1997  Frazier et al.
5,809,161 A *  9/1998  Auty et al. ............ 382/104
5,847,661 A   12/1998  Ricci .................. 340/902
6,141,620 A * 10/2000  Zyburt et al. .......... 701/117
6,339,651 B1   1/2002  Tian et al. ............ 382/105

* cited by examiner

Primary Examiner Bhavesh M. Mehta
Assistant Examiner Seyed Azarian

(57) ABSTRACT

An intelligent camera system and method for recognizing license plates, in accordance with the invention, includes a camera adapted to independently capture a license plate image and recognize the license plate image. The camera includes a processor for managing image data and executing a license plate recognition program device. The license plate recognition program device includes a program for detecting orientation, position, illumination conditions and blurring of the image and accounting for the orientation, position, illumination conditions and blurring of the image to obtain a baseline image of the license plate. The camera includes a program for segmenting characters depicted in the baseline image by employing a projection along a horizontal axis of the baseline image to identify positions of the characters. A statistical classifier is adapted for classifying the characters. The classifier recognizes the characters and returns a confidence score based on the probability of properly identifying each character. A memory is included for storing the license plate recognition program and the license plate images taken by an image capture device of the camera.

19 Claims, 5 Drawing Sheets

[Sheet 1 of 5, FIG. 1: block/flow diagram of the license plate recognition system/method. Blocks: Image capture 10; Coarse localization 12 (Sub-sample image 14; Compute vertical edges 16; Saliency map 18; Coarse localization result/region of interest determined 20); Tilt detection and refinement of borders 22; Fine localization 24; Segmentation 26 (Projection profile 28; Filter projection profile 30; Determine segments 32); Character normalization and classification 34; Compare with rules and codes 36.]

[Sheet 2 of 5, FIGS. 2A-2D and FIG. 3. FIG. 2A: image of a license plate. FIG. 2B: projection onto the x-axis (columns 12-120). FIG. 2C: low-pass (TP) filtered x-projection. FIG. 2D: segmentation result, with character regions below the threshold. FIG. 3: illustrative examples of virtual samples.]

[Sheet 3 of 5, FIG. 4: topology of the convolutional network. A 16 x 20 pixel input image; four convolutions with 5 x 5 kernels producing four 16 x 20 pixel planes; subsampling to 8 x 10 pixels; two fully connected layers 204 and 205 (100 and 40 neurons); 40 output neurons for the classes 0-9, A-Z, ...]

[Sheet 4 of 5, FIG. 5: block diagram of the intelligent camera. Camera 300 containing a processor 303, a license plate recognition program device 312 and a statistical classifier 314, coupled to an external system 304.]

[Sheet 5 of 5, FIG. 6: a license plate at various stages of processing, with per-character confidence values (0.9, 0.8, 0.9, 0.9, 0.9, 0.85, 0.9, 0.9, 0.9).]

LICENSE PLATE RECOGNITION WITH AN INTELLIGENT CAMERA

BACKGROUND

1. Technical Field

This disclosure relates to optical recognition methods and systems and, more particularly, to an intelligent camera for character recognition.

2. Description of the Related Art

Most image processing systems for industrial applications and video surveillance are still based on a personal computer (PC), a frame grabber and a separate CCD camera. PC based vision systems for traffic surveillance have been described in the prior art. In contrast to PC based systems, intelligent or smart cameras have recently become more popular, since they offer several advantages. They require minimal space, because processor and sensor are integrated in one package. They are more robust and reliable in outdoor environments, they require less maintenance, and they are well suited for cost sensitive applications. However, intelligent cameras offer less computational power, since they are typically one or two generations behind the current processor generation, and they are limited with respect to main memory (e.g., 8 MB) and hard disk space (e.g., a 5 MB flash disk).

In principle, license plate recognition may be similar to optical character recognition (OCR) for document processing. However, existing OCR engines cannot be successfully used as license plate readers because they cannot tolerate an extreme range of illumination variations, such as inhomogeneous illumination and shadows, blurring due to dirt, screws, particles, etc. In addition, OCR products are limited because of their memory and processing speed requirements.

Therefore, a need exists for versatile algorithms for applications that go beyond simple identification or measuring tasks. A further need exists for a system that reliably recognizes license plates for applications ranging from surveillance to automated systems for determination of parking fees.

SUMMARY OF THE INVENTION

A method for recognizing license plates employing an intelligent camera with a processor and a memory, in accordance with the present invention, includes capturing an image including a license plate by the intelligent camera, and detecting a region in which the license plate is located by performing a coarse localization of the image. Orientation, position, and illumination conditions of the image are detected and accounted for to obtain a baseline image of the license plate. A fine localization of the baseline image is performed to obtain a more accurate depiction of vertical resolution of the baseline image of the license plate. Characters depicted in the baseline image are segmented by employing a projection along a horizontal axis of the baseline image to identify positions of the characters. The characters are classified based on a statistical classifier to obtain a confidence score for the probability of properly identifying each character. The above steps are recursively performed until each confidence score exceeds a threshold value to recognize the characters.
In alternate methods, the step of detecting orientation, position, and illumination conditions of the image and accounting for the orientation, position, and illumination conditions of the image to obtain a baseline image of the license plate may include the step of comparing each character in the image of the license plate with examples of images with different illuminations to account for illumination effects on the image. The step of detecting a region in which the license plate is located by performing a coarse localization of the image may include the steps of sub-sampling the image to reduce a number of pixels, extracting vertical edges in the image, generating a saliency map based upon the vertical edges to identify regions in the image with a probability of including the license plate, and extracting a localization result which includes the image of the license plate. The step of segmenting characters depicted in the baseline image by employing a projection along a horizontal axis of the baseline image to identify positions of the characters may include the steps of providing a projection profile of pixel intensities across vertical lines of pixels in the baseline image, filtering the projection profile, and identifying locations of characters in the baseline image depicted by areas below a threshold value in the filtered projection profile.
The statistical classifier may employ a convolutional network, and the step of classifying the characters based on a statistical classifier to obtain a confidence score for the probability of properly identifying each character may include the step of training the classifier by employing virtual samples of characters. The method may include the step of comparing character blocks and characters to predetermined license plate codes and conventions to check the accuracy of recognition. The step of recursively performing the method steps until each confidence score exceeds a threshold value may include the step of considering adjacent characters together to attempt to improve the confidence score. The above method steps may be implemented by a program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform the method steps.
An intelligent camera system for recognizing license plates, in accordance with the invention, includes a camera adapted to independently capture a license plate image and recognize the license plate image. The camera includes a processor for managing image data and executing a license plate recognition program device. The license plate recognition program device includes means for detecting orientation, position, illumination conditions and blurring of the image and accounting for the orientation, position, illumination conditions and blurring of the image to obtain a baseline image of the license plate. The camera includes means for segmenting characters depicted in the baseline image by employing a projection along a horizontal axis of the baseline image to identify positions of the characters. A statistical classifier is adapted for recognizing and classifying the characters based on a confidence score, the confidence score being based on a probability of properly identifying each character. A memory is included for storing the license plate recognition program and the license plate image taken by an image capture device of the camera.
In alternate embodiments, a trigger device may be adapted to cause an image to be captured responsive to an event. The event may include an approach of a vehicle. The means for detecting may include examples of images with different illuminations to account for illumination effects on the image for each character within the image. The means for segmenting may include means for providing a projection profile of pixel intensities across vertical lines of pixels in the baseline image, a filter profile for filtering the projection profile, and means for identifying locations of characters in the baseline image depicted by areas below a threshold value in the filtered projection profile. The statistical classifier may include one of a convolutional network and a fully connected multilayer perceptron. The memory may include a database of predetermined license plate codes and conventions for checking the accuracy of recognition. The intelligent camera system may include a parking lot control system coupled to the intelligent camera system for determining parking fees based on character recognition of license plates.

These and other objects, features and advantages of the present invention will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings.

BRIEF DESCRIPTION OF DRAWINGS

This disclosure will present in detail the following description of preferred embodiments with reference to the following figures wherein:

FIG. 1 is a block/flow diagram showing a system/method for employing an intelligent camera for license plate recognition in accordance with the present invention;
FIG. 2A is an image of a license plate;
FIGS. 2B-2D show segmentation steps for segmenting the characters of the license plate image of FIG. 2A, in accordance with the present invention;
FIG. 3 depicts illustrative examples of virtual samples which may be employed in accordance with the present invention;
FIG. 4 is a schematic diagram of a convolutional network for employing the present invention;
FIG. 5 is a block diagram of an intelligent camera in accordance with the present invention; and
FIG. 6 depicts a license plate at various stages of processing in accordance with the present invention.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

The present invention is directed to a robust intelligent character recognition system and method for determining characters under various conditions, for example, in different illumination conditions, at different angles, etc. In one particularly useful embodiment, the present invention is employed in license plate recognition.

A robust car identification system for surveillance of parking lot entrances that runs completely on a stand-alone, low cost intelligent camera is provided. To meet accuracy and speed requirements, hierarchical classifiers and coarse-to-fine search techniques are applied at each recognition stage for localization, segmentation and classification.

The present invention provides an efficient hierarchical decomposition of a recognition task, where coarse segmentation and classification methods are applied. Ambiguous patterns may be forwarded to more sophisticated methods to achieve a desired recognition time (for example, 2-3 seconds based on an Intel™ 486/100 MHz processor). Given multiple segmentation hypotheses, the reliable recognition or rejection of segmented characters is one important aspect of the performance. The character recognition task is preferably accomplished by employing a convolutional network.
It should be understood that the elements shown in FIGS. 1, 5 and 6 may be implemented in various forms of hardware, software or combinations thereof. Preferably, these elements are implemented in software on one or more appropriately programmed general purpose digital processing units having a processor and memory and input/output interfaces.

Referring now in specific detail to the drawings, in which like reference numerals identify similar or identical elements throughout the several views, and initially to FIG. 1, a flow diagram of an illustrative license plate recognition method is shown. In block 10, character identification is based, in one example, on frontal or rear views of the car to obtain an image of a license plate. For example, the car may be stopped in front of a gate. An inductive sensor may be employed to externally trigger image capture. In a preferred embodiment, the camera is fixed so that the orientation of the license plates is approximately constant. However, the size and position of the license plates can vary slightly due to the position and the type of car. The approaching car may be detected by the camera itself, and the external inductive sensor trigger may be eliminated.
In block 12, a coarse localization is performed, preferably based on multi-resolution analysis. Different resolutions of the image are checked to assist in character recognition of symbols or characters on the license plate. Coarse localization homes in on the license plate image within the captured image. This is performed by sub-sampling the image, in block 14, to reduce the number of pixels therein (i.e., to reduce computational complexity). In block 16, vertical edges are computed (since vertical edges dominate in license plates; however, horizontal edges may also be computed). Vertical edge computation determines vertically disposed pixel groups. In block 18, a saliency map is generated. A saliency map blends vertical edges to achieve intensity regions in the image. The highest peak values in the intensity regions have the highest probability of being the license plate and are selected for further processing. In block 20, the invention homes in on the image of the license plate to provide the coarse localization result.
In block 22, tilt detection and refinement of an area of interest are performed. This includes addressing illumination effects, positions (distances away from the license plate, left and right borders), rotations, blurring and other effects. Comparisons to normalized correlation models or templates are performed to find similar illuminations, distances, etc., to attempt to reduce the effects on the image. Advantageously, this provides a robustness to the system which is not achieved by OCR algorithms.
In block 24, a fine localization based on local edge and regional features is performed, i.e., a vertical fine localization for license plates. After successful localization, multiple segmentation hypotheses are created by an algorithm, in block 26, based on nonlinear projections onto a baseline image. The nonlinear projections are employed in a profile, in block 28 (see FIG. 2B), and filtered, in block 30 (see FIG. 2C). As a result, the left and right boundary of each character is determined, in block 32 (see FIG. 2D). In block 34, the recognition system preferably employs a convolutional neural network classifier or other statistical classifier, which identifies the characters and returns a confidence value for each character. Based on the confidence measure, the segmentation hypothesis with the highest overall confidence is accepted. After image analysis, a table of rules, for example, valid city codes and country specific rules, is applied to verify, correct or reject strings, in block 36. For example, German license plates include two blocks with letters followed by one block with digits only, and the overall number of characters may not exceed eight. These rules can significantly narrow the search and improve performance.
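For illustration only, the rule check of block 36 can be sketched as follows for the German convention just described. The block lengths encoded in the pattern, the function name and the use of Python are assumptions and not taken from the patent.

import re

# Hypothetical layout rule for German plates: two letter blocks (city code
# and owner letters) followed by one block of digits, at most eight
# characters overall (separators not counted).
GERMAN_PLATE = re.compile(r"^[A-ZÄÖÜ]{1,3} [A-ZÄÖÜ]{1,2} \d{1,4}$")

def plausible_german_plate(text: str) -> bool:
    """Return True if a recognized string obeys the layout rules of block 36."""
    if len(text.replace(" ", "")) > 8:
        return False
    return GERMAN_PLATE.match(text) is not None

# Example: "M AB 1234" is accepted, "1234 AB M" is rejected.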
Localization of License Plates

In block 12, license plate localization is preferably based on multi-resolution analysis. The coarse localization relies on the fact that a license plate (text) includes a high amount of vertical edges compared to the remaining parts of a car and its surroundings in an image. Vertical edges are computed and a saliency map is generated on a reduced resolution. Pixels with high intensity in the saliency map correspond to positions that are likely to include the license plate. The highest peak value in the saliency map corresponds to the license plate. In rare instances other peaks may occur. In this case, a candidate list of possible locations is ordered by the intensity of the vertical edge feature, which is computed and evaluated until a reasonable result is found. This method provides a robust technique for localizing the license plate in images having size and orientation variations due to varying distance, position and type of license plate or car. Horizontal edges or other local features may be employed for coarse localization as well. Position and size variations as well as small rotations are accounted for by the system in block 22.
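A minimal sketch of blocks 14-20, assuming the captured image is a grayscale NumPy array; the sub-sampling factor, smoothing window and use of SciPy's uniform filter are illustrative choices rather than values from the patent.

import numpy as np
from scipy.ndimage import uniform_filter

def coarse_localization(gray: np.ndarray, step: int = 4, win: int = 15):
    """Return (row, col) of the saliency peak at full resolution plus the map.

    Follows blocks 14-20: sub-sample, compute vertical edges, blend them
    into a saliency map, and pick the strongest peak as the plate candidate.
    """
    # Block 14: sub-sample to reduce the number of pixels.
    small = gray[::step, ::step].astype(np.float32)

    # Block 16: vertical edges, i.e. horizontal intensity differences.
    edges = np.abs(np.diff(small, axis=1))

    # Block 18: saliency map obtained by blending (averaging) the edges.
    saliency = uniform_filter(edges, size=win)

    # Block 20: the highest peak corresponds to the license plate; in rare
    # cases further peaks would be kept as an ordered candidate list.
    r, c = np.unravel_index(np.argmax(saliency), saliency.shape)
    return (r * step, c * step), saliency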
In block 24, after coarse localization, fine localization is performed. The orientation of the license plate is precisely determined and corrected. The left and right borders of the license plate are detected. The height and position of the license plate within the image are determined to more accurately retrieve a region of interest. This refinement is, again, based on vertical edge features of the original image (not a sub-sampled image) to gain higher accuracy. After the position, orientation and size of the license plate are determined, the region of interest is resampled to a predetermined vertical resolution (e.g., 20 pixels). This resolution is maintained for subsequent steps to reduce computational overhead. Accurate positioning and size normalization along the vertical axis improve the accuracy of segmentation and classification.
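The vertical size normalization at the end of fine localization can be illustrated with a short resampling step; the SciPy call is an assumed stand-in for whatever resampling the camera software actually uses.

from scipy.ndimage import zoom

def normalize_roi_height(roi, target_height: int = 20):
    """Resample the localized region of interest so its character height is
    about 20 pixels, preserving the aspect ratio for the later steps."""
    return zoom(roi, target_height / roi.shape[0])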
Segmentation of the Character String

Based on the position and size of the detected license plate, the region of interest is extracted and re-sampled so that the vertical resolution is about 20 pixels for the character height; this may be adjusted depending on the application. The 20 pixel resolution for the character height was found to be sufficient for the subsequent segmentation and recognition, as will be described.

In contrast to U.S. license plates, the characters of German license plates are not equally spaced and the character width varies strongly. Furthermore, two different fonts and layouts are used. While sophisticated segmentation algorithms like Hidden Markov Models, which are also successfully used for handwriting recognition or speech recognition, may perform better, here a simple segmentation based on projections to the baseline is advantageously employed to minimize execution time.
Referring to FIGS. 2A-2D, segmentation steps for block 26 of FIG. 1 are illustratively depicted for a license plate 100 shown in FIG. 2A. Based on a central part of the region including the characters of FIG. 2A, a profile or projection is generated in FIG. 2B from right to left along the image, based on the darkest pixel in each column (in the y direction). FIG. 2B is a plot of pixel intensity (y-axis) versus column number (x-axis). A low pass filtered profile, for example, the profile depicted in FIG. 2C, is subtracted from the profile in FIG. 2B, resulting in the profile of FIG. 2D. In FIG. 2D, the areas or regions below a threshold (e.g., 1.00) each include a character.
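A minimal sketch of this projection-based segmentation (blocks 28-32), assuming a grayscale baseline image with dark characters on a bright background; the smoothing window and the threshold of zero are illustrative parameters, not values from the patent.

import numpy as np
from scipy.ndimage import uniform_filter1d

def segment_characters(plate: np.ndarray, win: int = 15, thresh: float = 0.0):
    """Return (left, right) column pairs for character candidates.

    Follows FIGS. 2B-2D: take the darkest pixel of every column as the
    projection profile, subtract a low-pass filtered copy, and treat
    runs below the threshold as characters.
    """
    profile = plate.min(axis=0).astype(np.float32)   # FIG. 2B
    lowpass = uniform_filter1d(profile, size=win)    # FIG. 2C
    detail = profile - lowpass                       # FIG. 2D
    below = detail < thresh

    segments, start = [], None
    for col, flag in enumerate(below):
        if flag and start is None:
            start = col
        elif not flag and start is not None:
            segments.append((start, col))
            start = None
    if start is not None:
        segments.append((start, len(below)))
    return segments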
Touching characters caused by dirt particles, image blurring and shadows that partially cover the license plate may cause problems for a projection-based segmentation, and the peak between characters can become very small. Thus, the segmentation is preferably parameterized to be sensitive to spaces. The segmentation also detects many false positive spaces. If the classification of the segmented characters does not identify characters with high confidence, then a more powerful recognition based segmentation algorithm may be applied, as described in commonly assigned U.S. application Ser. No. 09/396,952, entitled "CHARACTER SEGMENTATION METHOD FOR VEHICLE LICENSE PLATE RECOGNITION", filed concurrently herewith and incorporated herein by reference.
Identification of Characters

Since the segmentation process itself is inherently ambiguous, it has to be tightly coupled to the character identification. Segmentation provides locations that are likely to include characters. Consequently, the overall performance depends highly on the performance of the character classifier and its ability to recognize valid characters and to reject invalid characters due to false positive segmentation, as performed in block 26 of FIG. 1. Several experiments have been conducted by the inventors to choose optimal classifiers for this problem. In one experiment, a fully connected multi-layer perceptron and convolutional networks were employed. Experiments demonstrate the advantage of convolutional networks compared to the fully connected multilayer perceptron; however, either method may be used, as well as equivalent methods.
For German license plate recognition, 40 classes have to be recognized (10 digits, 26 regular characters, 3 special German characters: ae, ue, oe, and the registration label appearing between the first and second text field). Furthermore, negative examples caused by incorrect segmentation and localization were used for training. All neurons in an output layer of the convolutional networks should respond with zero activation to a negative input pattern. Negative patterns are particularly important to increase the rejection capability in case of a wrong segmentation. By increasing the number of negative samples compared to the number of valid characters, the classifier is biased towards high rejection accuracy instead of using all its capacity for interclass separation.
In addition to negative samples, virtual samples (VS) or templates improve the generalization of the classifier during training, if the training set has limited size. Virtual samples are generated from the original images by several affine transformations. In particular, the size, orientation and horizontal and vertical position of the characters may be varied randomly within a reasonable range (+/-2 pixel shift, +/-2% scaling, rotation < about 3 degrees) between training cycles, and noise is added during training. With virtual samples, the classifier becomes more robust with respect to segmentation and location tolerances. Examples of virtual samples are illustratively shown in FIG. 3.
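A sketch of such virtual sample generation, using the stated ranges (+/-2 pixel shift, +/-2% scaling, rotations below about 3 degrees) plus additive noise; the SciPy transforms and the noise level are illustrative assumptions rather than the patent's implementation.

import numpy as np
from scipy.ndimage import rotate, shift, zoom

def virtual_sample(char_img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Generate one virtual sample from a character image by a random
    affine transformation plus additive noise, as used during training."""
    out = rotate(char_img, angle=rng.uniform(-3, 3), reshape=False, mode="nearest")
    factor = 1.0 + rng.uniform(-0.02, 0.02)                   # +/-2% scaling
    out = zoom(out, factor, mode="nearest")
    # Crop or pad back to the original window size after scaling.
    h, w = char_img.shape
    out = out[:h, :w]
    out = np.pad(out, ((0, h - out.shape[0]), (0, w - out.shape[1])), mode="edge")
    out = shift(out, rng.uniform(-2, 2, size=2), mode="nearest")  # +/-2 pixel shift
    out = out + rng.normal(0.0, 5.0, size=out.shape)          # additive noise (illustrative)
    return out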
Since the segmentation may include false positive spaces, all combinations of character positions are considered which are possible given a minimum and maximum width for the characters. The neural network evaluates the sub-images corresponding to these multiple hypotheses. The result of the neural network classification is a posterior probability of a class given an observation (see C. Bishop, Neural Networks for Pattern Recognition, Oxford Univ. Press, 1995). This probability may be used to choose the most likely combination of characters.
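One way to realize this selection is sketched below: admissible combinations of segment boundaries are scored by the product of the classifier's posterior probabilities and the best-scoring reading is kept. The classify interface and the width limits are assumptions made for illustration.

from functools import lru_cache

def best_combination(boundaries, classify, min_w=6, max_w=24):
    """Choose the most likely combination of characters.

    `boundaries` is a sorted list of candidate column positions between
    characters; `classify(left, right)` is assumed to return a
    (character, posterior_probability) pair for the sub-image spanning
    those columns. Dynamic programming over boundary positions keeps the
    search over all admissible combinations tractable.
    """
    n = len(boundaries)

    @lru_cache(maxsize=None)
    def best_from(i):
        # Best (score, text) reading the plate from boundary i to the end.
        if i == n - 1:
            return 1.0, ""
        best = (0.0, "")
        for j in range(i + 1, n):
            width = boundaries[j] - boundaries[i]
            if width < min_w:
                continue
            if width > max_w:
                break
            ch, p = classify(boundaries[i], boundaries[j])
            score, rest = best_from(j)
            if p * score > best[0]:
                best = (p * score, ch + rest)
        return best

    return best_from(0)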
Before classification, each sub-image is normalized with respect to brightness and contrast. The sub-image preferably has a fixed size, for example, 16x20 pixels, so that wide characters like 'W' still fit into the window. Very narrow characters like 'I' or '1' may cause problems, since neighboring characters partly overlap with their window. Therefore, the character width determined by the segmentation is employed to skip the left and right areas beside the character, and these pixels are set to a uniform background gray value.
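A sketch of this window normalization; the 16x20 window and the masking of pixels beside the character follow the text, while the gray value and the mean/standard-deviation normalization are illustrative assumptions.

import numpy as np

def normalize_character(window: np.ndarray, char_left: int, char_right: int,
                        background_gray: float = 128.0) -> np.ndarray:
    """Prepare a fixed-size (e.g., 16x20) sub-image for classification.

    Pixels left and right of the segmented character are set to a uniform
    background gray value, then brightness and contrast are normalized.
    """
    out = window.astype(np.float32).copy()
    out[:, :char_left] = background_gray
    out[:, char_right:] = background_gray
    out = (out - out.mean()) / (out.std() + 1e-6)  # brightness/contrast normalization
    return out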
Fully Connected Multilayer Perceptron

A three-layered perceptron (MLP) may be used for classification of characters and training by performing error back-propagation. The number of hidden neurons is varied manually for optimal generalization, and about 100 hidden neurons were found to be preferred.
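A minimal NumPy version of such a three-layered perceptron with roughly 100 hidden neurons, 40 outputs and error back-propagation; the initialization, learning rate and sigmoid activations are illustrative choices, not specified in the patent.

import numpy as np

class ThreeLayerPerceptron:
    """Fully connected MLP: 320 inputs (16x20 pixels), ~100 hidden, 40 outputs."""

    def __init__(self, n_in=320, n_hidden=100, n_out=40, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.05, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.05, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x):
        h = self._sigmoid(x @ self.W1 + self.b1)
        y = self._sigmoid(h @ self.W2 + self.b2)
        return h, y

    def train_step(self, x, target, lr=0.1):
        """One error back-propagation step for a single training sample."""
        h, y = self.forward(x)
        err_out = (y - target) * y * (1 - y)            # output layer delta
        err_hid = (err_out @ self.W2.T) * h * (1 - h)   # hidden layer delta
        self.W2 -= lr * np.outer(h, err_out)
        self.b2 -= lr * err_out
        self.W1 -= lr * np.outer(x, err_hid)
        self.b1 -= lr * err_hid
        return y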
Convolutional Neural Network Based on Local Connections

Referring to FIG. 4, a topology of a convolutional network is illustrated. An input image 200 is presented. A first layer includes four planes 201 with, for example, 16x20 pixels. Four convolution kernels 202 of, for example, a 3x3 pixel window size are applied so that four edge orientations are extracted (diagonals, vertical and horizontal). A sub-sampling is performed to reduce the planes 201 by, say, a factor of two to 8x10 pixels, and pixel averaging is performed to obtain planes 203. After convolution and sub-sampling, two fully connected layers, 204 and 205, follow with, for example, 100 and 40 neurons, respectively.
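A sketch of the forward pass of this topology (training by back-propagation is omitted here); the 3x3 kernels, 2x2 averaging to 8x10 pixels and the 100/40 neuron layers follow the text, while the weight values, activations and use of SciPy's convolution are illustrative assumptions.

import numpy as np
from scipy.signal import convolve2d

class ConvNetForward:
    """Inference-only sketch of the convolutional network of FIG. 4."""

    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        self.kernels = rng.normal(0, 0.1, (4, 3, 3))         # kernels 202
        self.W_fc = rng.normal(0, 0.05, (4 * 8 * 10, 100))    # layer 204
        self.W_out = rng.normal(0, 0.05, (100, 40))           # layer 205

    def forward(self, image: np.ndarray) -> np.ndarray:
        """image: 16x20 pixel character window; returns 40 class outputs."""
        planes = [convolve2d(image, k, mode="same") for k in self.kernels]  # planes 201
        pooled = [p.reshape(p.shape[0] // 2, 2, p.shape[1] // 2, 2).mean(axis=(1, 3))
                  for p in planes]                             # planes 203 (8x10 averaging)
        features = np.tanh(np.concatenate([p.ravel() for p in pooled]))
        hidden = np.tanh(features @ self.W_fc)                 # 100 neurons
        return 1.0 / (1.0 + np.exp(-(hidden @ self.W_out)))    # 40 output neurons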
When the convolutional network is employed for classification of characters, pixels of the input image, i.e., a segmented character, are extracted into four most likely orientations (diagonals, vertical and horizontal). Each extraction is sub-sampled to reduce its size. Comparisons occur to decipher the characters, and the best character (highest confidence character(s)) is output from the output neurons (e.g., 40 for German license plates).

Training for this network (determining weights of hidden and output neurons) is performed by error back-propagation and virtual samples, which are employed in a similar fashion as with MLP training. Virtual samples advantageously improve the system's accuracy (see FIG. 3).
Results

To test the overall recognition performance, 900 license plate images were processed in accordance with the invention. An overall recognition rate of 96% was obtained for the methods of the present invention, as shown in Table 1.

TABLE 1

                              Recognition    Execution time (sec)     Execution time (sec)
                              rate (%)       Pentium II™ 333 MHz      Intelligent camera
Localization                  98.0           0.10                     1.0
Segmentation, character       98.0           0.15                     1.5
  classification
Overall                       96.0           0.25                     2.5

An average processing time of 2.5 sec on the target system was achieved by the intelligent camera of the present invention. In this case, the actual time may vary between about 2 and about 4 seconds depending on image quality and appearance of the license plate; however, further improvements in processing time are contemplated. For bad images, more false positive segmentation hypotheses are created and have to be evaluated by the classifier. In one embodiment, localization consumes about forty percent of the execution time; segmentation and identification consume about sixty percent of the execution time.
Referring to FIG. 5, an intelligent camera system 301 is shown in accordance with the present invention. In one embodiment, the methods of the present invention may be developed and optimized on a PC. The methods may be loaded on an intelligent camera 300, i.e., a camera capable of processing and storing a program. Camera 300 includes a memory 302 and one or more processors 303 for executing program steps. Memory 302 preferably stores a current image of a license plate and sub-images thereof. Also, license plate rules/codes (e.g., city codes) may be stored in memory 302, as well as trained weights of the neural network described above. Larger memories may include a plurality of images of license plates as well as other data. In one embodiment, intelligent camera 300 includes a Siemens VS 710 camera. The Siemens VS 710 camera uses an INTEL processor (486/100 MHz, 8 MB RAM, 4 MB flash disk, serial and Profi-bus interface). An interface 306 may be adapted to work with an external system 304. External system 304 may include a parking lot control system which includes fee schedules, time in/out data, etc., for a given vehicle. Camera 300 is preferably compatible with a PC to permit license plate recognition methods to be downloaded and employed by camera 300. Therefore, the PC software, written, for example, in C/C++, can be used on the camera without major adaptation. Routines for image capture and communication with the control system may have to be linked. Camera 300 may include a sensor 308 which triggers image capture through a lens 310 when a condition is sensed. For example, if a vehicle approaches a parking lot gate, an image is captured of the vehicle's license plate. The license plate image is recognized by the above methods of the present invention (FIG. 1). Memory 302 may store the license plate information as well as other information, such as time in/out, etc. Advantageously, camera 300 may be placed remotely in a parking lot or other location without the need for a large processing system (a PC, etc.). In accordance with the invention, an automated license plate recognition system with high recognition accuracy is provided which includes a robust and low cost intelligent camera. With an intelligent camera based on a more powerful processor, motion analysis may be employed to extend applications towards moving vehicles and various outdoor surveillance tasks. Camera 300 includes a license plate recognition program device 312, preferably implemented in software. License plate recognition program device 312 detects orientation, position, illumination conditions and blurring of the image.
