Recognition of Vehicle License Plates from a Video Sequence

I-Chen Tsai, Jui-Chen Wu, Jun-Wei Hsieh, and Yung-Sheng Chen

Abstract—This paper proposes a robust system for recognizing vehicle license plates by multi-frame learning. To locate the position of a license plate quickly, we adopt a morphology-based method that extracts salient contrast features as filters to find all possible license plate candidates after calculating the motion energy from the video frames. The contrast feature is robust to lighting changes and invariant to transformations such as image scaling, translation, and skewing. Due to noise, many implausible license plate regions may be extracted; hence, a Support Vector Machine (SVM) is adopted to verify the license plate regions. After locating the license plate, the scheme of shape contexts is used to recognize the characters in the license plate. To improve the recognition rate, a multi-frame verifying technique is further incorporated into our approach. Experimental results show that the proposed method is robust for license plate recognition.

Index Terms—license plate recognition, morphology-based method, Support Vector Machine, shape contexts.

I. INTRODUCTION
With the rapid development of Intelligent Transportation Systems (ITS), license plate recognition (LPR) systems have been studied broadly. License plate recognition is widely used in applications such as traffic volume measurement, monitoring of unattended parking lots, traffic law enforcement, automatic toll collection on highways, and so on.
In the LPR system, many studies divide the problem into two parts: license plate locating and license plate recognition. For license plate locating, some studies extract the license plate based on its color [1], shape, and gray level, while others directly look for regions in the image whose features resemble characters. Duan et al. [2] proposed a method that combines the Hough transform and contour analysis to detect license plates in static pictures. They located the candidates by finding contours in the edge space, and the Hough transform was applied to filter out false detections. However, the edges of a license plate may form an imperfect contour under varied environments. In [3], Fujiyoshi et al. found the center of the license plate in video by means of a neural network; the input pattern position was random, and after many learning iterations (about 10,000), the position of the center was decided. The accuracy was good, but the time consumption was huge.

Manuscript received August 20, 2008. This work was supported in part by the National Science Council, Taiwan, ROC, under grant number NSC92-2213-E-155-052. I-Chen Tsai, Jui-Chen Wu, and Jun-Wei Hsieh are with the Department of Electrical Engineering, Yuan Ze University, 135 Yuan-Tung Road, Chung-Li, Taoyuan 320, Taiwan, ROC. Yung-Sheng Chen is with the Department of Electrical Engineering, Yuan Ze University, 135 Yuan-Tung Road, Chung-Li, Taoyuan 320, Taiwan, ROC (corresponding author; phone: 886-3-4638800 ext. 7113; fax: 886-3-4639355; e-mail: eeyschen@saturn.yzu.edu.tw).

For character recognition, several well-known schemes have been used, such as artificial neural networks [4], fuzzy C-means, and support vector machines. In [4], a contour tracking algorithm was proposed to detect the license plate region in static pictures, and an enhanced neural network was designed to recognize the characters. The algorithm builds a hidden layer between the input and output layers, in which similarity is defined by the ratio between the stored pattern and the input pattern. Chang et al. [1] employed the HSI color space and color edges to find the license plate; this color space provides better linear independence than RGB. They built H, S, I, and E (edge) maps and aggregated them by fuzzy operations, and finally combined optical character recognition (OCR) with a neural network to recognize the characters. These methods are usually time-consuming.
As mentioned above, many license plate recognition studies on video sequences still use a single image captured by the video camera to obtain the result. Nevertheless, a video sequence, which carries a series of variations and a large amount of information, is worth considering, since this rich information can provide more detailed and powerful data for analysis. In this study, a license plate recognition system using video sequences is proposed and briefly previewed as follows. The morphological operation in [5] is first applied to locate candidate positions of a license plate, which are then verified by a support vector machine (SVM). To improve character recognition, the method of shape contexts, which can resist deformation, is adopted, since the quality of a video sequence is usually not as clear as that of a static image. Through the recognition process, the system outputs the recognition result of the vehicle license plate using a multi-frame learning technique. Fig. 1 shows the flowchart of the whole system. The system first uses a motion energy calculation to detect the appearance of a vehicle. A feature extraction method is then used to locate license plate regions, and all possible license plates are verified by an SVM. After locating the license plate, the scheme of shape contexts is used to recognize the characters in the license plate. Finally, each character is refined using a multi-frame verifying technique.
Fig. 1 Flowchart of the proposed LPR system (Start -> Load video image -> motion energy > threshold? -> Feature Extraction -> Detection of License Plate Location -> Verify by SVM -> Find the License Region -> Character Segmentation -> Character Recognition -> Verifying Function -> motion energy > threshold? -> Results).

II. PROPOSED APPROACH
This paper proposes an automatic system for recognizing vehicle license plates by multi-frame learning. The approach consists of two main parts: license plate detection and license plate recognition. The details are described in the following subsections.

A. License Plate Detection
A video sequence contains background and foreground information. In general, the background is more static than the foreground, and the human visual system is sensitive to object motion. To simulate this behavior, we extract the moving objects by frame subtraction so that processing can focus on the moving regions. We perform frame subtraction in the HSI space, and the energy of the differenced frame is defined [6] as
e = \frac{1}{N \times M} \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} I(n, m)^2,    (1)

where n and m denote the row and column of the differenced frame, and N and M are the half-height and half-width of the differenced frame.
The appearance of a vehicle can be estimated by the energy e, as shown in Fig. 2. The energy peak, which occurs around the 400th frame, indicates the appearance of a vehicle.

Fig. 2 Illustration of motion energy.
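
As a rough sketch of how Eq. (1) can be evaluated on consecutive frames, the following Python fragment is one possible reading; OpenCV (cv2), the HSV stand-in for HSI, the use of the full differenced frame instead of the N x M half-size region, and the threshold value are assumptions made for illustration only.

import cv2
import numpy as np

def motion_energy(prev_bgr, curr_bgr):
    """Average squared intensity difference between two frames, in the spirit of Eq. (1)."""
    # The paper differences frames in HSI space; OpenCV's HSV conversion is used here as a stand-in.
    prev_i = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2HSV)[:, :, 2].astype(np.float32)
    curr_i = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2HSV)[:, :, 2].astype(np.float32)
    diff = np.abs(curr_i - prev_i)              # I(n, m): differenced intensity frame
    rows, cols = diff.shape                     # summation region (whole frame in this sketch)
    return float(np.sum(diff ** 2) / (rows * cols))

# A vehicle is assumed present once the energy exceeds a chosen threshold (value is illustrative).
# if motion_energy(frame_prev, frame_curr) > 500.0: ...
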
After detecting the car motion, we apply morphological operations to extract the position of the license plate. Because license plates often exhibit high contrast, especially in the vertical direction, we adopt a morphology-based approach to detect license plate regions. The overall steps of the morphology-based extraction method are shown in Fig. 3.
In order to eliminate noise, a smoothing operation with a structuring element S_{7,7} is first applied. Then, closing and opening operations with a structuring element S_{1,7} are performed on the smoothed image to obtain the output images I_c and I_o, respectively. For detecting license plate edges, a differencing operation is then applied to I_c and I_o. To make these edges more compact and connected, a closing operation is used so that all characters embedded in a license plate are merged into a single segment. After that, a thresholding operation converts the analyzed image into a binary map. A labeling process is then executed to extract the license-plate-like segments, and thus a set of potential license plates is obtained for further verification.
Fig. 3 Flowchart of the proposed method to extract contrast features for license plate detection (Input Image -> Average (7,7) -> Closing (1,7) / Opening (1,7) -> Differencing -> Closing (5,5) -> Thresholding -> Labeling -> Feature Extraction).
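
A minimal sketch of the Fig. 3 pipeline, assuming OpenCV; the kernel orientation, the Otsu threshold, and the geometric limits used to keep plate-like components are illustrative guesses rather than the authors' exact settings.

import cv2
import numpy as np

def plate_candidates(gray):
    """Morphology-based contrast-feature extraction following the Fig. 3 pipeline."""
    smoothed = cv2.blur(gray, (7, 7))                              # averaging with S_{7,7}
    se_1x7 = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 1))     # S_{1,7}: orientation is assumed
    i_c = cv2.morphologyEx(smoothed, cv2.MORPH_CLOSE, se_1x7)      # closing  -> I_c
    i_o = cv2.morphologyEx(smoothed, cv2.MORPH_OPEN, se_1x7)       # opening  -> I_o
    diff = cv2.absdiff(i_c, i_o)                                   # high-contrast (edge-like) response
    se_5x5 = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    merged = cv2.morphologyEx(diff, cv2.MORPH_CLOSE, se_5x5)       # connect characters into one segment
    _, binary = cv2.threshold(merged, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary) # labeling
    boxes = []
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        # Keep components whose geometry roughly matches a plate (limits are assumed values).
        if 2.0 < w / float(h) < 6.0 and area > 300:
            boxes.append((x, y, w, h))
    return boxes
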
To extract the possible positions of a vehicle license plate, connected-component analysis and a labeling operator are applied. Furthermore, based on geometrical characteristics, the positions of the potential candidates can finally be detected, as shown in Fig. 4.
Fig. 4 The process of morphological operation: (a) the differenced frame picture, (b) the result after applying morphological operators, and (c) marking the region of the license plate.

To increase the reliability of the detected regions, we apply a classifier to distinguish true license plates from the candidates. The SVM classifier [7] has shown excellent performance without requiring a priori knowledge, so we adopt it to distinguish license plates from the candidates. The support vectors in our system are license plate patterns and non-license-plate patterns: the license plate patterns belong to class 1 and the non-license-plate patterns belong to class 2. Given a set of labeled training examples, let a feature space G containing l patterns be represented by
G = \{(x_i, y_i)\}_{i=1}^{l},    (2)

where x_i \in R^n stands for an input vector and y_i \in \{+1, -1\} is the desired output. The key issue of the support vector machine is the selection of the training set: the collected training set influences not only the separating hyperplane but also the classification accuracy. Generally, the SVM represents the optimal separating hyperplane in the form

f(x) = \mathrm{sign}\left( \sum_{\text{support vectors}} y_i \cdot k(x, x_i) - b \right).    (3)

However, training a support vector machine requires a large number of patterns. If the number of training patterns is m, training an SVM needs memory of size m^2, and the number of training patterns is generally over 5000. This has led many researchers to study how to reduce the training time and memory requirement of the SVM [8].
In our method, the morphological operation is applied to reduce the number of test patterns. The procedure for building a reliable SVM training set is as follows, where some training patterns are displayed in Fig. 5.

Procedure of the support vector machine:
1. Collect the initial patterns, which include license plate patterns and non-license-plate patterns.
2. Regard all patterns as a set L = \{p_i\}_{i=1}^{z}, where z is the number of patterns.
3. Calculate the hue value of each p_i and obtain a support vector p_i = \{h_0, h_1, ..., h_{m \times n}\}, where m and n are the height and the width of the pattern, respectively.
4. Feed all p_i \in L into Eq. (3) and find the hyperplane between the two kinds of patterns.
5. Classify the selected patterns, which contain the random patterns and the training patterns.
6. Verify whether most of the results are correct. If the accuracy is low, iterate steps 2-4 until the accuracy rate is stable; the training set of the SVM is thus determined.
7. According to Eq. (3), check on which side of the hyperplane an unknown pattern lies. If the computed value is greater than 1, the pattern is a license plate; otherwise, it is a non-license-plate pattern.

Fig. 5 Example of training patterns.
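
The verification step could be sketched as below, assuming scikit-learn's SVC in place of the authors' SVM implementation; the patch size, kernel choice, and decision threshold are illustrative assumptions.

import cv2
import numpy as np
from sklearn.svm import SVC

def hue_vector(patch_bgr, size=(48, 24)):
    """Hue values of a resized candidate patch, flattened into one pattern p_i (step 3)."""
    hsv = cv2.cvtColor(cv2.resize(patch_bgr, size), cv2.COLOR_BGR2HSV)
    return hsv[:, :, 0].astype(np.float32).ravel() / 180.0    # OpenCV hue range is 0..179

# Hypothetical training data: plate patches (class +1) and non-plate patches (class -1).
# X = np.stack([hue_vector(p) for p in plate_patches + background_patches])
# y = np.array([+1] * len(plate_patches) + [-1] * len(background_patches))
# svm = SVC(kernel="rbf").fit(X, y)                           # steps 1-6 in outline

def looks_like_plate(svm, candidate_bgr):
    """Step 7: decide by the side of the hyperplane (a zero cut on the signed distance here)."""
    return svm.decision_function(hue_vector(candidate_bgr)[None, :])[0] > 0.0
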

B. License Plate Recognition
Before performing recognition, the license plate region is first binarized into a binary map using a threshold T_R. T_R is found using the "minimum within-group variance" dynamic thresholding method [9]. Given a license plate candidate R, the verification scheme takes its longest axis as the x-axis and then finds several geometrical properties of its characters using an x-projection technique. In practice, the characters embedded in a license plate should satisfy the following requirements:

A1: their widths should be similar;
A2: their heights should be similar;
A3: the densities of all characters lie within a determined range; and
A4: the aspect ratios of all characters must be greater than one.

A region satisfying the above properties is presumed to be a character of the license plate. In our study, the letters and numbers appearing in a license plate are ordered as shown in Fig. 6.

Fig. 6 (a) The letters and (b) numbers in a license plate.
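
One possible reading of the binarization and x-projection segmentation with the A1-A4 checks is sketched below; the density range, the size tolerances, and the assumption that the A4 ratio means height greater than width are all illustrative, not taken from the paper.

import cv2
import numpy as np

def segment_characters(plate_gray):
    """Binarize a plate region and split it into character boxes via an x-projection profile."""
    # Otsu's threshold is used as a stand-in for the minimum within-group variance method [9].
    _, binary = cv2.threshold(plate_gray, 0, 1, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    profile = binary.sum(axis=0)                          # column sums along the plate's long axis
    runs, start = [], None
    for x, occupied in enumerate(profile > 0):            # runs of occupied columns = candidates
        if occupied and start is None:
            start = x
        elif not occupied and start is not None:
            runs.append((start, x)); start = None
    if start is not None:
        runs.append((start, len(profile)))

    boxes, heights, widths = [], [], []
    for x0, x1 in runs:
        col = binary[:, x0:x1]
        rows = np.where(col.any(axis=1))[0]
        if rows.size == 0:
            continue
        h, w = rows[-1] - rows[0] + 1, x1 - x0
        density = col[rows[0]:rows[-1] + 1].mean()
        if h > w and 0.2 < density < 0.9:                 # A3, A4 (assumed ranges)
            boxes.append((x0, rows[0], w, h)); heights.append(h); widths.append(w)
    if boxes:                                             # A1, A2: similar widths and heights
        mh, mw = np.median(heights), np.median(widths)
        boxes = [b for b in boxes if abs(b[3] - mh) < 0.3 * mh and abs(b[2] - mw) < 0.5 * mw]
    return boxes
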
In this paper, we use shape contexts to recognize the license plate characters. In the previous method [10], the size of the test images is about 200 x 300. In real cases, however, the size of a character (e.g., 7 x 18) may be too small to sample: the edge points of such a character are few and not representative enough. For this reason, we use all points whose value is 1 in the character image in place of the edge points. Moreover, some characters have a small width, which degrades the precision of recognition by shape contexts; to avoid this problem, the size of a character is normalized with respect to its height, which is usually more stable.

Fig. 7 Recognition using shape contexts. (a) original image, (b) shape of original image, (c) shape of test image, (d) the feature of original image, and (e) the feature of test image.

An object is represented by a set Ω = \{o_1, o_2, ..., o_l\} of shape points which are sampled from the internal and external
contours of the object. We assume that the shapes of the original image and the test image are given in Fig. 7(b) and (c), where (d) and (e) are the corresponding statistical graphs of (b) and (c) in log-polar coordinates. Each pixel in Fig. 7(d) and (e) represents one bin of the log-polar coordinates; the darker the pixel, the more points fall inside that bin.

For each point o_i on the original shape, the corresponding point t_i on the test shape is our target. From any point on the shape there are l - 1 vectors to all the other points, and the shape is represented by these vectors. The vectors contain much more information than the edge points alone, so the description of the shape is more detailed. To build a histogram for matching, one of the shape points is chosen as the center point and the remaining l - 1 points are expressed in log-polar space.

The log-polar mapping is a transformation from points in Cartesian coordinates to the log-polar plane. The transformation functions are

r = \log \sqrt{x^2 + y^2},    (4)

\theta = \tan^{-1}(y / x).    (5)

Fig. 8 shows an example of the log-polar space.

Fig. 8 The diagram of log-polar.
The log-polar space is more sensitive to the positions of points on the shape than the perpendicular (Cartesian) coordinate system, and the information in log-polar space contains not only the position but also the direction to the other points.

To estimate the similarity between two shape contexts drawn from the same histogram distribution, a statistical measure such as the \chi^2 test can be used. Let h_a and h_b denote the histograms of the two shape contexts, each with S bins. Eq. (6) measures the similarity between the two shapes, and its value ranges between 0 and 1:

\chi^2_{ab} \equiv \chi^2(h_a, h_b) = \frac{1}{2} \sum_{s=1}^{S} \frac{[h_a(s) - h_b(s)]^2}{h_a(s) + h_b(s)}.    (6)
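
A compact sketch of the log-polar shape-context histogram of Eqs. (4)-(5) and the chi-square comparison of Eq. (6); the numbers of radial and angular bins are assumptions made for illustration.

import numpy as np

def shape_context(points, center, n_r=5, n_theta=12):
    """Log-polar histogram of the vectors from `center` to the other shape points."""
    vec = points.astype(np.float64) - center
    vec = vec[np.any(vec != 0, axis=1)]                   # drop the center itself (l - 1 vectors)
    r = np.log(np.hypot(vec[:, 0], vec[:, 1]))            # Eq. (4): r = log(sqrt(x^2 + y^2))
    theta = np.arctan2(vec[:, 1], vec[:, 0])              # Eq. (5): theta = arctan(y / x)
    r_edges = np.linspace(r.min(), r.max() + 1e-9, n_r + 1)
    t_edges = np.linspace(-np.pi, np.pi, n_theta + 1)
    hist, _, _ = np.histogram2d(r, theta, bins=[r_edges, t_edges])
    return hist.ravel() / max(hist.sum(), 1.0)            # normalised histogram h(s)

def chi_square(h_a, h_b):
    """Eq. (6): 0.5 * sum of (h_a - h_b)^2 / (h_a + h_b) over the non-empty bins."""
    denom = h_a + h_b
    s = denom > 0
    return 0.5 * float(np.sum((h_a[s] - h_b[s]) ** 2 / denom[s]))

# The template character whose histogram gives the smallest chi-square value is taken as the match.
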
According to Eq. (6), the template with the minimal \chi^2 is the most similar to the test image. After performing the above methods, the characters can be identified and recognized. However, the extracted hypothesis may contain incomplete characters, as shown in Fig. 9. To improve the recognition rate, the multi-frame verifying technique is further adopted.

The verifying process is as follows. First, we build a matrix that stores the similarity of each character. As the character recognition result of each frame is obtained, we compare the similarities in this frame with those in the matrix and keep the smaller one. We also set a similarity threshold for checking: all character similarities should be less than this threshold.
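
The multi-frame verifying step described above might look like the following sketch; the plate length and the similarity threshold are illustrative assumptions.

import numpy as np

class PlateVerifier:
    """Keep, for each character position, the best (smallest chi-square) match seen so far."""

    def __init__(self, n_chars=6, threshold=0.35):
        self.best_score = np.full(n_chars, np.inf)   # the "similarity matrix" of the paper
        self.best_char = [None] * n_chars
        self.threshold = threshold                   # assumed acceptance threshold

    def update(self, frame_result):
        """frame_result: per position, either None or a (character, chi_square_score) pair."""
        for i, item in enumerate(frame_result):
            if item is None:
                continue
            char, score = item
            if score < self.best_score[i]:           # keep the smaller similarity value
                self.best_score[i], self.best_char[i] = score, char

    def final_plate(self):
        """Return the plate string once every position has a match below the threshold."""
        if all(c is not None and s < self.threshold
               for c, s in zip(self.best_char, self.best_score)):
            return "".join(self.best_char)
        return None
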

Fig. 9 The incomplete segmentation.

Fig. 10 The process of verifying technique.
As illustrated in Fig. 10(b), the last character is not extracted and the third character is recognized incorrectly. In our method, we keep recognizing the following frames and rectifying the character results, as Fig. 10(c) and (d) show. The recognition process continues until the vehicle disappears from the scene. The result obtained with the verifying technique is much more reliable because it is refined by continuous checking.

III. EXPERIMENTAL RESULTS
The proposed system is implemented on an MS-Windows-based PC with a Pentium IV 2.8 GHz CPU, and the programming environment is Borland C++ Builder 6.0. The video is shot by a digital video camera and transmitted to the personal computer through a capture card. To analyze the performance of the proposed approach, four experiments are presented as follows.

Experiment 1: Fundamental Function

Fig. 11 The motion energy for illustration.

The first experiment demonstrates the fundamental functions of our vehicle license plate recognition system, where the input video sequence contains one vehicle. According to the motion energy shown in Fig. 11, the system detects the appearance of the vehicle at about the 150th frame. The video sequence of this experiment is shown in Fig. 12, and the license plate number is displayed at the bottom-left of the screen.
Fig. 12 A video sequence in Experiment 1.

Experiment 2: Verifying Function
In this experiment, we illustrate the result of verifying the license plate characters. The process of the verifying function is shown in Fig. 13, where (a) the initial column is empty and (b)-(d) the license plate numbers are recognized. While the car is passing,
we keep tracking the recognition results given by the shape contexts in order to rectify the license plate number. Fig. 13(e) shows the final result for the license plate. For every sequence we record the number of verifying passes needed to obtain the final plate number in Table 1. As Table 1 shows, 28 of the 41 video sequences require more than one verifying pass to reach the final result. In other words, more than half of the experiments need the verifying function to obtain the final result; this confirms that the verifying function is necessary to improve the precision of the system.

Fig. 13 A video sequence in Experiment 2.

Table 1 Verifying times for each sequence.
Verifying times used for obtaining the final result | Number of video sequences
1 | 13
2 | 8
3 | 9
4 | 6
5 | 4
6 | 1
Average | 2.34

Experiment 3: Noise
Video sequences sometimes contain noise, which may have many causes, for example the shooting environment, the tape of the digital video camera, human error, or the mechanism of the digital video camera itself.
In this experiment, the noise appearing in the video sequences comes from the tape of the digital video camera, so the sequences are corrupted by random blemishes, as illustrated in Fig. 14. In this case, our proposed method still achieves good recognition.
Fig. 14 Illustration of a video sequence with noise.

In another case, we show the motion energy diagram in Fig. 15 and recognize the license plates in a noisy video sequence, as displayed in Fig. 16. Compared with the motion energy shown in Fig. 2, the curve in Fig. 15 is much more jagged: because the noise has a higher frequency than the signal, the energy of the noise is also greater than that of the signal.

Fig. 15 The diagram of motion energy with noise effects.

Fig. 16 A video sequence with noise effects.

Experiment 4: The Recognition Rate of Our System
We run a series of video sequences captured in real environments to demonstrate the robustness of our system. To evaluate the performance of our scheme, the average recognition accuracy is used in this paper. The recognition rate is the ratio of the number of correctly recognized vehicle license plates, Num_Correct, to the total number of vehicle license plates in the database, Num_Total; that is,

\text{Recognition rate} = \frac{Num_{Correct}}{Num_{Total}}.    (7)
The video sequences were all selected by human inspection, and all results are obtained using multiple frames. The input video sequences contain 41 license plates, and the recognition of the license plate characters is recorded. Among the 41 video sequences, there are 38 license plates for which all characters are recognized correctly, giving a precision rate of 92.68%; the result is shown in Table 2. A total of 246 characters are processed, and only four characters are recognized incorrectly, so the character recognition accuracy of our system is 98.3%. The statistical analysis is shown in Fig. 17.
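
As a quick check of Eq. (7) against the counts reported above:

\text{Recognition rate} = \frac{38}{41} \approx 92.68\%, \qquad \frac{246 - 4}{246} \approx 98.37\% \text{ at the character level}.
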

Table 2 The accuracy of license plate recognition.
The number of video sequences | 41
Plates with all characters recognized correctly | 38
Accuracy | 92.68 %

Fig. 17 The statistical results of character recognition.

Based on our experiments, the main contributions of this approach can be summarized as follows.
1. Some traditional methods use a vertical edge-matching algorithm to group all possible license plate positions through edge matching. They assume that the vertical boundaries between a license plate and its background contrast significantly. However, when the color of the license plate is similar to its background, this assumption no longer holds. Hence, a morphology-based method was designed in our approach to extract high-contrast areas as license plate candidates. This feature is invariant to different changes such as lighting, rotation, translation, and complicated backgrounds.
2. Template-based methods are often adopted for character recognition. However, the quality of a video sequence is usually not as clear as that of a static picture. To improve character recognition, the method of shape contexts was therefore incorporated into our approach to resist deformation.
3. Many license plate recognition studies on video sequences still use a single picture captured from a video camera to obtain the result. Because the sequence contains much more information, we proposed the multi-frame verifying technique to improve the recognition rate.
As a result, the proposed method recognizes license plates well even under cluttered backgrounds.

IV. CONCLUSIONS
In this paper, we have presented an approach for recognizing license plates based on multi-frame learning. Two schemes are usually used to detect license plates in video: the first works on a single image triggered when a car passes, and the second finds the license plate in a video sequence captured at a toll gate where the car stops. In our approach, the input sequences are shot without the car stopping.
In addition, our system applies the SVM, with its high accuracy, to locate the position of the license plate, and uses shape contexts, which can resist skew and deformation, to recognize the characters of the license plate. Our experiments show that the system is very robust and recognizes the characters with high accuracy. The disadvantages of the SVM are its large data requirement and slow speed; we overcome these by using morphological operations.
In the near future, we will consider other classifiers that promise both speed and accuracy to make our system more useful. Moreover, the current system can only recognize license plates; by combining it with a database and hardware, it would be possible to build a complete system, for example a toll collection system, for real applications.

REFERENCES
[1] S. L. Chang, L. S. Chen, Y. C. Chung, and S. W. Chen, "Automatic License Plate Recognition," IEEE Transactions on Intelligent Transportation Systems, Vol. 5, No. 1, pp. 42-53, March 2004.
[2] T. D. Duan, D. A. Duc, and T. L. H. Du, "Combining Hough Transform and Contour Algorithm for Detecting Vehicles' License-Plates," Proceedings of the International Symposium on Intelligent Multimedia, Video and Speech Processing, pp. 747-750, October 2004.
[3] H. Fujiyoshi, T. Umezaki, and T. Imamura, "Area Extraction of License Plates Using an Artificial Neural Network," Systems and Computers in Japan, Vol. 29, No. 11, pp. 55-64, 1998.
[4] K. B. Kim, S. W. Jang, and C. K. Kim, "Recognition of Car License Plate by Using Dynamical Threshold Method and Enhanced Neural Networks," Lecture Notes in Computer Science, pp. 309-319, 2003.
[5] J. W. Hsieh, S. H. Yu, and Y. S. Chen, "Morphology-based license plate detection in images of differently illuminated and oriented cars," Journal of Electronic Imaging, Vol. 11, No. 4, pp. 507-516, 2002.
[6] Y. J. Wang and B. Z. Yuan, "Robust Face Tracking by Motion Energy and Genetic Algorithms," IEEE ICSP, Vol. 1, pp. 695-698, August 2002.
[7] V. Vapnik, The Nature of Statistical Learning Theory. New York: Springer-Verlag, 1995.
[8] K. M. Lin and C. J. Lin, "A Study on Reduced Support Vector Machines," IEEE Transactions on Neural Networks, Vol. 14, No. 6, November 2003.
[9] M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis and Machine Vision. London, U.K.: Chapman & Hall, 1993.
[10] S. Belongie, J. Malik, and J. Puzicha, "Shape Matching and Object Recognition Using Shape Contexts," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 4, pp. 509-522, April 2002.
Jun-Wei Hsieh received the Bachelor's degree in Computer Science from Tunghai University, Taiwan, in 1990, and the Ph.D. degree in Computer Engineering from National Central University, Taiwan, in 1995. He received the Phi Tau Phi award upon graduation. From 1996 to 2000, he was a Research Fellow at the Industrial Technology Research Institute, Hsinchu, Taiwan, where he managed a team developing video-related technologies. He is presently an Associate Professor in the Department of Electrical Engineering, Yuan Ze University, Taiwan. He received Best Paper Awards from the Image Processing and Pattern Recognition Society of Taiwan in 2005, 2006, and 2007. His research interests include content-based multimedia databases, video indexing and retrieval, computer vision, and pattern recognition.

Yung-Sheng Chen was born in Taiwan, R.O.C., on June 30, 1961. He received the B.S. degree from Chung Yuan Christian University, Chung-Li, Taiwan, in 1983, and the M.S. and Ph.D. degrees from National Tsing Hua University, Hsinchu, Taiwan, in 1985 and 1989, respectively, all in electrical engineering.
In 1991, he joined the Electrical Engineering Department, Yuan Ze Institute of Technology, Chung-Li, where he is now a Professor. His research interests include human visual perception, computer vision and graphics, circuit design, and website design.
Dr. Chen received the Best Paper Award from the Chinese Institute of Engineers in 1989 and an Outstanding Teaching Award from Yuan Ze University in 2005. He has been listed in the Who's Who of the World since 1998 and was awarded The Millennium Medal from The Who's Who Institute in 2001. He is a member of the IEEE and the IPPR of Taiwan, R.O.C.

I-Chen Tsai was born in Taiwan, R.O.C., on August 17, 1981. She received the B.S. degree and the M.S. degree from Yuan Ze University, Chung-Li, Taiwan, in 2003 and 2005, respectively, both in electrical engineering. She is currently working as a firmware engineer in the Gueishan industrial park, Taoyuan, Taiwan, R.O.C.
Her research interests include image processing and pattern recognition.

Jui-Chen Wu was born in Taiwan, R.O.C., on November 22, 1976. She received the B.S. degree from Chung Yuan Christian University and the M.S. degree from Yuan Ze University, Chung-Li, Taiwan, in 2000 and 2002, respectively. She is currently working toward the Ph.D. degree in the Department of Electrical Engineering, Yuan Ze University, Chung-Li, Taiwan, R.O.C.
Her research interests include image processing and pattern recognition.