SAMSUNG EXHIBIT 1006
Samsung v. Image Processing Techs.

U.S. Patent 5,239,594, Aug. 24, 1993, Drawing Sheets 1 to 10

[Drawings. Legible legends per sheet:]

Sheet 1: Fig. 1 (PRIOR ART): INPUT TRANSDUCER. Fig. 2 (PRIOR ART). Fig. 3. Fig. 4 (PRIOR ART): NO. OF SAMPLES; BIRCH. Fig. 5 (PRIOR ART): GRAIN PROMINENCE F2 versus BRIGHTNESS F1.
Sheet 2: block diagram; legends illegible in scan.
Sheet 3: Fig. 7: fi1, fi2, ..., fij, ..., fiN; SELF-ORGANIZING CLASSIFIER; WEIGHTS UPDATE SIGNAL; CORRECT CLASS SIGNAL L, 23; 29.
Sheet 4: Fig. 9: WEIGHT UPDATE SIGNAL; CLASS SIGNAL; 23.
Sheet 5: no legible legends.
Sheet 6: 27.
Sheet 7: CLASS SELECTOR; CORRECT CLASS SIGNAL; LEARNING TRIGGER, 19; Tr, 22.
Sheet 8: no legible legends.
Sheet 9: Fig. 16.
Sheet 10: Fig. 17 (flowchart): INPUT TRANSDUCER GENERATES S; DETERMINE FEATURE VECTORS; GENERATE RESPONSE VECTORS; DETERMINE OUTPUTS T1, T2, ..., TM; DETERMINE Cmin1, Cmin2, Tmin1 AND Tmin2; LEARNING MODE?; INPUT TRAINING SIGNAL; CORRECT CLASS ≠ Cmin1 OR P = REJECT?; TRANSFER CORRECT CLASS TO SELF-ORGANIZING CLASSIFIERS; MODIFY WEIGHTING VECTORS IN SEQUENCE.
`
`
`SELF-ORGANIZING PATTERN CLASSIFICATION
`NEURAL NETWORK SYSTEM
`
`This application is a continuation of application Ser.
`No. 654,424, filed Feb. 12, 1991, now abandoned.
`
`BACKGROUND OF THE INVENTION
`
`1. Field of the Invention
`
`The present invention relates generally to pattern
`classification systems and, more particularly, to a com-
`pound pattern classification system using neural net-
`works which is able to vary a response signal by learn-
`ing from repeatedly input pattern signals to provide
`correct classification results.
`2. Description of the Prior Art
`Pattern classification systems, such as character or
`voice recognition systems, separate and identify classes
`of incoming pattern signals. FIG. 1 shows a conven-
`tional pattern classification system such as described by
Richard O. Duda and Peter E. Hart in Pattern Classifi-
`cation and Scene Analysis, Wiley-Interscience Publish-
ers, pp. 2-4. This classification system includes an input
`transducer 1, such as a television camera, which per-
`forms opto-electronic conversion of characters to gen-
`erate pattern signals S providing characteristic informa-
`tion about the characters. The system further includes a
`feature extractor 2 which receives the pattern signals S
`and generates feature vectors F useful for classifying
`the characters. The system is also provided with a clas-
`sifier 3 which classifies the characters and generates
`classification responses P based on the distributions of
`the feature vectors F. In order to make such classifiers,
`pattern recognition techniques, such as a linear discrimi-
`nation method, have been developed. However, the
`classification systems using these techniques are unable
`to learn by adjusting classes to account for new input
`patterns or to create new classes. Consequently,
`it is
`necessary to manually develop the information for clas-
`sifying pattern signals and manually incorporate the
`information into the system. This manual development
`and incorporation diminishes the efficiency of the sys-
`tem and provides another potential source for error (i.e.
`human error).
`In order to solve this problem, many self-organizing
`pattern classifiers have been proposed which are able to
`organize themselves correctly to separate a given num-
ber of pattern signals into their classes. An example of a
self-organizing pattern classifier is one which makes use
of a back propagation learning method such as shown
`by Richard P. Lippmann in “An Introduction to Com-
`puting with Neural Nets,” IEEE ASSP Magazine,
`April 1987, Vol. 4, No. 2, pp. 4-22. The back propaga-
tion technique is an iterative gradient algorithm that
`seeks to minimize the mean square error between actual
`output and desired output. Another example of a self-
`organizing pattern classifier is the learning vector quan-
`tization 2 technique such as shown by Teuvo Kohonen,
Gyorgy Barna, and Ronald Chrisley, “Statistical Pat-
`tern Recognition with Neural Networks: Benchmark-
`ing Studies,” Proceedings of IEEE International Confer-
`ence on Neural Networks, Jul. 24-27 1988, Vol. 1, pp.
`61-68.
`These self-organizing pattern classifiers suffer the
`drawback that when they make a wrong classification,
`they modify the information about stored weighting
`data to attempt to yield more accurate results. FIG. 2
shows the distributions of two classes CA and CB in a
two-dimensional vector space defined by feature axes
X1 and X2. The above self-organizing classifiers are
`able to make correct boundaries 9 by using the back
propagation learning method or learning vector quanti-
`zation 2 technique (referenced above) to separate the
`two classes CA and CB.
`As long as the distributions of the respective feature
vectors F, each consisting of N elements f1, f2, ..., fN,
`do not overlap each other, the above classifiers are able
`to learn to provide correct classification with high clas-
`sification rates. However, as FIG. 3 shows, when the
`distributions 10 and 11 of the feature vectors F of two
classes CA and CB overlap each other in an area 12, none
`of the above learning techniques make it possible to
`separate these two classes.
`When a large number of classes are identified, such as
when classifying Chinese characters, it is rare for a
feature vector of a given class not to overlap with the
feature vectors of other classes (hereinafter such a non-
overlapping feature vector will be referred to as a
“single aspect feature vector”). Thus, the above-described
self-organizing classifiers, which have been designed for
single aspect feature vectors, fail to provide high
recognition rates for multiple aspect feature vectors.
`One approach to overcoming this problem of over-
`lapping feature vectors of different classes is to utilize
`multiple features. FIG. 4 provides an example wherein
`a single feature is used. In particular, it shows the distri-
`butions 13 and 14 of brightness features F1 for ash wood
`and birch wood respectively, as described in Pattern
Classification and Scene Analysis, at pp. 2-4. FIG. 5
`provides an example wherein multiple features are used.
`FIG. 5 shows the distributions 15 and 16 of ash wood
and birch wood, respectively, with respect to the
brightness feature F1 and the grain prominence feature
`F2. In FIG. 4, there is a large overlapping area in the
`brightness feature F1 of the ash wood 13 and the birch
`wood 14. As such, it is impossible to make correct clas-
`sification using only the brightness feature F1. How-
`ever, as shown in FIG. 5, by using both the brightness
`feature F1 and the grain prominence feature F2, it is
`possible to classify these two objects correctly. In this
`way, by inputting two or more feature vectors, the use
`of multiple features does not suffer the drawback de-
`scribed above for single feature approaches.
`However, a drawback with the use of multiple fea-
`ture vectors is that the features of feature vectors F1,
F2, ..., FN (where N is a positive integer) are not
`generally related. As a result, not only large areas of
`memory but also large amounts of computing time are
`necessary to input the N feature vectors F1, F2, . . . , FN
`into the self organizing pattern classifier. For example,
`suppose that there are no relations among the three
`feature vectors F1, F2, and F3 which are used to iden-
`tify an object A. Further suppose that the object A has
`four different instances F11, F12, F13, and F14 of the
`feature vector F1, four different instances F21, F22,
F23, and F24 of the feature vector F2, and four different
`instances F31, F32, F33, and F34 of the feature vector
`F3. Then, it is possible to represent the object A by
`using a vector F as follows:
`
F = {F1i, F2j, F3k}

wherein i, j, k = 1, 2, 3, 4. Thus, there are 64 (i.e.,
4×4×4) different instances, and 192 (i.e., 64×3) vectors
are required to represent the object A.
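
The combinatorial growth can be made concrete with a
short sketch (Python here, purely illustrative; the patent
gives no code and the instance labels are assumptions):

```python
from itertools import product

# Hypothetical instance labels for the three unrelated feature vectors
F1 = ["F11", "F12", "F13", "F14"]
F2 = ["F21", "F22", "F23", "F24"]
F3 = ["F31", "F32", "F33", "F34"]

# Every combination {F1i, F2j, F3k} must be enumerated explicitly
instances = list(product(F1, F2, F3))
print(len(instances))      # 64 combinations (4 x 4 x 4)
print(len(instances) * 3)  # 192 stored vectors (3 per combination)
```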
`
`Accordingly, it is an object of the invention to pro-
`vide a compound self-organizing pattern classification
`system that effectively utilizes a plurality of different
`feature vectors.
`It is another object of the invention to provide a
`compound self-organizing pattern classification system
`which is able to classify pattern signals with accuracies
`higher than those of respective pattern classifiers by
`making compound classification based on the outputs of
`a number of independent self-organizing pattern classifi-
`ers into which a number of different feature vectors are
`input.
`
`SUMMARY OF THE INVENTION
`
`According to the present invention, a self-organizing
`pattern classification neural network system classifies
`incoming pattern signals into classes. The system in-
`cludes feature extractors for extracting different feature
`vectors from an incoming pattern signal. For instance, if
`the input is visual data focusing on a piece of wood, the
`feature extractors might extract the features of grain
`prominence and brightness from the visual data. The
`feature extractors are coupled to self-organizing neural
`network classifiers. A separate neural network classifier
`is provided for each of the feature extractors. The clas-
`sifiers receive the feature vectors and generate response
vectors comprising a plurality of response scalars
corresponding to the respective classes. The response
scalars are forwarded to a discriminator which receives
`the response vectors for each class and generates a
`classification response. The classification response in-
`cludes information indicative of whether a classification
`is possible and also information indicating an identified
`class. Lastly, the system includes a learning trigger for
`transferring a correct class signal to the self-organizing
`classifiers based on a class of the training signal and
`based on the classification response.
It is preferred that each self-organizing classifier
comprises a neural network having input nodes for
receiving feature scalars of each of the feature vectors
and a plurality of intermediate nodes for receiving the
feature scalars from said input nodes. The intermediate
nodes also generate a plurality of intermediate outputs
that are received by output nodes of a given class.
Hence, the intermediate nodes of a particular class are
all coupled to a single output node. The output node
determines the smallest intermediate output among
those received from the intermediate nodes. It transfers
this intermediate output to the discriminator as a re-
sponse scalar. The classifier also includes a self-organiz-
ing selector for receiving the smallest intermediate out-
put and a node number of the intermediate node which
gives the smallest intermediate output. The self-organiz-
ing selector determines a weight update signal based on
the node number and intermediate output from this
intermediate node. It also determines the correct class
signal.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`FIG. 1 is a block diagram of a conventional pattern
`classification system;
`FIG. 2 is a plot illustrating how conventional ap-
`proaches can distinguish between non-overlapping
`classes.
`FIG. 3 is a plot illustrating how conventional ap-
`proaches
`cannot distinguish between overlapping
`classes.
`FIG. 4 is a plot of histograms of brightness of ash
`wood versus birch wood.
`
FIG. 5 is a plot illustrating how the features of bright-
`ness and grain prominence together accurately distin-
`guish between ash wood and birch wood.
`FIG. 6 is a block diagram of a self-organizing pattern
`classification neural network system according to an
`embodiment of the invention.
`FIG. 7 is an input output diagram of a self-organizing
`pattern classifier useful for the classification system of
`FIG. 6.
FIG. 8 is a block diagram of the self-organizing pat-
`tern classifier of FIG. 7.
`FIG. 9 is a block diagram of an intermediate node of
`the classification system of FIG. 8.
`FIG. 10 is an example of a block diagram of the self-
`organizing classifier of FIG. 8.
`FIG. 11 is a plot of a vector space illustrating inter-
`mediate nodes for two different classes.
`FIG. 12 is a block diagram of a self-organizing classi-
`fier in which a new intermediate node is added.
`FIG. 13 is a graph illustrating an example wherein
`templates for different classes are too closely situated.
`FIG. 14 is a block diagram of a discriminator useful
`for the classification system of FIG. 6.
`FIG. 15 is a block diagram of a class summing node
`useful for the discriminator of FIG. 14.
`FIG. 16 is a block diagram of a compound self-organ-
`izing pattern classification neural network system using
`a sequential digital computer according to an embodi-
`ment of the invention.
`FIG. 17 is a flowchart useful for explaining operation
`of the classification system of FIG. 16.
`
`DETAILED DESCRIPTION OF THE
`PREFERRED EMBODIMENT
`
`In accordance with a preferred embodiment of the
`present invention depicted in FIG. 6, a self-organizing
`pattern classification neural network system includes an
`input transducer 1, K feature extractors 2 (where K is a
`positive integer), K self-organizing classifiers 17, a dis-
criminator 18, and a learning trigger 19, all of which are
`interconnected as shown.
The neural network system operates in either a classi-
`fication mode wherein pattern signals are classified or a
`learning mode wherein the weighting vectors stored in
the self-organizing classifiers 17 are modified.
`In the classification mode,
`the input transducer 1
`generates pattern signal vectors S which represent the
`object to be classified. For example, when a printed
`character on paper is to be classified, the input trans-
ducer 1 generates, by opto-electronic conversion, a pat-
tern signal vector S of a bit-mapped image. In this bit-
mapped image, pixel locations where the letter is located
are represented by values of “1” whereas the other pixel
locations are represented by values of “0”.
`The pattern signal vector S is then transferred to the
`K feature extractors 2 in parallel. The K feature extrac-
tors 2 generate K different feature vectors F1, F2, ...,
FK from the pattern signal vector S. These feature vectors
`vary with the object to be identified. The objects to be
`identified may include characters or voices. The feature
`vectors are generated by using techniques well known
`in the prior art. In the character recognition field, for
example, a characteristic loci or crossing and distance
feature may be extracted as a feature vector F by em-
ploying the techniques described by C. Y. Suen, M.
Berthod, and S. Mori in “Automatic Recognition of
`Handprinted Characters—the State of the Art,” Pro-
`ceedings of IEEE, Vol. 68, No. 4, April 1980, pp.
`469—487.
The K feature vectors F1, F2, ..., FK are then
`transferred to the corresponding K self-organizing clas-
sifiers 17. As FIG. 7 shows, the i-th self-organizing
classifier 17 receives a feature vector Fi composed of N
feature scalars fi1, fi2, ..., fiN and generates a response
vector Ri composed of M response scalars ri1, ri2, ...,
riM. These M response scalars ri1, ri2, ..., riM corre-
`spond to M classes, and a response scalar rij indicates
`how far apart the pattern signal S is from the class Cj in
terms of the feature vector Fi. For example, when the 26
alphabetic characters “A” through “Z” are classified, the
response vector Ri is composed of 26 response scalars:

Ri = {ri1, ri2, ..., ri26}

wherein ri1, ri2, ..., ri26 indicate how remote the
pattern signal is from the respective letters “A”, “B”,
..., “Z”. The K response vectors R1, R2, ..., RK of the
K self-organizing classifiers 17 are then transferred to
the discriminator 18, wherein the linear sum of the
response scalars r1i, r2i, ..., rKi of the response vectors
R1, R2, ..., RK corresponding to a class Ci is taken to
determine the total outputs T1, T2, ..., TM for the
classes C1, C2, ..., CM.
`The class which gives the smallest total output is deter-
`mined by the discriminator 18 and output as a classifica-
`tion result P 21.
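
The classification mode can be summarized in a short
sketch (Python, illustrative only; the function names and
the equal classifier weighting are assumptions, not the
patent's):

```python
def classify(S, extractors, classifiers, M):
    """One classification pass: K feature extractors feed K
    self-organizing classifiers; the discriminator sums the
    response scalars per class and picks the smallest total."""
    responses = []  # K response vectors, each with M scalars
    for extract, classifier in zip(extractors, classifiers):
        Fi = extract(S)                   # feature vector Fi
        responses.append(classifier(Fi))  # response vector Ri
    # Discriminator: total output Ti per class (equal weights here)
    totals = [sum(R[i] for R in responses) for i in range(M)]
    return min(range(M), key=lambda i: totals[i])  # result P
```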
`
`In the learning mode, a classification result P 21 is
`determined in the same way as in the classification
`mode. As FIG. 6 shows, the classification result P 21 is
`then transferred to the learning trigger 19, wherein
`whether a correct class signal L 23 is transferred to the
`self-organizing classifiers 17 is determined based on the
`classification result P 21 and a training signal Tr 22
`which is externally supplied by the user. If the correct
`class signal L 23 is transferred, the correct class given
`by the training signal Tr 22 is transferred to all of the
`self-organizing classifiers 17, wherein the correct class
`signal L 23 is compared with the output at each output
`node for modifying the weighting vectors therein.
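
A plausible reading of the learning trigger 19, following
the FIG. 17 flowchart (the reject sentinel and the exact
trigger condition are assumptions drawn from its decision
step "correct class ≠ Cmin1 or P = reject"):

```python
REJECT = -1  # assumed sentinel meaning "classification not possible"

def learning_trigger(P, Tr):
    """Return the correct class signal L (the training class Tr)
    when the classification result P is a reject or differs from
    Tr; return None to leave the classifiers unchanged."""
    if P == REJECT or P != Tr:
        return Tr  # forward correct class signal L to all classifiers
    return None
```
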
FIG. 8 depicts the self-organizing classifier in more
`detail. A suitable classifier is described in copending
`patent application, “Self-organizing Neural Network
`for Pattern Classification”, Ser. No. 07/654,800. This
copending application has the same inventor and the
same assignee, and was filed on even date herewith. The
self-organizing classifier 17
`includes N (where N is a positive integer) input nodes
`24 functioning as buffers for receiving N feature scalars
f1, f2, ..., fN of a feature vector F. The classifier 17 also
`includes intermediate nodes 25, each receiving an N-
`dimensional feature vector F from the N input nodes 24
`via N signal lines 30 and generating an intermediate
`output 28. The classifier 17 additionally includes M
`(where M is a positive integer) output nodes 26, each
`receiving intermediate outputs from the intermediate
`nodes 25 of a class via signal lines 31. Each output node
`26 determines the smallest output among those it re-
`ceives and transfers the node number i of the intermedi-
`ate node that sent the smallest output along with the
`smallest output Oi to the self-organizing selector 27 via
`signal lines 33B. The node number and output value are
`also sent to the discriminator 18 via a signal line 33A as
`a response scalar ri of a response vector R. The classi-
`fier 17 further includes a self-organizing selector 27 for
`generating a weight update signal 29 for updating tem-
`plates encoded in the intermediate nodes on signal lines
32. The weight update signal 29 is based on the node num-
ber i and the output Oi of the intermediate node, and on
the correct class signal L 23 supplied from outside.
`FIG. 9 shows a more detailed view of intermediate
`node Ui. Ui includes N element comparators 34, each
having a weighting scalar Wi1, Wi2, ..., WiN of a
`weighting vector Wi, that is compared with a corre-
`sponding element of the feature vector F by the element
`comparator. The results of the comparison indicate the
`difference between the weighting vector and the feature
`vector. The intermediate node Ui also includes an adder
`
`35 for summing the results of comparison performed by
`an element comparator 34 via signal lines 37. Lastly, the
`intermediate node Ui 25 includes a square root com-
`puter 36 for calculating the square root of summed
`results produced by the adder 35. The square root com-
`puter receives the summed results over a signal line 38.
`In operation, the self-organizing classifier 17 receives
`a feature vector F composed of N feature scalars f1, f2,
..., fN and generates a response vector Ri composed of
M response scalars ri1, ri2, ..., riM which correspond
to the M classes to be separated. More specifically, the
input nodes 24 receive the N feature scalars f1, f2, ...,
`fN of a feature vector F from the feature extractor 2.
`
`The intermediate nodes 25 generate an intermediate
`output 28 based on the feature vector F and the
`weighting vectors of the element comparators 34. This
`is accomplished by matching the weighting vectors
`stored in the intermediate node 25 and the feature vec-
`
`tor F as described below. That is, the weighting vectors
`stored in the element comparators 34 of the intermedi-
`ate node 25 function as a template which represents the
`features of a letter.
`
For example, the intermediate output Oi of the i-th
intermediate node Ui is given by the following expres-
sion:

Oi = sqrt( Σ(j=1 to N) (Wij - fj)² )    (1)
`
`
`wherein fj is the j-th feature scalar of the feature vector
`F and Wij is the j-th weighting scalar stored in the i-th
`intermediate node Ui.
`
`
`The intermediate output Oi is computed in the inter-
`mediate node Ui as shown in FIG. 9. That is, the differ-
ence between each feature scalar fj of the feature vector
`F and each weighting scalar Wij of the weighting vec-
`tor stored in the intermediate node Ui is squared in the
`element comparator 34. The computed results are trans-
`ferred to the adder 35 via the signal lines 37. The square
root computer 36 computes the square root of the sum-
ming result from the adder 35 and transfers the inter-
`mediate output Oi via the signal line 31 to the output
`node 26 of the class represented by the intermediate
`node Ui.
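
Expression (1) and the FIG. 9 data path can be sketched
as follows (Python, illustrative; the patent describes
hardware nodes, not code):

```python
import math

def intermediate_output(W_i, F):
    """Oi for intermediate node Ui: the element comparators square
    the differences, the adder sums them, and the square root
    computer takes the root (the Euclidean distance of (1))."""
    assert len(W_i) == len(F)
    squared = [(w - f) ** 2 for w, f in zip(W_i, F)]  # comparators 34
    return math.sqrt(sum(squared))  # adder 35, square root computer 36
```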
`
`Since the weighting vectors functioning as a template
`representative of each class are stored in the intermedi-
`ate node 25, the number of intermediate nodes is greater
`than that of the classes to be separated. In other words,
`there are two or more intermediate nodes for each class,
`indicating the distribution of feature vectors of the class.
`These intermediate nodes are comparable to templates
`of the multitemplate technique known in the pattern
`recognition field. That is, the output Oi of the i-th inter-
`mediate node Ui corresponds to the matching between
`
`the feature vector F and the template represented by the
`weighting vector Wi, indicating the Euclidian distance
`between the feature vector F and the template in the
`vector space. Consequently, the smaller the intermedi-
`ate output Oi, the closer the feature vector F and the
`template represented by the weighting vector Wi.
`The output node 26 selects an intermediate node
`which gives the smallest intermediate output among the
`intermediate nodes 25 of the class and transfers a re-
`sponse scalar of the response vector R 20 to the self-
`organizing selector 27 via the signal line 33B.
FIG. 10 shows a self-organizing classifier 17 for re-
ceiving a 2-dimensional feature vector Fx = {fx1, fx2}
and separating two classes CA and CB. Intermediate
nodes UA1, UA2, UA3, and UA4, and UB1, UB2, and UB3
represent the respective classes CA and CB. Output nodes
VA and VB represent these classes CA and CB. As FIG.
11 shows, when the feature vector Fx is inputted to the
self-organizing classifier 17, the intermediate outputs
OA1, OA2, OA3, and OA4, and OB1, OB2, and OB3 are
computed according to the aforementioned expression
(1), indicating the respective distances between the
feature vector Fx and the templates represented by the
weighting vectors of the respective intermediate nodes.
The output node VA of the class CA selects the smallest
output OA1 as a representative of the class and transfers
it as an element r1 of response vector R to the discrimi-
nator 18. It also transfers the node number “A1” and the
output OA1 to the self-organizing selector 27. The out-
put node VB of the class CB selects the smallest output
OB2 as a representative of the class CB and transfers it as
an element r2 of response vector R to the discriminator
18, and the node number “B2” and the output OB2 to the
self-organizing selector 27.
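
The FIG. 10/11 selection step can be mirrored in a few
lines (Python, illustrative; the numeric distances are
invented for the example):

```python
def output_node(outputs):
    """Output node V: pick the smallest intermediate output of its
    class and report (node_number, output) for the selector."""
    number = min(outputs, key=outputs.get)
    return number, outputs[number]

# Hypothetical intermediate outputs for classes CA and CB at some Fx
O_A = {"A1": 0.4, "A2": 1.3, "A3": 2.0, "A4": 1.1}
O_B = {"B1": 1.8, "B2": 0.9, "B3": 2.5}
r1 = output_node(O_A)  # ("A1", 0.4) -> element r1 of R
r2 = output_node(O_B)  # ("B2", 0.9) -> element r2 of R
```
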
The self-organizing selector 27 modifies the
weighting vector in the intermediate node based on the
correct class signal L supplied by the learning trigger 19.
`More specifically, upon receipt of the correct class
`signal L 23, the self-organizing selector 27 selects, as a
`response signal, the class corresponding to the smallest
`intermediate output among the intermediate outputs
`representing respective classes. It then compares the
`class of the smallest intermediate output with the class
`given by the correct class signal L 23.
`If the class of the response signal is identical with the
`class of the correct class signal L 23, the self-organizing
`classifier is determined to make a correct classification,
`and no modification is made to the weighting vectors in
`the intermediate node.
`
`If the class of the response signal is different from the
`class of the correct class signal L 23, on the other hand,
`the self-organizing classifier modifies the weighting
`vectors of the intermediate node depending on which of
the following causes brought about the incorrect classifi-
cation:
`(1) the incoming pattern signal was very remote from
`the template represented by the weighting vector of the
`intermediate node of the correct class in the vector
`space;
`(2) there is a weighting vector of an intermediate
`node, having a class other than the correct class, which
`is very close in the vector space to the weighting vector
`in the intermediate node of the correct class; and
`(3) none of the above.
`If there does not exist an output node of a class identi-
`cal with the correct class, a warning message is output-
`ted, and the learning process is terminated. If there
`exists an output node of a class identical with the cor-
`
`8
`rect class, the self-organizing selector 27 carries out the
`following process:
`It is determined whether the output Oj of the inter-
`mediate node Uj of a class which is identical with the
`correct class satisfies the following expression:
`
`
Oj ≧ th1    (2)
`
wherein th1 is a predetermined threshold constant. For
more information on th1, see the copending patent ap-
`plication entitled “Self-organizing Neural Network for
`Pattern Classification” referenced above. The expres-
`sion (2) indicates that the Euclidian distance in a vector
`space between the feature vector and the template rep-
`resented by the weighting vector of an intermediate
node Uj of the correct class is greater than or equal to
th1. A large value is used for the constant th1. If the
`condition is satisfied, it means that an incoming feature
`vector is very remote in the vector space from the tem-
`plate represented by the weighting vector of an inter-
mediate node of the correct class which has been regis-
`tered. Consequently, if the condition is satisfied, a new
`network consisting of an intermediate node 39, N signal
`lines 40 for connecting the input nodes 24 to the inter-
`mediate node 39, and a signal line 41 for connecting the
intermediate node 39 to the output node 26 of the cor-
rect class is added to the network as shown in FIG. 12.
`The weighting vector of the intermediate node 39 is
realized by assigning the N scalars f1, f2, ..., fN of the
`feature vector F as the elements of the weighting vec-
`tor.
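
This node-addition rule (cause (1), tested by expression
(2)) might be sketched as below; the list-based network
and the th1 value are assumptions:

```python
TH1 = 5.0  # th1: a large threshold constant (value assumed)

def maybe_add_node(network, correct_class, F, O_j):
    """If the best output O_j of the correct class is at least th1,
    register a new intermediate node (39) whose weighting vector
    is a copy of the feature vector F."""
    if O_j >= TH1:
        network[correct_class].append(list(F))  # new template = F
        return True
    return False
```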
`
`If the expression (2) is not satisfied, the smallest out-
`put Oi among the intermediate outputs for the respec-
`tive output nodes is determined. The intermediate node
`Ui that produced the smallest output Oi is also deter-
`mined. The output Oj of an intermediate node Uj of a
`class obtained from the output node of the correct class
`is also determined. Then, it is determined whether these
`two outputs Oi and Oj satisfy the following expression:
`
Oj - Oi ≦ th2    (3)
`
`wherein th2 is a predetermined threshold constant. For
`more information on th2, see copending patent applica-
`tion entitled “Self-organizing Neural Network for Pat-
tern Classification”. If expression (3) is satisfied, the
`classification results are incorrect due to the template
`represented by the weighting vector in the intermediate
`node Uj of the correct class being close to the template
`of Ui of the wrong class. In this case, the weighting
`vectors of the intermediate nodes Ui and Uj are modi-
`fied according to the following expressions:
`
Weight of Ui: Wik = Wik - a[fk - Wik] for k = 1, ..., N

Weight of Uj: Wjk = Wjk + a[fk - Wjk] for k = 1, ..., N    (4)
`
`wherein fk is the k-th feature scalar of the feature vector
`F, Wik is the k-th scalar of the weighting vector in the
`i-th intermediate node Ui, and a is a sufficiently small
`positive real number. a is described in more detail in the
`copending patent application referenced above.
`The above modifications to the weighting vectors are
`illustrated in FIG. 13, wherein the intermediate node Ui
`42 is the node which does not belong to the correct class
`and the intermediate node Uj 43 is the node which
`belongs to the correct class. The feature vector is desig-
`nated as Fx 44 and the window W 45 is between the
`intermediate nodes Ui and Uj, and 46 and 47 are arrows
`indicating the directions in which the intermediate
`nodes Ui and Uj are moved. When expression (3) is
satisfied, the feature vector Fx falls within the window
`W 45 between the intermediate nodes Ui and Uj. This
`implies that the weighting vector in the intermediate
`node Uj, which belongs to the correct class, and the
weighting vector in the intermediate node Ui, which
`does not belong to the correct class, are very close.
`The first equation of the expression (4) directs the
`modification of the weighting vector of the intermedi-
`ate node Ui so that the template represented by the
`weighting vector of the intermediate node Ui is sepa-
`rated further from the feature vector Fx in the vector
`space as shown by the arrow 46. The second equation of
`the expression (4) directs the modification of the
`weighting vector of the intermediate node Uj so that the
`template represented by the weighting vector of the
`intermediate node Uj is brought closer to the feature
`vector Fx in the vector space as shown by the arrow 47.
`These modifications to the weighting vectors are re-
`peated so as to clearly place the input signal in the cor-
`rect class to facilitate higher classification rates.
`If neither expression (2) nor expression (3) are satis-
`fied, the weighting vector of the intermediate node Uj is
`modified according to the following equation:
`
Weight of Uj: Wjk = Wjk + a[fk - Wjk] for k = 1, ..., N    (5)
`
wherein fk is the k-th feature scalar of the feature vector
F, Wjk the k-th weighting scalar of the weighting vec-
tor in the j-th intermediate node Uj, and a is a suffi-
ciently small positive real number.
`Expression (5) is the same as the second equation of
`expression (4), indicating that the weighting vector of
`the intermediate node Uj is modified so that the tem-
`plate represented by the weighting vector of the inter-
`mediate node Uj is brought closer to the feature vector
`F in the vector space.
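
The selector's update for causes (2) and (3) above, i.e.
expressions (3) through (5), can be sketched as follows
(Python, illustrative; the a and th2 values are assumptions):

```python
A = 0.05   # a: sufficiently small positive constant (value assumed)
TH2 = 0.5  # th2: window threshold (value assumed)

def update_weights(W_i, W_j, F, O_i, O_j):
    """LVQ2-style update: if the correct-class output O_j is within
    th2 of the winning wrong-class output O_i (expression (3)),
    push Ui away from F and pull Uj toward F (expression (4));
    otherwise only pull Uj toward F (expression (5))."""
    if O_j - O_i <= TH2:
        for k in range(len(F)):
            W_i[k] -= A * (F[k] - W_i[k])  # Ui moves away (arrow 46)
            W_j[k] += A * (F[k] - W_j[k])  # Uj moves closer (arrow 47)
    else:
        for k in range(len(F)):
            W_j[k] += A * (F[k] - W_j[k])  # expression (5)
```
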
`In FIG. 14, the discriminator 18 includes M class
`summing nodes 48 (where M is a positive integer) and a
`class selector 49. It receives K response vectors R1, R2,
..., RK from the K self-organizing classifiers at the class
`summing nodes 48 for each class. A linear summation of
the response scalars r1i, r2i, ..., rji, ..., rKi weighted
by the weighting scalars u1, u2, ..., uK is made in the
class summing node 48 to provide a total output Ti 53
for the class Ci according to the following equation:
`
Ti = Σ(j=1 to K) (uj · rji)    (6)
`
wherein uj is a positive constant representative of the
class Ci and set so that Σuj = 1.0.
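
A minimal sketch of the class summing node of equation
(6) (Python, illustrative; the uniform weights uj are an
assumption consistent with Σuj = 1.0):

```python
def total_output(responses, u=None):
    """Class summing node 48: weighted linear sum of the response
    scalars r1i, ..., rKi for one class Ci (equation (6))."""
    K = len(responses)
    if u is None:
        u = [1.0 / K] * K  # positive weights uj summing to 1.0
    return sum(uj * rji for uj, rji in zip(u, responses))
```
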
`The total output Ti for the class Ci is then transferred
`to the class selector 49. The class selector 49 selects the
`smallest total output, T