SPEECH RECOGNITION CIRCUIT USING PARALLEL PROCESSORS
The present invention generally relates to a speech recognition circuit which uses parallel processors for processing the input speech data in parallel.

Conventional large vocabulary speech recognition can be divided into two processes: front end processing to generate processed speech parameters such as feature vectors, followed by a search process which attempts to find the most likely set of words spoken from a given vocabulary (lexicon).
The front end processing generally represents no problem for current processing systems. However, for large vocabulary, speaker independent speech recognition, it is the search process that presents the biggest challenge. An article by Deshmukh et al entitled "Hierarchical Search for Large-Vocabulary Conversational Speech Recognition" (IEEE Signal Processing Magazine, September 1999, pages 84 to 107), the content of which is hereby incorporated by reference, discusses the general concepts of large vocabulary speech recognition. As discussed in this paper, one algorithm for performing the search is the Viterbi algorithm. The Viterbi algorithm is a parallel or breadth first search through a transition network of states of Hidden Markov Models. An acoustic model for words in a lexicon is represented as states of Hidden Markov Models. These states represent phones or n-phones in a phone model of the words. The search requires the evaluation of possible word matches. It is known that such a search is computationally intensive.
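The time-synchronous (breadth first) Viterbi search referred to above can be sketched as follows. This is a minimal illustration only, not the circuit's implementation: the state names and the start, transition and emission log-probability tables are invented toy values, where a real recogniser would use trained Hidden Markov Model parameters.

```python
import math

def viterbi(observations, states, log_start, log_trans, log_emit):
    """Breadth-first (time-synchronous) Viterbi search over HMM states.

    log_start, log_trans and log_emit are hypothetical log-probability
    tables; every path is extended by one state transition per observation,
    so all hypotheses advance in step (breadth first)."""
    # Scores for each state after the first observation.
    scores = {s: log_start[s] + log_emit[s][observations[0]] for s in states}
    best_prev = [{}]
    for obs in observations[1:]:
        new_scores, back = {}, {}
        for s in states:
            # Keep only the best predecessor for each state (the Viterbi step).
            prev, score = max(
                ((p, scores[p] + log_trans[p][s]) for p in states),
                key=lambda x: x[1])
            new_scores[s] = score + log_emit[s][obs]
            back[s] = prev
        scores = new_scores
        best_prev.append(back)
    # Trace back from the best final state to recover the state sequence.
    last = max(scores, key=scores.get)
    path = [last]
    for back in reversed(best_prev[1:]):
        path.append(back[path[-1]])
    return list(reversed(path)), scores[last]
```

Working in log probabilities keeps the accumulated path scores additive, which is also how the scores in the embodiments below are treated.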
In order to speed up the processing performed during such a search in a speech recognition system, parallel processing has been explored. In an article by M K Ravishankar entitled "Parallel Implementation of Fast Beam Search for Speaker-Independent Continuous Speech Recognition" (Indian Institute of Science, Bangalore, India, July 16, 1993) a multi-threaded implementation of a fast beam search algorithm is disclosed. The multi-threading implementation requires a significant amount of communication and synchronization among threads. In an MSc project report by R Dujari entitled "Parallel Viterbi Search Algorithm for Speech Recognition" (MIT, February 1992) the parallel processing of input speech parameters is disclosed in which a lexical network is split statically among processors.
It is an object of the present invention to provide an improved circuit which can perform parallel processing of speech parameters.
In accordance with a first embodiment of the present invention, a speech recognition circuit comprises an input port such as an input buffer for receiving parameterized speech data such as feature vectors. A lexical memory arrangement is provided which contains lexicon data for word recognition. The lexical data comprises a plurality of lexical tree data structures representing a plurality of lexical trees. Each lexical tree data structure comprises a model of words having common prefix components and an initial component which is unique as an initial component among the lexical trees. A plurality of lexical tree processors are connected in parallel to the input port and perform parallel lexical tree processing for word recognition by accessing the lexical data in the lexical memory arrangement. A results memory arrangement is connected to the lexical tree processors for storing processing results from the lexical tree processors and lexical tree identifiers to identify lexical trees to be processed by the lexical tree processors. A controller controls the lexical tree processors to process lexical trees identified in the results memory arrangement by performing parallel processing of a plurality of lexical tree data structures.
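By way of illustration only, a lexical tree of the kind described (words sharing common prefix components, rooted at a unique initial component) can be pictured as a phone-level prefix tree. The dictionary below is a hypothetical example, not data from the invention:

```python
def build_lexical_tree(initial_phone, pronunciations):
    """Build one lexical tree: a prefix tree over phone sequences.

    All words in the tree share the initial phone, which uniquely
    identifies the tree; common prefixes are merged so that shared
    components are evaluated only once during the search.
    pronunciations maps each word to its phone sequence (illustrative)."""
    root = {"phone": initial_phone, "children": {}, "words": []}
    for word, phones in pronunciations.items():
        assert phones[0] == initial_phone
        node = root
        for phone in phones[1:]:
            node = node["children"].setdefault(
                phone, {"phone": phone, "children": {}, "words": []})
        node["words"].append(word)  # a word ends at this node
    return root
```

The merging of common prefixes is what makes tree-structured lexicons cheaper to search than a flat word list: the shared initial phones of HARD and HARM, say, are scored once rather than twice.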
Thus in accordance with this embodiment of the present invention, the processing required to perform word recognition is distributed across the processors by controlling the processors to perform processing on different lexical trees. The controller manages the processes to provide efficient process management by distributing lexical processing to appropriate processors.
The lexical tree data structure can comprise a phone model of words, wherein the components comprise phones. For reduced storage, the lexical tree data structure can comprise a mono phone lexical tree. The mono phone lexical tree can be used to generate context dependent phone models dynamically. This enables the use of context dependent phone models for matching and hence increased accuracy whilst not increasing memory requirements. Alternatively, the lexical tree data structure can comprise context dependent phone models.
The processing performed by each processor in one embodiment comprises the comparison of the speech parameters with the lexical data, e.g. phone models, or data derived from the lexical data (e.g. dynamically generated context dependent phone models) to identify words as a word recognition event and to send information identifying the identified words to the results memory as the processing results. In this embodiment a language model processor arrangement can be provided for providing a language model output for modifying the processing results at a word recognition event by a lexical tree processor. The modification can either take place at each lexical tree processor, or at the language model processing arrangement.
In one embodiment each lexical tree processor determines an output score for words in the processing results at word recognition events. Thus in this embodiment the language model processing arrangement can modify the score using a score for a language model for n preceding words, where n is an integer.
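The modification of an output score by an n-gram language model score might look like the following sketch. The table of n-gram log probabilities, the flat backoff value and the weighting are all illustrative assumptions rather than features of the claimed circuit:

```python
import math

def modify_score_at_word_end(acoustic_log_score, history, word,
                             ngram_log_probs, n=3, lm_weight=1.0,
                             backoff_log_prob=math.log(1e-4)):
    """Add a weighted n-gram language model log-score to an acoustic path score.

    ngram_log_probs maps (w_1, ..., w_n) tuples to log probabilities;
    unseen n-grams fall back to a flat backoff value. All names and
    values here are illustrative."""
    ngram = tuple(history[-(n - 1):]) + (word,)
    lm = ngram_log_probs.get(ngram, backoff_log_prob)
    return acoustic_log_score + lm_weight * lm
```

Because both scores are log probabilities, the modification is a simple addition, which matches the accumulated-score treatment described for the recognition paths.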
In one embodiment the controller instructs a lexical tree processor to process a lexical tree by passing a lexical tree identifier for the lexical tree and history data for a recognition path associated with the lexical tree from the results memory. The history data preferably includes an accumulated score for the recognition path. This enables a score to be determined based on the score for the recognition path, to accumulate a new score during recognition carried out using the lexical tree data structure. The scores can be output in the processing results to the results memory during the processing of the speech parameters so that the scores can be used for pruning.
In one embodiment of the present invention, each lexical tree processor operates on more than one lexical tree at the same time, e.g. two lexical trees represented by two different lexical tree data structures, or two lexical trees represented by the same data structure but displaced in time (which can be termed two instances of the same lexical tree).
At word recognition events, the controller determines new lexical tree identifiers for storing in the results memory for words identified in the results memory for respective word events. In order to reduce the processing, the controller can prune the new lexical tree identifiers to reduce the number of lexical trees which are required to be processed. This pruning can be achieved using context dependent n-phones to reduce the number of possible next phones. The number can be further reduced by using a language model look ahead technique.
In one embodiment of the present invention, the lexical tree processors are arranged in groups or clusters. The lexical memory arrangement comprises a plurality of partial lexical memories. Each partial lexical memory is connected to one of the groups of lexical tree processors and contains part of the lexical data. Thus a group of lexical tree processors and a partial lexical memory form a cluster. Each lexical tree processor is operative to process the speech parameters using a partial lexical memory and the controller controls each lexical tree processor to process a lexical tree corresponding to partial lexical data in a corresponding partial lexical memory.
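One way to picture the cluster arrangement is as a static split of lexical tree identifiers across partial lexical memories, with the controller only able to dispatch a tree to processors in the cluster whose memory holds it. The round-robin split and the processor numbering below are illustrative assumptions, not the claimed circuit:

```python
def partition_trees(tree_ids, num_clusters):
    """Split the full set of lexical trees across partial lexical memories.

    Round-robin assignment is an illustrative policy; a real circuit
    might instead balance by expected per-tree workload."""
    partial = [[] for _ in range(num_clusters)]
    for i, tree_id in enumerate(tree_ids):
        partial[i % num_clusters].append(tree_id)
    return partial

def eligible_processors(tree_id, partial, procs_per_cluster):
    """A tree may run on any processor in the cluster whose memory holds it."""
    for cluster, trees in enumerate(partial):
        if tree_id in trees:
            return [cluster * procs_per_cluster + p
                    for p in range(procs_per_cluster)]
    return []
```

Because every processor in a cluster can reach the same partial memory, the controller has several candidate processors for each tree, which is the flexibility the clustered arrangement is said to provide over strictly per-processor memories.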
In another embodiment of the present invention the lexical memory arrangement comprises a plurality of partial lexical memories. Each partial lexical memory is connected to one of the lexical tree processors and contains part of the lexical data. Each lexical tree processor processes the speech parameters using a corresponding partial lexical memory and the controller is operative to control each lexical tree processor to process a lexical tree corresponding to partial lexical data in a corresponding partial lexical memory.
In one embodiment of the present invention the lexical memory arrangement stores the lexical tree data structures as Hidden Markov Models and the lexical tree processors are operative to perform the Viterbi search algorithm using each respective lexical tree data structure. Thus in this way, this embodiment of the present invention provides a parallel Viterbi lexical tree search process for speech recognition.
The first aspect of the present invention is a special purpose circuit built for performing the speech recognition search process in which there are a plurality of processors for performing parallel lexical tree processing on individual lexical tree processors.
In another aspect of the present invention a speech recognition circuit comprises an input port such as an input buffer for receiving parameterized speech data such as feature vectors. A plurality of lexical memories are provided which contain in combination complete lexical data for word recognition. Each lexical memory contains part of the complete lexical data. A plurality of processors are provided connected in parallel to the input port for processing the speech parameters in parallel. The processors are arranged in groups in which each group is connected to a corresponding lexical memory to form a cluster. A controller controls each processor to process the speech parameters using partial lexical data read from a corresponding lexical memory. The results of processing the speech parameters are output from the processors as recognition data.
Thus this aspect of the present invention provides a circuit in which speech recognition processing is performed in parallel by groups of processors operating in parallel, in which each group accesses a common memory of lexical data. This aspect of the present invention provides the advantage of parallel processing of speech parameters and benefits from a limited segmentation of the lexical data. By providing a plurality of processors in a group with a common memory, flexibility in the processing is provided without the bandwidth limitation at the memory interface that would occur if only a single memory were used for all processors. The arrangement is more flexible than a parallel processing arrangement in which each processor only has access to its own local memory, and requires fewer memory interfaces (i.e. chip pins). Each processor within a group can access the same lexical data as any other processor in the group. The controller can thus control the parallel processing of input speech parameters in a more flexible manner. For example, it allows more than one processor to process input speech parameters using the same lexical data in a lexical memory. This is because the lexical data is segmented into domains which are accessible by multiple processors.
In a preferred embodiment this aspect of the present invention is used in combination with the first aspect of the present invention. In such an arrangement each processor performs lexical tree processing and the lexical data stored in each lexical memory comprises lexical tree data structures which each comprise a model of words having common prefix components and an initial component that is unique.

In preferred embodiments of the second aspect of the present invention, the preferred embodiments of the first aspect of the present invention are incorporated.
Embodiments of the present invention will now be described with reference to the accompanying drawings, in which:
Figure 1 is a diagram of a speech data processing circuit for generating parameterized speech data (feature vectors);

Figure 2 is a diagram of a speech recognition circuit in accordance with an embodiment of the present invention;

Figures 3a and 3b are schematic diagrams illustrating lexical tree structures;

Figure 4 is a flow diagram illustrating the process performed by a lexical tree processor to determine a temporary lexical tree score in accordance with an embodiment of the present invention;

Figure 5 is a flow diagram illustrating the process performed by the lexical tree processor for processing the input feature vectors in accordance with an embodiment of the present invention; and

Figure 6 is a flow diagram illustrating the process performed by the controller in accordance with an embodiment of the present invention.
Figure 1 illustrates a typical circuit for the parameterization of input speech data. In this embodiment the parameters generated are feature vectors.
A microphone 1 records speech in an analogue form and this is input through an anti-aliasing filter 2 to an analogue-to-digital converter 3 which samples the speech at 48 kHz at 20 bits per sample. The digitized output signal is normalized (4) to generate a 10 millisecond data frame every 5 milliseconds, with 5 milliseconds overlap (5). A pre-emphasis operation 6 is applied to the data, followed by a Hamming window 7. The data is then fast Fourier transformed (FFT) using a 512 point fast Fourier transform (8) before being filtered by filter bank 9 into 12 frequencies. The energy in the data frame 5 is also recorded (13) as an additional feature and, together with the 12 frequency outputs of the filter bank 9, 13 feature vectors (10) are thus produced and these are output as part of the 39 feature vectors 14. First and second derivatives (11 and 12) are taken of the 13 feature vectors 10 to complete the generation of the 39 feature vectors 14.
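Two of the numerically simple stages in the Figure 1 chain can be sketched as follows: the pre-emphasis operation, and the extension of the 13 base features (12 filter bank outputs plus energy) to 39 by first and second derivatives. The 0.97 coefficient and the two-point difference are conventional illustrative choices, not figures taken from the description:

```python
def pre_emphasis(samples, coeff=0.97):
    """Boost high frequencies: y[n] = x[n] - coeff * x[n-1].
    coeff = 0.97 is a typical value, assumed here for illustration."""
    return [samples[0]] + [s - coeff * p for p, s in zip(samples, samples[1:])]

def add_deltas(base_frames):
    """Extend 13-dimensional base feature frames to 39 dimensions by
    appending first and second time derivatives. A simple two-point
    difference is used here; production front ends typically use a
    regression over a window of frames."""
    def delta(frames):
        # Difference each frame against its predecessor (first frame repeats).
        return [[c - p for c, p in zip(cur, prev)]
                for prev, cur in zip([frames[0]] + frames[:-1], frames)]
    d1 = delta(base_frames)
    d2 = delta(d1)
    return [b + x + y for b, x, y in zip(base_frames, d1, d2)]
```

The 13 + 13 + 13 layout mirrors the description: base features, first derivatives (11) and second derivatives (12) concatenated into the 39-element feature vector 14.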
The arrangement illustrated in Figure 1 is given purely for illustration. The present invention encompasses any means by which speech data can be parameterized into a suitable form for input to the search process, as will be described in more detail hereinafter.
Figure 2 is a schematic diagram of a speech recognition circuit in accordance with an embodiment of the present invention for performing the search process. The parameterized speech data, which in this embodiment comprise feature vectors, are input to a feature vector buffer 20. The feature vector buffer 20 is provided to buffer the incoming feature vectors to allow lexical tree processors 21 to read and process the feature vectors in the buffer 20 via a feature vector bus 24. A plurality k of lexical tree processors 21 are arranged in a respective lexical tree processor cluster 22. Each lexical tree processor cluster 22 has an acoustic model memory 23 in which is stored lexical data for use by the lexical tree processors 21 within the lexical tree processor cluster 22. Each lexical tree processor 21 in the lexical tree processor cluster 22 is connected to the acoustic model memory 23 within the lexical tree processor cluster 22. There are N lexical tree processor clusters and thus there are Nk lexical tree processors 21 connected by the feature vector bus 24 to the feature vector buffer 20. Each lexical tree processor 21 is capable of processing a different lexical tree and thus Nk lexical trees can be processed in parallel. The acoustic model memories 23 store as a whole a complete set of lexical data, i.e. lexical tree data structures for use in the lexical tree processing by the lexical tree processors 21. Each acoustic model memory 23 contains part or a segment of the lexical tree data. Since lexical tree processors 21 in a lexical tree processor cluster 22 access the same acoustic model memory 23, it is possible for more than one lexical tree processor 21 to process the same lexical data. This provides for some degree of flexibility in the controlling of the processing by the lexical tree processors 21. Further, the acoustic model memories 23 need not contain only one copy of the lexical data. It is possible to build in a redundancy in the data to further enhance the flexibility. This avoids any bottleneck in the processing due to the search processing focusing on a small number of lexical trees.
A results memory 25 is provided for storing processing results from the lexical tree processors 21, which are received over the path score and history bus 26. The results memory 25 also stores information on lexical trees to identify which lexical trees are to be processed. A search controller 27 is provided to control the processing performed by the lexical tree processors 21 in dependence upon a program and data stored in program and data memory 28. The search controller reads the path scores and lexical tree identifiers from the results memory and controls the lexical tree processors accordingly.
A language model processor 29 is provided which is connected to each lexical tree processor 21 by a language model bus 30. The language model processor 29 accesses a language model memory 31 to read language model data for provision to lexical tree processors 21 in response to language model data requests. External control of the language model memory 31 is provided by a word constraints input. The language model processor 29 determines a score for a word occurring following N previous words using N-grams. When a lexical tree processor requires a language model score, a request is sent to the language model processor 29 over the language model bus 30 identifying the current word and the N-1 previous words. A language model score for the N-gram can be returned to the lexical tree processor 21 for the modification of the score at the end of a branch of lexical tree processing. The lexical tree processor can modify the score in accordance with the language model and output a score to the results memory 25 for a word at the end of a branch of the lexical tree processing. Thus the results memory stores the results as an ordered list of scores for words together with their histories.
The results memory 25 stores the following data:

1. Initial lexical tree data. This comprises pointers to an initial set of lexical trees. No history data is associated with the initial set of lexical trees. The initial set of lexical trees is predetermined and stored in the results memory 25 based on the most likely initial phones of an utterance. This initial lexical tree data is required to initialize the search process.

2. History data for search results. This comprises a record of a recognition path through the lexical tree recognition process performed by the lexical tree processors 21. The history data includes the current word, the previous N-1 words, the current accumulated score, the phone history (for use in the determination of likely next lexical trees using cross word context dependent tri-phones), and an identifier or pointer to the lexical tree used for identifying the word.
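A history record of the kind listed in item 2 might be laid out as follows; the field names and types are illustrative, chosen only to mirror the fields named in the text:

```python
from dataclasses import dataclass

@dataclass
class HistoryRecord:
    """One recognition-path record in the results memory (fields illustrative)."""
    current_word: str
    previous_words: tuple    # the N-1 preceding words, for N-gram requests
    accumulated_score: float # log score accumulated along the recognition path
    phone_history: tuple     # recent phones, for cross-word tri-phone lookup
    lexical_tree_id: int     # identifier of the tree that recognised the word
```

Keeping the N-1 previous words and the phone history in the record is what lets the search controller both request N-gram scores and select the next lexical trees without re-consulting the lexical tree processor that produced the word.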
3. Best scores for best paths being processed by each lexical tree processor 21. This information enables the search controller 27 to monitor the processing being performed by lexical tree processors 21 to determine whether a global pruning strategy should be applied in order to reassign processing performed by a lexical tree processor if its best score for its best path is below a threshold or well below the best scores for the paths being processed by other lexical tree processors 21.
4. Temporary lexical tree scores. These comprise tree scores which are determined as temporary scores to prune the next lexical trees to be processed at word ends. The temporary lexical tree scores include lexical tree identifiers or pointers to identify the next lexical trees to be processed. The scores enable the pruning of this list.

5. Pruning threshold. This can be a global threshold value for use in the pruning of the lexical trees globally, or a local threshold value for use by a lexical tree processor 21 for locally pruning the processing performed by that processor.
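The local and global pruning described in items 3 and 5 can be sketched as a beam over log path scores; the beam width and threshold values below are illustrative:

```python
def prune_paths(path_scores, beam_width=10.0, global_threshold=None):
    """Keep only paths within beam_width of the locally best score, and
    above an optional global threshold supplied by the search controller.

    path_scores maps path identifiers to log scores; all values are
    illustrative. An empty input yields an empty output."""
    if not path_scores:
        return {}
    best = max(path_scores.values())
    floor = best - beam_width
    if global_threshold is not None:
        # The global threshold can only tighten, never loosen, the beam.
        floor = max(floor, global_threshold)
    return {p: s for p, s in path_scores.items() if s >= floor}
```

A purely local beam corresponds to the per-processor threshold of item 5, while passing a global threshold corresponds to the controller-level strategy of item 3.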
The acoustic model memory 23 stores a Hidden Markov Model for acoustically modelling words as lexical trees. The acoustic model memory 23 stores a plurality of lexical tree data structures. Each lexical tree data structure comprises an n-phone model of a number of words having common prefix phones. The first node of the lexical tree (the root) comprises a phone common to all words in the lexical tree and uniquely identifies the lexical tree.
Each lexical tree processor 21 includes on-board memory (or local memory) to be used during the lexical tree processing. This working memory has to store all of the parameters currently being worked on, including current scores for all paths being processed within the lexical tree, and the previous N-1 words for the lexical tree. The local storage of the previous N-1 words enables the lexical tree processor 21, when a word end is reached along a branch of the lexical tree, to send a request for the language model score for an N-gram by sending the identity of the N-1 previous words together with the word identified at the end of the branch.
Figures 3a and 3b schematically illustrate lexical trees which can be processed by the lexical tree processors 21 during the recognition of the two words HARD ROCK. In Figure 3a a previous lexical tree terminated at a branch recognizing the word YOU and terminating with the mono phone uw, which is associated with two context dependent tri-phones y-uw+k and y-uw+h. Thus the context dependent tri-phone associated with the last phone in the lexical tree word model for YOU indicates two possible next lexical trees, i.e. the lexical trees beginning with the mono phones k and h and having a left context phone of uw. As can be seen in Figure 3a, this word end YOU therefore leads to two possible next lexical trees. These two lexical trees are traversed in parallel by two different lexical tree processors accumulating scores for matching of input feature vectors to Hidden Markov Models of the context dependent tri-phones associated with each node in the tree. When the end of a branch is reached, a word end event occurs and a word is recognized. As can be seen in Figure 3a, in this example, since four words are of similar phone length, it is possible for the search strategy based on inputting feature vectors in parallel to simultaneously reach a number of possible word ends. These possible word ends are sent as processing results to the results memory. The results memory stores the accumulated score at the word end together with phone history to identify the last phone and its associated context dependent tri-phones. In this example, considering the branch where the word HARD is recognized, the last phone is d, which has associated with it two context dependent tri-phones r-d+l and r-d+r. Thus the search controller 27 can identify next possible lexical trees using the next phone in the context dependent tri-phone. In this case, as can be seen in Figure 3b, the next possible lexical trees begin with the phones r and l and have associated with them context dependent tri-phones d-r+ao and d-l+ao respectively. Thus these are the next lexical trees that require processing following the end node or word event detected in the processing of lexical tree 2. Figure 3b thus represents the processing that is required at the end of Figure 3a at the word node for HARD in processing the second lexical tree.
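The selection of next lexical trees from cross-word context dependent tri-phones, as in Figures 3a and 3b, can be sketched as follows. The left-context - centre + right-context notation follows the text (e.g. r-d+l), while the tri-phone inventory passed in is a hypothetical example:

```python
def next_lexical_trees(last_phone, triphone_inventory):
    """Find the root phones of the lexical trees to start after a word end.

    triphone_inventory is a hypothetical set of context-dependent
    tri-phones written "left-centre+right", e.g. "r-d+r". Only right
    contexts that actually occur after last_phone name a possible next
    tree, which prunes the candidates well below the full phone
    inventory. Silence marks the end of the utterance, not a new tree."""
    next_roots = set()
    for tri in triphone_inventory:
        centre_right = tri.split("-", 1)[1]        # e.g. "d+r"
        centre, right = centre_right.split("+", 1)
        if centre == last_phone and right != "sil":
            next_roots.add(right)
    return next_roots
```

For the HARD example in the text, a last phone d with tri-phones r-d+r and r-d+l yields exactly the two next trees rooted at r and l.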
As can be seen in Figure 3b, the two lexical trees are processed in parallel by traversing through the branches to reach word ends by sequentially entering feature vectors in parallel to the lexical tree processors processing the respective lexical trees. When the word end is reached, the accumulated score is output to the results memory as described before. In this case, since it is the last word, the important context dependent tri-phone associated with the final phone has silence (sil) as the following phone. The final phone can in fact have 50 context dependent tri-phones associated with it if there are 50 possible next phones (including silence). Figure 3b only illustrates the relevant one for the end of the utterance.
Figure 4 is a flow diagram illustrating the processing performed by a lexical tree processor 21 in order to determine a temporary lexical tree score for a lexical tree. When a word end is identified by a lexical tree processor, the processing results are sent to the results memory 25. The results memory identifies the last phone of the recognized words and thus enables the search controller 27 to identify possible next lexical trees using context dependent tri-phones as illustrated in Figure 3b. Although in theory if there are 50 phones there are 50² possible lexical trees, due to the use of the context dependent tri-phones, only 100-200 lexical trees are likely to be identified as possible lexical trees. The search controller needs to further prune out the lexical trees which are unlikely to generate likely paths with high scores. In order to do this, instructions are sent to the lexical tree processors to determine a temporary lexical tree score which can be used to prune out lexical trees from the processing which have a low score.
The processing by the lexical tree processor to generate the temporary lexical tree score is shown in Figure 4 and for this process, following initialization, a lexical tree processor 21 awaits receipt of data for a new lexical tree from the search controller 27 (step S2). The data comprises a lexical tree pointer or an identifier to lexical tree data stored in the acoustic model memory 23, i.e. the lexical tree data structure, the previous N-1 words (for use in language model score determination by the language model processor 29), and the current path score. The previous N-1 words include the previously recognized word for which a score has been accumulated.

The lexical tree processor 21 then attempts to read the next feature vector from the feature vector buffer 20 (step S3) and if this is not available, an error occurs (step S4). If the feature vector is available in the feature vector buffer 20, the lexical tree processor 21 reads the feature vector from the buffer 20 (step S5) and evaluates the state transitions for the first lexical tree node using the acoustic model data in the acoustic model memory 23 (step S6). Since the state transitions for the first node will require several feature vectors to complete, a score is determined for a state transition of the first node in the lexical tree (step S7). The lexical tree processor 21 then sends a request to the language model processor 29 for a language model score. The request includes the previous N-1 words and all of the possible words represented by the lexical tree data structure in the acoustic model memory 23 (step S8). The language model processor returns scores for each of the N-grams ending in the words represented by the lexical tree. The lexical tree processor thus receives language model scores for the words in the lexical tree and picks the highest score (step S9). Alternatively, the language model processor 29 can select the highest N-gram score and return this to the lexical tree processor for the determination of the temporary lexical tree score. The temporary lexical tree score for the lexical tree is then generated using the score determined for the first state transition of the first lexical tree node and the highest language model score (step S10). The temporary lexical tree score is then sent by the lexical tree processor 21 to the results memory 25 over the bus 26 (step S11). In this mode of processing, the processor then awaits the next data for a new lexical tree from the search controller (step S2).
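Step S10's combination of the first-node acoustic score with the best language model score over the tree's words might look like this sketch; the n-gram table and the backoff constant are illustrative assumptions:

```python
import math

def temporary_tree_score(first_node_log_score, tree_words, history,
                         ngram_log_probs, n=3, backoff=math.log(1e-4)):
    """Combine the first-node acoustic score with the best language model
    score over all words the lexical tree can produce.

    ngram_log_probs maps word tuples to log probabilities; unseen
    n-grams fall back to a flat backoff value. All contents are
    illustrative, not trained model values."""
    context = tuple(history[-(n - 1):])
    best_lm = max(ngram_log_probs.get(context + (w,), backoff)
                  for w in tree_words)
    return first_node_log_score + best_lm
```

Taking the maximum over the tree's words makes the temporary score an optimistic bound, which is what makes it safe to use for pruning whole trees before they are fully evaluated.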
This process is just one of the processes performed by the lexical tree processor. The main processing performed by the lexical tree processor is the lexical tree evaluation. This process will now be described with reference to the flow diagram of Figure 5.

After the start of the process (step S20) a lexical tree processor awaits receipt of data for a lexical tree from the search controller (step S21). The data comprises a lexical tree pointer to a lexical tree data structure stored in the acoustic model memory 23, the previous N-1 words in the recognition path, and the current accumulated score.
The lexical tree processor then attempts to read the next feature vector from the feature vector buffer 20 (step S22). If this is not available, an error occurs (step S23). If the feature vector is available in the feature vector buffer 20, the feature vector is read from the feature vector buffer 20 by the lexical tree processor 21 (step S24) and the lexical tree processor 21 evaluates the state transitions for each path using the acoustic model data in the acoustic model memory 23 (step S25). Scores for the paths are determined in accordance with the conventional Viterbi search technique and the best score for the best path is sent to the results memory 25 while the path histories are stored locally in the on-board memory of the lexical tree processor 21 (step S26). Pruning is applied to the lexical tree by the lexical tree processor to delete paths in order to keep the breadth of the search manageable (step S27). The pruning applied locally by the lexical tree processor can be purely on the basis of a local threshold, which can be provided by the search controller 27, or it can be determined on a relative basis dependent upon the range of scores for paths being processed within the lexical tree processor 21. If the path has not reached a word end (step S28) the lexical tree processor 21 attempts to read the next feature vector from the feature vector buffer 20 (step S22). If the path reaches a word end (step S28) the score must be modified by the language model score (step S29). There are two ways in which this can be done. In this embodiment the lexical tree processor 21 sends a request to the language model processor 29 for a language model score. The request includes the current word and the N-1 previous words. The language model processor thus returns the language model score and the language model score is used to modify the current accumulated score at the word end. Alternatively, the lexical tree processor can send the language model processor 29 not just the current word and the N-1 previous words, but also the current accumulated score. The language model processor then determines the language model score using the language model memory 31 and modifies the accumulated score using the language model score. The language model processor 29 can then return the modified score to the lexical tree processor 21 which can then pass it to the results memory 25, or a connection between the language model processor 29 and the results memory 25 (not shown) can enable the language model processor 29 to send the score directly to the results memory 25 for the lexical tree processor 21. In this latter case, the language model processor 29 must also receive a lexical tree pointer to identify the lexical tree for which the score applies.
Assuming in this embodiment that the lexical tree processor calculates the modified score (step S29), the score and history are then sent by the lexical tree processor 21 to the results memory 25 (step S30). The history data sent to the results memory 25 comprises the lexical tree pointer identifying the lexical tree, the modified score at the word end, the phone history identifying at least the last phone to allow for context dependent tri-phone determination of next lexical trees, and the identity of the word identified in the evaluation process.
The lexical tree processor then deletes the path and history data in its on-board memory (step S31) and determines if there are any paths still left to be processed (step S32). If so, the lexical tree processor 21 tries to access the next feature vector available in the feature vector buffer 20 (step S22). If there are no paths left to be processed, a message is sent by the lexical tree processor to the search controller 27 to indicate that the lexical tree has been processed (step S33). In this level of processing the lexical tree processor will then await the next data for a lexical tree from the search controller 27.
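The evaluation loop just described can be caricatured in a few lines. Everything here is a toy stand-in: each path is a (word, phones_left, log_score) triple, each feature vector contributes a single hypothetical per-frame score, and lm_scores plays the role of the language model processor's reply:

```python
def evaluate_lexical_tree(feature_vectors, paths, lm_scores, beam_width=5.0):
    """Toy version of the Figure 5 evaluation loop for one lexical tree.

    Consuming one feature vector advances every active path by one frame
    (steps S22-S26); finished paths become word events modified by the
    language model score (steps S28-S30); survivors are beam-pruned each
    frame (step S27); the loop ends when no paths remain (steps S32-S33)."""
    results = []
    for frame_score in feature_vectors:
        # Advance every active path by one frame.
        paths = [(w, left - 1, s + frame_score) for (w, left, s) in paths]
        # Paths that consumed their last phone are word-end events.
        done = [(w, s + lm_scores[w]) for (w, left, s) in paths if left == 0]
        results.extend(done)
        paths = [(w, left, s) for (w, left, s) in paths if left > 0]
        # Local beam pruning against the best surviving path.
        if paths:
            best = max(s for (_, _, s) in paths)
            paths = [(w, left, s) for (w, left, s) in paths
                     if s >= best - beam_width]
        if not paths:
            break
    return results
```

The real processor evaluates HMM state transitions per node rather than a flat per-frame score, but the control flow of read, evaluate, prune, word-end, report is the same shape.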
The flow diagram of Figure 5 illustrates the processing of a single instance of a lexical tree. The
