U.S. CONTINUATION PATENT APPLICATION FOR

CHARACTER RECOGNITION METHOD, APPARATUS AND DEVICE

Inventor:

Ryan S. DIXON, residing in
Mountain View, California

Assignee:

Apple Inc.
One Apple Park Way
Cupertino, CA 95014

Entity:

Undiscounted

MORGAN, LEWIS & BOCKIUS LLP
600 Anton Blvd, Suite 1800
Costa Mesa, CA 92626-7653
714.830.0600

Attorney Docket No.: 122202-6225 (9292735USC1)

CHARACTER RECOGNITION METHOD, APPARATUS AND DEVICE

BACKGROUND

With the proliferation of mobile devices with touch-sensitive screens, users are enjoying numerous kinds of applications that can run on their devices. Examples of such devices include smartphones, tablets, watches, remote controls, etc. Many of these applications require users to enter alphanumeric characters to provide their inputs. However, mobile devices have small display screens and hence are space constrained for displaying a keyboard or displaying an area to receive the alphanumeric character inputs.

BRIEF SUMMARY

Some embodiments of the invention provide a novel method for recognizing characters that are input through touch strokes on a touch-sensitive sensor (e.g., a touch-sensitive display screen or a touch-sensitive surface) of a device (e.g., a mobile device, a remote control, a trackpad, etc.). In some embodiments, the sensor has a space-constrained area for receiving the touch input. In some embodiments, the method places no limitations on where the user can write in the space provided by the device. As such, successive characters might not follow each other in the space. In fact, later characters might overlap earlier characters or they might appear before earlier characters.

In some embodiments, the method detects a plurality of stroke inputs, with each stroke input having a touch-down location and a touch-up location on the sensor surface. The method creates a stroke-segmentation graph with several nodes and several edges between the nodes. Each node corresponds to at least one stroke's touch-down location or one stroke's touch-up location. In some embodiments, each interior node of the graph corresponds to a touch-up location of an earlier stroke and a touch-down location of the next subsequent stroke.

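To make these records concrete, the following is a minimal sketch of how the strokes, nodes, and edges described above might be represented. The type and field names (Stroke, StrokeNode, StrokeEdge, StrokeGraph) are hypothetical illustrations, not the claimed implementation:

```swift
import Foundation
import CoreGraphics

/// One touch stroke: where it began (touch-down), where it ended (touch-up),
/// when those events occurred, and the geometry sampled in between.
struct Stroke {
    let touchDown: CGPoint
    let touchUp: CGPoint
    let downTime: TimeInterval
    let upTime: TimeInterval
    let samples: [CGPoint]     // points sampled along the stroke
}

/// A node in the stroke-segmentation graph. Interior nodes record both the
/// touch-up of the previous stroke and the touch-down of the next stroke.
struct StrokeNode {
    var touchUpOfPrevious: (location: CGPoint, time: TimeInterval)?
    var touchDownOfNext: (location: CGPoint, time: TimeInterval)?
}

/// An edge between node i and node j covers strokes i..<j, so an edge
/// between adjacent nodes is one stroke and an edge between non-adjacent
/// nodes is a multi-stroke input.
struct StrokeEdge {
    let fromNode: Int
    let toNode: Int
}

struct StrokeGraph {
    var strokes: [Stroke] = []
    var nodes: [StrokeNode] = []
    var edges: [StrokeEdge] = []
}
```
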
In some embodiments, the method not only defines edges in the stroke-segmentation graph between pairs of adjacent nodes but also defines, whenever applicable, edges between pairs of non-adjacent nodes. The method defines edges between pairs of non-adjacent nodes because users may use multiple strokes to input a single character. Specifically, while a user might input one character with one single contiguous stroke, the user might need to use multiple different strokes to input another character, which requires the method to explore edges between non-contiguous nodes in the stroke-segmentation graph and to define such edges when they could be viably used to define one or more characters. These characters are referred to below as multi-stroke characters.

For each edge in the stroke-segmentation graph, the method defines a set of candidate characters represented by the edge and associates a score for each candidate character. To represent the set of candidate characters, the method of some embodiments creates a candidate-character graph from the stroke-segmentation graph. The candidate-character graph has the same nodes as the stroke-segmentation graph, but for each edge of the node graph, the candidate-character graph can include one or more candidate edges, with each candidate edge representing one candidate character. Each candidate edge in these embodiments is assigned the score of its associated candidate character.

In some embodiments, the method explores different paths through the candidate-character graph to identify a string of one or more characters that most likely represents the string (e.g., the word, the number, the alphanumeric entry, etc.) that the user inputs through a set of touch stroke inputs on the touch-sensitive sensor. Each explored path traverses through a set of nodes in the candidate-character graph by using a set of candidate-character edges defined in this graph. The method in some embodiments computes a score for each path based on the scores of the candidate-character edges that the path uses. The method then selects the path with the best computed path score (e.g., the highest path score when a higher score is a better score, or the lowest path score when a lower score is a better score) as the path that includes the characters that most likely correspond to the current character string that the user has entered through the set of touch stroke inputs on the sensor.

Each time the method adds a new edge to the stroke-segmentation graph, the method in some embodiments defines a set of candidate-character edges to the candidate-character graph, scores the newly defined candidate-character edges, and then re-examines the paths through the candidate-character edges to identify the path that represents the most likely character string that the user is in the process of entering or has finished entering. After assessing the paths that best represent individual input strings (e.g., individual words), the method of some embodiments performs a multi-string (e.g., a multi-word) analysis on the paths to identify the set of paths that best represent the multi-string input that the user has entered through the set of touch stroke inputs on the sensor.

To define the edges in the stroke-segmentation graph, and to define the edges in the candidate-character graph, the method of some embodiments accounts for several different types of constraints. For instance, in some embodiments, the method accounts for spatial, temporal, language and stroke count constraints that limit the number of edges defined in the stroke-segmentation graph and/or the candidate-character graph. For example, if two successive strokes are 1 and 3, and they occur within a first temporal duration and a second location proximity, the method would define an edge between two non-adjacent nodes in the stroke graph, and this edge can be used to define an edge in the candidate-character graph to represent B. On the other hand, when the two successive strokes for 1 and 3 do not occur within the first temporal duration or are not written within the second location proximity, the method in some embodiments does not define an edge between the non-adjacent pair of nodes. In still other embodiments, the method creates the edge in the stroke-segmentation graph that treats these two strokes (for 1 and 3) as one multi-stroke input (i.e., does not filter out this edge because of spatial and temporal constraints), but then uses the spatial and/or temporal constraints to determine whether it should generate a candidate-character edge in the candidate graph to represent the candidate B.

In some embodiments, the method does not consider single stroke combinations greater than a particular value (e.g., 3 single strokes), as the number of generated multi-stroke characters would be too large to analyze quickly, especially given that the device that performs the character recognition method might have limited resources in general or might have limited resources to devote to the character recognition operation. The stroke count in some embodiments is dependent on the language being detected. The detected language is also used to limit the number of candidate edges that would be generated for an edge in a stroke-segmentation graph.

The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description and the Drawings is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description and the Drawings, but rather are to be defined by the appended claims, because the claimed subject matters can be embodied in other specific forms without departing from the spirit of the subject matters.

BRIEF DESCRIPTION OF DRAWINGS

The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.

Figure 1 illustrates an example of a stroke-segmentation graph 100.

Figure 2 illustrates an example of a candidate-character graph, which is created by a candidate graph generator of a character recognizer of some embodiments.

Figure 3 illustrates an example of a string selector of the character recognizer of some embodiments exploring different paths through the generated candidate graph before selecting one path as the best path through this graph.

Figure 4 illustrates a more detailed view of the character recognizer of some embodiments of the invention.

Figure 5 illustrates a process performed by the character recognizer of some embodiments.

Figure 6 illustrates an example of a character recognizer that has such a multi-string selector.

Figures 7-9 illustrate an example of a smart watch that executes the character recognizer of some embodiments of the invention.

Figure 10 is an example of an architecture of a mobile computing device that implements some embodiments of the invention.

Figure 11 conceptually illustrates another example of an electronic system with which some embodiments of the invention are implemented.

DETAILED DESCRIPTION

In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.

Some embodiments of the invention provide a device that executes a character recognition process to recognize characters that are input through touch strokes on a touch-sensitive sensor. In some embodiments, the device is a mobile device (e.g., a smartphone, a tablet, a watch, etc.) with a touch-sensitive display screen. In other embodiments, the device is a computer, trackpad or other device with a touch-sensitive sensor. In still other embodiments, the device is a remote with a touch-sensitive section, or is a media-streaming device that has such a remote.

In some embodiments, the sensor has a space-constrained area for receiving the touch input. In some embodiments, the device's character-recognition process places no limitations on where the user can write in the space provided by the device. As such, successive characters might not follow each other in the space. In fact, later characters might overlap earlier characters or they might appear before earlier characters.

In some embodiments, the character-recognition process detects a plurality of stroke inputs, with each stroke input having a touch-down location and a touch-up location on the sensor surface. A stroke's touch-down location corresponds to the location on the sensor surface that the stroke input initially touches, while the stroke's touch-up location corresponds to the location on the sensor surface where this touch input ended.

The character-recognition process creates a stroke-segmentation graph with several nodes and several edges between the nodes. Figure 1 illustrates an example of a stroke-segmentation graph 100. This graph is created by a stroke-segment graph generator 110 of a character recognizing module 105 (called character recognizer) that performs the character-recognition process of some embodiments.

The stroke-segment graph generator 110 has created this graph in response to the user entering Ba on a touch-sensitive surface 150 through three strokes 180, 182, and 184. Each node in this graph 100 corresponds to at least one stroke's touch-down location or one stroke's touch-up location. In some embodiments, each interior node of the graph corresponds to a touch-up location of an earlier stroke and a touch-down location of the next subsequent stroke.

In some embodiments, the graph generator 110 not only defines edges in the stroke-segmentation graph between pairs of adjacent nodes but also defines, whenever applicable, edges between pairs of non-adjacent nodes. The graph generator 110 defines edges between pairs of non-adjacent nodes because users may use multiple strokes to input a single character. Specifically, while a user might input one character with one single contiguous stroke, the user might need to use multiple different strokes to input another character, which requires the graph generator 110 to explore edges between non-contiguous nodes in the stroke-segmentation graph and to define such edges when they could be viably used to define one or more characters. Characters that are input through a single stroke are referred to below as single-stroke characters, while characters that require multiple input strokes are referred to below as multi-stroke characters.

Figure 1 illustrates several examples of edges between contiguous nodes and non-contiguous nodes in the stroke-segmentation graph. For instance, the edge 112 is an edge between contiguous nodes 102 and 104, while the edge 118 is an edge between non-contiguous nodes 102 and 106. An edge between a contiguous node pair in the graph 100 represents a single input stroke, while an edge between a non-contiguous node pair in this graph represents a multi-stroke input.

In some embodiments, the stroke-segment graph generator 110 does not create an edge in the stroke-segmentation graph that connects a pair of non-contiguous nodes that are more than a certain number of nodes from each other (e.g., it does not define edges between non-contiguous pairs of nodes that have more than 2 intervening nodes between them). This is because the graph generator 110 does not consider single stroke combinations greater than a particular value (e.g., 3 single strokes), as the number of generated multi-stroke characters would be too large to analyze quickly, especially given that the device that performs the character recognition process might have limited resources in general or might have limited resources to devote to the character recognition process. The stroke count in some embodiments is dependent on the language being detected.

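As a concrete sketch of this cap on edge creation, the helper below connects each newly added node only to the few nodes before it, reusing the hypothetical StrokeGraph types sketched in the Summary above; the cap value and function name are illustrative assumptions:

```swift
/// Hypothetical cap on how many single strokes may be combined into one
/// multi-stroke edge (language-dependent in some embodiments).
let maxStrokesPerCharacter = 3

/// Connect a newly appended node to its predecessors, never spanning more
/// strokes than the cap allows. from == newNodeIndex - 1 yields the
/// single-stroke edge; smaller indices yield multi-stroke edges.
func addEdges(for newNodeIndex: Int, to graph: inout StrokeGraph) {
    let earliest = max(0, newNodeIndex - maxStrokesPerCharacter)
    for from in earliest..<newNodeIndex {
        graph.edges.append(StrokeEdge(fromNode: from, toNode: newNodeIndex))
    }
}
```
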
In some embodiments, the stroke-segment graph generator 110 does not create an edge in the stroke-segmentation graph to represent a multi-stroke character between two non-contiguous nodes (e.g., nodes 104 and 108) when more than a threshold amount of time exists between two of the strokes in the multi-stroke input (i.e., two of the strokes do not satisfy a timing constraint). Also, in some embodiments, the stroke graph generator 110 does not generate such an edge between two non-contiguous nodes when two of the strokes of the multi-stroke input are too far apart (i.e., two of the strokes do not satisfy a spatial constraint).

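These timing and spatial tests can be sketched as a single predicate over the strokes that a prospective edge would span. The threshold values and names below are hypothetical placeholders, not figures from the patent:

```swift
import Foundation
import CoreGraphics

/// Hypothetical thresholds; real values would be tuned, and may differ
/// between the stroke graph generator and the candidate graph generator.
let maxInterStrokeGap: TimeInterval = 0.6   // seconds from touch-up to next touch-down
let maxInterStrokeDistance: CGFloat = 80    // points from touch-up to next touch-down

/// True when every consecutive pair of strokes in a prospective
/// multi-stroke combination satisfies both timing and spatial constraints.
func satisfiesMultiStrokeConstraints(_ strokes: ArraySlice<Stroke>) -> Bool {
    for (previous, next) in zip(strokes, strokes.dropFirst()) {
        let gap = next.downTime - previous.upTime
        let dx = next.touchDown.x - previous.touchUp.x
        let dy = next.touchDown.y - previous.touchUp.y
        if gap > maxInterStrokeGap || (dx * dx + dy * dy).squareRoot() > maxInterStrokeDistance {
            return false
        }
    }
    return true
}
```
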
For each edge in the stroke-segmentation graph, the character recognizer 105 defines a set of candidate characters represented by the edge and associates a score for each candidate character. To represent the set of candidate characters, the character recognizer of some embodiments creates a candidate-character graph from the stroke-segmentation graph. Figure 2 illustrates an example of a candidate-character graph 200, which is created by a candidate graph generator 210 of the character recognizer 105 of some embodiments.

The candidate graph generator 210 creates the candidate graph 200 from the stroke-segmentation graph 100 (i.e., after the user enters Ba on the touch-sensitive surface 150 through the three strokes 180, 182, and 184). The candidate-character graph has the same number of nodes 202-208 as the stroke-segmentation graph, but for each edge of the node graph, the candidate-character graph can include one or more candidate graph edges, with each candidate edge representing one candidate character. Each candidate edge in these embodiments is assigned the score of its associated candidate character.

As shown, the candidate graph 200 has two edges 222 and 224 for the edge 112 in the stroke graph 100, and three edges 228, 230 and 232 for the edge 116 in the stroke graph 100. The two edges 222 and 224 respectively specify the number 1 and the lower case l as candidate characters for the first stroke 180 represented by edge 112, while the three edges 228, 230 and 232 respectively specify the lower case a, the number 0, and the lower case o for the third stroke 184. Candidate edge 234 specifies upper case B for the multi-stroke input that is represented by the edge 118 in the stroke graph 100 to represent the combination of the first two strokes 180 and 182. Candidate edge 236 specifies the number 3 to represent the second stroke 182, which is represented as edge 114 in the stroke graph 100.

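In code, each stroke-graph edge might fan out into one scored candidate edge per plausible character, along the following hypothetical lines; classify(strokes:) is a stand-in for whatever handwriting classifier produces per-character scores, not an API from the patent:

```swift
/// One character hypothesis for a stroke-graph edge, with its score.
struct CandidateEdge {
    let fromNode: Int
    let toNode: Int
    let character: Character
    let score: Double     // higher is better in this sketch
}

struct CandidateGraph {
    var edges: [CandidateEdge] = []
}

/// Stand-in for a handwriting classifier. For the Figure 2 example, the
/// ink of stroke 180 might yield [("1", 0.6), ("l", 0.3)], and the ink of
/// strokes 180-182 combined might yield [("B", 0.7)].
func classify(strokes: ArraySlice<Stroke>) -> [(Character, Double)] {
    return []             // placeholder for an actual model
}

/// Fan a stroke-graph edge out into scored candidate-character edges.
func addCandidates(for edge: StrokeEdge,
                   strokes: [Stroke],
                   to graph: inout CandidateGraph) {
    for (character, score) in classify(strokes: strokes[edge.fromNode..<edge.toNode]) {
        graph.edges.append(CandidateEdge(fromNode: edge.fromNode,
                                         toNode: edge.toNode,
                                         character: character,
                                         score: score))
    }
}
```
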
In some embodiments, the character-recognition process explores different paths through the candidate-character graph to identify a string of one or more characters that most likely represents the string (e.g., the word, the number, the alphanumeric entry, etc.) that the user inputs through a set of touch stroke inputs on the touch-sensitive sensor. Each explored path traverses through a set of nodes in the candidate-character graph by using a set of candidate-character edges defined in this graph. The character-recognition process in some embodiments computes a score for each path based on the scores of the candidate-character edges that the path uses. The character-recognition process then selects the path with the best computed path score (e.g., the highest path score when a higher score is a better score, or the lowest path score when a lower score is a better score) as the path that includes the characters that most likely correspond to the current character string that the user has entered through the set of touch stroke inputs on the sensor.

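Because the candidate graph's edges always run from earlier nodes to later ones, the best path can be found with a simple forward dynamic program. The sketch below assumes, purely for illustration, that higher scores are better and that a path's score is the sum of its edge scores:

```swift
/// Best-scoring character string from the first node to the last, under
/// the illustrative convention that path score = sum of edge scores.
func bestString(through graph: CandidateGraph, nodeCount: Int) -> String? {
    var bestScore = [Double](repeating: -.infinity, count: nodeCount)
    var bestPrefix = [String?](repeating: nil, count: nodeCount)
    bestScore[0] = 0
    bestPrefix[0] = ""
    // Nodes are created in stroke order, so a forward sweep finalizes each
    // node before any edge leaving it is relaxed.
    for node in 0..<nodeCount {
        guard let prefix = bestPrefix[node] else { continue }
        for edge in graph.edges where edge.fromNode == node {
            let score = bestScore[node] + edge.score
            if score > bestScore[edge.toNode] {
                bestScore[edge.toNode] = score
                bestPrefix[edge.toNode] = prefix + String(edge.character)
            }
        }
    }
    return bestPrefix[nodeCount - 1]
}
```
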
Figure 3 illustrates an example of a string selector 310 of the character recognizer 105 of some embodiments exploring different paths 312-322 through the generated candidate graph 200 before selecting the path 322 as the best path through this graph. Unlike the other paths 312-320, this figure illustrates the path 322 twice, (1) once 322a on its own to show its identification and scoring while all the paths are being identified and scored, and (2) once 322b with thick lines after it has been selected as the current best path.

In the latter representation 322b, the path 322 is illustrated on a graph that shows the other candidate edges in the candidate graph with dashed lines. This latter representation is meant to highlight that the string selector 310 might select different paths that use different sets of edges in the candidate graph after subsequent strokes are entered by the user. The path 322 just happens to be the current best path through the candidate graph.

More specifically, in some embodiments, each time the stroke graph generator 110 adds a new node and a new edge to the stroke-segmentation graph 100, the candidate graph generator 210 (1) defines a new node and a set of candidate-character edges to the candidate-character graph 200 (which corresponds to the new node and new edge in the stroke graph 100), and (2) scores the newly defined candidate-character edges. The string selector 310 then re-examines the paths through the candidate-character edges to identify the path that represents the most likely character string that the user is in the process of entering or has finished entering. These explored paths use one of the edges that the candidate graph generator 210 just generated for the new stroke.

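Tying the hypothetical helpers above together, each completed stroke might drive one incremental pass of this kind (appendNode is sketched with the Figure 5 process below; all names remain illustrative):

```swift
/// One incremental update per completed stroke: extend the stroke graph,
/// generate scored candidates for the new edges only, then re-run the
/// path search to refresh the current best string.
func ingest(_ stroke: Stroke,
            strokeGraph: inout StrokeGraph,
            candidateGraph: inout CandidateGraph) -> String? {
    let newNode = appendNode(for: stroke, to: &strokeGraph)
    let firstNewEdge = strokeGraph.edges.count
    addEdges(for: newNode, to: &strokeGraph)
    for edge in strokeGraph.edges[firstNewEdge...] {
        addCandidates(for: edge, strokes: strokeGraph.strokes, to: &candidateGraph)
    }
    return bestString(through: candidateGraph, nodeCount: strokeGraph.nodes.count)
}
```
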
After assessing the paths that best represent individual input strings (e.g., individual words), the character-recognition process of some embodiments performs a multi-string (e.g., a multi-word) analysis on the paths to identify the set of paths that best represent the multi-string input that the user has entered through the set of touch stroke inputs on the sensor. As further described below, the character recognizer 105 in some embodiments has a multi-string selector that identifies the set of paths that specifies the multi-string input that best represents the touch stroke inputs that the user has entered thus far. In some embodiments, the character recognizer also has a string presenter that identifies two or more of the best paths through the generated candidate graph to present a collection of two or more candidate strings or candidate multi-string entries on a user interface (UI) so that the user can select one of the presented strings and cut short the entries. Based on pre-tabulated strings, the string presenter in some embodiments auto-completes strings or multi-string entries, as further described below.

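One hypothetical realization of that auto-completion step is a simple prefix match of the current best string against the pre-tabulated strings; the function below is an illustration of the idea, not the patent's algorithm:

```swift
/// Offer pre-tabulated strings that extend the recognizer's current best
/// prefix, so the user can select one and cut the entry short.
func completions(for prefix: String,
                 from tabulated: [String],
                 limit: Int = 3) -> [String] {
    guard !prefix.isEmpty else { return [] }
    return Array(tabulated.filter { $0.hasPrefix(prefix) && $0 != prefix }
                          .prefix(limit))
}
```
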
To define the edges in the stroke-segmentation graph, and to define the edges in the candidate-character graph, the character recognizer 105 of some embodiments accounts for several different types of constraints. For instance, in some embodiments, the character recognizer accounts for spatial, temporal, language and stroke count constraints that limit the number of edges defined in the stroke-segmentation graph and/or the candidate-character graph. For example, if two successive strokes are 1 and 3, and they occur within a first temporal duration and a second location proximity, the character recognizer would define an edge between two non-adjacent nodes of the stroke graph, and this edge can be used to define an edge in the candidate-character graph to represent B.

On the other hand, when the two successive strokes for 1 and 3 do not occur within the first temporal duration or are not written within the second location proximity, the character recognizer in some embodiments does not define an edge between the non-adjacent pair of nodes. In still other embodiments, the character recognizer creates the edge in the stroke-segmentation graph that treats these two strokes (for 1 and 3) as one multi-stroke input (i.e., does not filter out this edge because of spatial and temporal constraints), but then uses the spatial and/or temporal constraints during the character recognition phase to determine that it should not generate a candidate-character edge in the candidate graph to represent the candidate B. The detected language is also used to limit the number of candidate edges that would be generated for an edge in a stroke-segmentation graph.

Figure 4 illustrates a more detailed view of the character recognizer 400 of some embodiments of the invention. As shown, the character recognizer includes a stroke input detector 405, a stroke-segmentation graph generator 110, a candidate-character graph generator 210, a string selector 310, and a string presenter 410. To perform their operations, these modules use constraints stored in storages 430, 432, and 434, and store the results of their operations in storages 440, 442, and 444.

The operation of the character recognizer 400 will be described by reference to the process 500 of Figure 5. This process shows the sequence of operations that the character recognizer 400 performs each time it identifies a new stroke that the user inputs through the touch-sensitive sensor of the device. Once a user starts a new stroke by touching down on this sensor, the stroke input detector 405 receives (at 505) data about the touch input from the device's OS framework for capturing and processing touch input from the touch-sensitive sensor. When the user ends the stroke (i.e., performs a touch-up operation), the stroke input detector 405 (at 505) identifies the end of the stroke, informs the stroke-segment graph generator 110 of this stroke, and supplies this generator with information that describes this stroke. The supplied information in some embodiments includes the touch-down and touch-up locations of the stroke, the time associated with the touch-down and touch-up events, and the geometry that was produced by the stroke (which in some embodiments is expressed in terms of a collection of sample points along the stroke).

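On an iOS-style device, the stroke input detector's role could be played by the standard UIKit touch callbacks. The sketch below assembles a Stroke record (the hypothetical type from the Summary) from UIView's touchesBegan/touchesMoved/touchesEnded overrides; the onStroke hand-off to the graph generator is an assumed hook, not an interface named by the patent:

```swift
import UIKit

/// Captures one stroke from the OS touch framework and hands it to the
/// stroke-segment graph generator via the onStroke callback.
final class StrokeInputView: UIView {
    var onStroke: ((Stroke) -> Void)?
    private var samples: [CGPoint] = []
    private var downTime: TimeInterval = 0

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        downTime = touch.timestamp
        samples = [touch.location(in: self)]          // touch-down location
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        samples.append(touch.location(in: self))      // geometry along the stroke
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first, let first = samples.first else { return }
        samples.append(touch.location(in: self))      // touch-up location
        onStroke?(Stroke(touchDown: first,
                         touchUp: samples[samples.count - 1],
                         downTime: downTime,
                         upTime: touch.timestamp,
                         samples: samples))
    }
}
```
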
In some embodiments, each node of the stroke graph 100, other than the first and last node, represents both a touch-up event of an earlier stroke and a touch-down event of the next subsequent stroke. Accordingly, for the newly detected stroke, the stroke-segment graph generator 110 (at 510) (1) updates the attributes of the last previously added node of the stroke graph 100 to include the attributes (e.g., location, time, etc.) of the newly reported touch-down event, and (2) adds a new node to the stroke graph 100 and adds the attributes (e.g., location, time, etc.) of the newly reported touch-up event to this new node's attributes. As shown, the stroke graph generator 110 stores the stroke graph 100 in storage 440 in some embodiments.

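Operation 510's two-part node update might look as follows in the hypothetical types sketched earlier; the function name and structure are illustrative:

```swift
/// Sketch of operation 510: fold the new stroke's touch-down into the last
/// node, then append a node recording the stroke's touch-up, so that every
/// interior node describes both events. Returns the new node's index.
func appendNode(for stroke: Stroke, to graph: inout StrokeGraph) -> Int {
    if graph.nodes.isEmpty {
        // Very first node: only a touch-down exists so far.
        graph.nodes.append(StrokeNode(
            touchUpOfPrevious: nil,
            touchDownOfNext: (stroke.touchDown, stroke.downTime)))
    } else {
        // (1) Update the last previously added node with the touch-down.
        graph.nodes[graph.nodes.count - 1].touchDownOfNext =
            (stroke.touchDown, stroke.downTime)
    }
    // (2) Add a new node carrying the touch-up attributes.
    graph.strokes.append(stroke)
    graph.nodes.append(StrokeNode(
        touchUpOfPrevious: (stroke.touchUp, stroke.upTime),
        touchDownOfNext: nil))
    return graph.nodes.count - 1
}
```
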
At 510, the stroke graph generator 110 also adds one or more edges that connect the new node to previously defined neighboring and non-neighboring nodes that could serve as the basis to define viable candidate-character edges in the candidate-character graph 200. To define the edges in the stroke-segmentation graph, the stroke graph generator 110 of some embodiments accounts for several different types of constraints that are stored in the constraint storage 430 and language model storage 432.

One of these constraints is the stroke count constraint stored in the storage 430. This constraint specifies the largest number of single stroke combinations that the stroke graph generator 110 can combine into a single multi-stroke input that it represents as one edge that connects two non-adjacent nodes (also called non-contiguous or non-neighboring nodes in this document) in the stroke graph 100. This constraint causes the stroke-segment graph generator 110 to not even consider any edge that would connect a pair of non-contiguous nodes that are more than a certain number of nodes from each other (i.e., that have more than N intervening nodes between them). This constraint frees the stroke graph generator 110 from exploring multi-stroke combinations that would be too large to analyze quickly, especially given that the device that performs the character recognition process might have limited resources in general or might have limited resources to devote to the character recognition process.

Spatial and timing constraints in the storage 430 are other constraints; these prevent the stroke graph generator 110 from generating edges between non-adjacent node pairs that are within the threshold node distance specified by the stroke count constraint. For example, when two successive strokes are 1 and 3, and they do not occur within a temporal duration threshold and a location proximity threshold, the stroke graph generator 110 does not define an edge between the two non-adjacent nodes of the stroke graph 100 (these nodes correspond to the touch-down event of the 1 stroke and the touch-up event of the 3 stroke). On the other hand, the spatial and timing constraints of the stroke graph generator 110 might allow this multi-stroke edge to be created, but the spatial and/or timing constraints of the candidate graph generator 210 might then preclude a candidate character from being defined for this edge. In some embodiments, the stroke graph generator 110 and the candidate graph generator 210 use different spatial and/or timing constraints.

After the stroke graph generator identifies (at 510) a new node and an associated set of edges for the newly detected stroke, it directs (at 515) the candidate graph generator 210 to identify a set of one or more characters for each newly identified edge and compute a score for each identified candidate character. As shown, the candidate graph generator 210 accesses the storage 440 to retrieve the stroke graph 100 that the stroke graph generator 110 just updated, and accesses the storage 445 to store and access the candidate character graph 200 that it generates to maintain the candidate characters that it identifies. As mentioned above, the candidate-character graph 200 in some embodiments has the same nodes as the stroke-segmentation graph, but for each edge of the node graph, the candidate-character graph can include one or more candidate graph edges, with each candidate edge representing one candidate character. Each candidate edge in these embodiments is assigned a score that quantifies the likelihood that the edge's associated candidate character is the actual character that the user entered or wanted to enter.

As shown in Figure 4, the character graph generator 210 accounts for several different types of constraints that are stored in the constraint storage 434 and the language model storage 432, in order to define the edges in its character candidate graph 200. Spatial and timing constraints in the storage 434 are two types of constraints that the character graph generator 210 uses in some embodiments to identify candidate characters, and to generate candidate character edges in the character graph 200, for edges in the stroke graph 100.

For example, as described above, when the user inputs 1 and 3 in two successive strokes that are within a threshold time period of each other and are within a certain spatial distance of each other, the character graph generator 210 would define B as a candidate character for the combination of these strokes and define an edge in the candidate graph 200 between the two nodes that represent the touch-down event for input 1 and the touch-up event for input 3. On the other hand, when these two strokes do not meet either the temporal or spatial constraint, the character graph generator 210 does not generate an edge in its candidate graph to represent B as a candidate character, as too much time passed between the strokes, or these strokes were too far apart, to imply that the user wanted them to be treated as one multi-stroke input. In some embodiments, the stroke graph generator and candidate graph generator identify the temporal and spatial properties of a node, an edge, or a stroke based on the attributes that are stored for these constructs in the stroke graph 100 and the character graph 200.

In some embodiments, the spatial constraints that the character graph generator 210 uses include geometric constraints that allow this generator to differentiate between upper and lower case characters (e.g., between O and o), between characters and punctuation, and between different punctuation (e.g., comma versus apostrophe). The character graph generator 210 generates edges for the characters that it differentiates with the geometric constraints (e.g., generates candidate edges for O and o) but assigns scores to these edges commensurate with the likelihood of their accuracy based on the geometric constraints.

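As one hypothetical illustration of such a geometric constraint, the ink height of an edge relative to the user's typical upper-case height can bias the O-versus-o scores without hard-filtering either candidate; the ratio breakpoints below are invented for the sketch:

```swift
import CoreGraphics

/// Crude geometric cue: score upper case O versus lower case o from the
/// ratio of the edge's ink height to a running estimate of the user's
/// upper-case writing height. Both candidates survive with commensurate
/// scores when the ratio is ambiguous.
func caseScores(inkHeight: CGFloat,
                typicalUpperCaseHeight: CGFloat) -> (upper: Double, lower: Double) {
    let ratio = Double(inkHeight / typicalUpperCaseHeight)
    // A ratio near 1.0 favors O; near 0.5 favors o; in between, both get
    // nonzero scores.
    let upper = min(max((ratio - 0.5) / 0.5, 0), 1)
    return (upper: upper, lower: 1 - upper)
}
```
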
In some embodiments, the constraint storage 434 also includes application constraints that can limit the type of characters that the character graph generator 210 defines for its character graph 200. The character graph generator in some embodiments is biased towards predicting the user's intent instead of acting as a typewriter and just reproducing the user's input. For instance, users often enter certain chara
