IN THE UNITED STATES PATENT AND TRADEMARK OFFICE
In re Patent of: Vidya Narayanan, et al.
U.S. Patent No.: 8,768,865
Attorney Docket No.: 39521-0042IP2
Issue Date: July 1, 2014
Appl. Serial No.: 13/269,516
Filing Date: October 7, 2011
Title: LEARNING SITUATIONS VIA PATTERN MATCHING

DECLARATION OF DR. JAMES F. ALLEN
1. My name is Dr. James Allen. I am the John H. Dessauer Professor of Computer Science at the University of Rochester, a position I have held since 1992. I have been employed by the University of Rochester since 1978. I regularly teach undergraduate- and graduate-level courses in natural language understanding covering topics including English phrase structure, parsing, semantic analysis, speech acts, knowledge representation, and natural language system design. My curriculum vitae is provided as Exhibit 1004.
2. I received Bachelor of Science, Master of Science, and Doctor of Philosophy degrees in Computer Science from the University of Toronto.
3. I am an expert in the field of artificial intelligence. I served on the Editorial Board of AI Magazine for seven years and as Editor-in-Chief of the foremost journal in natural language processing, Computational Linguistics, for ten years. I serve as Associate Director for the Florida Institute for Human and Machine Cognition, a position I have held since 2006. I also served on the Scientific Advisory Board for the Vulcan/Allen Institute for Artificial Intelligence, a position I held from 2012 until the Board's dissolution at the end of 2013.

APPLE 1021
4. In addition, I have supervised 30 PhD dissertations in Artificial Intelligence, and many of my students are now faculty at distinguished universities and occupy key positions in tech companies such as Google and IBM.
5. Throughout my career I have received a variety of awards. I received one of the first Presidential Young Investigator Awards, which I held from 1984 to 1989. I am a Founding Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and delivered the keynote address at the foremost conference on Artificial Intelligence in 1998. I also received the best paper award from the same conference in 2007. I was elected as a Fellow of the Cognitive Science Society in August 2014. I have received well over $30 million in research funding from agencies such as the National Science Foundation, the Defense Advanced Research Projects Agency, and the Office of Naval Research.
6. My work is extensively cited in the field. Overall there are over 50,000 citations to my work in leading journals and conferences. My paper "Maintaining Knowledge About Temporal Intervals" (CACM, 1983) is regularly included in lists of the most-cited papers in Computer Science, and has received over 10,000 citations.
7. I have made influential contributions in the field of Artificial Intelligence in a number of areas, including temporal reasoning, the representation of action and time, plan and intention recognition, and models of communication (e.g., plan-based models of conversation).
8. I have been retained on behalf of Apple Inc. to offer technical opinions relating to U.S. Patent No. 8,768,865 (the '865 Patent), and prior art references relating to its subject matter. I have reviewed the '865 Patent and relevant excerpts of the prosecution history of the '865 Patent. Additionally, I have reviewed the following:
a. Wang et al., "A Framework of Energy Efficient Mobile Sensing for Automatic User State Recognition," Proceedings of the 7th International Conference on Mobile Systems, Applications, and Services, pp. 179-192, Kraków, Poland, June 22-25, 2009 ("Wang" or APPLE-1005);

b. "Qualcomm Incorporated Complaint for Patent Infringement," filed on November 29, 2017, in Case No. 3:17-cv-02402-WQH-MDD (S.D. Cal.) ("Complaint" or APPLE-1006);

c. Exhibit 865 of "Qualcomm Inc.'s Patent Initial Infringement Contentions," filed on March 2, 2018, in Case No. 3:17-cv-02402-WQH-MDD (S.D. Cal.) ("Infringement Contentions" or APPLE-1007);

d. U.S. Patent Application Publication No. 2010/0217533 to Nadkarni et al. ("Nadkarni" or APPLE-1008);

e. U.S. Patent Application Publication No. 2008/0297513 to Greenhill et al. ("Greenhill" or APPLE-1009);

f. Webpage of "Nokia N95 8GB - Full phone specifications" (Archive.org version dated 05/26/2009, http://web.archive.org/web/20090526054459/http://www.gsmarena.com:80/nokia_n95_8gb-2088.php) ("Nokia N95" or APPLE-1010);

g. U.S. Patent No. 8,676,224 to Louch ("Louch" or APPLE-1011);

h. U.S. Patent Application Publication No. 2011/0066383 to Jangle et al. ("Jangle" or APPLE-1012);

i. U.S. Patent No. 9,575,776 to De Andrade Cajahyba et al. ("De Andrade Cajahyba" or APPLE-1013);

j. U.S. Patent Application Publication No. 2011/0081634 to Kurata ("Kurata" or APPLE-1014);

k. Declaration of Mr. Chris Butler for Nokia N95 (APPLE-1015);

l. Declaration of Mr. Scott Delman for Wang (APPLE-1016);

m. Cohn, D., Caruana, R., and McCallum, A., "Semi-supervised Clustering with User Feedback," in Constrained Clustering: Advances in Algorithms, Theory, and Applications, CRC Press, pp. 17-32 (2009) ("Cohn" or APPLE-1017);

n. Ruzzelli, A., Nicolas, C., Schoofs, A., and O'Hare, G., "Real-time Recognition and Profiling of Appliances through a Single Electricity Sensor," Proc. 7th Annual IEEE Conference on Sensor Mesh (SECON), Boston, MA, 2010 ("Ruzzelli" or APPLE-1018);

o. Cilla, R., Patricio, M., Garcia, J., Berlanga, A., and Molina, J., "Recognizing Human Activities from Sensors Using Hidden Markov Models Constructed by Feature Selection," Algorithms 2009, 2: pp. 282-300 ("Cilla" or APPLE-1019); and

p. The seventh edition of the Authoritative Dictionary of IEEE Standards Terms (2000) (APPLE-1020).
9. Counsel has informed me that I should consider these materials through the lens of a person having ordinary skill in the art related to the '865 Patent at the time of the earliest purported priority date of the '865 Patent, and I have done so during my review of these materials. I understand that the '865 Patent claims priority to US Provisional Application No. 61/434,400, which was filed on January 19, 2011. It is therefore my understanding that the priority date of January 19, 2011 (hereinafter the "Critical Date") represents the earliest possible priority date to which the '865 patent is entitled.
10. A person having ordinary skill in the art as of the Critical Date (hereinafter "POSITA") would have had a Bachelor of Science degree in either computer science or electrical engineering, together with at least two years of study in an advanced degree program in artificial intelligence, machine learning, or pattern recognition, or comparable work experience. I base this on my own practical and educational experiences, including my knowledge of colleagues and others at the time.
11. I am familiar with the knowledge and capabilities of a POSITA as noted above. Specifically, my experience working with industry, undergraduate and post-graduate students, colleagues from academia, and designers and engineers practicing in industry has allowed me to become directly and personally familiar with the level of skill of individuals and the general state of the art.
12. I have no financial interest in either party or in the outcome of this proceeding. I am being compensated for my work as an expert on an hourly basis, for all tasks involved. My compensation is not dependent in any manner on the outcome of these proceedings or on the content of my opinions.
13. My opinions, as explained below, are based on my education, experience, and background in the fields discussed above. Unless otherwise stated, my testimony below refers to the knowledge of a POSITA in the fields as of the Critical Date, which I understand to be January 19, 2011.
Brief Overview of the Technology
14. The technology in question involves activity recognition (sometimes called state recognition), namely, automatically identifying what a person (or device) is doing based on data acquired from a set of sensors. Activity recognition can be viewed as a specific example of pattern recognition technology, which involves recognizing patterns in data, where in this case the patterns are based on activities a person can perform. A common technique for activity/state/pattern recognition involves machine learning. A known application of this technology by the Critical Date was creating more effective mobile devices (such as smartphones) that can adjust their behaviors based on what they recognize the user is doing. For example, Wang, entitled "A Framework of Energy Efficient Mobile Sensing for Automatic User State Recognition," is a publication of the 7th international conference on Mobile systems, applications, and services held in Kraków, Poland on June 22-25, 2009. See APPLE-1016. Wang had implemented "an Energy Efficient Mobile Sensing System (EEMSS)" on a smartphone that can "automatically adjust the ring tone profile to appropriate volume … according to the surroundings." APPLE-1005, Title, p1c2. Or, as another example, Louch discloses controlling a mobile phone "by the orientation and position of the mobile device." Louch, Abstract. This technology had been used to enable a device to identify an activity or situation (e.g., the user is at work) from the sensor data available to it (which might be, e.g., "Wifi, Bluetooth, audio, video, light sensors, accelerometers, and so on." APPLE-1005, p1c2).
15. As mentioned above, activity recognition involves mapping from a set of input signals (sensor data) to a high-level description of some situation or activity that the user is engaged in. In developing a system to perform this task, the underlying patterns in the data that correspond to each activity/situation need to be recognized. This is where machine learning comes in. Machine learning focuses on identifying underlying patterns in data that correspond to specified labels. There have been many different techniques, each useful for certain types of data, including Naïve Bayes, k-nearest neighbor, and Hidden Markov Models. For example, Cilla describes "building Hidden Markov Models to classify different human activities using video sensors." APPLE-1019, Abstract. These general models, however, are often too data and compute intensive to be usable on mobile devices (APPLE-1005, p8c2).
16. Activity recognition systems start from raw signals coming from the sensors that may have a high degree of "noise," which is a term for the uninformative variance in the signals that tends to mask the parts of the signal that are informative. Thus one of the typical first phases is to convert the raw signals into more abstract representations (often called features) that reduce or eliminate the noise. For instance, an acoustic signal coming from a microphone is a stream of measurements of the strength of the signal sampled thousands of times a second. There are well-known signal processing algorithms that can convert such signals into more useful information, such as the overall power of the signal, which then might be used to recognize a "Background Sound" feature to have a value "silent" or "loud" as disclosed by Wang (APPLE-1005, p9c1). As another example, Louch notes "various functions of the mobile device may be implement[ed] … including in one or more signal processing … integrated circuits." APPLE-1011, 8:11-13. A simple example would be processing the signal from an accelerometer to determine if the device is still or moving, e.g., "sensing motion (e.g., acceleration above a threshold value)." APPLE-1011, 3:8-9. Cilla describes a wide range of features that they extract from video signals, including "bounding box properties" and "Hu invariant moments … are shape descriptors" (APPLE-1019, p5-6). As another example, this first phase of processing might map raw GPS data into a Motion feature with values "still", "moving slowly" or "moving fast" (APPLE-1005, p8c1 and Table 1).
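The signal-to-feature conversion described above can be sketched in a few lines. This is my own minimal illustration in Python, not code from Wang, Louch, or any other cited reference; the function names and threshold values are hypothetical.

```python
import math

def background_sound_feature(samples, loud_threshold=0.1):
    """Reduce a raw microphone buffer to a coarse "Background Sound" feature.

    The root-mean-square (RMS) power summarizes thousands of raw amplitude
    samples in one number; comparing it to a (hypothetical) threshold yields
    a "silent"/"loud" value of the kind Wang's EEMSS uses.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return "loud" if rms > loud_threshold else "silent"

def motion_feature(accel_readings, move_threshold=1.5):
    """Map accelerometer magnitudes to a still/moving feature, in the spirit
    of Louch's "acceleration above a threshold value" example."""
    peak = max(abs(a) for a in accel_readings)
    return "moving" if peak > move_threshold else "still"
```

For example, a buffer of near-zero samples yields "silent," while a high-amplitude buffer yields "loud"; the downstream classifier then sees only these two abstract values rather than thousands of raw measurements.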
17. The next typical phase of activity recognition is mapping these features to possible activities being performed (often called classification). To accomplish this, each activity can be represented as a pattern of features that indicate the activity is underway (see APPLE-1005, Table 1). This is where machine learning typically comes into play, as in general it is hard to hand-define these patterns. The typical process for learning the patterns involves assembling a set of training examples that provide the sensor data, together with a label that indicates the activity that was performed. For example, Wang states "With accelerometer as the main sensing source, activity recognition is usually formulated as a classification problem where the training data is collected with experimenters wearing one or more accelerometer sensors … Different kinds of classifiers can be trained and compared in terms of accuracy of classification" (APPLE-1005, p2c2). As another example, Louch states "in some implementations, the mobile device 100 'learns' particular characteristics or patterns of the state of the device." APPLE-1011, 10:3-4.
18. Machine learning algorithms may define a model that can identify the patterns that statistically link the inputs to the activities. For instance, as a highly simplified example, if in most of the training examples the speed derived from a GPS signal while I am walking indicates that I am moving between 1 and 3 miles per hour, then in a new circumstance in which my speed is 2 mph, the algorithm will indicate that it is likely that I am walking. Of course, real examples are much more complex than this and often involve combinations of evidence from many different sensors.
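The simplified walking example above can be sketched as follows. This nearest-range model is a deliberate oversimplification of my own for illustration; the function names, training data, and decision rule are hypothetical and do not come from any cited reference.

```python
def learn_speed_ranges(training_examples):
    """Learn, per activity label, the range of GPS speeds observed for it.

    training_examples: list of (speed_mph, label) pairs, i.e., labeled
    training data of the kind described in paragraph 17.
    """
    ranges = {}
    for speed, label in training_examples:
        lo, hi = ranges.get(label, (speed, speed))
        ranges[label] = (min(lo, speed), max(hi, speed))
    return ranges

def classify_speed(ranges, speed):
    """Return the label whose learned range contains, or is nearest to, speed."""
    def distance(bounds):
        lo, hi = bounds
        return 0.0 if lo <= speed <= hi else min(abs(speed - lo), abs(speed - hi))
    return min(ranges, key=lambda label: distance(ranges[label]))

# Hypothetical labeled examples: walking speeds near 1-3 mph, driving much faster.
ranges = learn_speed_ranges([(1.2, "walking"), (2.8, "walking"),
                             (25.0, "driving"), (40.0, "driving")])
```

With these learned ranges, a new observation of 2 mph falls inside the walking range and is classified accordingly, mirroring the paragraph's example.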
19. Another component in a typical activity recognition algorithm captures knowledge of how different activities relate to each other. For instance, Wang describes "a sensor management scheme which defined user states and state transitions" (APPLE-1005, p2c1). As another example, Jangle describes how elemental motions are combined into activities that are combined into behaviors (APPLE-1012, Figure 3). As an intuitive example, a device might learn that the activity of walking to work is typically followed by the activity of buying a coffee. And when working, a typical activity is having a meeting. This transitional model of activities helps identify what is happening, especially when the evidence for the input signals is poor.
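A transition model of the kind described above can be sketched as follows. The activity names and simple counting scheme are my own hypothetical illustration, not code from Wang or Jangle.

```python
from collections import Counter, defaultdict

def learn_transitions(activity_log):
    """Count how often each activity follows another in an observed log,
    yielding a simple model of user-state transitions."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(activity_log, activity_log[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, current):
    """Predict the most frequently observed successor of the current activity."""
    return counts[current].most_common(1)[0][0]

# Hypothetical log echoing the paragraph's example: walking to work is
# typically followed by buying coffee, and working by having a meeting.
log = ["walking_to_work", "buying_coffee", "working", "meeting",
       "working", "walking_to_work", "buying_coffee", "working"]
model = learn_transitions(log)
```

When sensor evidence is weak, such a model lets the recognizer favor the expected next activity, e.g., predicting "buying_coffee" after "walking_to_work."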
20. To make this more concrete, I prepared Declaration-Figure 1 below to graphically show the typical components of activity recognition using examples from Louch.
21. Declaration-Figure 1: Typical Components of an Activity Recognition System (using Louch as an example)
22. In this case, we have a set of sensors (e.g., "one or more sensors (e.g., accelerometer, gyro, light sensor, proximity sensor) integrated into the mobile device 100." APPLE-1011, 2:20-22) that are processed to produce a set of feature values (called "sensor inputs" in Louch. APPLE-1011, 2:37, 67). These feature values are used as the input to a pattern matching process, which uses patterns associated with states, as well as transition information in an activity model or a state machine, to match the observations and draw some conclusion about what state the user is in.
23. Note that knowledge of what activities have been performed previously also serves as input to the pattern matching, identifying both what patterns are most relevant to match and what activities are expected next. For example, Louch discloses that "A state machine can track various combinations of inputs which can cause a state change to occur." APPLE-1011, 2:54-56. Figure 1 shows a possible state of the processing once the system has recognized that the phone is at rest and the pattern shown in the activity model indicates that a likely next state is that the phone is picked up. APPLE-1011, 3:1-11. Additionally, Nadkarni discloses that "by knowing that the previous human activity was walking, certain signatures can intelligently be eliminated from the possible matches of the present activity that occurs subsequent to the previous human activity (walking)." APPLE-1008, ¶0035. Also, Wang discloses that "sensor management is achieved by assigning new sensors based on previous sensor readings in order to detect state transition." APPLE-1005, p4c1.
24. I prepared the below Declaration-Figure 2 to illustrate an example architecture for a machine learning system. There are two main phases: in the training phase, data ("training data") that provides examples of the task to be performed is processed using a learning algorithm that ultimately produces a statistical model of the data ("the model"). After the model is trained, then new input can be processed and the model computes the most likely answer (i.e., the interpretation that statistically best matches the training data).
25. Declaration-Figure 2: An Example Architecture of Machine Learning Systems
26. Machine learning tasks are typically classified by characteristics of the data presented to the "training" of the model. While there are numerous references on machine learning in the literature prior to the Critical Date, broadly, there are three different methods common in the field: supervised learning, unsupervised learning, and a combination of the two. They are discussed below:
27. Supervised learning: The computer is presented with example inputs and their desired outputs based on a person's opinion, and the goal is to learn a general rule that maps inputs to the correct outputs. See "supervised learning … takes a set of examples with class labels, and returns a function that maps examples to class labels" (APPLE-1017, p18). See also "supervised methods build class models using labelled data" (APPLE-1019, p7). For instance, the learning system might be given some sensor data plus a label that identifies what was going on as the data was collected (e.g., the user is walking to work). As an example, Wang states "We collected accelerometer data in 53 different experiments … within each empirical interval, the person tags the ground truth of his/her activity information." APPLE-1005, p8c2. A learning algorithm then combines all the training examples tagged as, say, walking to work, and attempts to find characteristics that distinguish walking to work from all the other possible activities in the training set.
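As a minimal sketch of the supervised approach described above, the following classifies a new observation by the label of its nearest person-labeled training example (one-nearest-neighbor). The feature vectors, labels, and names are hypothetical illustrations of my own, not code from any cited reference.

```python
import math

def nearest_neighbor_classify(labeled_examples, query):
    """1-nearest-neighbor supervised classification: return the label of the
    training example whose feature vector is closest to the query.

    labeled_examples: list of (feature_vector, label) pairs, where each label
    was supplied by a person (the "ground truth" tagging Wang describes).
    """
    _, label = min(labeled_examples, key=lambda ex: math.dist(ex[0], query))
    return label

# Hypothetical features: (speed in mph, normalized step-frequency signal).
training = [((2.0, 0.8), "walking"), ((1.5, 0.9), "walking"),
            ((30.0, 0.1), "driving"), ((0.0, 0.0), "still")]
```

A new observation such as (1.8, 0.7) lies closest to a walking example and is therefore labeled "walking," i.e., the learned rule generalizes from the labeled set to new inputs.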
28. Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure from its sensor input. "unsupervised learning takes an unlabeled collection of data and, without intervention or additional knowledge, partitions it into sets of examples such that examples within clusters are more "similar" than examples between clusters" (APPLE-1017, p18). Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning). In general unsupervised learning requires considerably more data to produce effective results, and the patterns ultimately extracted might not be the ones that are relevant to one's application. As a result, purely unsupervised learning is rare in activity recognition work as supervised learning provides much better detection accuracy.
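Unsupervised partitioning of the kind quoted above can be sketched with a minimal k-means loop that groups unlabeled one-dimensional readings into clusters of mutually similar examples. This is my own simplified illustration (including the naïve choice of the first k points as initial centroids), not the method of Cohn or any other cited reference.

```python
def kmeans(points, k, iterations=20):
    """Minimal k-means clustering of 1-D data: no labels are given; the
    algorithm partitions the points so that examples within a cluster are
    more similar (closer) than examples between clusters."""
    centroids = list(points[:k])  # simplistic initialization: first k points
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its assigned points.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical unlabeled speed readings: two natural groups, roughly 1 and 26.
data = [1.1, 0.9, 1.3, 25.0, 27.0, 26.5]
centroids, clusters = kmeans(data, 2)
```

The algorithm discovers the two groups on its own, but nothing tells it that one cluster means "walking" and the other "driving"; that gap is why purely unsupervised learning is rare in activity recognition.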
29. There are variants that fall between supervised and unsupervised learning. A system might get partial labeling of some data to create an input model and then subsequently improve its models by training on additional unlabeled data (a method often called bootstrapping). Also note that models can be incrementally improved by allowing user feedback to identify incorrect classifications and/or identify the correct classification. For example, see "Our approach … assumes that the human user has in their mind the criteria that enable them to evaluate the quality of the clustering" and "The main goal … of semi-supervised clustering is to allow the human to 'steer' the clustering process" (APPLE-1017, Abstract, p18). As another example, Ruzzelli in 2010 describes a machine learning system that "uses a … feedback mechanism to improve the accuracy by allowing the user to notify the system of an incorrect guess" (APPLE-1018, p5c2-p6c1). In these cases, the system incrementally adds new labelled data to its training set to extend its learning.
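The user-feedback variant described above can be sketched as follows. The dictionary-based "model" is a stand-in for any trained classifier, and all names and data are my own hypothetical illustration rather than the Ruzzelli or Cohn implementations.

```python
def classify(model, features):
    """Look up the label for a feature tuple. The dict stands in for any
    trained classifier; unseen inputs are simply "unknown" here."""
    return model.get(features, "unknown")

def apply_user_feedback(model, training_set, features, correct_label):
    """Incremental learning from user feedback: when the user flags an
    incorrect guess, the corrected example is appended to the training set
    and the model is updated to reflect it."""
    training_set.append((features, correct_label))
    model[features] = correct_label
    return model

# Hypothetical starting point: one labeled situation.
model = {("still", "silent"): "sleeping"}
training_set = [(("still", "silent"), "sleeping")]

# The device guesses wrong on a new situation; the user supplies the correction,
# which becomes new labeled training data.
apply_user_feedback(model, training_set, ("moving", "loud"), "commuting")
```

After the correction, the system classifies the previously misrecognized situation correctly, and its training set has grown by one labeled example, which is the essence of the feedback loop described in the cited references.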
30. Using one or more of the above learning techniques, a model, namely, an instantiated computational system that can automatically classify new input, can be obtained. In other words, "Modeling" refers to building a model, often from labelled data, which results in a "Model" that is then used in "Classification" or "pattern matching," namely, the labeling or recognition of patterns in new data. There are many types of models, but a quite common one at the Critical Date was Neural Networks. As an example, the system described by Ruzzelli involves "the user … generating a database of … signatures" followed by "train[ing] an artificial neural network that is then employed to recognize … activities." (APPLE-1018, Abstract).
31. These techniques would all have been well known to a POSITA by the Critical Date. Furthermore, a POSITA would have known that, when using a machine learning approach to pattern recognition, these techniques can be combined to improve the performance of the learning systems.
Brief Overview of the '865 Patent
32. The '865 patent relates to the use of machine learning to identify and recognize situations based on sensor data available on mobile communication devices such as smartphones. APPLE-1001, 1:20-24. The intended application of such technology is to enable mobile devices to better anticipate and respond to a user's needs. The '865 Patent gives an example application of its purported invention:
a mobile device may ring louder in response to an incoming call if a learned situation indicates a noisy ambient environment, or may silence a ringer and route an incoming call to voice mail if a learned situation indicates that a user may not want to be disturbed, or may launch an application if a learned situation indicates a user's intent to use the application, or the like.

APPLE-1001, 8:61-66.
33. The '865 patent acknowledges that this is a "popular and growing market trend in sensor-enabled technology." APPLE-1001, 1:42-47. The '865 patent also acknowledges that this is not a new area and "continues to be an area of continuous development." APPLE-1001, 6:36-41.
34. Prior to the Critical Date, there existed numerous products, publications, and patents that implemented or described the functionality claimed in the '865 patent. Thus, the methodology of the '865 patent was well-known in the prior art. Further, to the extent there was any problem to be solved in the '865 patent, it had already been solved in the prior art systems before the Critical Date.
35. A challenge that the patent claims to address is that "continually tracking or monitoring all or most varying parameters … of sensor information may be computationally intensive, resource-consuming, at times intractable," especially for mobile devices. APPLE-1001, 7:58-63. The '865 patent suggests that the device only monitor a subset of the parameters/sensor streams if it can identify what information is relevant in the current context.
36. By doing so, the '865 Patent alleges that a "more tractable approach may facilitate or support machine learning … such that an appropriate action may be initiated by a mobile device in real time." APPLE-1001, 8:54-60. To make this more concrete, I prepared the below Declaration-Figure 3 that graphically shows the '865 patent in the light of typical components of activity recognition, using an example from the '865 patent. APPLE-1001, 7:22-36.
37. Declaration-Figure 3: An illustration of the '865 patent in terms of a generic activity recognition framework
38. In this case we have a set of sensors ("suite of sensors" APPLE-1001, 7:25-28) that are processed to produce a set of features ("a relatively large number of varying parameters or variables associated with a multi-dimensional sensor information stream" APPLE-1001, 7:45-48). This information is then used by pattern matching to match the observations and draw some conclusion about what action is being performed ("Such … an event-related pattern may be fixed, for example, by associating corresponding parameters or variables having a … pattern to represent the condition or event" APPLE-1001, 8:18-21).
39. The '865 Patent attempts to address the problem that "because of the increased dimensionality of the information stream … finding exact or approximate matches to a template … may be rather difficult" (APPLE-1001, 7:40-45) and "continually tracking … all or most varying parameters … may be computationally intensive … at times intractable" (APPLE-1001, 7:58-62). The '865 Patent proposes to consider "a subset ... of varying parameters … associated with a condition or event." APPLE-1001, 8:12-14. As we will discuss below, however, this technique was known in the prior art and is in fact expressly disclosed by Louch as well as some other references (e.g., Wang and Nadkarni).
40. FIG. 4 (reproduced below) is a representative process 400 of the '865 Patent and aligns with the claims (see comparison between claims and FIG. 4 of the '865 Patent below). The '865 Patent describes that the representative process 400 includes the following steps: At 402, one or more input signals (e.g., GPS, WiFi, microphone) associated with a mobile device are monitored. The '865 patent admits this is typical: "since typical pattern recognition approaches generally employ … algorithms that work with a fixed known number of information sources, pattern recognition with respect to a multi-dimensional information stream acquired … via a suite of sensors may present a number of challenges." APPLE-1001, 7:3-8.
41. At 404, at least one condition or event of interest is detected based on the monitored input signals. Note that the '865 Patent broadly defines the term "condition" to encompass almost any time, event, state, or action: "a condition or event of interest may include, for example, a time of day, day of week, state or action of a host application, action of a user operating a mobile device … or the like." APPLE-1001, 14:60-64. The step 404 of detecting such a time, event, state, or action based on at least one monitored input sensor signal would again be a routine, commonsensical technique, well within the knowledge of a POSITA at the time of the patent, and in fact found in virtually all activity recognition systems. For example, Wang discloses detecting a user action of walking or riding a vehicle based on monitored input sensor signals from a GPS sensor. See APPLE-1005, p5c1, p8c1. As another example, Nadkarni discloses detecting motion of an animate object based on monitored input sensor signals from sensors such as accelerometers. See APPLE-1008, Abstract. Cilla discloses "Human activity recognition from sensors" (APPLE-1019, Abstract). See also:
To improve the user experience in the use of a portable device, techniques are used for "context characterization, i.e., from a range of conditions possible to detect by the system, such as time (date/time), current location, motion, etc., as well as the historical use of the device,¹ a certain grouping of actions and settings, called "context" are selected automatically or manually, modifying and setting from that moment the way of user interacts with the device.

APPLE-1013, Abstract.
The movement/state recognition unit 108 is means for detecting a movement/state pattern by using the sensor data. Accordingly, when the sensor data is input from the motion sensor 102, the movement/state recognition unit 108 detects a behaviour/state pattern based on the input sensor data. A movement/state pattern that can be detected by the movement/state recognition unit 108 is "walking," "running," "still," "jumping," "train (aboard/not aboard)" and "elevator (aboard/not aboard/ascending/descending)," for example.

APPLE-1014, ¶0102.
42. At 406, a first pattern is identified based on these detected conditions or events. Again, this is a commonsense part of any pattern recognition system, well within the knowledge of a POSITA. In fact, the very goal of pattern recognition systems is to identify patterns. The '865 patent admits as much: "Typically, … one or more patterns to be identified may, for example, be represented via one or more vectors of observations in multiple dimensions." APPLE-1001, 6:46-48.

¹ Bold represents emphasis added by me in each citation, unless otherwise specified.
43. At 408, "one or more varying parameters or variables are fixed in some manner, such as in a suitable subset having one or more signal sample values." See APPLE-1001, 15:7-8. Furthermore, this subset is associated with the pattern matched in 406. This was also a common part of pattern recognition systems, as varying parameters are typically assigned values as the result of signal processing or classification, and not all parameters are fixed at any one time. For example, Wang describes a Motion parameter with possible values including "Moving Slowly" and "Moving Fast" and describes how this parameter is fixed from the sensor data ("The classification module is the consumer of raw data … The classification module returns user activity and position features such as 'moving fast.'" APPLE-1005, p6c1). Furthermore, Wang states that the pattern for Walking State (i.e., a possible first pattern) is associated with only a subset of the varying parameters. Namely, it is associated with the Location and Motion parameters, and the Background Sound parameter is not applicable (Wang, Table 1). As another example, Louch describes fixing parameters associated with a pattern by learning in disclosing that "the device 100 can 'learn' by recording a detected state of the device 10, e.g., a trajectory of a motion, or a signature of proximity." APPLE-1011, 10:8-10.
44. At 410, a process is initiated to attempt to recognize a second pattern by monitoring these input signals based, at least in part, on the first pattern matched in 406. See APPLE-1001, 15:18-20. This step had also been known and implemented, for example, at least in prior-art systems that were capable of detecting a state transition or a sequence of motions, such as Wang and Jangle. See, e.g., APPLE-1005, Abstract (see APPLE-1016); APPLE-1012, Abstract. Detecting a state transition from a first state to a second state necessitates recognizing a second pattern corresponding to the second state based on the first pattern corresponding to the first state. See, e.g., APPLE-1005, Abstract. Wang discloses "If the user is at 'State2' and 'Sensor2' returns 'Sensor reading 2' … 'Sensor3' will be turned on immediately to further detect the user
