DECLARATION OF SCOTT ANDREWS

I, Scott Andrews, declare as follows:

1. I hold a B.Sc. degree in Electrical Engineering from the University of California–Irvine and an M.Sc. degree in Electronic Engineering from Stanford University. In various positions at, among others, TRW and Toyota, I have been responsible for research and development projects relating to, among other things, numerous vehicle navigation systems, information systems, and user interface systems. My qualifications are further set forth in my curriculum vitae (Exhibit A). I have been retained by Volkswagen Group of America, Inc. in connection with its petition for inter partes review of U.S. Patent No. 8,719,038 (the “’038 patent”). I have over 35 years of experience in fields relevant to the ’038 patent, including telecommunications systems and navigation systems.
2. I have reviewed the ’038 patent, as well as its prosecution history and the prior art cited during its prosecution. I have also reviewed U.S. Patent No. 6,249,740 (“Ito”); the Richard Lind et al. publication, The Network Vehicle – A Glimpse into the Future of Mobile Multi-Media, 17th DASC, The AIAA/IEEE/SAE Digital Avionics Systems Conference – Bellevue, WA – Oct. 31–Nov. 7, 1998 – Proceedings (“Lind”); U.S. Patent No. 6,230,123 (“Class”); European Patent Application Publication No. 0 829 704 (“Fujiwara”); U.S. Patent No. 6,064,323 (“Ishii”); U.S. Patent No. 6,157,705 (“Perrone”); U.S. Patent No. 6,201,544 (“Ezaki”); U.S. Patent No. 5,283,559 (“Kalendra”); U.S. Patent No. 5,274,560 (“LaRue”); and “Plaintiff and Counter-Defendant West View Research, LLC’s Revised Disclosure of Asserted Claims and Infringement Contentions, Pursuant to Patent L.R. 3.1 and the June 10, 2015 Court Order” (“Infringement Contentions”), dated June 26, 2015.

VWGoA - Ex. 1002
Volkswagen Group of America, Inc., Petitioner
The ’038 Patent

3. The ’038 patent describes an information system for use in an elevator, although the ’038 patent states that the disclosed systems and methods may also be useful in other similar types of personnel transport devices (i.e., devices that transport large numbers of people and equipment between two locations on a routine basis) such as trams, shuttles, and moving walkways. ’038 patent, col. 6, lines 63 to 66, col. 2, lines 30 to 35. A touch screen display 113 generates a variety of different messages or display formats based on the user’s input and query (for example, a building directory). ’038 patent, col. 8, lines 43 to 45 and col. 9, line 35 to col. 11, line 34. The user can speak the specific name of the party they wish to find, and the digitized speech is compared to the contents of a directory file to find any matches. ’038 patent, col. 10, lines 7 to 16. Any matching fields within the entries of the directory file are provided to the user, either audibly via a speech synthesis module 112 and speaker 111, or visually via the display 113. ’038 patent, col. 10, lines 17 to 19. The user can also add defining information to the initial query statement to form a Boolean search statement. ’038 patent, col. 10, lines 47 to 50. A location graphic file is displayed on the display device 113 as a floor map graphic 502 illustrating the location of the selected person or firm. ’038 patent, col. 11, lines 15 to 17.
The Disclosures of Lind, Ito, and Class – Claims 1, 4, 16, 22, 54, and 66

4. Lind, Ito, and Class disclose a “[c]omputer readable apparatus configured to aid a user in locating an organization or entity” that comprises “a storage medium having a computer program configured to run on a processor” (claim 1), a “[c]omputerized information apparatus configured to aid a user in locating an organization or entity” (claim 22), and a “[s]mart computerized apparatus capable of interactive information exchange with a human user” (claims 54 and 66). Lind discloses a vehicle system that includes a “network computer” that is part of the “on-board system.” Lind, page I21-2. Additionally, Lind discloses “three displays for the driver,” one of which is a “touch-screen LCD” located on the vehicle’s center console that “serves as a user interface.” Id. at page I21-3. The system includes a speech recognition and text-to-speech system that “allows the driver to access virtually all the vehicle’s features through voice commands and enables the vehicle to talk back using synthesized speech;” for example, the user can request “travel directions” and also use the system to “locate a restaurant or hotel.” Lind, pages I21-2, I21-3. The touch-screen LCD in Lind can display navigation maps, as shown in Figure 9. Id., I21-3, I21-7. Lind also discloses that the user can “request travel directions” by voice command, and that the system can “display the appropriate maps or simply provide route directions.” Lind, pages I21-3, I21-7. Ito discloses an apparatus that includes both a processing section and a display. Ito, col. 9, lines 51 to 67. The display in Ito is a touch panel display, which is located in the vehicle. Id. at col. 10, lines 39 to 45. Ito also discloses a “processing section 101” with “a CPU as its main component” and a “program storage section 102” that “serves as a memory for storing programs which will be executed by a processing section.” Ito, col. 9, lines 51 to 67. Ito discloses that the user operates an input section 105 to “input information about the destination, such as the facility name.” Id. at col. 15, lines 50 to 54. And Ito describes, for example, that input is received via “a touch panel provided on the display 106,” and further discloses displaying maps on a display device, including a recommended route to the selected destination on an output display. Ito, col. 10, lines 39 to 50, col. 16, lines 24 to 27. As both Ito and Lind disclose systems that include processors, they disclose “smart computerized apparatuses.” Class discloses “a method and apparatus for real-time speech input of a destination address into a navigation system.” Class, col. 1, lines 11 to 13.
5. Lind, Ito, and Class disclose a computer program configured to “obtain a representation of a first speech input from the user, the first speech input relating to a name of a desired organization or entity” as claimed in claim 1 of the ’038 patent. Lind describes “advanced speech recognition software,” which allows the user to “locate a restaurant or hotel” and with which “the driver can: execute vehicle system commands such as lock doors, play CD, and change radio station, request travel directions and traffic updates from the Web or other sources, check e-mail and voicemail, request news, sports, and stock information.” Lind, pages I21-2 and I21-3. The speech recognition system, which obtains a representation of a first speech input from the user, is adapted to “receive voice commands” and to “understand most drivers instantly.” Lind, page I21-3. Ito discloses that among the inputs the user may enter into the system are “information about the destination, such as the facility name, telephone number and address thereof, and a route search request.” Ito, col. 15, lines 47 to 58. This input of facility name information is an input “relating to a name of a desired organization or entity.” Class discloses “input dialogues for speech input of a destination address for a navigation system.” Class, col. 6, lines 30 to 32. Class discloses that its system facilitates “more rapid speech entry of a desired destination address,” Class, col. 4, lines 9 to 10, and that the speech input of the destination location may be made by street name, place name, etc. As West View has acknowledged (and I agree), “all speech recognition systems inherently digitize the speaker’s analog voice.” Infringement Contentions, at 729.
6. Lind, Ito, and Class disclose a computer program configured to “cause use of at least a speech recognition algorithm to process the representation to identify at least one word or phrase therein” as claimed in claim 1 of the ’038 patent. Lind discloses “advanced speech recognition software,” which receives “voice commands” and “understand[s] most drivers instantly.” Lind, page I21-3. Additionally, Ito discloses that the user can input, using voice, “information about the destination, such as facility name, telephone number and address thereof.” Ito, col. 15, lines 50 to 54. Class discloses “input dialogues for speech input of a destination address for a navigation system.” Class, col. 6, lines 30 to 32. The “advanced speech recognition software” in Lind, the data input device using voice recognition in Ito, and the “speech input” in Class are “speech recognition algorithms” as claimed in claim 1 of the ’038 patent. Additionally, when a user inputs information about the destination, such as the facility name described in Ito, the name of a restaurant or hotel described in Lind, or the place name described in Class, the system processes the voice inputs and identifies at least one word or phrase therein. Further, and in addition to the disclosures of Lind, Ito, and Class, it was obvious to use voice-recognition software using predetermined voice recognition algorithms, such as the Hidden Markov Model, to process a representation of speech in order to identify a spoken word or phrase, as of the earliest priority date claimed by the ’038 patent. See, e.g., Ishii, col. 3, lines 11 to 18.
7. Class discloses a computer program configured to “use at least the identified at least one word or phrase to identify a plurality of possible matches for the name” as claimed in claim 1 of the ’038 patent. Class discloses a disambiguation method in which the speech recognition engine identifies an ambiguity list that contains entries of place names that could match the input speech, sorted by probability. Class, col. 8, line 16 to col. 9, line 11. This list contains a plurality of possible matches for the input speech. Id. at col. 9, lines 6 to 11. Class describes a mechanism for resolving ambiguities that exist when using voice recognition software in the instance of, for example, homophonic names. Id. at col. 8, lines 18 to 23. Thus, this ambiguity list is used to “identify a plurality of possible matches for the name.”

8. Class and Ito disclose a computer program configured to “cause the user to be prompted to enter a subsequent input in order to aid in identification of one of the plurality of possible matches which best correlates to the desired organization or entity” as claimed in claim 1 of the ’038 patent. Class describes methods for resolving the ambiguity of the multiple potential matching destinations by requesting additional user input, either by asking whether a particular location is the desired destination (and expecting a “yes” or “no” answer) or by requesting the user select a destination from the list of potential matching destinations. Class, col. 10, line 39 to col. 11, line 8. Ito also discloses a method for resolving ambiguities based on user inputs. If a user enters only the first several digits of a telephone area code as the information for the navigation destination, several facilities may match those digits. Ito, col. 16, lines 5 to 19. A list of matching facilities is “displayed at the vehicle” such that “the user views such facilities to decide whether or not the destination is included in the searched facilities, and then selects the appropriate destination from the plurality of searched facilities.” Id. at col. 16, lines 5 to 19. The display of multiple facilities prompts the user to respond by selecting one. Id. at col. 16, lines 11 to 17. These multiple inputs serve to clarify the first speech input.
9. Class and Ito disclose a computer program configured to “receive data relating to the subsequent user input” as claimed in claim 1 of the ’038 patent. Class discloses that after requesting subsequent input to resolve the ambiguity of the multiple potential matches, the system receives the requested input. Class, col. 8, lines 31 to 32, col. 9, lines 26 to 31, col. 10, line 39 to col. 11, line 8, col. 11, lines 34 to 43. Similarly, Ito discloses a method in which the user selects a desired destination from the list of multiple matching facilities that is displayed during the disambiguation step. Ito, col. 16, lines 5 to 19. These inputs disclosed in Class and Ito include the receipt of data relating to the subsequent user input.
10. Class discloses a computer program configured to “based at least in part on the data, determine which of the plurality of possible matches is the one that best correlates” as claimed in claim 1 of the ’038 patent. Class discloses that the user can select a location that is identified from an ambiguity list, and that the selected location is determined as the destination location that best correlates to the desired location. Class, col. 17, lines 8 to 49; col. 18, lines 57 to 62. Class discloses examples of dialogs for determining which of the plurality of possible matches is the one that best correlates at, for example, col. 16, line 57 to col. 18, line 65 and col. 21, line 20 to col. 23, line 37.
11. Class discloses a computer program configured to “determine a location associated with one of the possible matches that best correlates” as claimed in claim 1 of the ’038 patent. Class describes a disambiguation method to determine a destination location, such as a city. Class, col. 9, lines 29 to 31, col. 10, lines 34 to 37, col. 11, lines 17 to 21, col. 18, lines 23 to 64. For example, after the user selects a destination from the ambiguity list, Class describes determining an address associated with the selected destination location. Class, col. 18, lines 56 to 64. In addition, after arriving at the destination (such as a city) that best correlates through its disambiguation method, the system in Class determines, either through user interrogation or by default in case no street list is available for the destination, “a street or a special destination, for example the railroad station, airport, downtown, etc.,” “since only a complete destination address can be transferred to the navigation system.” Class, col. 7, lines 11 to 34.
12. Lind and Ito disclose a computer program configured to “select and cause presentation of a visual representation of the location, as well as at least an immediate surroundings thereof, on a display viewable by the user” as claimed in claim 1 of the ’038 patent. Lind describes several display screens, including a center console that can display navigation maps viewable by the user. Lind, page I21-3, Fig. 9. Ito discloses that a visual representation of a location can be displayed, including the area surrounding the destination location, shown, for example, in Figures 9(A), 9(B), 40(C), and 44. Ito, col. 16, lines 24 to 27. In addition, Ito discloses an exemplary map of the area surrounding a departure point in Figure 9(B). Ito, col. 17, lines 4 to 19. Destination points and departure points are generally treated in the same manner (Ito, col. 14, lines 19 to 38) and, thus, to display the immediate surroundings of the destination on the display in a manner such as disclosed in Figure 9(B) would have been obvious.
13. Ito discloses “the visual representation further comprising visual representations of one or more other organizations or entities proximate to the location” as claimed in claim 1 of the ’038 patent. Ito discloses that area guidance can be used to display “guidance information on the presence or absence of parking and various facilities in the area around the destination.” Ito, col. 14, lines 19 to 38. Figures 40(C) and 44 show, for example, a department store, a fire station, and a bank relative to each other. Parking and various facilities in the area around the destination are “organizations or entities proximate” to the destination location.
14. Class, Lind, and Ito disclose that “the prompt for the subsequent user input comprises a display of a listing of the plurality of possible matches on a touch-screen input and display device, such that the user can select one of the plurality of possible matches via a touch of the appropriate region of the touch-screen device” as claimed in claim 4 of the ’038 patent. Class describes that, in certain situations, the number of potential matches may be reduced by requesting additional user input by way of a list of the remaining matches that is either read out or displayed for the user’s selection. Class, col. 10, lines 57 to 59, col. 9, line 50 to col. 10, line 11, col. 17, line 62 to col. 18, line 21. Lind describes a “center console’s touch-screen LCD” which “serves as a user interface for controlling nearly all of the Network Vehicle’s multimedia functions” including “navigation.” Lind, page I21-3. Ito similarly describes that input is received via “a touch panel provided on the display 106” with which a “user can use a finger or the like to touch an icon or the like displayed on the screen of the display 106.” Ito, col. 10, lines 39 to 50. Ito further describes that if there is a plurality of potentially matching facilities, a list of matching facilities is “displayed at the vehicle” such that “the user views such facilities to decide whether or not the destination is included in the searched facilities, and then selects the appropriate destination from the plurality of searched facilities.” Ito, col. 16, lines 5 to 19. Thus, a user touches the “appropriate area” of the display in order to select the appropriate destination.
15. Lind, Ito, and Class disclose “the causation of use of at least a speech recognition algorithm, the use of at least the identified at least one word or phrase, the causation of the user to be prompted to enter a subsequent input, the receipt of the data relating to the subsequent user input, the determination of which of the plurality of possible matches is the one that best correlates, the determination of the location, and the selection of the visual representation, are each performed by at least one networked server in wireless communication with a client device, the client device and the at least one server forming a client-server relationship” as claimed in claim 16 of the ’038 patent. Lind describes that advanced features of the network vehicle, including the navigation functionality, are enabled by “a client-server network architecture.” Lind, Abstract. Lind also describes a wireless connection between the Network Vehicle and the Internet. Lind, page I21-2 (“A wireless modem provides the uplink out of the vehicle directly to Internet service providers. The downlink return path from the Internet to the Network Vehicle can come through either the satellite … or through the wireless modem.”). Ito describes that each of the claimed functions is performed by at least one networked server in wireless communication with the client device, where the client device and the server form a client-server relationship. For example, as shown in Figure 1 of Ito, the base apparatus communicates wirelessly with the vehicle navigation apparatus. Ito, col. 8, lines 58 to 62. These two devices form a client-server relationship, where the vehicle navigation apparatus is the client that retrieves information from the base apparatus, which acts as a server. Ito, col. 10, line 58 to col. 15, line 38. For example, the base apparatus “carries out a route search using data stored in a data base;” in this way, Ito explains, “there is no need for the vehicle navigation apparatus 100 to store map data or other data,” which “makes it possible to simplify the structure of the vehicle navigation apparatus 100.” Ito, col. 8, lines 18 to 20, col. 8, lines 36 to 41, col. 11, lines 31 to 36. The base apparatus further determines “the departure point and destination required for a route search.” Ito, col. 10, lines 65 to 67. Ito describes that “in establishing the destination, the position data of the facility corresponding to the telephone number or the address transmitted from the vehicle navigation apparatus 100 is extracted from the data base 153.” Ito, col. 11, lines 21 to 24. The base apparatus further extracts “area guidance data” for “the surrounding area A3 around the destination PA,” and transmits those guidance data to the vehicle navigation apparatus 100; the guidance data includes map data, “data of landmarks,” and “data for landscape images.” Ito, col. 14, lines 19 to 23, col. 14, lines 49 to 54, col. 15, lines 15 to 19. Class discloses “a remote database at a central location that can be accessed by corresponding communications devices such as a mobile radio network.” Class, col. 3, lines 58 to 60. Performing one or more of the recited functions by a server or by a client is no more than a simple design choice and is obvious in view of, for example, Perrone, at col. 15, lines 37 to 45.
16. Lind and Ito disclose “the at least one server disposed geographically remote to the client device” as claimed in claim 16 of the ’038 patent. Lind discloses that the Network Vehicle can connect to Internet service providers via a wireless modem and via a satellite receiver. Lind, page I21-2. Ito discloses that the base apparatus is “arranged at a base” and the vehicle navigation apparatus is “mounted in a vehicle as a movable body.” Ito, col. 8, lines 13 to 16.
17. Lind and Class disclose “a microphone” as claimed in claims 22, 54, and 66 of the ’038 patent. Lind discusses a microphone as one of the devices controlled by the command and control application. Lind, page I21-6. Class also describes a microphone 5 by which users may enter speech commands. Class, col. 16, lines 36 to 54.
18. Lind and Ito disclose “a capacitive touch-screen input and display device” as claimed in claims 22, 54, and 66 of the ’038 patent. Lind discloses “three displays for the driver,” one of which is a “touch-screen LCD” located on the vehicle’s center console. Lind, page I21-3. The display in Ito is a touch panel display, as it discloses that a user can use a finger to touch an icon displayed on the screen of the display 106. Ito, col. 10, lines 39 to 45. As described, for example, by Kalendra, capacitive touch-screen LCD devices are among obvious variants of the touch-screen LCD device described by Lind and the LCD touch panel display described by Ito.
19. Lind, Ito, and Class disclose “a processor in data communication with the display device” (claim 22) as well as “one or more processors” (claims 54 and 66). Lind describes a vehicle having microprocessors, and specifically a main processor running a “command and control application” that controls vehicle software and controls on-board devices. Lind, page I21-6. As shown in Figure 2 of Lind, the network computer is in communication with the center console display. Ito also discloses a processing section that includes “a CPU as its main component.” Ito, col. 9, lines 52 to 67. As shown in Figure 1 of Ito, the processing section 101 is in data communication with the display 106. Additionally, Ito discloses that the processing unit executes programs, “such as a program for displaying routes on the display 106” (Ito, col. 9, lines 61 to 67), and that the navigation base apparatus also has “a processing unit including a CPU” (Ito, col. 8, lines 66 to 67) that is, indirectly, in wireless data communication with the display as shown in Fig. 1. Class discloses a “dialogue and process control 8” by which “data can be exchanged between the individual components of the device over corresponding connections 12 that can also be made in the form of a data bus.” Class, col. 16, lines 42 to 54.
20. Lind, Ito, and Class disclose a “speech digitization apparatus in signal communication with the microphone” as claimed in claim 22. Class describes that “[s]peech dialogue system 1 comprises a speaker recognition device 7 for recognizing and classifying speech statements entered by a user using a microphone 5.” Class, col. 16, lines 41 to 44. Lind and Ito disclose voice recognition systems, and Lind discloses a “microphone.” For example, Ito describes an input device that “us[es] voice recognition,” Ito, col. 10, lines 39 to 47, and Lind discloses a speech recognition system that “allows the driver to access virtually all the vehicle’s features through voice commands,” Lind, page I21-3. Lind discloses that a “command and control application, … running on the vehicle’s main processor, … controls devices such as … microphone, … and controls vehicle software, such as the voice recognition … applications.” Lind, page I21-6. As demonstrated, for example, by LaRue, at col. 5, line 17, voice recognition systems rely on microphones for obtaining voice or speech input. It is obvious from these disclosures that a microphone can be used to input a user’s voice into an automotive voice recognition system, such as the navigation system disclosed in Ito and the ViaVoice speech recognition system disclosed in Lind. As discussed above, West View has also acknowledged (and I agree) that “all speech recognition systems inherently digitize the speaker’s analog voice.” Infringement Contentions, at 729.
21. Lind, Ito, and Class disclose “at least one audio speaker” (claim 22) and “speech synthesis apparatus and at least one speaker in signal communication therewith” (claims 22, 54, and 66) as claimed in the ’038 patent. Lind describes that the ViaVoice application “enables the vehicle to talk back using synthesized speech.” Lind, page I21-3. To hear the synthesized speech, an audio speaker is used. Additionally, as shown in Fig. 2, the on-board system of Lind includes multiple amplifiers/speakers. Lind, Fig. 2. Class describes “a speech output device 10 that can deliver speech statements to a user by means of a loudspeaker 6.” Class, col. 16, lines 44 to 46, Fig. 10. Ito discloses “a program for outputting a route guidance voice via the voice output section 107.” Ito, col. 9, lines 65 to 67, col. 17, lines 5 to 6. These speech synthesis apparatuses are in signal communication with an audio speaker. Furthermore, it was obvious at the time the ’038 patent was filed that, when using software to output speech, the data representing the speech is synthesized in order to be processed. See, e.g., Perrone, col. 16, lines 61 to 66.
22. Lind and Ito disclose “a storage medium comprising at least one computer program configured to run on at least the processor” as claimed in claim 22 of the ’038 patent. Lind describes a main processor running a “command and control application” that controls vehicle software and controls on-board devices and off-board communications. Lind, page I21-6. Ito describes that its “vehicle navigation apparatus” includes a “program storage section” that “serves as a memory for storing programs which will be executed by the processing section” (Ito, col. 9, lines 51 to 67), and that its “navigation base apparatus” includes a “system control section” which includes “a CPU and memories,” whereby “[t]he memories store the various programs which are to be carried out in the navigation base apparatus” (Ito, col. 8, line 66 to col. 9, line 5).
23. Lind and Ito disclose “the visual representation further comprising visual representations of one or more organizations or entities proximate to the location, and directions to the location” as claimed in claim 22 of the ’038 patent. Lind discloses that the Network Vehicle can “display the appropriate maps or simply provide route directions on the head-up display.” Lind, page I21-7. Lind thus discloses the display of “directions to a location.” Ito discloses displaying maps on a display, including a recommended route to the selected destination. Ito, col. 16, lines 24 to 27. Ito discloses exemplary maps that include a representation of one or more organizations or entities proximate to the location of the destination (PA), for example, in Figures 9(A), 40(C), and 44.
24. Ito, Lind, and Class disclose an “input apparatus configured to cause the computerized apparatus to enter a mode whereby a user can speak a name of an entity into a microphone in signal communication with the computerized apparatus, the entity being an entity to which the user wishes to navigate” as claimed in claims 54 and 66 of the ’038 patent. Lind discloses that the user can use voice recognition technology to “verbally ... locate a restaurant or hotel.” Lind, pages I21-2, I21-3, I21-6. As discussed above, Lind and Class disclose microphones in signal communication with computerized apparatuses. Ito describes that the user can “use his/her voice to input corresponding data and commands” to “a data input device using voice recognition.” Ito, col. 10, lines 39 to 47. The input device in Ito is used “to input information about the destination, such as the facility name.” Ito, col. 15, lines 50 to 54. Additionally, Class discloses that a user may activate a “push-to-talk button” that causes the system to enter a mode where the system waits for an “admissible speech command.” Class, col. 6, lines 30 to 47. Class further describes that “the user enters the desired destination location by speech input” (Class, col. 8, lines 7 to 8) and that the device for performing the described methods includes a “[s]peech dialogue system 1 [that] comprises a speaker recognition device 7 for recognizing and classifying speech statements entered by a user using a microphone 5.” Class, col. 16, lines 41 to 44.
25. Lind and Class disclose “at least one computer program operative to run on the one or more processors and configured to engage the user in an interactive audible exchange” as claimed in claims 54 and 66 of the ’038 patent. Lind discloses that a main processor running a “command and control application” controls on-board devices, and that using the on-board system, a user can provide input via voice recognition. Lind, pages I21-2, I21-6. The Network Vehicle described in Lind can also “talk back using synthesized speech.” Lind, page I21-2. The exchange of voice commands and synthesized speech is an “interactive audible exchange.” Class also describes that a “speech dialogue system” and “processing control 8” are used to “deliver speech statements to a user by means of a loudspeaker 6.” Class, col. 16, lines 41 to 48.
26. Class discloses “causation of generation of an audible communication to the user via the speech synthesis apparatus in order to at least inform the user of the identification of the plurality of matches” as claimed in claim 54 of the ’038 patent. For example, in the disambiguation system described in Class and discussed above, the dialogue may include the step of informing the user “about the number of entries in the list and [the user] is asked in step 1445 whether or not the list should be read out.” Class, col. 9, lines 53 to 56. If the user answers “yes,” “the list is also read out by speech output.” Id., col. 9, lines 63 to 64.
27. Class discloses the “receipt of a subsequent speech input, the subsequent speech input comprising at least one additional piece of information” (claim 54) and “receipt of a subsequent speech input, the subsequent speech input comprising at least one additional piece of information useful in identification of the entity” (claim 66) as claimed in the ’038 patent. Class discloses that after requesting subsequent input to resolve the ambiguity of multiple potential matches, the system receives the user’s subsequent speech input, such as confirmation of a location (Class, col. 8, lines 31 to 32, col. 9, lines 26 to 31, col. 10, lines 34 to 38, col. 10, lines 50 to 51), selection of a list entry (id., col. 10, lines 7 to 10, col. 10, lines 26 to 29), or input of additional details (id., col. 11, lines 34 to 43). These “additional pieces of information” are “useful in identification of the entity.”