UNITED STATES PATENT AND TRADEMARK OFFICE

UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS
P.O. Box 1450
Alexandria, Virginia 22313-1450
www.uspto.gov
APPLICATION NO.: 16/155,523
FILING DATE: 10/09/2018
FIRST NAMED INVENTOR: Alexander Kurganov
ATTORNEY DOCKET NO.: 10115-07594 US
CONFIRMATION NO.: 9544

Patent Law Works, LLP
310 East 4500 South, Suite 400
Salt Lake City, UT 84107

EXAMINER: CHAWAN, VIJAY B
ART UNIT: 2658
NOTIFICATION DATE: 01/22/2020
DELIVERY MODE: ELECTRONIC
Please find below and/or attached an Office communication concerning this application or proceeding.

The time period for reply, if any, is set in the attached communication.

Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es):

docketing@patentlawworks.net
patents@patentlawworks.net

PTOL-90A (Rev. 04/07)
Google Exhibit 1031
Google v. Parus

Google Ex 1031 - Page 1
Office Action Summary

Application No.: 16/155,523
Applicant(s): Kurganov, Alexander
Examiner: VIJAY B CHAWAN
Art Unit: 2658
AIA (FITF) Status: No
- The MAILING DATE of this communication appears on the cover sheet with the correspondence address -

Period for Reply

A SHORTENED STATUTORY PERIOD FOR REPLY IS SET TO EXPIRE 3 MONTHS FROM THE MAILING DATE OF THIS COMMUNICATION.
- Extensions of time may be available under the provisions of 37 CFR 1.136(a). In no event, however, may a reply be timely filed after SIX (6) MONTHS from the mailing date of this communication.
- If NO period for reply is specified above, the maximum statutory period will apply and will expire SIX (6) MONTHS from the mailing date of this communication.
- Failure to reply within the set or extended period for reply will, by statute, cause the application to become ABANDONED (35 U.S.C. § 133).
- Any reply received by the Office later than three months after the mailing date of this communication, even if timely filed, may reduce any earned patent term adjustment. See 37 CFR 1.704(b).
Status

1) [ ] Responsive to communication(s) filed on ____.
   [ ] A declaration(s)/affidavit(s) under 37 CFR 1.130(b) was/were filed on ____.
2a) [ ] This action is FINAL.    2b) [X] This action is non-final.
3) [ ] An election was made by the applicant in response to a restriction requirement set forth during the interview on ____; the restriction requirement and election have been incorporated into this action.
4) [ ] Since this application is in condition for allowance except for formal matters, prosecution as to the merits is closed in accordance with the practice under Ex parte Quayle, 1935 C.D. 11, 453 O.G. 213.

Disposition of Claims*

5) [X] Claim(s) 1-20 is/are pending in the application.
   5a) Of the above claim(s) ____ is/are withdrawn from consideration.
6) [ ] Claim(s) ____ is/are allowed.
7) [X] Claim(s) 1-20 is/are rejected.
8) [ ] Claim(s) ____ is/are objected to.
9) [ ] Claim(s) ____ are subject to restriction and/or election requirement.

* If any claims have been determined allowable, you may be eligible to benefit from the Patent Prosecution Highway program at a participating intellectual property office for the corresponding application. For more information, please see http://www.uspto.gov/patents/init_events/pph/index.jsp or send an inquiry to PPHfeedback@uspto.gov.
Application Papers

10) [ ] The specification is objected to by the Examiner.
11) [ ] The drawing(s) filed on ____ is/are: a) [ ] accepted or b) [ ] objected to by the Examiner.
    Applicant may not request that any objection to the drawing(s) be held in abeyance. See 37 CFR 1.85(a).
    Replacement drawing sheet(s) including the correction is required if the drawing(s) is objected to. See 37 CFR 1.121(d).
Priority under 35 U.S.C. § 119

12) [ ] Acknowledgment is made of a claim for foreign priority under 35 U.S.C. § 119(a)-(d) or (f).
    Certified copies:
    a) [ ] All    b) [ ] Some**    c) [ ] None of the:
    1. [ ] Certified copies of the priority documents have been received.
    2. [ ] Certified copies of the priority documents have been received in Application No. ____.
    3. [ ] Copies of the certified copies of the priority documents have been received in this National Stage application from the International Bureau (PCT Rule 17.2(a)).
** See the attached detailed Office action for a list of the certified copies not received.
Attachment(s)

1) [ ] Notice of References Cited (PTO-892)
2) [ ] Information Disclosure Statement(s) (PTO/SB/08a and/or PTO/SB/08b), Paper No(s)/Mail Date ____
3) [ ] Interview Summary (PTO-413), Paper No(s)/Mail Date ____
4) [ ] Other: ____

U.S. Patent and Trademark Office
PTOL-326 (Rev. 11-13)    Office Action Summary    Part of Paper No./Mail Date 20200108
Application/Control Number: 16/155,523
Art Unit: 2658
Page 2

DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application is being examined under the pre-AIA first to invent provisions.
Double Patenting

2. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
3. Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-28 of U.S. Patent No. 10,096,320. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are similar in scope and content to the patented claims issued to the same applicant.
The conflicting claims are compared below.

Application No: 16/155,523 vs. Patent No: 10,096,320

Application claim 1. A system comprising: (a) at least one data processor, the at least one data processor operatively coupled to a plurality of communication data networks; (b) at least one speaker-independent speech-recognition engine operatively coupled to the data processor; (c) memory accessible to the at least one data processor and storing at least: (i) an instruction set for querying of information to be retrieved from a plurality of sources coupled to the plurality of communication data networks, the instruction set comprising: an indication of the plurality of sources, each identified by a source identifier, and each identifying certain information to be retrieved from the source identifier, and (ii) at least one recognition grammar executable code corresponding to each instruction set and corresponding to data characterizing audio containing a naturally-spoken-speech command including an information request, (d) wherein the at least one speaker-independent-speech-recognition engine is adapted to: (1) receive the data characterizing audio containing the naturally-spoken-speech command from a voice-enabled device via a first of the plurality of communication data networks; (2) to recognize phonemes in the data characterizing audio containing naturally-spoken-speech commands to understand spoken words; and (3) to generate recognition results data, (e) wherein the at least one data processor is adapted to: (1) select the corresponding at least one recognition grammar executable code upon receiving the data characterizing the audio containing the naturally-spoken-speech command and to convert the data characterizing the audio containing the naturally-spoken-speech command into a data message for transmission to a network interface adapted to access a second of the plurality of communication data networks; and (2) retrieve the instruction set corresponding to the recognition grammar executable codes provided by the at least one speaker-independent-speech-recognition engine and to access the information source queried by the instruction set to obtain at least a part of the information to be retrieved, and (f) at least one speech-synthesis device operatively coupled to the at least one data processor, the at least one speech-synthesis device configured to produce an audio message relating to any resulting information retrieved from the plurality of information sources including a text-to-speech conversion of at least certain data in said any resulting information retrieved from the plurality of information sources, and to convey the audio message via the voice-enabled device.

Patent claim 1. A system comprising: (a) at least one data processor, the at least one data processor operatively coupled to a plurality of communication data networks; (b) at least one speaker-independent speech-recognition engine operatively coupled to the data processor; (c) memory accessible to the at least one data processor and storing at least: (i) an instruction set querying of information to be retrieved from a plurality of sources, the instruction set comprising: an indication of the plurality of sources, each identified by a source identifier, and each identifying certain information to be retrieved from the source identifier, and (ii) at least one recognition grammar executable code corresponding to each instruction set and corresponding to data characterizing audio containing a naturally-spoken-speech command including an information request, (d) wherein the at least one speaker-independent-speech-recognition engine is adapted (i) to receive the data characterizing audio containing the naturally-spoken-speech command from a voice-enabled device via a first of the plurality of communication data networks, (ii) to recognize phonemes in the data characterizing audio containing naturally-spoken-speech commands to understand spoken words, and (iii) to generate recognition results data, (e) wherein the at least one data processor is adapted to (i) select the corresponding at least one recognition grammar executable code upon receiving the data characterizing the audio containing the naturally-spoken-speech command and to convert the data characterizing the audio containing the naturally-spoken-speech command into a data message for transmission to a network interface adapted to access a second of the plurality of communication data networks; and (ii) to retrieve the instruction set corresponding to the recognition grammar executable codes provided by the at least one speaker-independent-speech-recognition engine and to access the information source queried by the instruction set to obtain at least a part of the information to be retrieved, and (f) at least one speech-synthesis device operatively coupled to the at least one data processor, the at least one speech-synthesis device configured to produce an audio message relating to any resulting information retrieved from the plurality of information sources, and to transmit the audio message to the voice-enabled device.

Application claim 2 / Patent claim 2. The system of claim 1, wherein the plurality of communication data networks includes the Internet.

Application claim 3 / Patent claim 3. The system of claim 1, wherein the plurality of communication data networks include a local-area network.

Application claim 4. The system of claim 1, wherein the voice-enabled device is a home device.
Patent claim 4. The system of claim 1, wherein the voice-enabled device is a telephone.

Application claim 5. The method of claim 1, wherein the voice-enabled device is at least one of a group of an IP telephone, a cellular phone, a personal computer, a media player appliance, and a television or other video display device.
Patent claims 10-14. The system of claim 1, wherein the voice-enabled device is an IP telephone (claim 10); a cellular phone (claim 11); a personal computer (claim 12); a media player appliance (claim 13); a television or other video display device (claim 14).

Application claim 6 / Patent claim 5. The system of claim 1, wherein the speaker-independent-speech-recognition engine is adapted to analyze the phonemes to recognize conversational naturally-spoken-speech commands.

Application claim 7 / Patent claim 6. The system of claim 1, wherein the speaker-independent-speech-recognition engine is adapted to recognize the naturally-spoken-speech commands.

Application claim 8 / Patent claim 7. The system of claim 1, wherein the instruction set executable code further comprises: a content descriptor associated with each information-source identifier, the content descriptor pre-defining a portion of the information source containing the information to be retrieved.

Application claim 9 / Patent claim 8. The system of claim 1, further comprising: a database operatively connected to the data processor, the database adapted to store the information gathered from the information sources in response to the information requests.

Application claim 10 / Patent claim 9. The system of claim 8, wherein each recognition grammar executable code and each instruction set for querying of information to be retrieved are stored in the database.

Application claim 11. A method comprising: (a) providing at least one data processor, the data processor operatively coupled to a plurality of communication data networks; (b) providing at least one speaker-independent-speech-recognition engine operatively coupled to the at least one data processor; (c) providing memory accessible to the data processor storing at least: (i) an instruction set for querying of the information to be retrieved from a plurality of sources coupled to the plurality of communication data networks, the instruction set comprising: an indication of the plurality of sources, each identified by an information-source identifier, and each identifying certain information to be retrieved from the information-source identifier, and (ii) at least one recognition grammar executable code corresponding to each instruction set and corresponding to data characterizing audio containing a naturally-spoken-speech command including an information request, (d) the at least one speaker-independent-speech-recognition engine: (i) receiving the data characterizing audio containing the naturally-spoken-speech command from the voice-enabled device via a first of the communication data networks, (ii) recognizing phonemes in the data characterizing audio containing the naturally-spoken-speech commands to understand spoken words, and (iii) generating recognition-results data, (e) the least one data processor programmed to: (i) select the corresponding at least one recognition grammar executable code upon receiving the data characterizing audio containing the naturally-spoken-speech command and convert the data characterizing audio containing naturally-spoken-speech command into a data message for transmission to a network interface adapted to access a second of the one communication networks; and (ii) retrieve the instruction set corresponding to the recognition grammar executable provided by the at least one speaker-independent-speech-recognition device and access the information source identified by the instruction set to obtain at least a part of the information to be retrieved; and (f) providing at least one speech-synthesis device operatively connected to the at least one data processor, and by the at least one speech-synthesis device: (i) produce an audio message relating to any resulting information retrieved from the plurality of information sources including text-to-speech conversion of said any resulting information retrieved from the plurality of information sources, and (ii) transmit the audio message via the voice-enabled device.

Patent claim 15. A method comprising: (a) providing at least one data processor, the data processor operatively coupled to a plurality of communication data networks; (b) providing at least one speaker-independent-speech-recognition engine operatively coupled to the at least one data processor; (c) providing memory accessible to the data processor storing at least: (i) an instruction set for querying of the information to be retrieved from a plurality of sources, the instruction set comprising: an indication of the plurality of sources, each identified by an information-source identifier, and each identifying certain information to be retrieved from the information-source identifier, and (ii) at least one recognition grammar executable code corresponding to each instruction set and corresponding to data characterizing audio containing a naturally-spoken-speech command including an information request, (d) the at least one speaker-independent-speech-recognition engine: (i) receiving the data characterizing audio containing the naturally-spoken-speech command from the voice-enabled device via a first of the communication data networks, (ii) recognizing phonemes in the data characterizing audio containing the naturally-spoken-speech commands to understand spoken words, and (iii) generating recognition-results data, (e) the least one data processor programmed to: (i) select the corresponding at least one recognition grammar executable code upon receiving the data characterizing audio containing the naturally-spoken-speech command and convert the data characterizing audio containing naturally-spoken-speech command into a data message for transmission to a network interface adapted to access a second of the one communication networks; and (iii) retrieve the instruction set corresponding to the recognition grammar executable provided by the at least one speaker-independent-speech-recognition device and access the information source identified by the instruction set to obtain at least a part of the information to be retrieved; and (f) providing at least one speech-synthesis device operatively connected to the at least one data processor, and by the at least one speech-synthesis device adapted to: (i) produce an audio message relating to any resulting information retrieved from the plurality of information sources, and (ii) transmit the audio message to the voice-enabled device.

Application claim 12. The method of claim 11, wherein the plurality of communication data networks includes the Internet.
Patent claim 16. The method of claim 15, wherein the plurality of communication data networks includes the Internet.

Application claim 13. The method of claim 11, wherein the plurality of communication data networks include a local-area network.
Patent claim 17. The method of claim 15, wherein the plurality of communication data networks include a local-area network.

Application claim 14. The method of claim 11, wherein the voice-enabled device is a telephone.
Patent claim 18. The method of claim 15, wherein the voice-enabled device is a telephone.

Application claim 15. The method of claim 11, wherein the speaker-independent-speech-recognition engine is adapted to analyze the phonemes to recognize conversational naturally-spoken-speech commands.
Patent claim 19. The method of claim 15, wherein the speaker-independent-speech-recognition engine is adapted to analyze the phonemes to recognize conversational naturally-spoken-speech commands.

Application claim 16. The method of claim 11, wherein the speaker-independent-speech-recognition engine is adapted to recognize the naturally-spoken-speech commands.
Patent claim 20. The method of claim 15, wherein the speaker-independent-speech-recognition engine is adapted to recognize the naturally-spoken-speech commands.

Application claim 17. The method of claim 11, wherein the instruction set executable code further comprises: a content descriptor associated with each information-source identifier, the content descriptor pre-defining a portion of the information source containing the information to be retrieved.
Patent claim 21. The method of claim 15, wherein the instruction set executable code further comprises: a content descriptor associated with each information-source identifier, the content descriptor pre-defining a portion of the information source containing the information to be retrieved.

Application claim 18. The method of claim 11, further comprising: providing a database and operatively connecting the database to the data processor and storing the information gathered from the information sources in response to the information requests in the database.
Patent claim 22. The method of claim 15, further comprising: providing a database and operatively connecting the database to the data processor and storing the information gathered from the information sources in response to the information requests in the database.

Application claim 19. The method of claim 18, wherein each recognition grammar executable code and each instruction set are stored in the database.
Patent claim 23. The method of claim 22, wherein each recognition grammar executable code and each instruction set are stored in the database.

Application claim 20. The method of claim 18, wherein the voice-enabled device is at least one of a group of an IP telephone, a cellular phone, a personal computer, a media player appliance, and a television or other video display device.
Patent claims 24-28. The method of claim 15, wherein the voice-enabled device is an IP telephone (claim 24); a cellular phone (claim 25); a personal computer (claim 26); a media player appliance (claim 27); a television or other video display device (claim 28).
4. Claims 1-8 and 11-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-4 and 11-18 of U.S. Patent No. 10,320,981. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are similar in scope and content to the patented claims issued to the same applicant.
`Patent No: 10,320,981
`Application No: 16/155,523
`
`
`
`
`1. A system comprising: (a) at least one data
`
`processor, the at least one data processor
`operatively coupled to a plurality of
`communication data networks; (b) at least
`one speaker—independent speech-recognition
`engine operatively coupled to the data
`
`processor; (c) memory accessible to the at
`least one data processor and storing at least:
`
`(i) an instruction set for querying of
`information to be retrieved from a plurality
`
` 5. A voice—browsing system for retrieving
`
`information from an information source that
`
`is periodically updated with current
`information, by speech commands received
`
`from a particular user provided via a voice—
`enabled device after establishing a
`connection between the voice—enabled
`device and a media server of the voice—
`
`browsing system, said voice—browsing system
`comprising: (a) a speech—recognition engine
`
`
`
`Google Ex 1031 - Page 10
`
`Google Ex 1031 - Page 10
`
`

`

`Application/Control Number: 16/155,523
`Art Unit: 2658
`
`
`Page 10
`
`of sources coupled to the plurality of
`communication data networks, the
`
`instruction set comprising: an indication of
`
`the plurality of sources, each identified by a
`source identifier, and each identifying certain
`information to be retrieved from the source
`
`identifier, and (ii) at least one recognition
`
`including a processor and coupled to the
`
`media server, the media server initiating a
`voice—response application once the
`connection between the voice—enabled
`
`device and the voice—browsing system is
`
`established, the speech—recognition engine
`adapted to receive a speech command from
`
`grammar executable code corresponding to
`each instruction set and corresponding to
`data characterizing audio containing a
`
`a particular user via the voice—enabled
`device, the media server configured to
`identify and access the information source
`
`naturally—spoken—speech command including
`an information request, (d) wherein the at
`least one speaker—independent—speech—
`recognition engine is adapted to: (1) receive
`the data characterizing audio containing the
`naturally—spoken—speech command from a
`voice—enabled device via a first of the
`
`plurality of communication data networks;
`
`(2) to recognize phenomes in the data
`characterizing audio containing naturally—
`spoken—speech commands to understand
`
`spoken words; and (3) to generate
`recognition results data, (e) wherein the at
`
`least one data processor is adapted to: (1)
`select the corresponding at least one
`
`recognition grammar executable code upon
`receiving the data characterizing the audio
`
`containing the naturally—spoken—speech
`command and to convert the data
`
`characterizing the audio containing the
`
`via a network, the speech—recognition engine
`adapted to convert the speech command
`
`into a data message by selecting speech—
`recognition grammar established to
`
`correspond to the speech command received
`from the particular user and assigned to
`
`perform searches; (b) the media server
`further configured to select at least one
`information—source—retrieval instruction
`
`corresponding to the speech—recognition
`grammar established for the speech
`command, the at least one information—
`source—retrieval instruction stored in a
`
Google Ex 1031 - Page 11

Application/Control Number: 16/155,523
Art Unit: 2658
Page 11

[Claim chart, left column, continued:]

database associated with the media server and adapted to retrieve information; (c) a web-browsing server coupled to the media server and adapted to access at least a portion of the information source to retrieve information indicated by the speech command, by using a processor of the web-browsing server, which processor (i) performs an instruction that requests information from an identified web page within the information source, and (ii) utilizes a command to execute a content extractor within the web-browsing server to separate a portion of the information from other information, the information derived from only a portion of a web page containing information relevant to the speech command, wherein the content extractor uses a content-descriptor file containing a description of the portion of information and wherein the content-descriptor file indicates a location of a portion of the information within the information source, and selecting, by the web-browsing server, an information type relevant from the information source and retrieving only a portion of the information that is relevant according to the at least one information-source-retrieval instruction; and (d) a speech-synthesis engine including a processor and coupled to the media server, the speech-synthesis engine adapted to convert the information retrieved from the information source into audio and convey the audio by the voice-enabled device.

[Claim chart, right column, continued:]

naturally-spoken-speech command into a data message for transmission to a network interface adapted to access a second of the plurality of communication data networks; and (2) retrieve the instruction set corresponding to the recognition grammar executable codes provided by the at least one speaker-independent-speech-recognition engine and to access the information source queried by the instruction set to obtain at least a part of the information to be retrieved, and (f) at least one speech-synthesis device operatively coupled to the at least one data processor, the at least one speech-synthesis device configured to produce an audio message relating to any resulting information retrieved from the plurality of information sources including a text-to-speech conversion of at least certain data in said any resulting information retrieved from the plurality of information sources, and to convey the audio message via the voice-enabled device.
`
6. The voice-browsing system of claim 5, further comprising: an interface to an associated website by the network to locate requested information.

7. The voice-browsing system of claim 5, wherein the voice-enabled device accesses the voice-browsing system by at least one of a landline telephone, a wireless telephone, and an Internet Protocol telephonic connection and wherein the media server operatively connects to the network, by at least one of a local-area network, a wide-area network, and the Internet.
`
2. The system of claim 1, wherein the plurality of communication data networks includes the Internet.

3. The system of claim 1, wherein the plurality of communication data networks include a local-area network.
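The claim chart above describes a voice-browsing pipeline: a recognized speech command drives retrieval of a web page, a "content extractor" uses a content-descriptor file to separate the relevant portion of the page from other information, and only that portion is passed on for text-to-speech conversion. As a purely illustrative sketch of that data flow (this is not the patented implementation, and every name, marker, and page below is hypothetical), the steps might look like:

```python
# Hypothetical sketch of the claimed voice-browsing data flow.
# All identifiers, markers, and the stubbed "web page" are invented
# for illustration; nothing here reflects the actual patent code.

from dataclasses import dataclass


@dataclass
class ContentDescriptor:
    """Describes where the relevant portion sits inside a fetched page."""
    start_marker: str
    end_marker: str


def content_extractor(page: str, descriptor: ContentDescriptor) -> str:
    """Separate the relevant portion of the page from other information."""
    start = page.index(descriptor.start_marker) + len(descriptor.start_marker)
    end = page.index(descriptor.end_marker, start)
    return page[start:end].strip()


def browse_by_voice(command: str, fetch, descriptors) -> str:
    """Map a recognized speech command to a page, extract only the
    relevant portion, and return the text that would go to TTS."""
    page = fetch(command)                     # web-browsing step (stubbed)
    portion = content_extractor(page, descriptors[command])
    return portion                            # handed to speech synthesis


# Stubbed page and descriptor, standing in for a live fetch.
fake_page = "<html><b>WX:</b> Sunny, 72F <i>ads...</i></html>"
descriptors = {"weather": ContentDescriptor("<b>WX:</b>", "<i>")}
result = browse_by_voice("weather", lambda cmd: fake_page, descriptors)
# result holds only the weather portion, not the surrounding markup
```

The sketch mirrors the chart's separation of concerns: fetching, extraction driven by a descriptor, and synthesis are distinct stages, which is what lets the claimed system convey "only a portion of a web page" as audio.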
