IN THE UNITED STATES DISTRICT COURT
FOR THE DISTRICT OF DELAWARE

VB ASSETS, LLC,

                    Plaintiff,

        v.

AMAZON.COM SERVICES LLC,

                    Defendant.

C.A. No. 1:19-cv-01410-MN

JURY TRIAL DEMANDED

OPENING BRIEF IN SUPPORT OF DEFENDANT’S
MOTION FOR JUDGMENT AS A MATTER OF LAW
PURSUANT TO FED. R. CIV. P. 50(a)

ASHBY & GEDDES, P.A.
Steven J. Balick (#2114)
sbalick@ashbygeddes.com
Andrew C. Mayo (#5207)
amayo@ashbygeddes.com
500 Delaware Avenue, 8th Floor
P.O. Box 1150
Wilmington, DE 19899
(302) 654-1888

Attorneys for Defendant

Of counsel:

J. David Hadden, CSB No. 176148
Saina S. Shamilov, CSB No. 215636
Ravi R. Ranganath, CSB No. 272981
Vigen Salmastlian, CSB No. 276846
Allen Wang, CSB No. 278953
Johnson Kuncheria, CSB No. 335765
Min Wu, CSB No. 307512
Jeffrey A. Ware, CSB No. 271603
Rebecca A.E. Fewkes, CSB No. 209168
FENWICK & WEST LLP
801 California Street
Mountain View, CA 94041
(650) 988-8500

Dated: November 7, 2023

{01956200;v1 }

Case 1:19-cv-01410-MN Document 283 Filed 11/07/23 Page 2 of 24 PageID #: 10965

TABLE OF CONTENTS

                                                                                                Page

I.    PLAINTIFF FAILED TO PROVE INFRINGEMENT. ......................................................1

      A.   Plaintiff failed to prove that Defendant directly infringes claim 13 of the ’681
           patent. .......................................................................................................................2

      B.   Plaintiff failed to prove that Defendant directly infringes claim 40 of the ’176
           patent. .......................................................................................................................6

      C.   Plaintiff failed to prove that Defendant directly infringes claim 23 of the ’097
           patent. .......................................................................................................................7

      D.   Plaintiff failed to prove that Defendant directly infringes claim 25 of the ’703
           patent. .......................................................................................................................8

II.   THE ASSERTED CLAIMS ARE INVALID. .....................................................................9

      A.   The asserted claims lack adequate written description under 35 U.S.C. § 112. ......9

      B.   The asserted claims are obvious under 35 U.S.C. § 103. .......................................13

           1.   MIT Galaxy renders obvious claim 13 of the ’681 patent. ...................14

           2.   MIT Galaxy renders obvious claim 40 of the ’176 patent. ...................14

           3.   MIT Galaxy renders obvious claim 23 of the ’097 patent. ...................15

           4.   United System in combination with Partovi renders obvious claim
                25 of the ’703 patent. ................................................................................15

      C.   The asserted claims fail to claim patentable subject matter under 35 U.S.C. §
           101. .........................................................................................................................16

III.  PLAINTIFF FAILED TO PROVE WILLFUL INFRINGEMENT ..................................19

IV.   THE EVIDENCE AT TRIAL CANNOT SUPPORT A DAMAGES VERDICT ............20

TABLE OF AUTHORITIES

Cases                                                                                        Page(s)

Apple, Inc. v. Ameranth, Inc.,
   842 F.3d 1229 (Fed. Cir. 2016)................................................................................................16

Ariad Pharms., Inc. v. Eli Lilly & Co.,
   598 F.3d 1336 (Fed. Cir. 2010) (en banc) ..............................................................10, 11, 12, 13

Bayer Healthcare LLC v. Baxalta Inc.,
   989 F.3d 964 (Fed. Cir. 2021)............................................................................................19, 20

Ericsson, Inc. v. D-Link Sys., Inc.,
   773 F.3d 1201 (Fed. Cir. 2014)................................................................................................20

Halo Elecs., Inc. v. Pulse Elecs., Inc.,
   579 U.S. 93 (2016) ...................................................................................................................19

Intuitive Surgical, Inc. v. Auris Health, Inc.,
   549 F. Supp. 3d 362 (D. Del. 2021) ........................................................................................20

KOM Software Inc. v. NetApp, Inc.,
   No. 18-CV-00160-WCB, 2023 WL 6460025 (D. Del. Oct. 4, 2023) ......................................17

Oiness v. Walgreen Co.,
   88 F.3d 1025 (Fed. Cir. 1996)..................................................................................................20

Prism Techs. LLC v. Sprint Spectrum L.P.,
   849 F.3d 1360 (Fed. Cir. 2017)................................................................................................20

Uniloc USA, Inc. v. Microsoft Corp.,
   632 F.3d 1292 (Fed. Cir. 2011)................................................................................................20

VOIT Techs., LLC v. Del-Ton, Inc.,
   No. 5:17-CV-259-BO, 2018 WL 385188 (E.D.N.C. Jan. 11, 2018), aff’d, 757
   F. App’x 1000 (Fed. Cir. 2019) ...............................................................................................16

Statutes

35 U.S.C. § 101 ..............................................................................................................................16

35 U.S.C. § 103 ..............................................................................................................................13

35 U.S.C. § 112 ................................................................................................................................9

Defendant moves under Fed. R. Civ. P. 50(a) for judgment as a matter of law.1

I.    PLAINTIFF FAILED TO PROVE INFRINGEMENT.

It was Plaintiff’s burden at trial to provide substantial evidence that Alexa’s NLU practices every limitation of the asserted claims. Plaintiff failed to do so. The basic operation of Alexa was undisputed: (1) a device such as Echo records and sends an audio signal of the user’s current utterance to Amazon’s cloud servers (Tr. Tx. at 556:5-19 (Strom); 675:5-19 (Johnson); 314:4-11 (Polish)); (2) an Automatic Speech Recognition (“ASR”) component receives the audio and runs machine learning models to return words recognized from the utterance (Tr. Tx. at 556:5-557:5 (Strom); 675:5-19 (Johnson); 314:4-11 (Polish)); (3) Natural Language Understanding (“NLU”) receives the recognized words (Tr. Tx. at 556:5-19 (Strom); 675:5-19 (Johnson); 317-11 (Polish)) and runs dozens of machine learning models in parallel to generate a list of hypotheses, i.e., the possible user intents, for the current utterance (Tr. Tx. at 557:12-558:16 (Strom); 675:9-676:5 (Johnson); 317:12-17, 375:21-376:16 (Polish)); (4) routing components receive the list and identify the applications that may best be able to handle the user’s request (Tr. Tx. at 565:17-24 (Strom); 676:20-676:5; 376:8-16 (Polish)); and (5) the application(s) receive the list. (Tr. Tx. at 565:17-566:6 (Strom); 676:14-25 (Johnson); 376:8-16 (Polish).)

Three of the four asserted claims require “computer executable instructions” or “computer program instructions” to perform steps of the claim.2 But VB Assets’ infringement expert Dr. Polish reviewed no source code for Alexa and made no attempt to map the specific claim elements

1 Judgment as a matter of law is proper “[i]f a party has been fully heard on an issue during a jury trial and the court finds that a reasonable jury would not have a legally sufficient evidentiary basis to find for the party on that issue.” Fed. R. Civ. P. 50(a); Marten v. Hunt, 479 F. App’x 436, 439 (3d Cir. 2012). The Court should grant the motion unless there is substantial evidence in support of each essential element of Plaintiff’s claims. Lightning Lube, Inc. v. Witco Corp., 4 F.3d 1153, 1184 (3d Cir. 1993).
2 Claim 25 of the ’681 patent, claim 23 of the ’097 patent, and claim 25 of the ’703 patent.

to it. (Tr. Tx. 386:2-10 (“I did not review code”).)3 Instead, Dr. Polish spoke to an Alexa device, listened to the device’s responses, and attempted to “infer” how Alexa generated the responses. (See, e.g., Tr. Tx. at 405:20-407:13 (inferring that Alexa “grammatically or syntactically adapts” a response because “Amazon is responding to a request from a user”); 407:23-14 (speculating that “[t]he context interpreter may well be being used”); 412:9-413:8 (“I think the behavior of the system speaks for itself”).) Dr. Polish’s superficial front-end analysis failed to provide substantial evidence from which the jury could conclude that Defendant infringes any asserted claim.

A.    Plaintiff failed to prove that Defendant directly infringes claim 13 of the ’681 patent.

No accumulating short-term shared knowledge. Claim 13 of the ’681 patent requires “accumulate short-term shared knowledge about the current conversation” that “includes knowledge about the utterance received at the voice during the current conversation” and is then used as part of “identify[ing] a context associated with the utterance.” Alexa does not accumulate this short-term shared knowledge. (See Tr. Tx. at 682:13-685:5 (Johnson).) Instead, Alexa’s NLU processes only the current utterance, without any short-term shared knowledge accumulated from previous utterances. (Tr. Tx. at 558:17-21 (“the only thing that’s coming in as input to [the DNNs] are the words that we spoke”) (Strom); 682:16-25 (Johnson).) Plaintiff provided no evidence that a single-turn utterance to Alexa included short-term knowledge; indeed, Dr. Polish admitted that he provided no evidence or examples of this to the jury. (Tr. Tx. at 420:4-10 (Polish).) Moreover, in a multi-turn conversation, Alexa’s re-ranker also does not use any accumulated short-term shared knowledge to identify a

3 Similarly, Mr. Peck did not analyze the asserted claims or attempt to find any functionality in the code purportedly meeting the claim limitations. (See Tr. Tx. at 435:1-3 (Peck).) Mr. Peck simply reviewed source code as Plaintiff’s counsel instructed and admitted that he did not review the source code for any particular limitation of the asserted claims. (See Tr. Tx. at 444:13-447:22 (Peck); see also Tr. Tx. at 383:10-23 (Polish).)

context, but instead reorders the determined interpretations based on a prompt that was sent to the user. (Tr. Tx. at 564:7-19 (Strom); 683:1-22 (Johnson).)

No accumulating long-term shared knowledge. Claim 13 requires “accumulate long-term shared knowledge about the user” that “includes knowledge about one or more past conversations with the user” and is used as part of “identify[ing] a context associated with the utterance.” Alexa does not accumulate long-term shared knowledge as claimed. (Tr. Tx. at 685:5-688:19 (Johnson).) Dr. Polish argued that Alexa uses a “speaker ID service” to “distinguish individual users of the device in a single household” and “direct the conversation,” but he did not explain how the speaker ID constitutes “knowledge about one or more past conversations” as required by the claim, or how it is used to identify a context for establishing the meaning of an utterance. (Tr. Tx. at 332:15-333:24.) Instead, speaker ID is used by back-end applications that receive the interpretations generated by the NLU to “customize interactions back to the user”; it is not used by the NLU to interpret the meaning of the utterance. (Tr. Tx. at 563:25-564:3 (Strom); 686:12-23; 687:18-688:4 (Johnson).) Dr. Polish further accused a “mutable model” and an “enrollment profile” as the claimed long-term shared knowledge. (Tr. Tx. at 343:7-16; PDX3-19.) However, Dr. Polish admitted that these concepts were used only as part of Alexa’s ASR for improving speech recognition, which is irrelevant to identifying “a context associated with the utterance” and “establish[ing] the intended meaning within the identified context.” (Id.; see also Tr. Tx. at 685:19-686:11; 687:4-14 (Johnson).)

No identifying a context. Claim 13 requires “identify a context associated with the utterance” and “establish an intended meaning for the utterance within the identified context.” (’681 patent, claim 13.) As discussed above, when Alexa’s NLU receives recognized words of a current utterance from the ASR component, the NLU runs dozens of deep neural networks in

parallel to generate, from the recognized words, an N-best list of different interpretations for the current utterance. (Tr. Tx. at 557:12-21; 564:4-19 (Strom); 676:14-25 (Johnson).) It does so without identifying a context. (Tr. Tx. at 562:23-563:6 (“the NLU models are currently not aware of context”) (Strom); 676:14-25 (Johnson).) And because the NLU does not identify a context when generating the interpretations, the NLU cannot establish an intended meaning for the utterance “within the identified context.” (Tr. Tx. at 678:9-17 (Johnson).) While further operations of the NLU may re-rank the different interpretations of the N-best list, they do not change the actual interpretations that are output by the NLU, and as such also do not identify a context and establish an intended meaning of the utterance within the identified context. (Tr. Tx. at 683:1-25 (Johnson); Tr. Tx. at 564:7-19; 565:11-16; 591:24-592:1 (Strom).)

Dr. Polish did not show that Alexa identifies a context associated with an utterance and establishes an intended meaning for the utterance within the identified context; he admitted that the NLU’s DNNs determine interpretations “without dialogue context,” and that he did not know whether the determined interpretations are changed before being output by the NLU. (Tr. Tx. at 400:1-11; 400:24-401:4 (“I don’t know that they get changed or not”).) Further, Dr. Polish admitted that Alexa’s NLU always outputs multiple interpretations of a user’s request. (See Tr. Tx. at 376:8-16.)

No identifying a context from both short-term shared knowledge and long-term shared knowledge. Claim 13 requires that the “context associated with the utterance” be identified “from the short-term shared knowledge and the long-term shared knowledge.” (’681 patent, claim 13.) As discussed above, Alexa does not identify a context to establish an intended meaning for the utterance within the identified context, much less identify a context from both short-term shared knowledge and long-term shared knowledge. (See Tr. Tx. at 684:24-685:4; 688:7-13 (Johnson).)

To demonstrate that Alexa meets this element, Dr. Polish needed to show that the context identified for a particular utterance is identified from both short-term shared knowledge and long-term shared knowledge. He did not do so. In addition to not showing that Alexa identifies a context in which to establish the intended meaning of the utterance, as discussed above, Dr. Polish alternated between two different example utterances, neither of which demonstrates use of both types of shared knowledge. (See Tr. Tx. 393:21-394:2 (“I did not prove it by showing one specific utterance”) (Polish).) First, Dr. Polish argued an example where a user says “play Hunger Games,” where “Hunger Games” could potentially refer to a book, song, album, or video. (Tr. Tx. at 328:1-11 (Polish).) But Dr. Polish made no argument that Alexa’s NLU identifies a context for this utterance using both short-term and long-term shared knowledge. (Tr. Tx. at 403:25-404:3 (alleging that “it was deciding whether or not it was a song or an album was using knowledge about whether the person had purchased the album”) (Polish).) Second, Dr. Polish argued an example involving the user saying the utterance “morning” to specify whether “6:30” referred to 6:30 a.m. or 6:30 p.m. (Tr. Tx. at 339:6-17; 403:15-17; 424:23-425:1.) But again, Dr. Polish did not argue that the NLU identifies a context for this utterance from both short-term and long-term shared knowledge. (Tr. Tx. at 403:15-17 (Polish cross, arguing “We had an example involving disambiguating the word – you know, 6:30 based upon short-term knowledge”); 424:1-425:1 (Polish redirect, alleging that Alexa “changes the slot to be time equals 600” based on “the previous short-term information”).)

No grammatically or syntactically adapting the response. Claim 13 requires a conversational speech engine that “grammatically or syntactically adapts the response.” Alexa does not grammatically or syntactically adapt a generated response. Dr. Polish testified that the only evidence he provided the jury to show that Alexa meets this limitation was a diagram and

“the fact that Amazon is responding to a request from a user.” (Tr. Tx. at 407:9-13 (Polish).) He later conceded this element. His first argument fails because he expressly admitted that the diagram did not show that Alexa “grammatically or syntactically adapts” any response. (Tr. Tx. at 407:9-13 (Polish).) And his second argument fails because it showed only that an Alexa device generated a response to Dr. Polish’s spoken commands, not that Alexa “grammatically or syntactically adapt[ed] the generated response.” (Tr. Tx. at 349:13-19; 407:3-8 (Polish).)

No single utterance. Claim 13 of the ’681 patent requires “receiv[ing] an utterance” and then performing four specific steps using that single utterance: 1) “accumulate short-term shared knowledge” that includes “knowledge about the utterance,” 2) “identify a context associated with the utterance . . . from the short-term shared knowledge and the long-term shared knowledge,” 3) “establish an intended meaning for the utterance,” and 4) “generate a response to the utterance” that is “grammatically or syntactically adapt[ed]” “based on the intended meaning.” Alexa does not perform each claimed step on any single utterance it receives. (See Tr. Tx. at 674:2-6 (Johnson).) Dr. Polish admitted that he “did not prove” that Alexa performs each limitation “by showing one specific utterance.” (Tr. Tx. at 392:21-24; 393:21-394:2 (“I did not prove it by showing one specific utterance.”) (Polish).)

B.    Plaintiff failed to prove that Defendant directly infringes claim 40 of the ’176 patent.

No reinterpreting the words or phrases in response to the predetermined event. Claim 40 requires “an adaptive misrecognition engine configured to determine that the conversational language incorrectly interpreted the words or phrases in response to detecting a predetermined event, wherein the conversational language processor reinterprets the words or phrases in response to the predetermined event.” Alexa’s NLU operates only on the words of the current utterance, and never goes back to reinterpret an utterance that was already interpreted. (Tr. Tx. at 696:9-

697:14 (Johnson).) As such, the Alexa NLU does not reinterpret words or phrases, much less in response to a predetermined event as claimed. Indeed, Dr. Polish admitted that he did not identify any single predetermined event that would cause Alexa to reinterpret the words or phrases. (Tr. Tx. at 410:12-15 (“I haven’t picked one out, no.”) (Polish).)4

No establishing a context. Claim 40 requires “establishing a context for the natural language utterance.” As discussed above, Alexa’s NLU determines all interpretations for a recognized word without establishing a context. (Tr. Tx. at 691:14-692:3 (Johnson).) Dr. Polish argued that when a user says “Alexa, I want to buy an iPhone case,” Alexa’s NLU performs a two-step approach to establish a context as claimed. (Tr. Tx. at 354:2-17 (referring to PDX3-32, showing two-step approach and context interpreter); 407:23-408:2 (Polish).) He later admitted that Alexa “definitely” does not use the two-step approach for the utterance he presented to the jury. (Tr. Tx. at 408:3-408:14 (Polish); see also Tr. Tx. at 693:16-694:10 (Johnson).)

C.    Plaintiff failed to prove that Defendant directly infringes claim 23 of the ’097 patent.

No determination of whether a pronoun refers to a product or service. Claim 23 requires “responsive to the existence of a pronoun in the natural language utterance, determine whether the pronoun refers to one or more of the product or service or a provider of the product or service.” Alexa’s NLU does not determine whether a pronoun in the natural language utterance refers to a product or service. (Tr. Tx. at 699:13-700:8 (Johnson).) Dr. Polish admitted that he did not know what the NLU does with a pronoun. (Tr. Tx. at 411:15-20; 413:4-8; 414:2-10 (Polish).) His infringement theory was not based on source code or technical documents, only his assumption of

4 In addition, despite designating “I want to buy an iPhone case” as an example of the claimed natural language utterance, Dr. Polish did not explain how the words and phrases of the utterance “I want to buy an iPhone case” are misinterpreted by Alexa or how they are reinterpreted. (See Tr. Tx. at 355:12-20 (“So the example that I gave in my demo was that I said Alexa, I want to buy an iPhone case, and the speechlet went off… and figured out that it wanted to sell me this case.”).)

how the NLU handles pronouns based on responses that he heard from an Alexa device when he spoke commands that included pronouns to it. (Tr. Tx. at 411:15-20; 413:4-8; 414:2-10 (Polish).)5

D.    Plaintiff failed to prove that Defendant directly infringes claim 25 of the ’703 patent.

No providing, without further user input, a request for user confirmation to use payment and shipping information. Claim 25 requires “provide, without further user input after the receipt of the user input, a request for user confirmation to use the payment information and the shipping information for a purchase transaction for the product or service.” Alexa does not provide a request for user confirmation to use payment and shipping information, without further user input, in response to a user saying “Alexa, buy it now.” (Tr. Tx. at 707:16-708:3 (Johnson).) Indeed, Dr. Polish admitted that he did not show that Alexa performs this limitation in response to the “buy it now” utterance and instead accused a receipt sent via email. (Tr. Tx. at 418:15-24 (Polish).) However, a receipt sent by email after Alexa has already charged the user’s credit card is not a request for confirmation to use payment and shipping information as required by the claim. (See also Tr. Tx. at 368:8-10 (Dr. Polish testifying that his credit card was charged “without any additional input” after he said “buy it now”) (Polish).)

No determining a context or identifying a product or service based on the determined context. Claim 25 requires “determine . . . a context based at least on the one or more words or phrases” of the utterance, and “identify . . . the product or service . . . based on the determined context.” Alexa’s NLU determines all interpretations for a recognized word without determining

5 As Dr. Strom explained, the different responses produced by Alexa when Dr. Polish said “what color” and “what color is it” in his demonstration were a result of the system routing the generated interpretations of “what color” to the general knowledge domain instead of the Shopping speechlet, and not due to the NLU determining that “it” referred to a particular product or service. (Tr. Tx. at 566:9-568:13.) Dr. Polish admitted that he did not know what an NLU does with a pronoun. (See Tr. Tx. at 411:4-14 (“I don’t know what it’s done with the pronoun, I don’t know that – what it thinks about the pronoun.”).)

a context. (Tr. Tx. at 703:3-23 (Johnson).) Dr. Polish argued that Alexa’s NLU determines a context based on the words or phrases of the utterance using “the context interpreter” and “a 2 step approach.” (Tr. Tx. at 364:16-365:2.) However, he did not explain how the two-step approach would be used to identify a context for the utterance “I want to buy an iPhone case” or “buy it now,” and in fact later admitted that Alexa “definitely” does not use the two-step approach for the utterance “I want to buy an iPhone case.” (See Tr. Tx. at 362:18-22; 408:3-408:14.) Dr. Polish further did not explain how Alexa’s NLU identifies a product or service based on an identified context, instead concluding that the jury could infer it did so because “it found an iPhone case to offer me” after he said “I want to buy an iPhone case.” (Tr. Tx. at 365:8-13.)

No identifying a product and obtaining shipping information without further user input. Claim 25 requires “identify, without further user input… a product or service” and “obtain, without further user input . . . shipping information.” Alexa’s NLU does not identify a product or service and obtain shipping information without further user input in response to the utterance “buy it now” or “I want to buy an iPhone case.” (Tr. Tx. at 704:21-705:7; 706:1-15; 706:20-707:6 (Johnson).) Dr. Polish argued that Alexa identifies a product after his utterance of “I want to buy an iPhone case,” but admitted that he did not know whether Alexa could obtain shipping information prior to his further user input of “buy it now.” (Tr. Tx. at 365:8-13; 415:20-416:2.) On the other hand, to the extent that shipping information is determined following the utterance “buy it now,” there is no identification of a product or service, because “the product is already in your cart.” (Tr. Tx. at 706:20-707:6 (Johnson).)

II.   THE ASSERTED CLAIMS ARE INVALID.

A.    The asserted claims lack adequate written description under 35 U.S.C. § 112.

The asserted patents do not show that the named inventors possessed the full scope of what is claimed, as required under 35 U.S.C. § 112, because they do not describe how to perform the

claimed ideas and results of natural language understanding. See Ariad Pharms., Inc. v. Eli Lilly & Co., 598 F.3d 1336, 1351 (Fed. Cir. 2010) (en banc).

Claim 13 of the ’681 patent. “Cooperative conversational voice user interface.” No reasonable jury could conclude that the inventors of the ’681 patent had possession of the claimed invention as of the application filing date. See Ariad, 598 F.3d at 1351. The specification does not describe the inner workings of the claimed “cooperative conversational voice user interface” that performs the steps recited in claim 13: “accumulate short-term shared knowledge,” “accumulate long-term shared knowledge,” “identify a context associated with the utterance,” “establish an intended meaning for the utterance within the identified context,” and “grammatically or syntactically adapt[] the response.” (Tr. Tx. at 710:19-711:8 (Johnson).) It instead identifies black boxes—labeled “shared knowledge,” “intelligent hypothesis builder,” and “adaptive response builder”—as achieving the claimed results, but without “detailed steps on how one would carry them out.” (Tr. Tx. at 710:19-711:8.) As such, a person reading the specification “would not know that the inventors actually invented [the claimed invention] and [that] they know how to do it.” (Id.) This is evidenced by the fact that the specification came from the VUE paper that Mr. Freeman testified was written as a “vision piece” that only lays out “the principle of a cooperative conversation that Mr. [Paul] Grice described in his paper in 1975” without specifying how these principles can be applied by computers. (Tr. Tx. at 286:3-8; 287:18-22; 288:20-289:4.)

Accumulate short- and long-term knowledge. The specification again fails to disclose how these claimed functions are performed. Named inventor Tom Freeman admitted that the claimed ideas were not his but instead Professor Paul Grice’s ideas disclosed in a 1975 paper. (Tr. Tx. at 288:4-13.) The specification describes the “shared knowledge” block by giving some examples of what short-term shared knowledge could be, but does not provide any details on how shared knowledge is

accumulated, stored, and used later to determine contexts. (Tr. Tx. at 711:9-712:10 (Johnson).) Similarly, the specification provides some examples of what long-term shared knowledge could be, but does not explain how it is accumulated or how it is used along with short-term shared knowledge to identify a context. (Tr. Tx. at 712:16-713:7 (Johnson).) Claim 13 of the ’681 patent is therefore invalid for lack of written description.

Identify a context. No reasonable jury could conclude that the inventors had possession of the claimed invention of the ’681 patent at the application filing date. See Ariad, 598 F.3d at 1351. The patent contains a block diagram of the claimed conversational speech engine with a block labelled “context determination,” but does not explain its inner workings at all, much less how it uses both short-term and long-term shared knowledge to “identify a context” as claimed. (Tr. Tx. at 713:15-714:17.) That the inventors did not yet possess the invention is further evidenced by the fact that both Mr. Kennewick and Mr. Freeman admitted that they did not invent a conversational language processor. (Tr. Tx. at 226:16-25; 283:4-8.) Furthermore, Mr. Freeman admitted that as of 2010, years after the filing of the ’681 patent, VoiceBox had not yet produced a product that utilized long-term memory, and software development was still undergoing an iterative process to reach that point. (Tr. Tx. at 278:22-279:9.)

Grammatically or syntactically adapts the response. No reasonable jury could conclude that the inventors of the ’681 patent had possession of the claimed invention as of the application filing date. See Ariad, 598 F.3d at 1351. The specification illustrates a block labelled “Adaptive Response Builder” and states that “contextually sensitive intelligent responses” may be generated from intelligent hypotheses, but does not provide sufficient detail on how said “contextually sensitive intelligent responses” are generated or how the claimed “grammatically or syntactically” adapting would be performed. (Tr. Tx. at 714:18-715:6.) In fact, Mr. Freeman admitted that

“adaptive response” was a concept that came from Grice’s 1975 paper. Claim 13 of the ’681 patent is therefore invalid for lack of written description.

Claim 40 of the ’176 patent. “Select an advertisement in the context established for the natural language utterance.” No reasonable jury could conclude that the inventors of the ’176 patent had possession of the claimed invention as of the application filing date. See Ariad, 598 F.3d at 1351. The specification identifies a single black box called an “advertisement selection 250” as performing this claim limitation. The trial record