`
`______________________________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`
`
`
`TOYOTA MOTOR CORPORATION
`
`Petitioner
`
`
`
`v.
`
`
`
AMERICAN VEHICULAR SCIENCES LLC
`
`Patent Owner
`
`
`
`Patent No. 6,772,057
`
`Issue Date: August 3, 2004
`
`Title: VEHICLE MONITORING SYSTEMS USING IMAGE PROCESSING
`
`
`
DECLARATION OF CRIS KOUTSOUGERAS, PH.D. IN SUPPORT OF
AVS’S RESPONSE UNDER 37 C.F.R. § 42.120
`
`Case No. IPR2013-00419
`
`
`
`
`
`
`
`
`
`
`
`AVS EXHIBIT 2001
`Toyota Inc. v. American Vehicular Sciences LLC
`IPR2013-00419
`
`
`
I. INTRODUCTION AND SUMMARY OF OPINIONS
`
1. My name is Cris Koutsougeras. I am a professor in the Department of Computer Science and Industrial Technology at Southeastern Louisiana University, Hammond, LA, where I teach courses in Computer Science and in Engineering Technology. My background consists of degrees in Electrical Engineering, Computer Engineering, and Computer Science. Of my past work, most pertinent to the present review is my research on neural networks, robotics control and intelligence, and sensors and their interfacing.
`
`2.
`
`I have been hired by American Vehicular Sciences (“AVS”) in
`
`connection with the above-captioned Inter Partes Reexamination Proceeding
`
`(“IPR”) before the United States Patent and Trademark Office. In the below
`
`paragraphs, I provide my opinion that at least claims 1-4, 7-10, 30-34, 37-39, 41,
`
`56, 59-62, and 64 of U.S. Patent No. 6,772,057 (“the ‘057 patent”) at issue in the
`
`IPR are not anticipated or obvious in view of the grounds for review.
`
II. PROFESSIONAL BACKGROUND AND QUALIFICATIONS
`
3. My background consists of a B.S. degree in Electrical Engineering, an M.S. degree in Computer Engineering, and a Ph.D. degree in Computer Science.
`
`4.
`
`I received my B.S. in 1983 from the National Technical University of
`
`Athens, my M.S. degree in 1984 from the University of Cincinnati, and my PhD in
`
`1988 from Case Western Reserve University. My Ph.D. research and dissertation
`
`
`
`1
`
`
`
`was on the topic of neural networks and more specifically on algorithms for
`
`training feed-forward types of neural networks.
`
`5.
`
`I also have experience in automotive technology involving external
`
`object detection and collision warning systems, and the use of pattern recognition
`
`technology in such systems, as I have participated in the DARPA 2005 Grand
`
`Challenge competition with a team that built an autonomous vehicle designed to
`
`drive completely unassisted
`
`in unknown and unrehearsed cross-country
`
`environments. The vehicle was a regular production SUV which was modified to
`
`be controlled by computers aided by sensors including Ladars and GPS.
`
`6.
`
`I am a professor at the department of Computer science and Industrial
`
`Technology at Southeastern Louisiana University, Hammond, LA., teaching
`
`courses in Computer Science and in Engineering Technology. I have served as
`
`department head of that department from 2006 to 2011.
`
`7.
`
`Prior to joining Southeastern LA University, I was a faculty of the
`
`department of Electrical Engineering and Computer Science from 1988 to 2006.
`
`8.
`
`A more detailed account of my work experience, qualifications, and
`
`list of publications is included in my Curriculum Vitae, which is attached to this
`
`Declaration.
`
`
`
`
`
`
`
`2
`
`
`
`III. COMPENSATION AND MATERIALS CONSIDERED
`
`9.
`
`I am being compensated for my time as an expert witness on this
`
`matter at $260 per hour. My compensation, however, does not depend in any way
`
`on my opinions or conclusions, nor on the result of this proceeding.
`
`10.
`
`11.
`
`12.
`
`I am not an employee of AVS or any affiliate, parent, or subsidiary.
`
`I have not served as an expert in the last 10 years.
`
`In arriving at my opinions, I considered the following documents:
`
• U.S. Patent No. 6,772,057;

• Prosecution History of U.S. Pat. No. 6,772,057;

• The Patent Trial and Appeal Board’s Decision to Institute Inter Partes Review;

• Toyota’s Petition for Inter Partes Review;

• The Declaration of Dr. Nikolaos Papanikolopoulos;

• The Transcript of the Deposition of Dr. Papanikolopoulos;

• U.S. Pat. No. 6,553,130 to Lemelson;

• U.S. Pat. No. 5,245,422 to Borcherts;

• U.S. Pat. No. 5,214,408 to Asayama;

• Japanese Unexamined Patent Application Publication H07-125567 to Watanabe;

• Pomerleau, “ALVINN: An Autonomous Land Vehicle in a Neural Network,” Technical Report AIP-77, March 13, 1990;

• Rombaut, M., “PRO-LAB2: A Driving Assistance System,” Proceedings of the 1993 IEEE/Tsukuba International Workshop on Advanced Robotics, Tsukuba, Japan, Nov. 8-9, 1993;

• Suzuki et al., “Driver Environment Recognition for Active Safety,” Toyota Technical Review Vol. 43, No. 1 (Sept. 1993);

• Japanese Unexamined Patent Application Publication H06-124340 to Yamamura;

• Vincent W. Porto and David B. Fogel, “Neural Network Techniques for Navigation of AUVs,” Proceedings of the Symposium on Autonomous Underwater Vehicle Technology AUV ’90, Washington, 5-6 June 1990; and

• The additional patents and references I cite in this declaration in support of my opinions.
`
`IV. OVERVIEW OF THE ‘057 PATENT
`
`A. Technical Overview of the ‘057 Patent
`
13. The ‘057 patent relates to an arrangement for monitoring an environment exterior of a vehicle. In particular, the ‘057 patent invention involves classifying or identifying objects outside of the vehicle, and affecting other systems in the vehicle in response to the classification or identification.
`
`14. Each of the claims at issue in the IPR requires at least one of three
`
`specific features. Claims 1-4, 7-10, 31, 41, 56, 59-62, and 64 require using a
`
`“trained pattern recognition algorithm” that is generated from “data of possible
`
`exterior objects and patterns of received waves from the possible exterior objects.”
`
Claims 30-34 and 37-39 require “at least one receiver arranged on a rear view
`
`mirror of the vehicle.” And claims 40-41, 43, 46, 48, and 49 require “a plurality of
`
`receivers arranged apart from one another and to receive waves from different parts
`
`of the environment exterior of the vehicle.”
`
`15. With respect to claims 1-4, 7-10, 31, 41, 56, 59-62, and 64, the ‘057
`
`patent’s trained pattern recognition algorithm is trained with data of possible
`
`exterior objects and patterns of received waves. (See, e.g., ‘057 at 14:17-25,
`
`35:36-37:58, 39:63-40:9.) In other words, the ‘057 patent’s pattern recognition
`
`sub-system is trained to recognize how waves behave when they are received from
`
a given object. A trained pattern recognition model, such as a neural network, is very different from a traditional program-based model (otherwise known as a “symbolic” approach). A computer program can be used if there exists a well-understood process (a recipe, to put it plainly) that is expressible in finite terms and which can be used to determine the output that corresponds to an input. In order
`
`
`
`5
`
`
`
for a programmer to produce a program, he/she must know exactly this process and
`
`what part(s) or features of the input are relevant in this process. Sometimes,
`
`however, it can be very difficult, if not impossible, to know this recognition or
`
`decision process, or it may not be expressible in finite terms, or it may not be
`
`known which input parts (variables) or combination of input variables are
`
`sufficient to uniquely and confidently determine the output. Then the alternative to
`
`traditional programming is to use trainable systems which essentially use statistical
`
`methods to interpolate from input-output example instances.
`
16. What the '057 patent discloses is that waves received from objects should carry enough information in their patterns to identify these objects, their locations, etc. But it is very difficult to isolate and extract this information by some
`
`stepwise process which will yield this identification in some finite steps. This is
`
`because we may not be able to express this process in finite terms, and/or because
`
`we do not know which parts of the waves (input) to use and how to combine them
`
`in order to determine the output. Therefore, the ‘057 patent discloses the use of
`
`neural networks.
`
`17.
`
`In turn, neural networks require training to perform a task, using a
`
`certain set of inputs and the outputs that correspond to those inputs (these input-
`
`output pairs comprise the training set). The choice of training set is an important
`
`key to the quality of the training of a neural net. With neural nets the developer
`
`
`
`6
`
`
`
`does not need to encode the process which the system is expected to perform
`
`(because the system is expected to “learn” it). Instead, the developer “teaches” or
`
`“trains” the system what it is expected to perform by providing example input and
`
`output samples. But what the system will learn, or how well it will learn, depends
`
`on what was provided as the training set as well as what was chosen to provide as
`
`input. There is not a single, unique way or method known to choose the input
`
`features and training set that will guarantee the optimal training for any possible
`
`application. The ‘057 patent suggests using “data of possible exterior objects and
`
`patterns of received waves.”
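By way of illustration only, the following simplified sketch (entirely my own and hypothetical; it is not drawn from the ‘057 patent, Lemelson, or any other reference, and its feature names and data are invented) shows how a trainable system learns from a training set of input-output example pairs rather than from an explicitly programmed recipe:

```python
# Hypothetical illustration: a single perceptron "trained" on labeled
# input-output pairs (the training set).  The developer never encodes the
# decision rule; the weights are adjusted from the examples themselves.

def train_perceptron(training_set, epochs=20, lr=0.1):
    """training_set: list of (features, label) pairs, with label 0 or 1."""
    n = len(training_set[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in training_set:
            # Predict with the current weights, then nudge them toward
            # the labeled (desired) output -- this step is the "training".
            activation = bias + sum(w * x for w, x in zip(weights, features))
            predicted = 1 if activation > 0 else 0
            error = label - predicted
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if bias + sum(w * x for w, x in zip(weights, features)) > 0 else 0

# Invented training set: each input is a pair of made-up wave features,
# each output label is 1 ("vehicle") or 0 ("not a vehicle").
examples = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
w, b = train_perceptron(examples)
```

The point of the sketch is that the developer supplies only the example pairs; the mapping is induced during training, and what the system learns depends on which examples and input features were chosen.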
`
`18. Considering an object detection system, there might be an infinite
`
`number of possible input objects, when one factors in possible shapes, colors,
`
`angles, etc. In particular, in a vehicle-based system for detecting cars, it would be
`
`difficult to program such a system to compare a received image of a car to a
`
`database of images of all possible car models, in all possible colors, from all
`
`possible angles. Pattern recognition algorithms accommodate large variability in
`
`possible targets. A pattern recognition system based on a neural network is
`
`different from a traditional computer program because it does not just compare a
`
`detected object to a database to find a match. For example, a pattern recognition
`
`algorithm calculates degrees of similarity between something it has been informed
`
`is a car, versus something it has been informed is not a car, and does so based on
`
`
`
`7
`
`
`
`statistics extracted from the training set and reflected in its structure during
`
`training. The larger the training set, with more and balanced positive and negative
`
`examples that the system is given, the higher the degree of confidence that it will
`
`be properly trained to perform the intended function.
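The distinction from database matching can be illustrated with another hypothetical sketch of my own (the data and feature names are invented): the classifier below scores degrees of similarity to labeled “car” and “not car” examples, instead of searching a database for an exact match:

```python
# Hypothetical illustration: a nearest-centroid classifier.  It does not
# look up the sample in a database of all possible cars; it measures how
# similar the sample is to the statistics of its labeled training examples.

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(sample, car_examples, not_car_examples):
    """Return ("car" or "not car", confidence in (0.5, 1.0])."""
    d_car = distance(sample, centroid(car_examples))
    d_not = distance(sample, centroid(not_car_examples))
    label = "car" if d_car < d_not else "not car"
    confidence = max(d_car, d_not) / (d_car + d_not)  # closer class dominates
    return label, confidence

cars = [(0.9, 0.8), (0.8, 0.9)]        # invented feature vectors
not_cars = [(0.1, 0.2), (0.2, 0.1)]
label, conf = classify((0.7, 0.7), cars, not_cars)
```

A sample never seen in training can still be classified, because the decision rests on similarity statistics rather than on an exact stored copy.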
`
`19.
`
` The ‘057 patent discloses and claims a specific method for generating
`
`the “training set.” The ‘057 patent discloses training the algorithm with “data of
`
`possible exterior objects and patterns of received waves from the possible exterior
`
`objects.” For example, if the vehicle uses a radar receiver, a neural network could
`
`be trained with examples of received radar waves from possible objects such as
`
`cars, plus labels to indicate the classification and possibly other information
`
relating to the object. This “other information” might include geographic information system (GIS) data relating to GPS information. The statement “received
`
`waves from possible exterior objects” is understood to describe actual readings of
`
`real waves from actual possible exterior objects.
`
`20.
`
`In the case of a radar system, the examples of received radar waves
`
`from possible objects used to train the system are real radar waves, so that the
`
`system knows how to recognize radar waves received from that same object or a
`
`similar one when the vehicle is later driving in actual conditions. This can be
`
`done, for example, by putting actual examples of a possible object in front of a
`
vehicle radar system, subjecting the object to radar waves that are received back
`
`
`
`8
`
`
`
`by the system, and then labeling the object for the system. This is different from
`
`other possible ways to train a pattern recognition system, such as through
`
`completely simulated data, which I discuss below.
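A hypothetical sketch (my own; the data layout and names are invented, and nothing here is taken from the ‘057 patent or Lemelson beyond the general concept) of how such a training set could be assembled from actual recorded readings paired with labels:

```python
# Hypothetical illustration: building a supervised training set from
# actual recorded sensor readings of possible exterior objects, each
# paired with a label supplied by the developer.

def build_training_set(recordings):
    """recordings: list of dicts like {"waveform": [...], "object": "car"}.
    Returns (input, output) pairs suitable for supervised training."""
    training_set = []
    for rec in recordings:
        features = rec["waveform"]          # the received-wave pattern
        label = 1 if rec["object"] == "car" else 0
        training_set.append((features, label))
    return training_set

# Invented recorded readings (a real system would log many thousands).
logged = [
    {"waveform": [0.9, 0.7, 0.8], "object": "car"},
    {"waveform": [0.2, 0.1, 0.1], "object": "tree"},
]
pairs = build_training_set(logged)
```

The same pairing step could instead be fed with simulated waveforms, which is precisely why the source of the training data is a meaningful distinction.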
`
21. With respect to claims 30-34 and 37-39, the ‘057 patent requires a receiver that is “on” the rearview mirror, not merely “near” the rearview mirror, or “in combination with” the rearview mirror. In my opinion, that
`
`necessarily means that the receiver must be of the type and size that it can fit “on”
`
`the rearview mirror. For example, a large, bulky receiver or a receiver mounted on
`
`the exterior of the vehicle would not be “on” the rearview mirror. In addition,
`
claim 31 also requires a “trained pattern recognition algorithm” that is generated
`
`from “data of possible exterior objects and patterns of received waves from the
`
`possible exterior objects.”
`
`22. With respect to claims 40-41, 43, 46, 48, and 49, I understand that
`
`AVS is only presenting arguments to overcome the grounds of rejection with
`
respect to claim 41, which requires a “trained pattern recognition algorithm” that is
`
`generated from “data of possible exterior objects and patterns of received waves
`
`from the possible exterior objects.” Therefore, I will only address claim 41.
`
`23.
`
`I also considered the prosecution history of the ‘057 patent. The
`
`prosecution history of the ‘057 patent did not involve any arguments or disclaimers
`
`that were relevant to my opinions in this declaration.
`
`
`
`9
`
`
`
`B. Claim Construction
`
`24.
`
`I understand that the first step in any invalidity analysis is to construe
`
`the meaning of the claims. The Board has done so in its Decision Instituting Inter
`
`Partes Review. In particular, the Board made the following claim constructions:
`
`25.
`
`“trained pattern recognition algorithm” The Board defined this as “an
`
`algorithm that processes a signal that is generated by an object, or is modified by
`
`interacting with an object, in order to determine to which one of a set of classes the
`
`object belongs, the algorithm having been taught, through a variety of examples,
`
`various patterns of received signals generated or modified by objects.”
`
`26.
`
`“trained pattern recognition means” The Board defined this as “a
`
`neural computer or neural network trained for pattern recognition, and equivalents
`
`thereof.”
`
27. “identify” / “identification”: The Board defined “identify” as “determine
`
`that the object belongs to a particular set or class” and “identification” as
`
`“determination that the object belongs to a particular set or class.”
`
28. “exterior object”: The Board defined this as “a material or physical
`
`thing outside the vehicle, not a part of the roadway on which the vehicle travels.”
`
29. “rear view mirror”: The Board defined this as “a mirror that faces to
`
`the rear, which necessarily excludes non-rear-facing mirrors.”
`
`
`
`10
`
`
`
30. “transmitter”: The Board defined this as “encompassing devices that
`
`transmit any type of electromagnetic waves, including visible light.”
`
C. Person of Ordinary Skill in the Art

31. I understand that all my opinions with respect to the validity
`
`(including claim construction) of the ‘057 patent are to be considered from the
`
`viewpoint of the hypothetical person of ordinary skill in the art as of the date of the
`
`invention. I understand that this hypothetical person of ordinary skill in the art is
`
`considered to have the normal skills and knowledge of a person in a certain
`
`technical field, as of the time of the invention at issue. I understand that the factors
`
`that may be considered in determining the level of ordinary skill in the art include
`
`the education level of the inventor, the types of problems encountered in the art,
`
`prior art solutions to those problems, the educational level of active workers in the
`
`field, the rapidity with which innovations are made, and the sophistication of the
`
`technology.
`
`32.
`
`In my opinion, based on my experience and knowledge in the field,
`
`such a person would have at least a bachelor’s degree in a relevant engineering
`
`field and at least some professional experience, perhaps two to three years,
`
`working with exterior monitoring or object detecting systems as well as pattern
`
recognition methods, or such a person could have more experience or education, such as a master’s or doctoral degree.
`
`
`
`11
`
`
`
`33.
`
`I have read the opinion of Dr. Papanikolopoulos, and I have no
`
`fundamental dispute with his proposed definition. Therefore, I have no objection
`
`to using it for the purposes of my analysis.
`
D. Scope and Content of the Prior Art

34. In my opinion, the scope and content of the prior art would have been
`
`narrower than that offered by Dr. Papanikolopoulos. In my opinion, the scope and
`
`content of the prior art would not have generically included any “vehicle sensing
`
`systems,” as there are many vehicle sensing systems that do not specifically target
`
`the “classification”, “identification”, or “location” of external objects. In my
`
opinion, the scope and content of the prior art that is relevant here is that which
`
`would have included sensors and pattern recognition algorithms for object
`
`classification, including those for automotive use.
`
`35.
`
`I do not disagree that the references offered by Dr. Papanikolopoulos
`
`and applied by the Board in its Institution Decision are within the scope and
`
`content of the prior art to the ‘057 patent. I do, however, disagree that any of the
`
`references invalidate the ‘057 patent claims that I address below.
`
`V. LEGAL STANDARDS APPLIED
`
`36.
`
`I am not an expert in patent law, and I am not purporting to provide
`
`any opinions regarding the correct legal standards to apply in these proceedings. I
`
`
`
`12
`
`
`
`have been asked, however, to provide my opinions in the context of the following
`
`legal standards that have been provided to me by AVS’s attorneys.
`
`37. Anticipation: It is my understanding that a patent is invalid as
`
`anticipated if each and every limitation of the claimed invention is disclosed in a
`
`single prior art reference, either expressly or inherently, such that one of ordinary
`
`skill in the art would be enabled to make the claimed invention without undue
`
`experimentation. For anticipation, every limitation of a claim must appear in a
`
`single prior art reference as arranged in the claim. An anticipating reference must
`
`describe the patented subject matter with clarity and detail to establish that the
`
`subject matter existed in the prior art and that such existence would be recognized
`
`by one of ordinary skill. The prior art is enabling if the disclosure would have put
`
`the public in possession (i.e., provided knowledge) of the claimed invention and
`
`would have enabled one of ordinary skill to make or carry out the invention
`
`without undue experimentation.
`
`38.
`
`Inherency: I understand that if a prior art reference does not expressly
`
`disclose a claimed feature, but the teaching of the reference would necessarily
`
`result in a product with the claimed feature, then anticipation may be met
`
`inherently. For a prior art reference to inherently disclose a claimed feature,
`
`however, the feature must be necessarily present and may not be established just
`
`because it may be probable or possible. The mere fact that a condition may result
`
`
`
`13
`
`
`
`from a set of circumstances, or even probably results from the set of circumstances,
`
`is not sufficient for proof of inherency. Further, I understand that for the purposes
`
`of evaluating anticipation of a prior art reference, the reference must be interpreted
`
`from the understanding of one of ordinary skill in the art.
`
`39. Obviousness in General: I have been informed that a patent can also
`
`be invalidated through obviousness if the subject matter of a claim as a whole
`
`would have been obvious at the time of the invention to a person of ordinary skill
`
`in the art. I understand that obviousness allows for the combination of prior art
`
references. I have been informed that there are four basic inquiries that must be considered for obviousness:

a. What is the scope and content of the prior art?

b. What are the differences, if any, between the prior art and each claim of the patent?

c. What is the level of ordinary skill in the art at the time the invention of the patent was made?

d. What, if any, secondary considerations of non-obviousness are present?
`
`I also understand that when prior art references require selective combination to
`
`render a patent obvious, there must be some reason to combine the references other
`
`than hindsight. Even if there would have been an apparent reason for combining
`
`prior art references, however, there must also have been a reasonable expectation
`
`of success. I understand that features from prior art references need not be
`
`
`
`14
`
`
`
`physically combinable (i.e., a combination may be obvious if one of ordinary skill
`
`in the art would know how to make any necessary modifications to combine
`
`features from prior art references), but that this concept does not negate the
`
`requirement of a reasonable expectation of success. One must also consider the
`
`evidence from secondary considerations including commercial success, copying,
`
`long-felt but unresolved needs, failure of others to solve the problem, unexpected
`
`results, and whether the invention was made independently by others at the same
`
`time of the invention. I understand that these secondary considerations can
`
`overcome a finding of obviousness.
`
`VI. OPINIONS REGARDING VALIDITY OF ‘057 PATENT CLAIMS
`
`A. None of the Cited References Disclose a “Pattern Recognition
`Algorithm Generated From Data of Possible Exterior Objects and
`Patterns of Received Waves from the Possible Exterior Objects”
`(claims 1-4, 7-10, 31, 41, 56, 59-61, 62, 64)
`
`
`40. As I previously discussed, independent claims 1 and 56 both require a
`
`“trained pattern recognition means” that comprises a “pattern recognition
`
`algorithm generated from data of possible exterior objects and patterns of received
`
`waves from the possible exterior objects.” (‘057 patent at claims 1 and 56.)
`
`Therefore, dependent claims 2-4, 7-10, 59-61, 62, and 64 also require this
`
`limitation. In addition, dependent claims 31 and 41 also both require this
`
`limitation. (See ‘057 patent at claims 31 and 41.)
`
`41. The only prior art reference, according to the Board’s preliminary
`15
`
`
`
`
`
`Decision to Institute Inter Partes Review, which allegedly discloses a “trained
`
`pattern recognition means” comprising a “pattern recognition algorithm generated
`
`from data of possible exterior objects and patterns of received waves from the
`
`possible exterior objects,” is Lemelson. (See Board Decision at pp. 15-21, 38.)
`
`Because of that, the Board only granted review of claims 1-3, 7-10, 41, 56, 60-61,
`
`and 64 for alleged anticipation in view of Lemelson. Therefore, my understanding
`
`is that the only prior art that is pertinent to this review (validity of these claims) is
`
`that disclosed by Lemelson and thus my considerations should be restricted to the
`
`disclosures of Lemelson. The Board also granted review of claims 4, 59, and 62 on
`
`this ground, as well as for alleged obviousness in view of other limitations of those
`
`claims. In other words, the Board did not allege that any other reference rendered
`
`obvious a “trained pattern recognition means” comprising a “pattern recognition
`
`algorithm generated from data of possible exterior objects and patterns of received
`
`waves from the possible exterior objects.” I understand that the Board’s decision
`
`to Institute Inter Partes Review is final as to grounds not adopted. (See Board
`
`Decision at p. 45 (“FURTHER ORDERED that all other grounds raised in the
`
`petition are denied.”).)
`
`42. Accordingly, I understand that if Lemelson is found to not disclose a
`
`“trained pattern recognition means” comprising a “pattern recognition algorithm
`
`generated from data of possible exterior objects and patterns of received waves
`
`
`
`16
`
`
`
from the possible exterior objects,” then claims 1-4, 7-10, 31, 41, 56, 59-61, 62, and 64
`
`overcome all remaining grounds and will be upheld in the inter partes review. I
`
`understand that this is the only argument with respect to the ‘057 patent that AVS
`
`is presenting. For the reasons discussed below, in my opinion, Lemelson does not
`
`in fact disclose, either expressly or inherently, the required “trained pattern
`
`recognition means” comprising a “pattern recognition algorithm generated from
`
`data of possible exterior objects and patterns of received waves from the possible
`
exterior objects.” And while the Board did not adopt this argument in any other grounds, neither does any of the other prior art references raised by Toyota in its Petition disclose this limitation. For that reason, and considering the above-mentioned constraints, in my
`
`opinion, claims 1-4, 7-10, 31, 41, 56, 59-61, 62, and 64 are patentable over the
`
`prior art.
`
`a) Lemelson
`
`Lemelson’s Disclosure of Training a Pattern Recognition
`Algorithm—Lemelson Does Not Expressly Disclose the Type and
`Nature of the Training or “Known Inputs” Used for Training
`
`43. Lemelson discloses a system for identifying objects exterior to a
`
`vehicle, and it does disclose using a type of pattern recognition algorithm, a neural
`
network. (See Lemelson at Fig. 3, 7:47, 8:1.) The only pertinent discussion in
`
`Lemelson, however, relating to generating the neural network, is found at column
`
`8, line 4, which states “[t]raining involves providing known inputs to the network
`
`
`
`17
`
`
`
`resulting in desired output responses.” (Lemelson at 8:4-6.)
`
`44.
`
`I note that this is also the only reference to training in Lemelson found
`
`in the declaration of Dr. Papanikolopoulos. (See Exhibit 2002, Papanikolopoulos
`
`Decl. at ¶¶ 47-64.) Specifically, Dr. Papanikolopoulos only quotes the sentence
`
`from Lemelson that states “[t]raining involves providing known inputs to the
`
`network resulting in desired output responses.” (Id. at ¶55.) Nowhere else in his
`
`declaration does he use the words “training”, “trained,” or “generate” when
`
`discussing the Lemelson reference.
`
`Dr. Papanikolopoulos’s Deposition Testimony
`
`45. At his deposition, Dr. Papanikolopoulos tried to suggest that other
`
`disclosure in Lemelson also allegedly relates to the nature and type of training
`
`involved. (See Exhibit 2002, Papanikolopoulos Dep. Tr. at 163:4-165:13, 167:4-
`
169:22.) But in fact, the disclosure cited by Dr. Papanikolopoulos points out
`
`a specific neural network structure and where it is integrated into Lemelson's
`
`description, but it does not relate to the training per se of a pattern recognition
`
`algorithm. For example, Dr. Papanikolopoulos referred to Figures 1-5 at his
`
`deposition. None of those Figures refers to training or includes the word training.
`
`Each of those figures only relates to how the system gathers data while in use
`
`(after it has been trained). Dr. Papanikolopoulos’s argument, for example, that
`
`Figure 1’s reference to an “image analyzing computer” discloses the nature and
`
`
`
`18
`
`
`
`extent of the “training” is unfounded. (See Exhibit 1002, Lemelson at Fig. 1.) The
`
`fact that a computer was used to analyze images obtained when the system is in use
`
`(after it has been trained) tells nothing about the nature and extent of the training
`
`phase. An image analysis computer could be used to analyze received camera
`
`images when the vehicle is driving along the road, but the system could have been
`
`trained with something else entirely (such as simulated inputs as I discuss below).
`
Nor does the fact that Figure 5 refers to “images” provide any information at all
`
`about the training phase of the algorithm. Again, the “images” referred to in
`
`Figure 5 are the images gathered by the camera/receiver when the vehicle is being
`
`driven in normal operating mode (post-training), to identify objects—nothing in
`
`that Figure refers to the training phase of the algorithm. (See Exhibit 1002,
`
`Lemelson at Fig. 5.)
`
`46. Dr. Papanikolopoulos appears to conjecture that the way a pattern
`
`recognition system is used after training necessarily discloses the methods and
`
`means by which it was trained during the training phase, and this is where we
`
`sharply differ. There are many different modes in which the training phase can be
`
`conducted independently of the precise intended use in the actual operating phase
`
`of the application. In this case for example, the training phase can be conducted in
`
`real time while driving around a prototype in real conditions, or it can be
`
`conducted by storing sensor data from driving a prototype in the actual
`
`
`
`19
`
`
`
`environment and later using these data in a lab with or without pre-processing, or it
`
`can be conducted in the controlled environment of a lab in which real-life
`
`conditions are re-created, or it can be conducted with completely simulated data
`
`generated by software in a lab, etc. My point is that the intended use after training
`
`does not necessarily and uniquely determine how the training was conducted.
`
`Thus I do not agree that the mere adoption of the specific neural network in
`
`Lemelson's disclosures implies the specific training method adopted for any other
`
`related use of a neural network.
`
`47.
`
`In fact, Dr. Papanikolopoulos at his deposition seemed to have largely
`
`avoided providing clear answers to questions on the issue of Lemelson’s disclosure
`
`of training. For example, on the question of where in his declaration he used the
`
`word “training” other than his citation to the single sentence from Lemelson
`
`referring to “known inputs,” he pointed at other features in Lemelson, which
`
nevertheless did not relate to the question. (See Exhibit 2002, Papanikolopoulos Dep. Tr. at 167:17-169:2.) In addition, his answers at his deposition (specifically
`
`170:11-172:6 and 173:4-173:14) regarding Lemelson's disclosure(s) of the
`
“training” did not specifically address the question and instead seemed to be efforts to broaden the scope of the question and introduce the training
`
`issue as an anticipation. Dr. Papanikolopoulos did not provide a clear and definite
`
`answer to the simple question regarding whether the word “training” appeared
`
`
`
`20
`
`
`
`anywhere in his declaration discussion of Lemelson other than the single sentence
`
`in paragraph 55 of his declaration (it clearly does not).
`
`48. Throughout his deposition, Dr. Papanikolopoulos evaded many other
`
`clear questions. For example, when he was asked to provide some examples of
`
`“trained pattern recognition algorithms” that are not “generated from data of
`
`possible exterior objects and patterns of received waves from the possible exterior
`
`objects”, Dr. Papanikolopoulos simply recited back the claim construction and
`
stated that he had to “stick to the construction that was given,” or stated that he “was
`
`not asked this,” or he recited the disclosure of the Lemelson reference, even after
`
`AVS’s counsel repeatedly stated that his answers did not respond to the question
`
`asked and that he was required to give a substantive response. (See, e.g.,
`
`Papanikolopoulos Dep. Tr. at 89:22-97:4.)
`
`49.
`
`In another instance, Dr. Papanikolopoulos was asked of his opinions
`
`regarding a receiver arranged “on a rearview mirror,” AVS’s counsel asked him
`
`what should have been a simple question—if a receiver on the back bumper of a
`
`car is “arranged on the rearview mirror” as required by claim 30? His answer to
`
`such a clearly binary question appears extremely evasive:
`
`Q. If I put the receiver on the back bumper of the car, can you tell me if
`that’s arranged on the rearview mirror?
`
`A. What is the size in this case?
`
Q. Any receiver. Any receiver that’s positioned on the back bumper of a
car, is that positioned on the rearview mirror?
`
`A. Is it supposed to be a receiver that’s too big to fit on the rear end of the
`car?
`
`Q. I’m not asking anything like that. If I point to a car—I put a car in front
`of you. I point to the camera that’s on the rear bumper and I ask you: ‘Is
`that camera on the rear bumper on the rearview mirror,’ you can’t answer
`yes or no?
`
`A. Because there is no yes-or-no answer on this.
`
(Papanikolopoulos Dep. Tr. at 155:3-23.) He advocated his positions so strongly that some of his opinions do not appear reasonable.
`
`50.
`
`In any event, Dr. Papanikolopoulos’s testimony does not change the
`
`fact (and the conclusion of the Board in its Decision to Institute Inter Partes
`
`Review) that the only sentence in Lemelson that refers to training the pattern
`
`recognition algorithm is the sentence at column 8, line 4. I tried to find another
`
`instance of the