`__________________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`__________________________________________________________________
`
`TOYOTA MOTOR CORPORATION
`
`Petitioner
`
`v.
`
`AMERICAN VEHICULAR SCIENCES,
`
`Patent Owner
`
`Patent No. 5,845,000
`Issue Date: December 1, 1998
`Title: OPTICAL IDENTIFICATION AND MONITORING SYSTEM USING
`PATTERN RECOGNITION FOR USE WITH VEHICLES
`__________________________________________________________________
`
`REPLY DECLARATION OF NIKOLAOS PAPANIKOLOPOULOS, PH.D.
`
`
`Case No. IPR2013-00424
`__________________________________________________________________
`
`
`
`
`
`
`
`IPR2013-00424 – Ex. 1020
`Toyota Motor Corp., Petitioner
`1
`
`
`
`I, Nikolaos Papanikolopoulos, Ph.D., hereby further declare and state as follows:
`
I. BACKGROUND

1. My employment and compensation information have not changed since I submitted my original declaration in support of Toyota’s Petition for Inter Partes Review of U.S. Patent No. 5,845,000 (“the ’000 patent”).

2. A copy of my updated curriculum vitae is included herewith.
`
II. ASSIGNMENT AND COMPENSATION

3. I submit this declaration in support of Toyota’s Reply to Patent Owner’s Response (Paper 29, hereinafter “Response”) and in response to the Declaration (Exhibit 2002) and Deposition Testimony (Exhibit 1019) of Cris Koutsougeras.

4. Specifically, I have been asked to respond to Dr. Koutsougeras’s opinions regarding the disclosure in U.S. Patent No. 6,553,130 (“Lemelson”) relating to neural network training and regarding the combination of Lemelson with Japanese Publication No. JP-S62-131837 to Yanagawa (“Yanagawa”).
`
`5.
`
`The opinions expressed in this declaration are not exhaustive of my opinions
`
`on the patentability of any of the claims in the ’000 patent. Therefore, the fact that I
`
`do not address a particular point should not be understood to indicate any agreement
`
`on my part that any claim otherwise complies with the patentability requirements.
`
`In forming my opinion I have reviewed the following additional sources:
`
`
`
`
`Declaration of Chris Koutsougeras, PhD in Support of AVS’s Response
`
`Under 37 CFR §42.120 (Ex. 2002).
`
`
`
`2
`
`
`
`
`
`
`
`
`
`
`
`Decision on Institution of Inter Partes Review for U.S. Patent No.
`
`5,845,000 (Paper 16).
`
`Patent Owner’s Response (Paper 29).
`
`U.S. Patent No. 5,537,327 (Exhibit 2004).
`
`The transcript from the deposition of Dr. Cris Koutsougeras in
`
`connection with this case (Exhibit 1019).
`
`6.
`
`The opinions expressed in this declaration are my personal opinions and do not
`
`reflect the views of University of Minnesota.
`
III. ANALYSIS

A. Preliminary Understanding of Dr. Koutsougeras’ Positions

7. As a preliminary matter, I understand from Dr. Koutsougeras’s declaration and deposition that he divided the data that could have been used for pattern recognition algorithm training (in 1995) into three areas: training with data and waves from actual objects (“real data”), training with simulated data and waves (“simulated data”), and training with “data and waves not representing exterior objects to be detected” (“partial data”). Ex. 1019 at 86:25-87:14, 132:24-138:5, 163:18-164:7. As I understand it, Dr. Koutsougeras opined that only training with real data would meet the claim limitations “pattern recognition algorithm generated from data of possible exterior objects and patterns of received electromagnetic illumination from the possible exterior objects” and “pattern recognition algorithm generated from data of possible sources of radiation including lights of vehicles and patterns of received radiation from the possible sources.”
`
`8.
`
`As I understand it, Dr. Koutsougeras further opined that Lemelson’s disclosure
`
`of training with “known inputs” does not necessarily mean training with “real data”
`
`because it could have been referring instead to “simulated data” or “partial data.”
`
`9.
`
`For the reasons I discuss below, I disagree with Dr. Koutsougeras’s
`
`interpretation of Lemelson. In my opinion, one of ordinary skill in the art would have
`
`understood the phrase “known inputs” in Lemelson to refer to “real data” because
`
`Lemelson’s neural network was trained to identify exterior objects, and one of
`
`ordinary skill in 1995 would have known that training with “real data” would have
`
`yielded the best results for this purpose. One of ordinary skill in the art would not
`
`have understood that “known inputs” referred to simulated data or partial data in the
`
`context of Lemelson’s disclosure, since one of ordinary skill would not have had any
`
`reason to believe that those categories of data would have been effective for the
`
`purpose of identifying exterior objects or sources of radiation.
`
B. One of Ordinary Skill Would Have Understood that Training of the Lemelson Neural Network Would Have Used Real Data

10. Lemelson discloses a collision avoidance system, wherein a neural network is used to identify many different types of objects that could present themselves as hazards on a roadway, including, for example, road barriers, trucks, automobiles, pedestrians, signs and symbols, etc. Ex. 1002 at 5:41-43; 8:1-6. Lemelson explains:

    Neural networks used in the vehicle . . . warning system are trained to recognize roadway hazards which the vehicle is approaching including automobiles, trucks, and pedestrians. Training involves providing known inputs to the network resulting in desired output responses. The weights are automatically adjusted based on error signal measurements until the desired outputs are generated. Various learning algorithms may be applied. Adaptive operation is also possible with on-line adjustment of network weights to meet imaging requirements.

Ex. 1002 at 8:1-6.
`
11. One of ordinary skill in the art would have understood the phrase “known inputs” to refer to the use of real sensor data in the context of Lemelson. For example, one of ordinary skill would have understood that training a neural network could involve putting actual examples of real-world objects in front of a camera, imaging them, and providing feedback to the neural network as to the desired output responses corresponding to those images.
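The training procedure quoted above from Lemelson, in which known inputs are presented, the output error is measured, and the weights are adjusted until the desired outputs are generated, can be sketched with a minimal perceptron-style update rule. This is a hypothetical illustration of one generic learning rule of the kind available in 1995, not the specific implementation disclosed by Lemelson; the toy data below stand in for labeled sensor data of real objects.

```python
# Hypothetical sketch (not Lemelson's disclosed implementation) of the quoted
# training procedure: known inputs are presented, the output error is
# measured, and the weights are adjusted until the desired outputs result.

def train(samples, lr=0.1, epochs=100):
    """Perceptron-style training of a single linear threshold unit.

    samples: list of (input_vector, desired_output) pairs, i.e. the
    "known inputs" paired with their "desired output responses".
    """
    n = len(samples[0][0])
    w = [0.0] * n          # network weights, initially zero
    b = 0.0                # bias term
    for _ in range(epochs):
        for x, desired in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = desired - out                               # error signal measurement
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]  # automatic weight adjustment
            b += lr * err
    return w, b

def predict(w, b, x):
    """Apply the trained unit to a new input."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy "known inputs" with desired responses (a linearly separable stand-in
# for labeled sensor data of real-world objects):
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
```

Because the toy set is linearly separable, the repeated error-driven adjustments converge, and the trained unit reproduces the desired output for each known input.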
`
12. As set forth below, it is my opinion that one of ordinary skill would not have understood the phrase “known inputs” in the context of Lemelson to refer to “partial data” or “simulated data” because one of ordinary skill would have recognized that neither of these categories would have been effective for the intended purpose of training a neural network to identify various types of exterior objects or for identifying the sources of radiation. I often describe such ineffective training routines to my students as “garbage in–garbage out.”
`
`
`
`5
`
`
`
C. Dr. Koutsougeras’ Description of Partial Data is Inaccurate

13. One of ordinary skill in the art would not have understood the phrase “known inputs” to refer to partial data, such as license plate data, because they would have recognized that training a neural network with partial data would not have been successful for the purpose of identifying different exterior objects.
`
`14.
`
`For example, one of ordinary skill in the art would not have been able to use
`
`partial data, such as license plate data, to identify, classify, or locate pedestrians and to
`
`distinguish them from vehicles. While partial data may be useful in certain isolated
`
`situations, such as when there is only a single object of interest, partial data is not
`
`useful when there are many possible objects that need to be identified, such as is the
`
`case in Lemelson.
`
15. Also, when identifying objects that reflect radiation, the presence of occlusions and/or shadows in the environment exterior to a vehicle complicates training, because these occlusions and shadows may completely mask partial data.
`
16. One of ordinary skill in the art would not have understood that a neural network used for exterior object identification would be trained with partial data. Rather, one of ordinary skill would have understood that a neural network would be trained with all available sensor information to associate particular sensor information with desired output responses. The purpose of training a neural network is to identify the particular features in a scene that are important and that are indicative of the exterior object of interest. To detect objects using partial data, on the other hand, one of ordinary skill would already need to know the particular features that are important and indicative of the exterior object of interest (e.g., the outline and corner points of a license plate). The only example that Dr. Koutsougeras points to in his declaration to support his “partial data” theory is a pattern recognition system I worked on to detect license plates. Contrary to Dr. Koutsougeras’s assumption, this system did not involve trained pattern recognition.
`
17. Accordingly, one of ordinary skill would not have expected that training the neural network of Lemelson with partial data would have resulted in a system capable of identifying the exterior objects required by Lemelson, such as “pedestrians, barriers and dividers, turns in the road, signs and symbols.” Ex. 1002 at 5:42-43.
`
D. Dr. Koutsougeras’ Description of Simulated Data is Inaccurate

18. One of ordinary skill in the art would not have understood the phrase “known inputs” to refer to simulated data because they would have recognized that training a neural network with simulated data would not have been successful for the purpose of identifying different exterior objects. One of ordinary skill in the art in 1995 would have known that the generation of simulated data was not sophisticated enough to allow for training the type of neural network described by Lemelson. Thus, even if simulated data were used, the result would have likely been “garbage in–garbage out.”1

1 I note that AVS has taken two quotes from my deposition regarding simulated data out of context. First, the system I was talking about on page 48 of the transcript was my own system for identifying only license plates. See Ex. 1022 at 41:4-49:21. This system was not a collision avoidance system where different types of exterior objects needed to be identified; I was only locating license plates. Second, the question on page 102 referred to a system from the present day, and not from 1995. See Ex. 1022 at 102:18-22. Both of my statements at my deposition are fully consistent with my opinion here that one of ordinary skill in the art at the time of the publication of Lemelson would only have expected to use real data as known inputs to the neural network.

19. One of ordinary skill in the art would have known that simulated data suffered from several problems in 1995.

20. First, generation of simulated data would have required substantial computing power and special equipment, neither of which is disclosed by Lemelson. See, e.g., the Warp supercomputer used by Pomerleau, Ex. 1005 at 40. Lemelson does not disclose any computer hardware or methods for generating simulated data. See, e.g., Ex. 1002 at Fig. 1. Generation of a simulated data set would also have required an understanding of all of the possible exterior objects and radiation sources that a vehicle could encounter. One of ordinary skill would have recognized that it would have been easier simply to use real sensor data.

21. Second, the Lemelson neural network was trained to identify “other vehicles, pedestrians, barriers and dividers, turns in the road, signs and symbols.” As of 1995,
`
one of ordinary skill in the art would not have expected that a simulated data set could be readily generated that could accurately represent all exterior objects described by Lemelson as perceived by sensors on a vehicle. This type of simulation would have required modeling of both a moving camera and moving objects in a scene, such as pedestrians, which would have resulted in a very complex data set. Furthermore, one of ordinary skill would have recognized that all of these complexities would have been obviated by simply training the system with real data in a variety of situations.
`
22. Dr. Koutsougeras cites to U.S. Patent No. 5,537,327 in his discussion of simulated training data. However, his citation to the ’327 patent is misplaced. The disclosure of the ’327 patent relates to a different subject matter from that disclosed by the ’000 patent or in the Lemelson reference. It relates to the use of neural networks to identify fault impedances in electrical power systems. The ’327 patent does not involve classification, identification, or location of objects exterior to a vehicle, or for that matter, any exterior monitoring from a vehicle at all. Furthermore, the ’327 patent accounts for none of the complications that would have arisen when identifying possible exterior objects that could collide with a vehicle, as in Lemelson. Accordingly, the ’327 patent would not have indicated to one of ordinary skill in the art that “simulated data” could have been used as a known input to the Lemelson system.
`
`23.
`
`I also note that the Pomerleau publication, which Dr. Koutsougeras relied on
`
`in connection with IPR2013-00419 and in his deposition, instructs one of ordinary
`9
`
`
`
`
`
`skill in the art not to use simulated data for identification. For example, the
`
`Pomerleau thesis concluded that training with simulated data “has serious
`
`drawbacks.” Ex. 1005 at p. 40. Ultimately, Pomerleau concluded that simulated data
`
`should not be used to train a system. Id. at pp. 40, 56. Pomerleau reached this
`
`conclusion despite the fact that his computational needs were much less demanding
`
`than that required by Lemelson. Pomerleau explained that “differences between the
`
`synthetic road images on which the network was trained and the real situations on
`
`which the network was tested often resulted in poor performance in real driving
`
`situations.” Ex. 1005 at p. 40. Pomerleau further stated: “[W]hile relatively effective
`
`at training the network to drive under the limited conditions of a single-lane road, it
`
`quickly became apparent that extending the synthetic training paradigm to deal with
`
`more complex situations such as multi-lane and off-road driving would require
`
`prohibitively complex training data generators.” Id. Because of these drawbacks,
`
`Pomerleau concluded that, “[g]enerating realistic artificial training data proved
`
`impractical for all but the simplest driving situations.” Ex. 1005 at p. 56.
`
24. Accordingly, one of ordinary skill in the art would not have understood that the phrase “known inputs” in Lemelson referred to “simulated data.”
`
E. Dr. Koutsougeras’ Description of Training the System of Yanagawa in Combination with the Neural Network of Lemelson is Inaccurate

25. As I described in my opening declaration, one of ordinary skill in the art would have been motivated to combine the neural network of Lemelson with the high beam detection system of Yanagawa. The purpose of the combination would have been to improve the system of Yanagawa so that it could account for more complicated scenarios, such as motorcycles, police sirens, or traffic lights.

26. One of ordinary skill in the art would have been motivated to train a neural network for the Yanagawa system using “real data,” i.e., “data of possible sources of radiation including lights of vehicles and patterns of received radiation from the possible sources.”
`
27. The motivation to combine Yanagawa and Lemelson would have necessitated the use of real data, as opposed to simulated data. For example, the motivation to use a neural network would have arisen because the variability in real-world situations is difficult to represent with a set of equations, such as when a neural network attempts to identify stray reflections off road signs or traffic lights.
`
28. One of ordinary skill would not have used simulated data, because this would have required complex generation of simulations of all possible radiation sources and the situations in which a vehicle would encounter the radiation. Nor would one of ordinary skill in the art have used license plate data to train the Yanagawa system.
`
I declare that all statements made herein of my own knowledge are true and that all statements made on information and belief are believed to be true, and further that these statements were made with the knowledge that willful false statements and the like so made are punishable by fine or imprisonment, or both, under Section 1001 of Title 18 of the United States Code.
`
`
`
`
`Date: June 1, 2014
`Nikolaos Papanikolopoulos, Ph.D.
`
`
`
`
`
`12
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
Nikolaos P. Papanikolopoulos
Distinguished McKnight University Professor
Director, Center for Distributed Robotics and SECTTRA
Department of Computer Science and Engineering
University of Minnesota
200 Union Street SE
Minneapolis, MN 55455
phone: (612) 625-0163
fax: (612) 625-0575
e-mail: npapas@cs.umn.edu
URL: http://www.cs.umn.edu/~npapas

Education

• Carnegie Mellon University, Pittsburgh, Pennsylvania
  Doctor of Philosophy in Electrical and Computer Engineering, August 1992.
  GPA: 4.0 / 4.0.
  Thesis: “Controlled Active Vision” under the supervision of Professor Pradeep K. Khosla.

• Carnegie Mellon University, Pittsburgh, Pennsylvania
  Master of Science in Electrical and Computer Engineering, December 1988.
  GPA: 4.0 / 4.0.

• National Technical University of Athens, Athens, Greece
  Diploma of Engineering in Electrical and Computer Engineering, July 1987.
  GPA: 8.75 / 10.0.

Professional Experience

• Professor: University of Minnesota, Minneapolis, Minnesota.
  Fall 2001 - present.

• Associate Professor: University of Minnesota, Minneapolis, Minnesota.
  Fall 1996 - 2001.

• Assistant Professor: University of Minnesota, Minneapolis, Minnesota.
  Fall 1992 - 1996.

Research Experience

• University of Minnesota, September 1992 - Present
  Distributed Systems (Robotics)
  Robotics
  Computer Vision
  Medical Robotics
  Sensor Networks
  Robotic Visual Tracking and Servoing
  Sensor-Based Control in Transportation Systems
  Active Vision
  Computer Engineering
  Signature Recognition and Identification
  Object Recognition

• Carnegie Mellon University, August 1987 - August 1992
  Robotics, Computer Vision, and Control
  Computer Integrated Manufacturing
  Tool Integration
  Software Engineering
  Project: “FORS: Flexible Organizations” under the supervision of Professor Sarosh Talukdar.

• National Technical University of Athens, September 1986 - July 1987
  Fuzzy and Intelligent Control
  Project: “Use of Fuzzy Logic in the Selection of Gains of the PID Controller” under the supervision of Professor Spyros Tzafestas.
`
`Industrial Experience
`
`
`
• Consultant: University of Cyprus, Nicosia, Cyprus.
  Helped in the creation of the Department of Electrical and Computer Engineering, 2001 - 2005.

• Consultant: Banner Engineering, KYOS, Tennant, ISS, Best Buy, MOVE eye, VideoNEXT Inc., DATACARD Inc., etc., Minneapolis, MN.
  Developed computer vision algorithms for vehicle tracking, camera calibration, human tracking, object recognition, and fingerprint recognition, 1997 - Present.

• Consultant: IMS ExpertServices, ABB, Toyota, Canon, Kenyon and Kenyon, and Bartlit Beck Herman Palenchar & Scott LLP, Minneapolis, MN.
  Provided technical expertise in litigation that involves surveillance systems, vehicle monitoring systems, human activity monitoring, and motion control devices, 2012 - Present.

• Consultant: Architecture Technology Corporation, Eden Prairie, MN.
  Developed algorithms and software for multi-robot teams, 2003 - 2005.

• Founder: ReconRobotics Inc., Edina, MN.
  Developed different robotic platforms, 2005 - Present.

• Software Engineer: Piraeus Service Bureau, Piraeus, Greece.
  Developed software for inventory control, accounting, and factory automation, August 1983 - July 1987.
`
`
`
`
`
`
`
`
`
`
Teaching Experience

• Professor: University of Minnesota, Minneapolis, Minnesota.
  Taught several undergraduate and graduate courses, Fall 2001 - Present.

• Director of Undergraduate Studies: University of Minnesota, Minneapolis, Minnesota.
  Supervised the CSE Undergraduate Program, Fall 2001 - Spring 2004.

• Associate Professor: University of Minnesota, Minneapolis, Minnesota.
  Taught several undergraduate and graduate courses, Fall 1996 - 2001.

• Assistant Professor: University of Minnesota, Minneapolis, Minnesota.
  Taught several undergraduate and graduate courses, Fall 1992 - 1996.

• Teaching Assistant: Carnegie Mellon University, Pittsburgh, Pennsylvania.
  Assisted in the teaching of the graduate course “Robotic Systems”, Spring 1989.
`
`
`
`
`
`
`
`
`
`
`
`
`
`Publications
`
`Journal Papers (Significant papers are in bold)
`
`
`
`
`
`
`
`
`
`
`
`1. Cherian, A., Morellas, V., and Papanikolopoulos, N.P., “Efficient Nearest
`Neighbors via Robust Sparse Hashing”, accepted, IEEE Trans. on Image
`Processing.
`2. Sivalingam, R., Boley, D., Morellas, V., and Papanikolopoulos, N.P., “Tensor Sparse Coding for
`Positive Definite Matrices”, accepted, IEEE Trans. on Pattern Analysis and Machine
`Intelligence.
`
3. Drenner, A., Janssen, M., Kottas, A., Kossett, A., Carlson, C., Lloyd, R., and Papanikolopoulos, N.P., “Multi-robot Teams with Miniature Robots and Mobile Docking Stations”, Journal of Intelligent and Robotic Systems, Volume 72, No. 2, November 2013, pp 263-284.
`
`
`4. Cherian, A., Sra, S., Banerjee, A., and Papanikolopoulos, N.P., “Jensen-Bregman LogDet
`Divergence with Application to Efficient Similarity Search for Covariance Matrices”, IEEE Trans.
`on Pattern Analysis and Machine Intelligence, Volume 35, No. 9, September 2013, pp 2161-
`2174.
`
`5. Somasundaram, G., Sivalingam, R., Morellas, V., and Papanikolopoulos, N.P., “Classification
`and Counting of Composite Objects in Traffic Scenes Using Global and Local Image Analysis”,
`IEEE Trans. on Intelligent Transportation Systems, Volume 14, No. 1, March 2013, pp 69-81.
`
`
`6. Joshi, A., Porikli, F., and Papanikolopoulos, N.P., “Scalable Active Learning for Multi-
`Class Image Classification”, IEEE Trans. on Pattern Analysis and Machine Intelligence,
`Volume 34, No. 11, November 2012, pp 2259-2273.
`
`
`7. Ribnick, E., Sivalingam, R., Papanikolopoulos, N.P., and Daniilidis, K., “Reconstructing
`and Analysing Periodic Human Motion from Stationary Monocular Views”, Computer
`Vision and Image Understanding, Volume 116, No. 7, July 2012, pp 815-826.
`8. Min, H., and Papanikolopoulos, N.P., “Robot Formations Using a Single Camera and Entropy-
`Based Segmentation”, Journal of Intelligent and Robotic Systems, Volume 68, No. 1, March
`2012, pp 21-41.
`
`
`9. Bird, N., and Papanikolopoulos, N.P., “Optimal Image-Based Euclidean Calibration of
`Structured Light Systems in General Scenes”, IEEE Trans. on Automation Science and
`Engineering, Volume 8, No. 4, October 2011, pp 815-823.
`
`
`10. Sharma, M., Dos Santos, T., Papanikolopoulos, N.P., and Hui, S.K., “Feasibility of Intra-fraction
`Whole Body Motion Tracking for Total Marrow Irradiation”, Journal of Biomedical Optics,
`Volume 16, No. 5, May 2011.
`
`15
`
`
`
`
`11. Ribnick, E., and Papanikolopoulos, N.P., “3D Reconstruction of Periodic Motion from a
`Single View”, International Journal of Computer Vision, Volume 90, No. 1, October 2010,
`pp 28-44.
`
`12. Atev, S., Miller, G., and Papanikolopoulos, N.P., “Clustering of Vehicle Trajectories”, IEEE
`Trans. on Intelligent Transportation Systems, Volume 11, No. 3, September 2010, pp 647-657.
`
`13. Veeraraghavan, H., and Papanikolopoulos, N.P., “Learning to Recognize Video-based Spatio-
`temporal Events”, IEEE Trans. on Intelligent Transportation Systems, Volume 10, No. 4,
`December 2009, pp 628-638.
`
`
`14. Cao, D., Masoud, O., Boley, D., and Papanikolopoulos, N.P., “Online Motion Classification
`Using Support Vector Machines”, Computer Vision and Image Understanding, Volume 113, No.
`10, October 2009, pp 1064-1075.
`
`15. Bodor, R., Jackson, B., Masoud, O., Fehr, D., and Papanikolopoulos, N.P., “View-Independent
`Motion Classification Using Image-Based Reconstruction”, Image and Vision Computing,
`Volume 27, No. 8, July 2009, pp 1194-1206.
`
`16. Ribnick, E., Atev, S., and Papanikolopoulos, N.P., “Estimating 3D Positions and
`Velocities of Projectiles from Monocular Views”, IEEE Trans. on Pattern Analysis and
`Machine Intelligence, Volume 31, No. 5, May 2009, pp 938-944.
`17. Anderson, M., and Papanikolopoulos, N.P., “Implicit Cooperation Strategies for Multi-robot
`Search of Unknown Areas”, Journal of Intelligent and Robotic Systems, Volume 53, No. 4,
`December 2008, pp 381-397.
`
`18. Joshi, A., and Papanikolopoulos, N.P., “Learning to Detect Moving Shadows in Dynamic
`Environments”, IEEE Trans. on Pattern Analysis and Machine Intelligence, Volume 30,
`No. 11, November 2008, pp 2055-2063.
`
`19. Fiore, L., Fehr, D., Bodor, R., Drenner, A., Somasundaram, G., and Papanikolopoulos, N.P.,
`“Multi-Camera Human Activity Monitoring”, Journal of Intelligent and Robotic Systems, Volume
`52, No. 1, May 2008, pp 5-43.
`
`20. Kilambi, P., Joshi, A., Ribnick, E., Masoud, O., and Papanikolopoulos, N.P., “Estimating
`Pedestrian Counts in Groups”, Computer Vision and Image Understanding, Volume 110,
`No. 1, April 2008, pp 43-59.
`21. Rybski, P., Roumeliotis, S., Gini, M., and Papanikolopoulos, N.P., “Appearance-Based Mapping
`Using Minimalistic Sensor Models”, Autonomous Robots, Volume 24, No. 3, April 2008, pp.
`213-227.
`
`22. Papanikolopoulos, N.P., “EvalWare: Signal Processing for Robotics”, IEEE Signal Processing
`Magazine, Volume 25, No. 1, 2008, pp 154-157.
`
`23. Masoud, O., and Papanikolopoulos, N.P., “Using Geometric Primitives to Calibrate Traffic
`Scenes”, Transportation Research Part C: Emerging Technologies, Volume 15, No. 6,
`December 2007, pp 361-379.
`
`24. Bodor, R., Drenner, A., Schrater, P., and Papanikolopoulos, N.P., “Optimal Camera Placement
`for Automated Surveillance Tasks”, Journal of Intelligent and Robotic Systems, Volume 50, No.
`3, November 2007, pp 257-295.
`
25. Cannon, K., LaPoint-Anderson, M., Bird, N., Panciera, K., Veeraraghavan, H., Papanikolopoulos, N., and Gini, M., “Using Robots to Raise Interest in Technology Among Underrepresented Groups”, IEEE Robotics and Automation Magazine, Volume 14, No. 2, June 2007, pp 73-81.
`
`26. Veeraraghavan, H., Bird, N., Atev, S., and Papanikolopoulos, N.P., “Classifiers for Driver
`Activity Monitoring”, Transportation Research Part C: Emerging Technologies, Volume
`15, No. 1, February 2007, pp 51-67.
`27. Veeraraghavan, H., Schrater, P., and Papanikolopoulos, N.P., “Robust Target Detection
`and Tracking Through Integration of Color, Motion, and Geometry”, Computer Vision
`and Image Understanding, Volume 103, No. 2, August 2006, pp 121-138.
`28. Pearce, J., Powers, B., Hess, C., Rybski, P., Stoeter, S., and Papanikolopoulos, N.P., “Using
`Virtual Pheromones and Cameras for Dispersing a Team of Multiple Miniature Robots”, Journal
`of Intelligent and Robotic Systems, Volume 45, No. 4, April 2006, pp 307-321.
`
`29. Stoeter, S., and Papanikolopoulos, N.P., “Kinematic Motion Model for Jumping Scout
`Robots”, IEEE Trans. on Robotics, Volume 22, No. 2, April 2006, pp 398-403.
`30. Atev, S., Arumugam, H., Masoud, O., Janardan, R., and Papanikolopoulos, N.P., “A Vision-
`Based Approach to Collision Prediction at Traffic Intersections”, IEEE Trans. on Intelligent
`Transportation Systems, Volume 6, No. 4, December 2005, pp 416-423.
`
`31. Jackson, B., Bodor, R., and Papanikolopoulos, N.P., “Deriving Occlusions in Static Scenes from
`Observations of Interactions with a Moving Figure”, Advanced Robotics, Volume 19, No 10,
`2005, pp 1043-1058.
`
`32. Bird, N., Masoud, O., Papanikolopoulos, N.P., and Isaacs, A., “Detection of Loitering Individuals
`in Public Transportation Areas”, IEEE Trans. on Intelligent Transportation Systems, Volume 6,
`No. 2, June 2005, pp 167-177.
`
`33. Stoeter, S., and Papanikolopoulos, N.P., “Autonomous Stair-Climbing with Miniature
`Jumping Robots”, IEEE Trans. on Systems, Man, and Cybernetics: Part B, Volume 35,
`No. 2, April 2005, pp 313-325.
`
`
`34. Maurin, B., Masoud, O., and Papanikolopoulos, N.P., “Tracking All Traffic: Computer Vision
`Algorithms for Monitoring Vehicles, Individuals, and Crowds”, Robotics and Automation
`Magazine, Volume 12, No. 1, March 2005, pp 29-36.
`
`35. Dos Santos, C., Stoeter, S., Rybski, P., and Papanikolopoulos, N.P., “Panoramic Imaging for
`Miniature Robots”, Robotics and Automation Magazine, Volume 11, No. 4, December 2004, pp
`62-68.
`
`36. Perrin, D., Kadioglu, E., Stoeter, S., and Papanikolopoulos, N.P., “Grasping and Tracking
`Using Constant Curvature Dynamic Contours”, International Journal of Robotics
`Research, Volume 22, No. 10-11, October 2003, pp 855-871.
`37. McMillen, C., Stubbs, K., Rybski, P., Stoeter, S., Gini, M., and Papanikolopoulos, N.P.,
`“Resource Scheduling and Load Balancing in Distributed Robotic Control Systems”, Robotics
`and Autonomous Systems, Volume 44, No. 3-4, September 2003, pp 251-259.
`
`38. Veeraraghavan, H., Masoud, O., and Papanikolopoulos, N., “Kalman Filter Based Tracking for
`Monitoring Intersections”, IEEE Trans. on Intelligent Transportation Systems, Volume 4, No. 2,
`June 2003, pp 78-89.
`39. Masoud, O., and Papanikolopoulos, N.P., "A Method for Human Action Recognition",
`Image and Vision Computing, Volume 21, No. 8, 2003, pp 729-743.
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`17
`
`
`
`40. Rybski, P., Stoeter, S., Papanikolopoulos, N.P., Burt, I., Dahlin, T., Gini, M., Hougen, D.,
`Krantz, D., and Nageotte, F., "Sharing Control: a Framework for the Operation and
`Coordination of Multiple Miniature Robots", IEEE Robotics and Automation Magazine,
`Volume 9, No. 4, 2002, pp 41-48.
`
`41. Perrin, D., Masoud, O., Smith, C., and Papanikolopoulos, N.P., “Snakes for Robotic Grasping”,
`accepted to the journal ANALEKTA.
`
`42. Rybski, P., Stoeter, S., Gini, M., Hougen, D., and Papanikolopoulos, N.P., "Performance
`of a Distributed Robotic System Using Shared Communications Channels: A Framework
`for the Operation and Coordination of Multiple Miniature Robots", IEEE Transactions on
`Robotics and Automation, Volume 18, No. 5, October 2002, pp 713-727. It also appeared
`in “Advances in plan-based control of robotic agents,” Lecture Notes in Artificial
`Intelligence, Volume 2466, 2002, pp. 211-225.
`
`43. Stoeter, S., Rybski, P., McMillen, C., Stubbs, K., Gini, M., Hougen, D., and Papanikolopoulos,
`N., "A Robot Team for Surveillance Tasks: Design and Architecture", Robotics and
`Autonomous Systems, Volume 40, No. 2-3, August 2002, pp 173-183.
`
`44. Gupte, S., Masoud, O., Martin, R., and Papanikolopoulos, N.P., "Detection and Classification of
`Vehicles", IEEE Trans. on Intelligent Transportation Systems, Volume 3, No. 1, March 2002, pp
`37-47.
`45. Masoud, O., and Papanikolopoulos, N.P., “A Novel Method for Tracking and Counting
`Pedestrians in Real-time Using a Single Camera”, IEEE Trans. on Vehicular Technology,
`Volume 50, No. 5, September 2001, pp 1267-1278.
`
`46. Eriksson, M., and Papanikolopoulos, N.P., “Driver Fatigue: A Vision-Based Approach to
`Automatic Diagnosis”, Transportation Research Part C: Emerging Technologies, Volume 9,
`2001, pp. 399-413.
`
`47. Singh, R., Voyles, R., Littau, D., and Papanikolopoulos, N.P., "Shape Morphing-Based Control
`of Robotic Visual Servoing", Autonomous Robots, Volume 10, No. 3, 2001, pp 317-338.
`
`48. Masoud, O., Papanikolopoulos, N.P., and Kwon, E., "The Use of Computer Vision in Monitoring
`Weaving Sections", IEEE Trans. on Intelligent Transportation Systems, Volume 2, No.1, March
`2001, pp 18-25.
`
`49. Rybski, P., Papanikolopoulos, N.P., Stoeter,