UNITED STATES DISTRICT COURT
SOUTHERN DISTRICT OF FLORIDA

CASE NO. 1:16-cv-21761-KMM

PRISUA ENGINEERING CORP.,

Plaintiff,

v.

SAMSUNG ELECTRONICS CO., LTD., et al.,

Defendants.
________________________________________/

DECLARATION OF SHARIAR NEGAHDARIPOUR, PH.D.

I, Shariar Negahdaripour, Ph.D., declare:

1. I am making this declaration at the request of Prisua Engineering Corp. in the matter of Prisua Engineering Corp. v. Samsung Electronics Co., Ltd., et al.

2. I am being compensated for my work in this matter at my standard rate of $400 per hour for consulting services. My compensation in no way depends on the outcome of this proceeding or the content of my testimony.

3. I have not previously served as an expert witness in any judicial proceeding.

I. Professional Background

4. I have nearly 30 years of experience in imaging technologies covering image processing, image coding and programming, and basic system and hardware applications as applied to image and video applications. Information concerning my professional qualifications, experience, and publications is set forth in my current curriculum vitae, attached as Exhibit "A" hereto. Particular experience relevant to this matter is highlighted below.

Petitioner Samsung 1009

Case 1:16-cv-21761-KMM Document 42-2 Entered on FLSD Docket 01/02/2017 Page 2 of 18

5. I received my Bachelor's degree in Mechanical Engineering from the Massachusetts Institute of Technology in 1979, and my Master's degree in Mechanical Engineering, also from MIT, in 1980. Thereafter, I received my Doctoral degree from MIT in 1987. My Ph.D. thesis was on "Direct Passive Navigation"; the research for it was performed at the MIT Artificial Intelligence Lab, where I served as a research assistant. Between 1987 and 1991, I served as an Assistant Professor in the Electrical Engineering Department at the University of Hawaii. I joined the Electrical and Computer Engineering Department at the University of Miami as an Associate Professor in 1991, received tenure in 1995, was promoted to Full Professor in 1999, and have continued teaching and research to this day.

6. I have authored numerous publications, including a book chapter and a multitude of juried or refereed journal articles involving some form of image processing. The articles most relevant to this matter include:

i. M. D. Aykin, S. Negahdaripour, "On feature matching and image registration for 2-D forward-scan sonar imaging," J. Field Robotics, 30(4), pp. 602-623, August 2013.

ii. N. Gracias, M. Mahoor, S. Negahdaripour, A.C.R. Gleason, "Fast image blending using watersheds and graph cuts," Image and Vision Computing, 27(5), pp. 597-607, April 2009.

iii. D. Lirman, N. Gracias, B. Gintert, A. Gleason, P.R. Reid, S. Negahdaripour, P. Kramer, "Development and application of a video-mosaic survey technology to document the status of coral reef communities," Environmental Monitoring and Assessment, 125(1-3), pp. 59-73, 2007.

iv. S. Negahdaripour, "Epipolar geometry of opti-acoustic stereo imaging," IEEE Trans. PAMI, 20(11), pp. 1776-1788, October 2007.

v. X. Xu, S. Negahdaripour, "Mosaic-based positioning and improved motion estimation methods for autonomous navigation," IEEE J. Oceanic Engineering, 27(1), pp. 79-99, 2002.

vi. S. Negahdaripour, A. Khamene, "Motion-based compression of underwater video imagery for the operations of unmanned submersible vehicles," Computer Vision Image Understanding: Special Issue on Underwater Comp. Vision and Pattern Recognition, 79(1), pp. 162-183, 2000.

vii. S. Negahdaripour, "Revised interpretation of optical flow: Integration of radiometric and geometric cues in optical flow," IEEE Trans. PAMI, 20(9), pp. 961-979, 1998.

viii. S. Negahdaripour, "Closed form relationship between the two interpretations of a moving plane," J. Opt. Soc. Am., 7(2), pp. 279-, 1990.

7. For my technical contributions in underwater computer vision, I was promoted to the rank of Fellow by the IEEE Oceanic Engineering Society in 2012, the Institute's highest member grade, bestowed on IEEE senior members who have contributed "to the advancement or application of engineering, science and technology."

8. On two occasions, I have served as the General Chair of the IEEE Computer Society's most prestigious conference in computer vision, and I regularly present state-of-the-art technical papers at various conferences and technical meetings. Presentations relevant to this matter include:

i. Y. Zhang, S. Negahdaripour, and Q.Z. Li, "Error-resilient video compression for underwater acoustic transmission," Proc. IEEE/MTS Conf. Oceans'16, Monterey, CA, September 2016.

ii. D. Aykin, S. Negahdaripour, "On feature extraction and region matching for forward scan sonar imaging," Proc. IEEE/MTS Oceans'12, Virginia Beach, VA, October 2012.

iii. Zhang, Q. Z. Li, "Seafloor image compression using hybrid wavelets and directional filter banks," IEEE Oceans Conf., Genova, Italy, May 2015.

iv. A. Sarafraz, S. Negahdaripour, Y. Schechner, "Enhancing images in scattering media utilizing stereovision and polarization," Proc. WACV'09, Snowbird, UT, December 2009.

9. I have taught the graduate-level course on Digital Image Processing (EEN538) in my department annually since arriving at the University of Miami. I also currently hold editorial responsibilities at the technical journal Computer Vision and Image Understanding, published by Elsevier. The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis, from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.

II. The Patent in Suit

10. In preparing this Declaration, I considered the following materials:

i. U.S. Patent No. 8,650,591;

ii. The file history for U.S. Patent No. 8,650,591, which is attached hereto as Exhibit "B";

iii. Proposed claim constructions and extrinsic evidence cited by the parties in the Joint Claim Construction Statement dated November 21, 2016, including:

A. Remote Sensing Image Analysis: Including the Spatial Domain, Steven M. de Jong, F. D. Van der Meer, p. 251, Springer, ISBN: 978-1-4020-2559-4, attached hereto as Exhibit "B."

B. Polar Remote Sensing: Vol. II, Robert Massom, D. Lubin, p. 81, Springer, ISBN-13: 978-3-5402-6101-8, attached hereto as Exhibit "C."

C. Yupeng Yang, et al., Semantic-Spatial Matching for Image Classification, Proceedings of the IEEE International Conference on Multimedia and Expo, IEEE Computer Society 2013, ISBN 978-1-4799-0015-2, attached hereto as Exhibit "D."

D. Various Merriam-Webster Online Dictionary definitions, attached hereto as composite Exhibit "E."

11. It is my understanding that the '591 patent is the only patent at issue in this suit. I further understand that the '591 patent was filed on March 8, 2011, and claims priority to provisional application No. 61/311,892, filed on March 9, 2010.

III. Claim Construction

12. It is my understanding that the parties in this action dispute the proper construction of six claim terms, summarized in the table below:

Claim Term: "user input video data stream" ('591 patent, claim 1)
Prisua Proposal: a sequence of images digitally recorded by a user separate from the original video data stream
Samsung Proposal: a digitally recorded sequence of frames contained in a format for displaying the frames as a motion picture (e.g., ASF, MPEG-2, AVI) that is provided by the user

Claim Term: "original video data stream" ('591 patent, claim 1)
Prisua Proposal: a digitally recorded sequence of images that is to be modified
Samsung Proposal: a digitally recorded sequence of frames contained in a format for displaying the frames as a motion picture (e.g., ASF, MPEG-2, AVI) that is to be modified

Claim Term: "spatially matching" ('591 patent, claim 1)
Prisua Proposal: aligning a set of pixels in the spatial domain
Samsung Proposal: Indefinite; or, in the alternative: partitioning images into a set of coarse to fine scale sub-blocks and concatenating the histograms extracted from all blocks into a long vector representation

Claim Term: "extracting" ('591 patent, claim 1)
Prisua Proposal: select and separate out
Samsung Proposal: Plain meaning (i.e., "removing")

Claim Term: "extracting the at least one pixel from the user entering data in the data entry display device" ('591 patent, claim 3)
Prisua Proposal: selecting and separating out the at least one pixel chosen by a user on a display, when said display is acting as a data entry device and receives a selection of at least one pixel by said user
Samsung Proposal: Indefinite because (1) there is no antecedent basis for the claim term "the user entering data" in Claim 3 and (2) there is no antecedent basis for the claim term "the data entry display device" in Claim 3.

Claim Term: "the digital processing unit is further capable of extracting the at least one pixel from the user pointing to a spatial location in a displayed video frame" ('591 patent, claim 4)
Prisua Proposal: performing spatial analysis on a video frame based on a user input, then selecting and separating out the at least one pixel chosen by said user
Samsung Proposal: Indefinite at least because it depends from Claim 3.

IV. Relevant Legal Standards

13. I am informed by counsel that the claims of a patent define the scope of the invention and that the terms of a claim are to be accorded their ordinary and customary meaning. The ordinary and customary meaning is the meaning that the term would have to a person of ordinary skill in the art in question at the time of the invention, as measured by the patent's priority date. I understand that it is also proper to consider the context of the surrounding words of the claim in determining the ordinary and customary meaning of a term. Where claim terms are common words, the widely accepted meaning of such terms applies.

14. I am informed by counsel that the written description of the specification should also be considered in interpreting patent claims, especially where the inventor has expressly defined a term used in the claims. However, it is my understanding that while the specification should be considered and understood, it is improper to import limitations from the specification into the claims.

15. Further, I am informed by counsel that a patent is invalid for indefiniteness if its claims, read in light of the specification delineating the patent and the prosecution history, fail to inform, with reasonable certainty, those skilled in the art about the scope of the invention. It is therefore my understanding that indefiniteness is measured from the viewpoint of a person skilled in the art at the time the patent was filed. I understand that, to successfully prove that a patent claim fails for indefiniteness, it must be shown by clear and convincing evidence that a skilled artisan is unable to discern the boundaries of the claim based on the language of the claim, the specification, the prosecution history, and his or her knowledge of the relevant area.

16. Finally, I am informed by counsel that a party sometimes has the burden of proving a claim or defense by "clear and convincing evidence." I am informed by counsel that "clear and convincing evidence" means the evidence submitted must persuade you that the fact or thing to be proved is highly probable or reasonably certain. I am informed by counsel that this is a higher standard of proof than proof by a "preponderance of the evidence," which simply requires that a party prove that, in light of all the evidence, what it claims is more likely true than not.

V. Person of Ordinary Skill in the Art

17. I am informed by counsel that when providing my opinions, I must do so based on the perspective of one of ordinary skill in the art at the relevant priority date. I am likewise informed by counsel that the effective priority date of the '591 patent is March 9, 2010.

18. It is my opinion that one of ordinary skill in the art in the 2010 time period would have had a bachelor of science degree, or an equivalent degree, with at least three years of experience in the field of imaging and signal processing.

19. I am a person of at least ordinary skill in the art and was so at the time of the invention of the '591 patent.

VI. Discussion of Disputed Terms

i. "user input video data stream" / "original video data stream"

20. I understand that Prisua's proposed construction of the term "user input video data stream" is: "a sequence of images digitally recorded by a user separate from the original video data stream."

21. Prisua's proposed definition construes "video data stream" as a "sequence of images." In my opinion, Prisua's proposed construction is correct because it stays true to the claim language and is best aligned with the '591 patent's description of the invention.

22. In the '591 patent, the word "image" appears 17 separate times in claim 1. ('591 patent, 7:14-55). Claim 1 recites "receiving a selection of the … image from the … video data stream" and "extracting the … image" from said video data stream. ('591 patent, 7:42-49). A plain reading of this language makes it evident to one of skill in the art that the referenced "video data stream" is composed of multiple images. Claim 1 similarly recites an "image display device displaying the video data stream." ('591 patent, 7:20-24). Again, it is evident to one of skill in the art that the display of a "video data stream" requires the display of images, leading to the conclusion that a "video data stream" is composed of images. Indeed, the resulting output of claim 1 is a displayable edited "video data stream," which contains one or more images resulting from "performing the substitution of the spatially matched first image with the … second image." ('591 patent, 7:51-54). Thus, Prisua's proposed construction, which interprets "video data stream" as "a sequence of images," is closely aligned with the actual claim language.

23. Moreover, Prisua's proposed construction cuts to the heart of the invention disclosed in the '591 patent, namely image processing. See, e.g., ('591 patent, 2:46-58) ("…the digital device may further process the image to enhance it"); see also ('591 patent, 3:28-40) ("This process requires the input device to perform … image analysis and processing …"); ('591 patent, 3:42-46) ("the UDD 106 is capable of … image and video processing, such as image and video enhancements…"); ('591 patent, 4:57-67) ("The stand alone devices can be equipped with … capabilities such as … image … portioning, image … enhancement, filtering, texture analysis…"). Therefore, construing a "video data stream" as a "sequence of images" is best aligned with the '591 patent's description of the invention.

24. I understand that Samsung has proposed that the term "video data stream" be construed as "a digitally recorded sequence of frames contained in a format for displaying the frames as a motion picture (e.g., ASF, MPEG-2, AVI)." Samsung's proposed construction is erroneous.

25. After a review of the '591 patent, I note that the terms "ASF," "MPEG-2," and "AVI" do not appear anywhere in the body of the issued patent, the prosecution history, or the original (i.e., provisional) application.

26. It is my understanding, based on my knowledge of the state of the art, that "ASF," or Advanced Systems Format, is a file format developed by Microsoft and used to encapsulate Windows Media Audio or Windows Media Video. This file format is typically used for streaming video over the Internet or other similar media.

27. Further, it is my understanding, based on my knowledge of the state of the art, that "MPEG-2" is a compression standard born out of the desire to achieve compression of broadcast-quality video, and is used for DVDs, high-definition television broadcasts, and digital/personal video recorders (DVRs). The MPEG-2 compression standard describes a combination of lossy video compression and lossy audio data compression methods.

28. Finally, it is my understanding, based on my knowledge of the state of the art, that the "AVI" (Audio Video Interleave) file format is a multimedia container format introduced by Microsoft in 1992 as part of its "Video for Windows" software.

29. AVI, MPEG-2, and ASF all work with computer programs or devices for encoding (compressing) or decoding (decompressing) a digital data stream or signal. These so-called "codecs" apply some type of compression algorithm to the underlying image data.

30. By attempting to construe the term as involving "ASF," "MPEG-2," or "AVI" compression, without any support, Samsung requires the "video data stream" to be compressed at all times. However, claim 1 of the '591 patent requires the image manipulation to occur in the uncompressed "spatial" domain. ('591 Patent, 7:46-50) ("spatially matching an area of the second image to an area of the first image in the original video data stream"). As a result, Samsung's construction conflicts with the plain language of the claims of the '591 patent.

31. Further, Samsung requires the "video data stream" to be "displayed … as a motion picture." The term "motion picture" does not appear anywhere in the body of the issued patent, the prosecution history, or the original (i.e., provisional) application.

32. In my opinion, one of ordinary skill in the art would not understand "video data stream" to require a "sequence of frames" to be displayed as a "motion picture." Those in the art understand the term "motion picture" to mean a movie production specifically intended for theatrical exhibition. In other words, the term "motion picture" refers to a film or movie, projected in rapid succession onto a theater screen.

33. This very specific and restrictive interpretation submitted by Samsung clearly does not comport with the display means contemplated and disclosed in the '591 patent. ('591 Patent, 2:10-24) ("This … new … sequence … is subsequently … played by the digital device").

34. Prisua further construes a "user input video data stream" to be "separate from the original video data stream." This construction is not only correct, but well supported by the '591 patent. Samsung's proposed construction is silent on this point.

35. One of skill in the art would readily understand from a plain reading of the '591 patent that more than one "video data stream" is recited, and that each has its separate role. ('591 patent, 7:14-54). As structured, claim 1 recites a "user input video data stream," an "original video data stream," and a "displayable edited video data stream." Id.

36. Finally, I have also reviewed the parties' proposed constructions of the term "original video data stream." I concur with Prisua's proposed construction, as more fully described above with regard to the term "video data stream." I also note that the parties agree that the term "original" should be construed to mean a video data stream "that is to be modified."

ii. "spatially matching"

A. Background

37. A two-dimensional digital image represents a physical entity (e.g., reflected light, scene or target temperature) in the form of intensity or color that varies over a coordinate plane. Therefore, an image can be represented by the mathematical model I(x,y). The variables x and y represent the spatial coordinates (i.e., the row and column numbers) of a particular point or "pixel" in the image; I represents the value (the particular color or intensity) of that pixel. For example:

Figure 1 - An Image and its Spatial Coordinates

38. Thus, the image plane defines the spatial domain, where an image pixel can be identified or referred to using row and column coordinates.

39. Spatial or "spatial domain" processing involves the direct manipulation of the values of individual pixels or "regions" (which are composed of one or more pixels) in the image plane.

40. It is well known to those of skill in the art that "spatial matching" involves selecting two regions from one or more images in order to apply some process or method to the pixel values within these regions. A simple visual interpretation of spatial matching is to align two regions (e.g., regions of similar or equal size) as two layers, one on top of the other.

41. In the context of the invention disclosed by the '591 patent, the spatial domain processing disclosed involves the replacement, in the image plane, of the pixel values in one layer with the pixel values from the other layer.

42. To summarize, "spatial matching" can be defined as "the alignment, pixel-by-pixel, of two same-size regions from one or more images." The "spatial matching" process therefore establishes a direct pixel-by-pixel correspondence between the two regions.

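This pixel-by-pixel substitution can be sketched in a few lines of Python (a minimal illustration of the concept, assuming NumPy arrays as the image representation; the function and variable names are mine, not the patent's):

```python
import numpy as np

def substitute_region(original, replacement, top, left):
    """Spatially match the same-size `replacement` region onto `original`
    at row `top`, column `left`, and substitute the pixel values there.

    Both arguments are 2-D arrays I(x, y); the alignment establishes a
    direct pixel-by-pixel correspondence between the two regions."""
    h, w = replacement.shape
    edited = original.copy()                      # leave the source frame intact
    edited[top:top + h, left:left + w] = replacement
    return edited

frame = np.zeros((4, 4), dtype=np.uint8)          # original image I(x, y)
patch = np.full((2, 2), 255, dtype=np.uint8)      # same-size second region
out = substitute_region(frame, patch, 1, 1)
print(out[1, 1], out[0, 0])   # 255 0 -> only the aligned region changed
```

Note that the operation addresses pixels entirely by their row and column coordinates, which is what makes it a spatial-domain process.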
B. Samsung's Proposed Definition Misses the Mark

43. In light of the foregoing, Samsung's construction for "spatially matching" ("partitioning images into a set of coarse to fine scale sub-blocks and concatenating the histograms extracted from all blocks into a long vector representation") is plainly incorrect.

44. Crucially, the second step in the construction proposed by Samsung (i.e., "concatenating the histograms extracted from all blocks into a long vector representation") does not allow for any spatial or spatial-domain processing, because it transforms the image representation from the image plane, which defines the spatial domain, to a multi-dimensional feature-space domain.

45. The step of "concatenating ... extracted from all blocks" is likewise ambiguous. In my opinion, applying this methodology to the image would generate a very long vector of histogram values defined for all sub-blocks, thereby generating a non-spatial-domain representation of the image, which contradicts the straightforward spatial (or spatial-domain) processing required by the claim terms of the '591 patent.

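A sketch of the methodology Samsung's proposal describes makes the point concrete (a hypothetical Python illustration, assuming NumPy; the function name and parameters are mine): partitioning an image into coarse-to-fine sub-blocks and concatenating per-block histograms yields a feature vector in which pixel coordinates no longer appear.

```python
import numpy as np

def concatenated_histograms(image, levels=2, bins=8):
    """Partition `image` into coarse-to-fine sub-blocks and concatenate
    the per-block histograms into one long feature vector.

    The result indexes histogram bins, not (row, column) positions:
    it is a feature-space representation, not a spatial-domain one."""
    features = []
    for level in range(levels):
        n = 2 ** level                            # 1x1 block, then 2x2 blocks, ...
        for rows in np.array_split(image, n, axis=0):
            for block in np.array_split(rows, n, axis=1):
                hist, _ = np.histogram(block, bins=bins, range=(0, 256))
                features.append(hist)
    return np.concatenate(features)

img = np.random.default_rng(0).integers(0, 256, (32, 32))
vec = concatenated_histograms(img)
print(vec.shape)   # (40,): 8 bins x (1 + 4) blocks; no pixel coordinates survive
```

Because the vector records only bin counts, there is no way to recover or manipulate an individual pixel at a given (x, y) location from it.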
46. In contrast, Prisua's construction for "spatially matching," "aligning a set of pixels in the spatial domain," is consistent with what is generally understood by those of ordinary skill in the art. In other words, when performing "spatially matching" functions in the image plane, one brings into alignment a set of pixels defining an image region with another set of pixels defining a same-size region from the same or other images.

47. Accordingly, in my opinion, Prisua's proposed construction is correct. Samsung's construction is erroneous again because it does not allow for spatial processing to take place, as required by the very words of the '591 patent.

iii. "extracting"

48. In my opinion, a person of ordinary skill in the art of signal processing at the time of the invention disclosed in the '591 patent would understand the term "extracting" to mean an operation that selects and separates out data without affecting or modifying the underlying "video data stream" in any way.

49. As disclosed in the claims of the '591 patent, the digital processing unit ("DPU") performs various types of extractions. '591 Patent, 7:39-40 ("extracting the identified at least one pixel as the second image"); '591 Patent, 7:45 ("extracting the first image").

50. The plain language of the '591 patent reveals that the "extracting" operations by the DPU occur on a "video data stream." '591 Patent, 7:35-40 ("at least one pixel in the frame of the user input video data stream").

51. Those of ordinary skill in the art would understand that this type of "extraction" operation on a "data stream" does not throw away, discard, permanently delete, or harm the integrity of the underlying information in the "stream" data set. In other words, the data set remains intact, while the digital information required or requested is simply "copied" into memory for further use, as required by the digital processing unit. Indeed, the '591 patent's claim language supports this interpretation. ('591 patent, 7:41-42) ("storing the second image in a memory device operably coupled with the interactive media apparatus.")

52. Samsung's construction of simply "removing" data is too limiting. It assumes that the digital processing unit has sufficient permissions (i.e., read/write) over the data streams so as to be capable of modifying them permanently. Indeed, one of ordinary skill in the art would know that the DPU does not modify or alter the original and user input video data streams so as to remove or delete certain content. It would be understood that the creation of a "displayable edited video stream" resulting from the spatially matching operation performed by the digital processing unit would be achieved in memory.

53. In sum, one of ordinary skill in the art attempting to practice the '591 patent's disclosure would likely simply "select" and "copy out" into memory the information present in the data streams, and generate a new, third "data stream" from memory. Prisua's proposed construction therefore comports with the understanding of the term as known to those of skill in the art.

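The non-destructive character of such an "extraction" can be sketched as follows (a minimal Python illustration, assuming NumPy; the boolean mask is a hypothetical stand-in for the user's pixel selection):

```python
import numpy as np

def extract_pixels(frame, mask):
    """'Select and separate out' the masked pixels: copy them into memory
    as a new image, leaving the source frame untouched."""
    extracted = np.zeros_like(frame)
    extracted[mask] = frame[mask]                 # a copy, not a removal
    return extracted

stream = [np.arange(9, dtype=np.uint8).reshape(3, 3)]   # a one-frame "stream"
mask = stream[0] > 4                                    # the user's selection
second_image = extract_pixels(stream[0], mask)          # stored in memory
print(stream[0][0, 0], second_image[2, 2])   # 0 8 -> source frame unchanged
```

The source frame still holds every original pixel value after the call; only a copy of the selected pixels has been separated out into a new array.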
iv. "extracting the at least one pixel from the user entering data in the data entry display device"

54. The '591 specification discloses "user input data" that "can be captured directly" by a "stand alone device" that is equipped with "data entry devices such as a … displaying device." '591 Patent, 4:45-60.

55. In other words, to one of ordinary skill in the art, the '591 patent discloses a display device that can serve as an input device, so as to capture an input by a "user."

56. As of 2010, when the provisional application was presented to the USPTO, the use of display devices (e.g., touchscreens) as a way to capture user input (i.e., as input devices) was already generally well known to those of ordinary skill in the art.

57. Indeed, at the time of filing the '591 patent, input devices such as touchscreens were capable of receiving a selection of at least one pixel by a user, and transmitting or translating the input received into a digital signal, to be sent to a digital processing unit or other type of digital signal processor.

58. Commercially available touchscreens in the 2010 timeframe enabled control of cursor movement, adjustment of the control-display ratio, and pixel-accurate targeting with input from one or more fingers.

VII. Conclusion

59. In sum, the proposed constructions submitted by Prisua stay true to the claim language and most naturally align with the '591 patent's description of the invention.

60. In my opinion, and as supported by the plain language of the claims in the '591 patent, a "user input video data stream" is "a sequence of images digitally recorded by a user separate from the original video data stream."

61. The term "user input video data stream" cannot be, as Samsung proposes, "contained in a format for displaying the frames as a motion picture (e.g., ASF, MPEG-2, AVI)." Samsung's attempt to impose this limitation is mistaken because the compression applied by the enumerated file formats would render the '591 patent's invention meaningless, since image manipulation would no longer take place in the spatial domain.

62. Samsung is likewise incorrect to require that the "video data stream" take the form of a "motion picture." There is simply no mention of a "motion picture" in the body of the '591 patent or the prosecution history, and the invention disclosed is not centered on filmmaking.

63. Further, the term "spatially matching" is best construed as proposed by Prisua: "aligning a set of pixels in the spatial domain." As understood by those of skill in the art, the spatial domain processing disclosed in the '591 patent involves the replacement, in the two-dimensional image plane, of pixel values.

64. Samsung's proposed construction of the term "spatially matching" is contrary to the plain language of the claims of the '591 patent because the concatenation of "histograms extracted from all blocks into a long vector representation" generates a very long vector of histogram values defined for all sub-blocks, thereby creating a non-spatial-domain representation of the image. This contradicts the straightforward spatial (or spatial-domain) processing required by the claims of the '591 patent.

65. Moreover, in my opinion, a person of ordinary skill in the art would understand the term "extracting" to mean an operation that would not disrupt the integrity of the underlying "data stream." In other words, those of skill in the art would agree with Prisua's proposed definition of extracting as "select and separate out."

66. Those of ordinary skill in the art would understand that "extracting," contrary to Samsung's proposed construction, does not throw away, discard, permanently delete, or harm the integrity of the underlying information in the "stream" data set. In other words, the data set from which data is extracted remains intact, while the digital information extracted is simply "copied" into memory.

67. Finally, it is my opinion that those of skill in the art would understand the scope of claims 3 and 4 of the '591 patent. In 2010, the patent's priority date, the use of display devices (e.g., touchscreens) as a way to capture user input (i.e., as input devices) was already generally well known to those of ordinary skill in the art. Indeed, at that time input devices such as touchscreens were capable of receiving a selection of at least one pixel by a user, and transmitting or translating the input received into a digital signal, to be sent to a digital processing unit or other type of digital signal processor.

I hereby declare under penalty of perjury that the foregoing is true and correct.