U.S. Patent No. 10,218,995
Declaration of Clifford Reader

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Samsung Electronics Co. Ltd.
(“Samsung”)
Petitioner

v.

Advanced Coding Technologies, LLC
(“ACT”)
Patent Owner

INTER PARTES REVIEW OF U.S. PATENT NO. 10,218,995

DECLARATION OF DR. CLIFFORD READER
`
I declare that all statements made in this declaration on my own knowledge are true and that all statements made on information and belief are believed to be true, and further, that these statements were made with the knowledge that willful false statements and the like so made are punishable by fine or imprisonment, or both, under Section 1001 of Title 18 of the United States Code.
`
Date: ____________ By: ____________
`
`Clifford Reader, Ph.D.
`
SAMSUNG-1003
`
`

`

`
`
Contents

I. Professional Background ................................................ 8
   A. Summary ............................................................. 8
   B. Education ........................................................... 8
   C. Work Experience ..................................................... 8
   D. Standardization .................................................... 10
   E. Intellectual Property Rights ....................................... 12
   F. Curriculum Vitae ................................................... 13
II. Digital Video Technologies ........................................... 13
   A. Analog Video ....................................................... 13
   B. Digital Video ...................................................... 14
   C. Image Restoration; Superresolution ................................ 18
   D. Image Warping ...................................................... 24
   E. Video Coding ....................................................... 26
   F. Loop Filters ....................................................... 29
   G. Video Coding Standards ............................................. 32
      1. Background ...................................................... 32
      2. The MPEG1 Standards ............................................. 33
      3. The MPEG2 Standard .............................................. 36
      4. MPEG Scalability ................................................ 36
   H. The H.263 Standard ................................................. 42
   I. The H.264 Standard ................................................. 48
III. Relevant Legal Standards ............................................ 56
IV. Person of Ordinary Skill in the Art .................................. 59
V. Overview of the Subject Patent US10218995 ............................. 60
   A. Overview ........................................................... 60
      1. Field of Art .................................................... 60
      2. Background – Problem statement, Admitted Prior Art ............. 60
      3. Summary of the disclosed subject matter ........................ 61
      4. Comments ........................................................ 70
   B. Claims ............................................................. 72
   C. Prosecution History of the ’995 Patent ............................ 75
VI. Summary of the Applied Prior Art ..................................... 78
   A. Overview of Phek ................................................... 78
   B. Overview of Segall ................................................. 80
   C. Overview of Martins ................................................ 81
   D. Overview of He ..................................................... 82
VII. Claim Construction .................................................. 85
VIII. Ground 1 – Claims 2-4 and 11 are Obvious under 35 U.S.C. § 103 based on Phek in view of Segall, Martins, and He ... 85
   A. Overview of the Phek-Segall-Martins-He Combination ................ 85
      1. Obvious based on Segall to Encode/Decode Phek’s Base and (First) Enhancement Layer Streams at the Same Spatial Resolution ... 86
      2. Obvious based on Segall to Implement a Second Enhancement Layer in Phek’s System for Spatial Scalability ... 90
      3. Obvious based on Segall to Multiplex/Demultiplex Phek’s Base and Enhancement Layer Bitstreams ... 93
      4. Obvious based on Martins to Downscale Phek’s Super-Resolution Enlarged Reference Pictures ... 96
      5. Obvious based on He to Selectively Apply Phek’s/Martins’ Super-Resolution (and Downscaling) Loop Filter ... 103
   B. Relevance of the Phek-Segall-Martins-He Combination to the Claims ... 110
      1. Claim 2 ......................................................... 110
      2. Claim 3 ......................................................... 147
      3. Claim 4 ......................................................... 149
      4. Claim 11 ........................................................ 152
IX. Additional Remarks ................................................... 155
`
I, Clifford Reader, Ph.D., do hereby declare:

1. I am making this declaration at the request of Samsung in the matter of INTER PARTES REVIEW OF U.S. PATENT NO. 10,218,995, “the ’995 Patent.”

2. I am being compensated for my work in this matter at my standard hourly rate of $750 for consulting services. My compensation in no way depends on the outcome of this proceeding or the content of my testimony.

3. In preparing this Declaration, I considered the following materials:
`
SAMSUNG-1001 U.S. Patent No. 10,218,995 to Sakazume (“the ’995 Patent”)

SAMSUNG-1002 Excerpts from the Prosecution History of the ’995 Patent (“the ’995 Patent File History”)

SAMSUNG-1005 English Translation of Japanese Patent Publication No. 2007316161 A with Translation Certificate (“Phek”)

SAMSUNG-1006 Segall et al., Spatial Scalability Within the H.264/AVC Scalable Video Coding Extension, IEEE Transactions on Circuits & Systems for Video Tech., Vol. 17, No. 9, September 2007 (“Segall”)

SAMSUNG-1007 Martins et al., A Unified Approach to Restoration, Deinterlacing and Resolution Enhancement in Decoding MPEG-2 Video, IEEE Transactions on Circuits & Systems for Video Tech., Vol. 12, No. 9, September 2002 (“Martins”)

SAMSUNG-1008 U.S. Patent Publication No. 2008/0137753 (“He”)

SAMSUNG-1009 U.S. Patent No. 5,886,736 (“Chen”)

SAMSUNG-1010 Schwarz et al., Overview of the Scalable Video Coding Extension of the H.264/AVC Standard, IEEE Transactions on Circuits & Systems for Video Tech., Vol. 17, No. 9, September 2007 (“Schwarz”)

SAMSUNG-1011 U.S. Patent Publication No. 2009/0154567 (“Lei”)

SAMSUNG-1012 Park et al., Super-Resolution Image Reconstruction: A Technical Overview, IEEE Signal Processing Magazine, pp. 21-36, May 2003 (“Park”)

SAMSUNG-1013 English Machine Translation of Japanese Patent Publication No. 2008053848 (“Hatanaka”)

SAMSUNG-1014 U.S. Patent Publication No. 2010/0165077 (“Yin”)

SAMSUNG-1015 U.S. Patent Publication No. 2009/0257664 (“Kao”)

SAMSUNG-1016 U.S. Patent Publication No. 2009/0046995 (“Kanumuri”)

SAMSUNG-1017 U.S. Patent No. 6,470,051 (“Campisano”)

SAMSUNG-1018 U.S. Patent Publication No. 2006/0013306 (“Kim”)

SAMSUNG-1019 Lu et al., Mechanisms of MPEG Stream Synchronization, ACM SIGCOMM Computer Communication Review, Vol. 24, Issue 1, pp. 57-67 (January 1994) (“Lu”)

SAMSUNG-1020 U.S. Patent Publication No. 2003/0021345 (“Brusewitz”)

SAMSUNG-1021 U.S. Patent Publication No. 2003/0021347 (“Lan”)

SAMSUNG-1022 EETimes, How Video Compression Works (Aug. 6, 2007), available at https://www.eetimes.com/how-video-compression-works (retrieved Dec. 26, 2023) (“EETimes”)

SAMSUNG-1023 Bier, Introduction to Video Compression, Berkeley Design Tech., Inc. (October 2005), available at https://www.bdti.com/MyBDTI/pubs/20051024_GSPx05_Video_Intro.pdf (retrieved Dec. 30, 2023) (“Bier”)

SAMSUNG-1024 U.S. Patent No. 7,379,496 (“Holcomb”)

SAMSUNG-1025 U.S. Patent Publication No. 2008/0043832 (“Barkley”)

SAMSUNG-1026 U.S. Patent No. 6,173,013 (“Suzuki”)

SAMSUNG-1030 Ely, MPEG Video Coding: A Simple Introduction, EBU Technical Review (Winter 1995) (“Ely”)

SAMSUNG-1031 Japanese Patent Publication No. 2007316161 A with English Machine Translation (“Phek”)

SAMSUNG-1032 ITU-T Recommendation H.264 (11/07)

SAMSUNG-1033 Karczewicz et al., The SP- and SI-Frames Design for H.264/AVC, IEEE Transactions on Circuits and Systems for Video Technology, v.13 (July 2003)

SAMSUNG-1034 ISO/IEC 11172-2: 1993 (E) (MPEG1 Video)

SAMSUNG-1035 ISO/IEC 13818-2: 2000 (E) (MPEG2/H.262 Video)

SAMSUNG-1036 ITU-T Recommendation H.263 (02/98)

SAMSUNG-1037 Reader, Intraframe and Interframe Adaptive Transform Coding, Efficient Transmission of Pictorial Information, SPIE v.66, 108 (1975)

SAMSUNG-1038 Reader, MPEG4: Coding for Content, Interactivity, and Universal Accessibility, Optical Engineering 35(1), 104 (January 1996)

SAMSUNG-1039 Reader, MPEG Patents, in MPEG Video Compression Standard, Chapter 16, at 357-362 (Mitchell et al. ed.) (1996)

SAMSUNG-1040 Radha et al., The MPEG-4 Fine-Grained Scalable Video Coding Method for Multimedia Streaming Over IP, IEEE Transactions on Multimedia, v.3, 53 (March 2001)

SAMSUNG-1041 H.264 / MPEG-4 Part 10 White Paper, January 31, 2003 (downloaded from https://github.com/yistLin/H264-Encoder/blob/1cdaf090b63642932ed726b102ed2cf80908edbb/doc/H.264%20%3A%20MPEG-4%20Part%2010%20White%20Paper.pdf on Jan. 5, 2024)

SAMSUNG-1042 ISO/IEC 14496-2: 2004 (MPEG4)

SAMSUNG-1043 ISO/IEC 13818-2: 1996 (MPEG2)
`
`

`

`
I. Professional Background

A. Summary

4. I am a digital media consultant providing technical, business development, and intellectual property consulting services in the areas of digital media, including digital imaging, digital video, digital audio, and digital speech. Applications include consumer audio and video transmission and storage; video, audio and speech compression; real-time video processing and display; digital speech processing; image/video/audio systems architecture; and image/video/audio chip architecture. I have held this position since 2001 and have consulted for over 80 clients.
`
B. Education

5. I received my Doctoral degree in 1974 from the University of Sussex, England. My thesis was titled “Orthogonal Transform Coding of Still and Moving Pictures.” The research for my thesis was performed in residence at the Image Processing Institute, University of Southern California, Los Angeles.

6. I received my B.Eng. degree with Honors in 1970 from the University of Liverpool, England, in the field of electronics.
`
C. Work Experience

7. From 1970 to 1973 I performed my graduate research in video compression. I was one of the first to perform a type of image coding (adaptive block transform coding) and the first to apply this type of coding to video. This is described in my thesis and summarized in an SPIE paper. SAMSUNG-1037. These techniques underlie the audiovisual coding standards known as MPEG (Moving Picture Experts Group), H.26x, and virtually all other video compression schemes today.
`
8. From 1975 to 1981 I studied, designed, and developed military imaging systems, including real-time image and video reconnaissance systems and battlefield management systems.
`
9. In the early 1980s I taught classes at Santa Clara University, California, in digital signal processing and digital image processing.
`
10. From 1982 to 1989 I architected and led hardware and software engineering teams in the design of systems for real-time imaging for military, medical, and earth resources applications. In 1983 I designed an image warping system that could perform perspective geometric warping of images in near-real-time. My team implemented the system in hardware on a single VME board under software control. Significant sales ensued over the next years, with a pull-through effect on sales of the complete image processing system in which it was an option. This product design was published in a 1984 paper1.
`
1 Adams J, Patton C, Reader C, Zamora D, Hardware for Geometric Warping, Electronic Imaging, April 1984.

11. From 1990 to 2001 I architected and led hardware and software engineering teams in the design of semiconductor chips and systems for real-time imaging in digital consumer audio/video applications, including videoconferencing (with speech coding), broadcast TV, and DVD.
`
D. Standardization

12. In 1990 I became an accredited delegate to the Moving Picture Experts Group – MPEG. My team and I contributed to the technical work for all three parts of the standard – Systems, Video and Audio – and I participated in the management of the MPEG committee.
`
13. I was Head of Delegation (HoD) to MPEG for the United States in 1991-1992, during which time the MPEG1 standard was completed and the MPEG2 standard was successfully positioned as the global standard for digital video broadcasting and recording (DVD). I was the Editor in Chief of the MPEG1 standard. I personally reviewed and edited all three parts of the standard in detail, and wrote much of the informative annex for the MPEG1 standard. I participated in the technical and management work for development of the MPEG2 standard.
`
14. I chaired the implementation subcommittee that analyzed MPEG1 Audio (Levels I & II, aka MUSICAM; Level III, aka mp3), Dolby AC3, and other proposed audio compression algorithms, including legacy algorithms, for complexity and cost of implementation.
`
15. I was a co-founder of the MPEG4 standard and chaired the subcommittee from inception for 2-1/2 years, beginning in 1993, following which I chaired the MPEG4 Requirements subcommittee for a further 2 years. These subcommittees established many of the fundamental principles of the MPEG4 standard, including object-based coding, software-based implementation, and development of the bitstream as a syntactic language. E.g., SAMSUNG-1038. MPEG4 focused on audiovisual low bitrate coding, including low bitrate speech coding.
`
16. The MPEG4 standard in ISO and the H.263 standard in ITU-T were developed collaboratively, with the H.263 Rapporteur and me synchronizing meetings and establishing profiles in each standard that are precisely compatible.
`
17. I was instrumental in establishing the work on Advanced Audio Coding (AAC).
`
18. I initiated the work in Synthetic-Natural Hybrid Coding (SNHC), and personally contributed to the work on compression of 3D graphics in the area of error resilient coding.
`
19. In the early 2000s, I was an invited expert to the Joint Video Team (JVT) established by ISO and ITU to develop the H.264 standard (also denoted MPEG4 Pt. 10, AVC). I was also invited to become an officer of the China National Standards committee AVS, in which I chair the Intellectual Property Rights Subgroup.
`
20. I have closely followed the developments of the H.265 and H.266 standards.
`
`

`

E. Intellectual Property Rights

21. In 1993 I was hired by CableLabs to be the technical expert for establishing the MPEG Patent Pool (now MPEG LA). In the course of creating a list of essential IP to practice the standard, I reviewed approximately 10,000 abstracts and 1,000 patents. This is summarized in a chapter of the MPEG book by Mitchell et al. SAMSUNG-1039.
`
22. Multiple companies have hired me to assist in developing portfolios of their standards-essential patents.
`
23. In 2002 I was hired by 10 companies to evaluate the standards-essential patent environment for the nascent H.264 standard.
`
24. In 2003 I was hired to evaluate the standards-essential patent environment for the nascent national China AVS standards.
`
25. In 2013 I was hired to evaluate the standards-essential patent environment for the nascent AV1 standard.
`
26. In 2017 I was hired to evaluate the standards-essential patent environment for the H.265/HEVC standard.
`
27. I am the Co-Director of the China AVS Patent Pool Administration. I lead the negotiations for sub-licensing of the AVS standards. In 2022 I played a leading role in the adoption of the AVS3 video standard by DVB.
`
`

`

28. I have performed expert consulting and expert witness work for patent holders and defendants in patent licensing negotiations and litigation.
`
F. Curriculum Vitae

29. Additional information concerning my professional publications and presentations in the field of digital video, and the cases in which I have served as an expert, is set forth in my current Curriculum Vitae, a copy of which is attached as Exhibit A. The Curriculum Vitae lists many publications authored or co-authored by me, and lists the cases in which I have testified via deposition and at trial.
`
II. Digital Video Technologies

30. Below I describe the state of digital video coding as of the ’995 Patent’s earliest claimed priority date (May 30, 2008), and summarize the development of technologies underlying the features and methods recited in claims 2-4 and 11.
`
A. Analog Video

31. Like film, video comprises a rapid sequence of still images that the human vision system integrates into a perception of motion. The “moving images” are often called video frames or video pictures.
`
32. All-electronic television2 was invented by Philo Farnsworth in the late 1920s. Farnsworth developed the technique of raster scanning images and produced cameras and CRTs (cathode ray tubes) that could synchronously acquire and reproduce moving images.

2 Electro-mechanical television systems were also introduced.
`
33. All-electronic television service was introduced by the BBC in England in November 1936. Broadcasting of television programs continued until interrupted by the Second World War. Television service began in the U.S. in July 1941. These services provided only black-and-white video until 1953, when the NTSC standard in the U.S. was revised to include color. Analog broadcasting continued in the U.S. for almost seventy years; full-power analog television broadcasting ceased in 2009 under order from the FCC.
`
B. Digital Video

34. Digitization of analog signals in general was introduced in the late 1940s under the name “pulse code modulation” (PCM). Fundamental sampling theory showed that a bandwidth-limited analog signal could be sampled at twice the rate of the highest frequency in the band and still be perfectly reconstructed from such samples. The sample amplitudes could be represented by the value of a number, e.g., a decimal value or a binary value. In the case of image or video data, it has been widespread practice to represent this value by an unsigned 8-bit integer. Analog TV signals that scanned a video frame were digitized into two-dimensional arrays of digital picture elements, known as “pels” or “pixels.” Digital video, like analog video (and film), comprises a rapid sequence of frames.
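The 8-bit sample representation described above can be illustrated with a short sketch (my own example, not from the declaration): normalized analog amplitudes are mapped to the 256 levels of an unsigned 8-bit integer.

```python
import numpy as np

# Illustrative PCM quantization sketch (not from the declaration): analog
# amplitudes in [0, 1) are mapped to unsigned 8-bit integers, the common
# representation for image/video sample data.

analog = np.array([0.0, 0.25, 0.5, 0.999])     # normalized sample amplitudes
pcm = np.floor(analog * 256).astype(np.uint8)  # 256 levels: code words 0..255

print(pcm)  # [  0  64 128 255]
```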
`
`

`

35. While a correctly bandpass-filtered and sampled digital image, such as a video frame, theoretically includes spatial frequencies up to the Nyquist limit, in practice digital images contain spatial frequencies only up to a much lower level. There are many reasons, which may depend on the particular application. For example, the optical assembly that captured the image may limit the optical resolution of the data, due to factors including lens materials and aberrations. Another example applies in situations where the imaging occurs in an outdoor environment in which atmospheric turbulence exists between an object being imaged and the image capture system – we may see this ourselves as the heat haze above a highway when driving in hot weather. Another example is electronic noise in the image capture and processing system. Therefore, while the scene being imaged – the object space – may contain spatial frequencies up to a given level, a digital imaging system designed with a Nyquist frequency at that level will not in practice capture spatial frequencies that high. We may refer to such an image as blurred.
`
36. In this context, it is important to understand what is meant by the term “resolution”. A distinction should be made between the resolution of the information in an image (such as a video frame) and the data representing the information in the image. The latter comprises the array of pixels referred to above, and it is common in everyday speech, and within the imaging/video community itself, to refer to the number of pixels or the number of lines as the “resolution” of the image. As just explained, typical digital images do not contain spatial frequencies up to the theoretical limit given by the number of pixels/lines. For example, many high-definition television (HDTV) receivers comprise a pixel array that vertically has 1080 lines. There are two principal broadcast formats – NBC, CBS and PBS broadcast in a “1080i” format (where the i denotes interlace scan), while ABC and Fox broadcast in a “720p” format (where the p denotes progressive scan). Both are presented on the HDTV in a 1080p format, by processing the 1080i broadcasts to deinterlace the signals and upsampling the 720p broadcasts to 1080 lines. But according to the Kell principle, the 1080i signal only contains about 700 lines of perceivable information, and when either format is converted to 1080p format, neither of them has 1080 lines of “information” even though they both have 1080 lines of “resolution”.
`
37. PCM was followed in 1952 by the invention at Bell Labs of differential pulse code modulation (DPCM), which was immediately applied to video. Because the correlation between pixels in typical scenes was high, it was efficient to code the pixels by successively predicting each pixel from the preceding pixel or pixels. AT&T demonstrated its Picturephone in 1964 at the World’s Fair. Subsequently, commercial Picturephone service was introduced in 1970, using DPCM coding.
`
38. Digital video was introduced in the professional television studio environment with the CCIR 601 standard, which was developed in the early 1980s and published in 1982 (now known as the ITU-R BT.601 standard). The standard covered PCM data formats for the NTSC and PAL video standards and facilitated the development of modern production and post-production studios, involving non-linear editing, digital switchers, and mixers, as well as digital special effects.
`
39. Digital video in its raw format is highly redundant. Each video frame comprises a regular, rectangular array of pixels. The density of the pixel array is chosen so as to portray the highest spatial resolution enabled by a particular video system. Historically, the earliest systems portrayed 400-500 lines of resolution. The so-called HDTV systems introduced in the 1990s doubled the number of lines to 1000 lines, and the recent so-called 4K systems double the number of lines again to 2000 lines. The imminent next generation of so-called 8K systems will further double the number of lines to 4000 lines, which will approximate the resolution of a 35mm film system.
`
40. Historically the frame rate was 30 frames per second, but one of the HDTV formats doubled this to 60 frames per second.
`
41. Transmission or storage of raw digital video data requires huge volumes of data and very fast data rates. Transmitting standard-definition color TV with a format of 720x486 pixels at 30 frames/s would require over 250 Mbits/s. HDTV would require approximately 5 times as much data, and so-called 4K TV (UHDTV) at 60 frames/s would require approximately 40 times as much data.
`
42. Within a given scene being imaged by a video system, the content is very likely not to contain the highest level of detail except in a few, small areas. While a prescribed density of pixels may accurately describe such high-detail areas, that density of pixels is overkill for the rest of the scene to a greater or lesser degree. So the typical digital video frame is spatially redundant. For example, if the video scene contains the sky, or a uniformly painted wall, adjacent pixels will have almost identical intensity and color.
`
43. The frame rate of the video is established to smoothly portray the fastest motion the system is designed to support. But such motion may occur only a fraction of the time, and may occur only over a portion of a frame spatially. So the typical video sequence of frames is temporally redundant. For example, sportscasters sitting at a desk exhibit little motion, and over only a portion of the scene; but when the broadcast cuts away to basketball, the action is frenetic and may be scene-wide. In the former case the pixel values will be almost identical from one frame to the next over most of the frame.
`
C. Image Restoration; Superresolution

44. The field of image restoration is concerned with providing the maximum image quality possible within a pixel array of a given dimension. The raw image may be degraded by various aberrations and corrupted by noise, as discussed above. The fine details within the image, which in general correspond to the high-frequency content of the image, may not be visible to a viewer. Image restoration is the class of processing that reverses the aberrations and suppresses the noise, thus exposing the fine details.
`
45. Research in this field is over 50 years old, and comprises multiple approaches involving linear and non-linear filtering, statistical methods, and perceptual models. Early work in this field is compiled in the book Digital Image Restoration, Prentice-Hall, 1977, by Andrews and Hunt. Approaches to the problem include mathematical modeling of the aberrations and applying the inverse of the model to correct for the aberrations, or statistical modeling of the noise and applying digital signal processing to filter out the noise.
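The model-and-invert approach can be sketched in one dimension (entirely my own illustration: a known circular blur kernel and a noise-free observation; with noise present, a regularized inverse such as a Wiener filter would be used instead of a bare inverse).

```python
import numpy as np

# Sketch of model-based restoration (illustrative assumptions: the blur model
# is known exactly and there is no noise). A 1-D signal is blurred by circular
# convolution, then restored by applying the inverse of the blur model in the
# frequency domain.

rng = np.random.default_rng(1)
x = rng.random(64)                      # original signal (unknown to restorer)

h = np.zeros(64)
h[:3] = [0.5, 0.3, 0.2]                 # assumed known aberration model
H = np.fft.fft(h)                       # its frequency response (no zeros here)

y = np.fft.ifft(np.fft.fft(x) * H).real      # degraded observation
x_hat = np.fft.ifft(np.fft.fft(y) / H).real  # inverse filter restores x

assert np.allclose(x_hat, x)
```

The choice of kernel matters: its frequency response must have no zeros, otherwise the inverse filter divides by zero at those frequencies, which is exactly why practical systems regularize.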
`
46. One member of the class of image restoration techniques is superresolution. The term implies a process to create an output image having spatial resolution higher than the theoretical limit of the dimensions of the image pixel array of the input image. This is not literally possible, and in practice the term is used to describe enhanced spatial frequency content in two different ways. In the first way, the term refers to restoring the spatial frequency content of an image within the theoretical limits of the pixel array dimensions. This means that the information content of the superresolution-processed image is higher than that of the input image, within the same pixel array dimensions. In the other way, the term is used to describe a process in which the processed image possesses a pixel array having larger dimensions than the input image. For example, the output superresolution image may have twice as many pixels horizontally and vertically (four times as many pixels in total).
`
47. Merely doubling the number of pixels horizontally and the number of pixels or lines vertically in the representation of an image does not increase the resolution of the image3. Using superresolution processing to produce an output image that has double the pixels in each dimension may increase the information content of the image, i.e., the true resolution of the image; at the same time, the output image has twice the “resolution” in pixels horizontally and vertically. But the two are not synonymous.
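The pixel-count versus information distinction can be demonstrated numerically (my own sketch): enlarging an image by pixel replication doubles the “resolution” in pixels but creates no new information, since the original is exactly recoverable from the enlarged array.

```python
import numpy as np

# Illustration (my own example) of the pixel-count vs. information
# distinction: replicating each pixel 2x2 doubles the array dimensions, but
# decimating the result returns the original exactly -- nothing was added.

img = np.arange(16, dtype=np.uint8).reshape(4, 4)

zoomed = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

assert zoomed.shape == (8, 8)                 # 4x the pixel "resolution"
assert np.array_equal(zoomed[::2, ::2], img)  # but the content is unchanged
```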
`
3 One can easily check this on one’s own computer – zooming in on (scaling up) an image that is initially being displayed with each pixel in the image corresponding to a pixel on the display will not cause the displayed image to become sharper, i.e., show more information; instead it will look blurry and eventually the individual pixels will become visible as square blocks on the screen.

48. Improving the spatial frequency content of an image to provide “superresolution” when the raw image spatial frequencies are limited by factors such as physical phenomena in the image capture process may seem impossible. But if multiple images of the same scene (object space) are available, and these images are offset from each other by translational differences that are fractions of the dimensions of a single pixel, then the assembly of such images does contain higher spatial frequency content than any one member of the assembly. A very well-established superresolution method in the art is to register such multiple images with sub-pixel accuracy, and to filter the registered image to output an image with higher spatial frequencies than any one of the input images.
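The register-and-combine method described above can be sketched with an idealized example (entirely my own construction: four low-resolution views offset by exactly half a low-resolution pixel, with the shifts known perfectly and no blur or noise; real systems must estimate the shifts and deblur).

```python
import numpy as np

# Idealized multi-image superresolution sketch (my illustration): four
# low-resolution views of one scene, each offset by half an LR pixel (one HR
# pixel), are registered onto a common high-resolution grid. Together they
# supply every HR sample, though each view alone has only 1/4 of the pixels.

rng = np.random.default_rng(0)
scene = rng.random((8, 8))           # the "object space" on the HR grid

offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]  # known sub-pixel shifts, HR units
views = {off: scene[off[0]::2, off[1]::2] for off in offsets}  # decimate by 2

# Reconstruction: place each view's samples at their registered HR positions.
recon = np.zeros_like(scene)
for (dy, dx), lr in views.items():
    recon[dy::2, dx::2] = lr

assert np.array_equal(recon, scene)  # all HR information recovered
```

The point of the idealization is that the sub-pixel offsets are what make the extra information available; with all four views at the same offset, three of them would be redundant copies.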
`
49. Many4 prior art publications describe this approach to superresolution, including admitted prior art in the ’995 patent. See Super-Resolution Image Reconstruction: A Technical Overview, by Sung C. P. and Min K. P., IEEE Signal Proc. Magazine, Vol. 26, No. 3, pp. 21-36, 2003; ’995, 22:47-50. The source of the multiple images of the same scene that are to be registered may be satellite images taken on multiple passes over the same location. This is described in a 1984 paper by Tsai: Tsai R Y, Huang T S, Multiframe image restoration and registration, Advances in Computer Vision and Image Processing, Vol. 1, T. S. Huang, ed., Jai Press, pp. 317-319, 1984. The ’995 Patent, for example, refers to the Park prior art reference (SAMSUNG-1012) in discussing super-resolution enlargement. SAMSUNG-1001, 22:47-50. Park summarizes super-resolution process

This document is available on Docket Alarm but you must sign up to view it.


Or .

Accessing this document will incur an additional charge of $.

After purchase, you can access this document again without charge.

Accept $ Charge
throbber

Still Working On It

This document is taking longer than usual to download. This can happen if we need to contact the court directly to obtain the document and their servers are running slowly.

Give it another minute or two to complete, and then try the refresh button.

throbber

A few More Minutes ... Still Working

It can take up to 5 minutes for us to download a document if the court servers are running slowly.

Thank you for your continued patience.

This document could not be displayed.

We could not find this document within its docket. Please go back to the docket page and check the link. If that does not work, go back to the docket and refresh it to pull the newest information.

Your account does not support viewing this document.

You need a Paid Account to view this document. Click here to change your account type.

Your account does not support viewing this document.

Set your membership status to view this document.

With a Docket Alarm membership, you'll get a whole lot more, including:

  • Up-to-date information for this case.
  • Email alerts whenever there is an update.
  • Full text search for other cases.
  • Get email alerts whenever a new case matches your search.

Become a Member

One Moment Please

The filing “” is large (MB) and is being downloaded.

Please refresh this page in a few minutes to see if the filing has been downloaded. The filing will also be emailed to you when the download completes.

Your document is on its way!

If you do not receive the document in five minutes, contact support at support@docketalarm.com.

Sealed Document

We are unable to display this document, it may be under a court ordered seal.

If you have proper credentials to access the file, you may proceed directly to the court's system using your government issued username and password.


Access Government Site

We are redirecting you
to a mobile optimized page.





Document Unreadable or Corrupt

Refresh this Document
Go to the Docket

We are unable to display this document.

Refresh this Document
Go to the Docket