(12) United States Patent
Dunton et al.

(10) Patent No.: US 6,512,541 B2
(45) Date of Patent: *Jan. 28, 2003

(54) INCREASING IMAGE FIELD OF VIEW AND FRAME RATE IN AN IMAGING APPARATUS

(75) Inventors: Randy R. Dunton; Lawrence A. Booth, Jr., both of Phoenix, AZ (US)

(73) Assignee: Intel Corporation, Santa Clara, CA (US)

(*) Notice: This patent issued on a continued prosecution application filed under 37 CFR 1.53(d), and is subject to the twenty year patent term provisions of 35 U.S.C. 154(a)(2). Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.

(21) Appl. No.: 08/986,754
(22) Filed: Dec. 8, 1997
(65) Prior Publication Data: US 2001/0050712 A1, Dec. 13, 2001
(51) Int. Cl.7: H04N 5/235
(52) U.S. Cl.: 348/230; 348/240; 348/239
(58) Field of Search: 348/222, 224, 229, 230, 240, 239, 14, 15, 18, 19, 340, 135, 169, 172; 250/203.1, 203.6

(56) References Cited

U.S. PATENT DOCUMENTS

5,031,049 A * 7/1991 Toyama et al. ... 348/352
5,301,244 A * 4/1994 Parulski ... 382/319
5,305,046 A * 4/1994 Sato ... 396/123
5,471,572 A * 11/1995 Buchner et al. ... 395/139
5,572,253 A 11/1996 Ueda ... 348/222
5,734,424 A 3/1998 Sasaki
5,734,508 A 3/1998 Sato ... 359/687
5,812,189 A 9/1998 Kimura et al. ... 348/240
5,822,625 A * 10/1998 Leidig et al. ... 396/77
5,920,657 A * 7/1999 Bender et al. ... 382/284
6,005,609 A * 12/1999 Cheong ... 348/169
6,005,613 A * 12/1999 Endsley et al. ... 348/231

OTHER PUBLICATIONS

"Canon's Optura: A Digital Camcorder That Performs Like an SLR Camera," HyperZine Manufacturer's View, original posting: 1997-08-25; revised: 1997-09-03, pp. 1-3.

"From Consumer Electronics Online News: Aug. 25, 1997," HyperZine Expert's View, original posting: 1997-08-25, pp. 1-3.

* cited by examiner

(74) Attorney, Agent, or Firm: Blakely, Sokoloff, Taylor & Zafman LLP
(57) ABSTRACT

An imaging apparatus that is configurable to operate in at least two modes. One mode is particularly suitable for still image capture, whereas the second mode is suitable for video image capture and other rapid frame rate applications. The image data in the second mode is smaller (lower resolution) than the image data obtained in the first mode. The reduction is accomplished by digital scaling, by cropping, or by a combination of optical scaling and selective readout of sensor signals. Simple digital scaling provides a fixed angular field of view for both modes of operation, while cropping alone gives a smaller field of view. Using the combination of optical scaling and selective sensor signal readout, however, provides a wider field of view for the second mode of operation while at the same time providing lower resolution images, thus improving frame rate in the second mode of operation. The embodiments can be used in a wide range of imaging applications, including digital cameras used for both still image capture and video.

17 Claims, 6 Drawing Sheets
[Front-page figure: block diagram of the image capture apparatus 100, showing a near scene 102 and a distant scene, the lens system 106, the viewing angle, the video image path, the communication interface 154, the local user interface 158, the local storage 122, and the distant image data 172.]
[Sheet 1 of 6 — FIG. 1: block diagram of the digital image capture apparatus, including the system controller, the local user interface, and related components.]

[Sheet 2 of 6 — FIG. 2: detail of the optical system used to generate a near image.]
[Sheet 3 of 6 — FIG. 3: detail of the optical system that generates a smaller field of view but larger image.]
[Sheet 4 of 6 — FIG. 4: the image sensor 114 with a projected near image, indicating the selected rows and columns for near images, the sensor signals read for the video image, and the sensor signals read for distant images.]
[Sheet 5 of 6 — FIG. 5: data flow diagram of the signal processing block for video and still modes of operation, with address and 16-bit data interfaces.]
[Sheet 6 of 6 — FIG. 6: flow diagram of imaging operations: identify mode of operation (video or still); select spatial scaling ratio, decorrelation and encoding schemes, and packing format for the identified mode of operation; scale image data to obtain scaled image data based on the selected spatial scaling ratio; decorrelate the scaled image data in preparation for encoding according to the selected decorrelation scheme; compress the decorrelated image data according to the selected entropy encoding scheme; pack the compressed data into the selected format.]
INCREASING IMAGE FIELD OF VIEW AND FRAME RATE IN AN IMAGING APPARATUS
BACKGROUND

This invention is generally related to electronic imaging, and more particularly to changing an image field of view and image frame rate in an imaging apparatus.

Modern electronic imaging systems have become an important part of every household and business, from traditional applications such as video cameras and copiers to more modern ones such as the facsimile machine, scanner, medical imaging devices, and more recently, the digital camera. The digital camera has been developed as a portable system that acquires and stores detailed still images in electronic form. The images may be used in a number of different ways, such as being displayed in an electronic photo-album or used to embellish graphical computer applications such as letters and greeting cards. The still images may also be shared with friends via modem anywhere in the world within minutes of being taken.

Most purchasers of digital cameras have access to a desktop computer for viewing the still images. Therefore, such purchasers might also enjoy using the digital camera to communicate with another person via videoconferencing or to view images of motion in a scene. Using a digital camera as a video camera or videoconferencing tool, however, presents requirements that may conflict with those for capturing still images. For instance, due to the limited transmission bandwidth between the camera and a host computer used for viewing video images, the transmitted frames of video images must typically be of lower resolution than still images.

To meet a given image frame rate over a limited transmission bandwidth, one solution is to simply electronically scale the detailed still image frames into lower resolution image frames prior to transmitting them. Alternatively, the detailed image can be "cropped" to a smaller size, and therefore lower resolution, image. In this way, the amount of spatial data per image frame is reduced, so that a greater frame rate can be achieved between the digital camera and the host computer.

Electronic scaling and/or cropping of the detailed image, however, does not address another problem posed by videoconferencing, namely that due to the close proximity of the object (a person's face or body) to the digital camera during the video phone or videoconferencing session, a wider field of view is required of the images. The field of view can loosely be thought of as relating to the fraction of the scene included in the transmitted image frame.

Digital cameras typically use an optical system with a fixed effective focal length. Although a detailed still image using such a camera could have an acceptable field of view for distant scenes, electronically scaling the image for video operation does not increase the field of view, while cropping actually decreases the field of view. Therefore, what is needed is a mechanism that allows a digital camera to capture images of close-up scenes having a wider field of view but with lower resolution, in order to increase frame rate for rapid frame rate applications such as video phones and videoconferencing.
SUMMARY

The invention in one embodiment is directed at a circuit for processing first sensor signals to yield first digital image data, where the first signals are generated by an image sensor in response to a first image of a scene projected on the sensor. The circuit is further configured to process second sensor signals to yield second digital image data having a lower resolution than the first data. The second signals are also generated by the image sensor, but this time in response to a second image projected on the sensor, where the second image has a greater angular field of view but is smaller than the first image.

The circuit may be incorporated into an imaging apparatus such as a digital camera as a different embodiment of the invention. The imaging apparatus includes the image sensor coupled to an optical system, where the optical system has an adjustable effective focal length, such as in a zoom lens, in order to focus light from a scene onto the sensor to create the first and second images. The second image data is obtained through a combination of (1) the optical system being adjusted to project the second image, having a wider field of view than the first image, on the image sensor, and (2) the circuit processing the second sensor signals which are generated in response to the second image. The first image data is generated while the camera operates in "still" mode to capture detailed images of distant scenes, whereas the second image data results while operating in "video" mode to capture less detailed but wider angle images of near scenes typically encountered during, for example, videoconferencing.
BRIEF DESCRIPTION OF THE DRAWINGS

These and other features as well as advantages of the different embodiments of the invention will be apparent by referring to the drawings, detailed description, and claims below, where:

FIG. 1 illustrates a digital image capture apparatus providing images in dual mode according to a first embodiment of the invention.

FIG. 2 illustrates the detail in an embodiment of the optical system used in the imaging apparatus to generate a near image.

FIG. 3 illustrates detail in the optical system that generates a smaller field of view but larger image, according to another embodiment of the invention.

FIG. 4 is a diagram of an image sensor with a projected near image and the associated sensor signals to be processed by a circuit embodiment of the invention.

FIG. 5 shows a data flow diagram of the path taken by image data for video and still modes of operation.

FIG. 6 illustrates a flow diagram of imaging operations that can be performed by the embodiment of FIG. 5.
DETAILED DESCRIPTION

As briefly summarized above, the embodiments of the invention are directed at an apparatus and associated method for capturing images having an increased angular field of view while at the same time allowing an increased frame rate due to their lower resolution. The techniques are particularly suitable for an imaging system such as a digital camera that operates in at least two modes to provide still and video images. The video images have lower resolution but a greater angular field of view than the still images. The greater angular field allows the video images to capture close-up scenes that are typically found in videoconferencing sessions, while their lower resolution permits the transmission of images at a higher frame rate to a host processor over a limited transmission bandwidth. The method and apparatus embodiments of the invention achieve such a result by a combination of optical scaling, to obtain a smaller image size for video mode, and selectively reading only those sensor signals which are generated in response to the smaller image.

For purposes of explanation, specific embodiments are set forth below to provide a thorough understanding of the invention. However, as understood by one skilled in the art from reading this disclosure, the invention may be practiced without such details. Furthermore, well-known elements, devices, process steps, and the like are not set forth in detail in order to avoid obscuring the invention.
FIG. 1 shows a digital image capture apparatus 100 according to a first embodiment of the invention. The apparatus 100 has an optical system 108, including a lens system 106 and aperture 104, for being exposed to incident light reflected from a scene whose image is to be captured. For this embodiment, two scenes in particular are identified, a near scene 102 and a distant scene 103. The near scene may be, for instance, a videoconferencing session where a user is sitting at a desk with the apparatus 100 positioned approximately 2 feet in front of the user. The distant scene 103 includes the presence of objects that are located farther away from the apparatus 100, e.g., 8-10 feet, such as when taking still images.
The apparatus 100 may also include a strobe 112 or electronic flash for generating supplemental light to further illuminate the scenes when the apparatus 100 is operating under low light conditions.
The optical system 108 channels the incident light rays onto an electronic image sensor 114. The image sensor 114 has a number of pixels or photocells (not shown) which are electrically responsive to incident light intensity and, optionally, to color. Each of the pixels in the sensor 114 generates a sensor signal; together, these signals represent a captured image with sufficient resolution to be acceptable as a still image. Contemplated resolutions include 640x480 and higher for acceptable quality still images.
The sensor 114 generates sensor signals in response to an image of a scene formed on the sensor. The signal processing block 110 then processes the sensor signals into captured digital image data representing the image projected on the sensor 114. An analog-to-digital (A/D) converter (not shown) may be included in the sensor 114, as part of the same single integrated circuit die, to generate digital sensor signals (one per pixel) that define a digital image of the exposed scene.
The captured image data include near image data 170 and distant image data 172. These are obtained in part by adjusting the optical system 108 to change its focal length and, more generally, its modulation transfer function (MTF), to focus images of either the near scene 102 or distant scene 103 onto the sensor 114 at the focal plane of the optical system. The signal processing unit 110 processes the sensor signals according to image processing methodologies to yield the near image data 170 or the distant image data 172. The near image data may be provided as video images which are streamed to an image processing system such as a host computer (not shown) via the communication interface 154. The larger and greater resolution distant image data may also be transferred to the host via the interface 154, but at a lower frame rate. The image data is then decompressed (if necessary), rendered, and/or displayed in the host computer.
The image data, particularly the distant (or still mode) image data, may optionally be stored in a local storage 122 aboard the apparatus 100. The local storage 122 may include a FLASH semiconductor memory and/or a rotating media device such as a hard disk. The FLASH memory may be removable, such as the Intel® Miniature Card. The rotating media may also be removable or fixed, and may be of the magnetic disk or other type suitable for storing image data files.
The apparatus 100 can be configured to operate in at least two modes. A first mode generates near image data 170, suitable for video operation. A second mode generates distant image data 172, suitable for still image capture. Mode selection can be made by the user via a mechanical control (not shown) on the apparatus 100. Mechanical knob settings can be received and translated by a local user interface 158 into control signals and control data that are processed by a system controller 160. Alternatively, the apparatus 100 can be tethered to the host computer (not shown), such as a personal computer (PC), via the communication interface 154. The user can then make the mode selection through software running on the host, which in turn communicates the proper control signals and data to the system controller 160.
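As a rough software analogy of this mode-selection path, the controller can be pictured as simply forwarding a per-mode configuration (sensor read window, scaling ratio, encoding choice) to the signal processing block. This is a hypothetical sketch only; the class, method, and parameter names below are illustrative and are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class CaptureConfig:
    # Hypothetical configuration fields for one mode of operation.
    read_window: tuple          # (row0, col0, rows, cols) of sensor pixels to read
    scaling_ratio: int          # e.g. 4 means 4:1 sub-sampling
    uniform_code_lengths: bool  # simpler symbol set for video-rate decoding

STILL = CaptureConfig(read_window=(0, 0, 576, 768), scaling_ratio=1,
                      uniform_code_lengths=False)
VIDEO = CaptureConfig(read_window=(144, 192, 288, 384), scaling_ratio=4,
                      uniform_code_lengths=True)

class SystemController:
    """Sketch of a controller that forwards the user's mode selection
    (from a knob or from host software) to the signal processing block."""

    def __init__(self, signal_processing_block):
        self.spb = signal_processing_block

    def select_mode(self, mode: str) -> None:
        config = VIDEO if mode == "video" else STILL
        self.spb.configure(config)  # sensor window, scaling, encoding tables
```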
The system controller 160 orchestrates the capture of images in both modes of operation in response to the mode selection made by the user. In particular, the system controller configures the signal processing block 110 to provide the near or distant image data as described in greater detail below.
In the first embodiment of the invention, the effective focal length of the optical system 108 must be altered between the different modes. FIGS. 2 and 3 illustrate two different settings of the optical system 108 corresponding to the two different modes of operation for the apparatus 100. The optical system 108 as shown includes a lens system 106 consisting of four lens elements 106a-d that are positioned in front of the sensor 114. An adjustable or movable lens and aperture combination 105, such as a zoom lens, is also included. The zoom lens or combination 105 can be moved by the user actuating a lever or ring, or by an electromechanical mechanism such as a solenoid or motor. The lens combination 105 includes lens elements 106b, 106c as well as the aperture 104. For clarity, only the light rays from the lower half of the scenes are shown in the figures. Although the optical system 108 is shown as a lens system having four separate lenses and a fixed aperture 104, one skilled in the art will recognize that other variations are possible which yield a smaller near image with a greater angular field of view.

By simply adjusting the position of combination 105 from a near position in FIG. 2 to a distant position in FIG. 3, the size of the image projected onto sensor 114 can be increased. This optical scaling feature results in a larger image size for the still image (distant image data) mode of operation.
The other significant characteristic of the optical scaling is the change in angular field of view. The angular field of view can loosely be thought of as relating to the fraction of the scene included in the image projected onto the sensor 114. Thus, although the projected image in FIG. 2 is smaller than that of FIG. 3, a greater fraction of the scene is included in the near image of FIG. 2, as shown by the additional light rays that enter the optical system 108 through the first lens 106a.
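As a rough illustration of why a shorter effective focal length widens the captured scene, the angular field of view of a simple lens and sensor can be estimated from the sensor dimension and the effective focal length. This is a textbook thin-lens approximation, not a relationship stated in the patent, and the numbers below are hypothetical.

```python
import math

def angular_fov_deg(sensor_width_mm: float, effective_focal_length_mm: float) -> float:
    """Approximate horizontal angular field of view of a thin-lens system."""
    return 2.0 * math.degrees(math.atan(sensor_width_mm / (2.0 * effective_focal_length_mm)))

# Hypothetical numbers: a 6.4 mm wide sensor behind two zoom positions.
print(angular_fov_deg(6.4, 6.0))   # near (short focal length) position: ~56 degrees
print(angular_fov_deg(6.4, 12.0))  # distant (long focal length) position: ~30 degrees
```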
In order to obtain the smaller near image data 170 and greater frame rate when the apparatus 100 is configured with the optical system 108 in the near position, the signal processing block 110 and the sensor 114 are configured (by control signals and data received from the system controller 160, see FIG. 1) to process only the pixel signals originating from those rows and columns of the sensor 114 that define the region on which the smaller image is formed. This can be seen in FIG. 4, which shows a near image being formed on an array of pixels in the sensor 114 and using fewer bitlines than the maximum resolution of the array. Instead of reading all of the sensor signals (which can be read for obtaining the maximum resolution of the image sensor array) shown in FIG. 4, only those sensor signals coming from the pixels in those rows and columns which define the region of the near image are read. The fewer sensor signals result in both lower processing times in the signal processing block 110 and a greater image frame rate through a limited bandwidth interface between the sensor 114 and the signal processing block 110. This in turn results in a greater image frame rate through the host communication interface 154 (see FIG. 1).
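A minimal sketch of this selective readout idea, using NumPy to stand in for the sensor array: only the rows and columns covering the near-image region are read, so the amount of data per frame drops roughly with the area of the window. The array sizes and window coordinates below are assumptions for illustration, not values from the patent.

```python
import numpy as np

def read_sensor_window(sensor: np.ndarray, row0: int, col0: int,
                       rows: int, cols: int) -> np.ndarray:
    """Return only the pixel signals from the rows/columns that the
    smaller (near) image covers, instead of the full-resolution frame."""
    return sensor[row0:row0 + rows, col0:col0 + cols]

# Hypothetical 576x768 sensor; the near image covers a centered 288x384 window.
full_frame = np.random.randint(0, 1024, size=(576, 768), dtype=np.uint16)
near_frame = read_sensor_window(full_frame, 144, 192, 288, 384)

# 4x fewer samples to move and process per frame in this example.
print(full_frame.size / near_frame.size)   # -> 4.0
```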
The image data can be further compressed and/or scaled by the signal processing unit 110, as discussed below in connection with FIGS. 5 and 6, in order to increase the frame rate while transmitting the images through a bandwidth limited communication interface 154.
To further reduce the cost of manufacturing an apparatus 100 that operates in both still capture and video modes, the optical system 108 can be fixed to project images on the sensor 114 having approximately 55 degrees of angular field of view. Such a field of view may be an acceptable compromise for both the near and distant scenes. The distant scene would be captured as a detailed still image, while the near scene (e.g., a videoconferencing session) would be handled by digitally scaling (using the signal processing unit 110) the detailed still image to reduce its resolution. This gives a greater frame rate for the video images, but no increase in the field of view as compared to still images. Embodiments of the signal processing unit 110 are shown in FIGS. 5 and 6 and described below.
To summarize, the above-described embodiments of the invention are an imaging apparatus (such as a digital camera), a method performed using the imaging apparatus, and a circuit that provides analog and digital processing, for increasing the image field of view while at the same time increasing the image frame rate. The embodiments of the invention are, of course, subject to some variations in structure and implementation. For instance, the embodiment of the invention as signal processing block 110 can be implemented entirely in analog and hardwired logic circuitry. Alternatively, the digital scaling and compression functions of the signal processing block 110 can be performed by a programmed processor. Therefore, the scope of the invention should be determined not by the embodiments illustrated but by the appended claims and their legal equivalents.
Signal Processing Architecture

The image capture apparatus 100 can be electronically configured for dual mode operation by configuring the signal processing block 110 to provide either still image data or a sequence of video images using the logical block diagram and architecture of FIG. 5. In one embodiment, the block 110 implements digital signal and image processing functions as logic circuitry and/or a programmed data processor to generate compressed image data having a predefined resolution and compression ratio from the detailed, original image data received from the sensor 114.
FIG. 5 shows a data flow diagram, for an embodiment of the invention, of the path taken by image data for both video and still modes of operation. The processing block 110 includes a chain of imaging functions which may begin with a correction block 210. The correction block 210 is used whenever the quality of the original image data received from the sensor 114 warrants some sort of pre-processing before the image is scaled and compressed. In certain cases, the correction block 210 performs pixel substitution, companding, and gamma correction on the original image data received from the image sensor. The original image data should be of sufficient detail (e.g., 768x576 spatial resolution or higher is preferred) to yield still images of acceptable quality.
Pixel substitution may be performed in block 210 to replace invalid pixel data with valid data to provide a more deterministic input to subsequent imaging functions. Companding may be performed to lower the resolution of each pixel (the number of bits per pixel). For example, the original image data can arrive as 10 bits per pixel, whereas a preferred pixel resolution for the logic circuitry may be 8 bits (1 byte). Conventional gamma correction may also be performed to conform the information content of the image to that expected by the host computer where the image will be ultimately displayed.
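A small sketch of what such a correction stage could look like in software terms; the companding curve, the gamma value, and the way invalid pixels are detected and replaced are assumptions made for illustration and are not specified by the patent.

```python
import numpy as np

def correct_frame(raw: np.ndarray, invalid_mask: np.ndarray,
                  gamma: float = 2.2) -> np.ndarray:
    """Pixel substitution, 10-bit to 8-bit companding, and gamma correction.

    raw          : 10-bit sensor values (0..1023), one per pixel
    invalid_mask : True where a pixel is known to be defective
    """
    frame = raw.astype(np.float32)

    # Pixel substitution: replace invalid pixels with the frame mean
    # (a real design might substitute neighboring pixel values instead).
    frame[invalid_mask] = frame[~invalid_mask].mean()

    # Companding: map 10-bit values down to 8 bits per pixel.
    frame = frame * (255.0 / 1023.0)

    # Gamma correction toward the response the host display expects.
    frame = 255.0 * (frame / 255.0) ** (1.0 / gamma)

    return np.clip(frame + 0.5, 0, 255).astype(np.uint8)
```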
Other functions that may be performed in block 210 on each received original image frame include fixed pattern noise reduction, which is often needed before compressing an image. Once again, whether or not any correction functions are performed by block 210 in general depends on the quality of the original image data received from the sensor 114 and any subsequent image processing, such as scaling or compression, to be performed before the image data is ready for storage or transmission to the host computer.
Once the original image data has been corrected or otherwise processed into the desired size or format by correction block 510, the corrected data may be scaled and compressed if needed to meet the transmission and storage requirements of the communication interface 154 and the optional local storage 122 (see FIG. 1). To meet such requirements, the processing block 110 can include scaling and compression logic 514 to perform any necessary image scaling and compression prior to transmission and storage.

For instance, the scaling and compression logic 214 may be configured to reduce image size and resolution to yield smaller, less detailed video images, as compared to larger and more detailed still images. Smaller and less detailed image data may be required in order to transmit a rapid sequence of video images that are to be decompressed and viewed in a host/PC. However, if the transmission link between the apparatus 100 and the host/PC has sufficient bandwidth to transmit a sequence of detailed original image data at the needed rate to the host/PC, then the scaling and compression logic 514 can be simplified or even eliminated for both still and video operation.
A number of digital image processing functions are contemplated for the logic 514. These, or others similar in function, may be configured as described below by one skilled in the art, depending on the performance (speed of rendering the compressed image data) and image quality desired. The imaging functions have been implemented in one embodiment as separate units of logic circuitry as seen in FIG. 5. The functions are described as follows in conjunction with the flow diagram of FIG. 6.
The logic 514 can perform a 2-D spatial scaling of the corrected image data in order to yield smaller images that may be easier to store or transmit. The scaling is done according to a selected scaling ratio using conventional, known techniques. The scaling ratio may be integer or fractional. The scaling can be performed in a 2-dimensional fashion by, for instance, utilizing two separate 1-dimensional scaling processes.
The logic 514 can be used for both video and still image capture simply by selecting the appropriate scaling ratio, as indicated in step 614. For instance, a 4:1 sub-sampling of the corrected image may be performed in video mode so that 16 pixels from the corrected image data are averaged together to produce 1 pixel in the scaled image data. Based on standard sampling theory, and assuming uncorrelated noise sources, the sub-sampling may also improve the signal to noise ratio by √16, or a factor of 4. Lower scaling ratios such as 2:1 may also be used, where 4 pixels are averaged to generate a single pixel in the scaled image data, resulting in a signal to noise ratio (SNR) improvement of 2. By scaling the more detailed corrected image data in this way during operation in video mode, the imaging system compensates for the increased noise due to lower light levels that are typically encountered with video operation, such as during videoconferencing. The scaling step, if needed, appears as step 618 in FIG. 6.
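The averaging sub-sampling described above is straightforward to sketch. The snippet below implements a 4:1 ratio as two separable 1-D averaging passes (so 16 input pixels contribute to each output pixel), which matches the two 1-dimensional scaling processes mentioned earlier; the helper name, frame size, and use of NumPy are my own choices, not part of the patent.

```python
import numpy as np

def downscale_by_averaging(image: np.ndarray, ratio: int = 4) -> np.ndarray:
    """Separable 2-D scaling: average `ratio` pixels along rows, then along
    columns, so ratio*ratio input pixels produce one output pixel.
    With uncorrelated noise this improves SNR by sqrt(ratio*ratio) = ratio."""
    h, w = image.shape
    img = image[:h - h % ratio, :w - w % ratio].astype(np.float32)

    # First 1-D pass: average groups of `ratio` pixels along each row.
    img = img.reshape(img.shape[0], -1, ratio).mean(axis=2)
    # Second 1-D pass: average groups of `ratio` pixels along each column.
    img = img.reshape(-1, ratio, img.shape[1]).mean(axis=1)
    return img

# 4:1 sub-sampling: a hypothetical 576x768 corrected frame becomes 144x192,
# and (per the sqrt-N argument above) the SNR improves by a factor of 4.
scaled = downscale_by_averaging(np.random.rand(576, 768) * 255, ratio=4)
print(scaled.shape)  # (144, 192)
```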
Next in the chain of imaging function blocks in FIG. 5 is the decorrelation and encoding logic 522. The scaled image data received from the logic 514 is decorrelated by logic 522 in preparation for entropy encoding, as indicated in step 622, according to a selected one of a number of decorrelation methodologies. Once again, the user may select a particular decorrelation methodology that is suitable for obtaining the normally smaller size video images, as indicated in step 614.

The decorrelation function can generate error image data as differences between neighboring pixels. One particular method that can be used for image decorrelation is differential pulse code modulation (DPCM). To obtain more compression of the image data, if needed, for example, in transmitting a large number of video image frames, "loss" may be introduced in the form of "quantization" (mapping a first set of data to a smaller set of values) errors using DPCM.
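A bare-bones illustration of this difference-based decorrelation: each pixel is predicted by its left neighbor and only the prediction error is kept, optionally quantized to introduce the controlled loss mentioned above. The predictor choice and quantization step are assumptions for the sketch, not details given in the patent.

```python
import numpy as np

def dpcm_encode_row(row: np.ndarray, q_step: int = 1) -> np.ndarray:
    """1-D DPCM along a row: store each pixel as the (quantized) difference
    from the reconstructed previous pixel. q_step > 1 introduces loss."""
    errors = np.empty_like(row, dtype=np.int16)
    prediction = 0
    for i, pixel in enumerate(row.astype(np.int16)):
        diff = int(pixel) - prediction
        q = int(np.round(diff / q_step))        # quantized prediction error
        errors[i] = q
        prediction = prediction + q * q_step    # track what the decoder will see
    return errors

def dpcm_decode_row(errors: np.ndarray, q_step: int = 1) -> np.ndarray:
    pixels = np.empty_like(errors)
    prediction = 0
    for i, q in enumerate(errors):
        prediction = prediction + int(q) * q_step
        pixels[i] = prediction
    return pixels.astype(np.uint8)
```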
The next stage in the chain of imaging function blocks is entropy encoding, also performed by logic 522. This technique uses variable length encoding to compress the decorrelated image data, if needed, in step 626. For instance, a commonly known entropy encoding methodology that may be used is Huffman encoding. Entropy encoding involves replacing symbols in the decorrelated image data by bit strings in such a way that different symbols are represented by binary strings of different, variable lengths, with the most commonly occurring symbols being represented by the shortest binary strings. The logic 522 thus provides compressed image data having variable size, for instance as seen in FIG. 5, where the scaled 8-bit data is encoded into compressed data having a variable size of 3-16 bits.
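For reference, a compact Huffman code construction over the decorrelated symbols could look like the following. The symbol alphabet here (DPCM error values and their counts) is invented for the example, and a hardware implementation would instead read the resulting code words out of a look-up table rather than building the tree on the fly.

```python
import heapq
from collections import Counter

def huffman_code(symbol_counts: Counter) -> dict:
    """Build a prefix code: frequent symbols get the shortest bit strings."""
    heap = [[count, i, [sym, ""]]
            for i, (sym, count) in enumerate(symbol_counts.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-symbol case
        return {heap[0][2][0]: "0"}
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], *lo[2:], *hi[2:]])
    return {sym: code for sym, code in heap[0][2:]}

# Hypothetical DPCM error statistics: small differences dominate.
counts = Counter({0: 500, 1: 200, -1: 190, 2: 60, -2: 50, 7: 3})
print(huffman_code(counts))   # e.g. {0: '0', 1: '10', -1: '111', ...}
```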
Once again, the encoding methodologies for obtaining video and still images can be different and may be selected, as indicated in step 614, depending on the mode of operation identified earlier in step 610. For instance, a larger set of symbols (having variable binary string lengths) may be used for encoding still image data as compared to video image data. This is because there may be more time allocated in the host/PC to decompress still images than to decompress video images. In contrast, for encoding video images, a more limited set of symbols having uniform binary string lengths should be employed to obtain faster decompression of a series of video image frames. In addition, having a uniform binary string length allows usage of a fixed amount of bandwidth to transmit the image data, which is specifically suitable for a host/PC interface such as the Universal Serial Bus (USB).
The image processing system shown in FIG. 5 includes additional logic that facilitates the dual mode operation described above. In particular, the logic circuitry in blocks 510, 514, and 522 uses programmable look-up tables (LUTs) 533, 534, and 535 and random access memories (RAMs) 535 for flexibility in performing their respective imaging functions. Each LUT or RAM provides information to its respective imaging function logic as specified by the selected methodology for the particular mode of operation. For instance, the scaling logic 514 uses a RAM 235 as a storage area to store intermediate scaling computations. Also, the LUT 534 for the decorrelation and encoding logic 522 can be loaded with different rules and data required for performing decorrelation and encoding, as known in the art, depending on whether a still or a video image is desired. In a particular embodiment, two look-up tables (LUTs) are used for LUT 534, one for listing the characters (a so-called "code book") and one for listing the string lengths.
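The "code book plus string length" pair of tables can be pictured as two parallel tables indexed by the symbol being encoded, which the encoding logic consults for each value; the table contents below are invented placeholders for illustration, not values from the patent.

```python
# Hypothetical contents for the two look-up tables backing the encoder:
# one table holds the code word (the "code book"), the other its bit length,
# both indexed by the symbol to be encoded.
CODE_BOOK    = {0: 0b0,  1: 0b10,  -1: 0b110,  2: 0b1110}   # code words
CODE_LENGTHS = {0: 1,    1: 2,     -1: 3,       2: 4}       # bits per word

def encode_symbols(symbols):
    """Pack symbols into a bit string using the two LUTs."""
    bits = ""
    for s in symbols:
        bits += format(CODE_BOOK[s], "0{}b".format(CODE_LENGTHS[s]))
    return bits

print(encode_symbols([0, 0, 1, -1]))   # -> "0010110"
```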
Different techniques may be used to determine the proper values to be loaded into the RAM and LUTs. For instance, image metering may be performed by the controller 160 to determine
