(19) Japan Patent Office (JP)
(12) Gazette of Unexamined Patent Applications (A)
(11) Publication Number: 2003-319231 (P2003-319231A)
(43) Publication Date: November 7, 2003 (2003.11.7)
(51) Int.Cl.7: H04N 5/225
FI: H04N 5/225 Z
Theme Codes (Ref.): 5C022
Examination Request: Not Yet Received    No. of Claims: 3    OL (Total of 9 Pages)
(21) Application No.: 2002-125854 (P2002-125854)
(22) Filing Date: April 26, 2002 (2002.4.26)
(54) [Title of the Invention] Digital Camera

(57) [Abstract]
[Problem] To provide a digital camera that can prevent unnatural boundary regions even when a telephoto captured image is combined in the center of a wide-angle captured image, and that can obtain images at various magnifications using a simple configuration.
[Solution] Provided is a digital camera having a wide-angle first imaging system and a telephoto second imaging system. From a captured image 62B taken with the second imaging system, it generates a scaled-down image 66 reduced to the same magnification as a captured image 62A taken with the first imaging system, and synthesizes this with the captured image 62A. At this time, correction data for correcting the image in the peripheral region 68 is generated on the basis of the image data of the peripheral region 68 of the scaled-down image 66 and the image data of the peripheral region 68 of the captured image 62A, and the correction data is used as the image data of the peripheral region 68. The correction data is determined so that the continuity of the image data is maintained across the peripheral region.
(71) Applicant: 000005201
     Fuji Photo Film Co., Ltd.
     210 Nakanuma, Minamiashigara-shi, Kanagawa-ken
(72) Inventor: Shinji MATSUSHIMA
     Fuji Photo Film Co., Ltd., 3-11-46 Senzui, Asaka-shi, Saitama-ken
(74) Agent: 100079049
     Attorney Atsushi NAKAJIMA (and 3 others)
F Terms (Reference): 5C022 AA13 AB61 AB68 AC01 AC42 AC54 AC69
[Claims]
[Claim 1] A digital camera comprising:
a first imaging system including a first image sensor for imaging a subject and a first lens for forming an image of the subject on the first image sensor;
a second imaging system including a second image sensor for imaging the subject and a second lens, with a focal length longer than that of the first lens, for forming an image of the subject on the second image sensor;
a scale-down means for scaling down the second captured image taken with the second image sensor to an image with a shooting angle that is almost the same as that of the first captured image taken with the first image sensor; and
a synthesizing means for correcting the continuity of the image data in the peripheral regions based on the image data of a predetermined peripheral region of the image scaled down by the scale-down means and the image data corresponding to the peripheral region of the first captured image, and for synthesizing the first captured image and the image scaled down by the scale-down means.
[Claim 2] The digital camera according to claim 1, wherein the synthesizing means comprises:
a correction data generating means for generating correction data to correct the continuity of the image data in the peripheral regions based on the image data of the predetermined peripheral region of the image scaled down by the scale-down means and the image data corresponding to the peripheral region of the first captured image; and
a replacement means for synthesizing the first captured image with the image scaled down by the scale-down means while replacing the image data in the peripheral regions with the correction data.
[Claim 3] The digital camera according to claim 2, further comprising: a means for setting the shooting angle; and a scale-up means for scaling up the captured image of the subject taken with the first image sensor to an image at the shooting angle,
wherein the scale-down means scales down the captured image taken with the second image sensor to an image at the shooting angle,
the correction data generating means generates correction data for correcting the continuity of the image data in the peripheral regions based on the image data of the predetermined peripheral region of the scaled-down image scaled down to an image at the shooting angle and the image data corresponding to the peripheral region of the scaled-up image scaled up to an image at the shooting angle, and
the replacement means synthesizes the scaled-up image and the scaled-down image while replacing the image data in the peripheral regions with the correction data.
[Detailed Description of the Invention]
[0001]
[Technical Field of the Invention] The present invention relates to a digital camera and, more specifically, to a digital camera equipped with multiple imaging systems.
[0002]
[Prior Art] In recent years, digital cameras equipped with image sensors such as CCDs have become more widely available. In a digital camera, when an image is captured, an image of the subject is formed on the light-receiving surface of the CCD, and each sensor in the CCD performs a signal charge conversion corresponding to the amount of incident light. The signal charges accumulated by the CCD are retrieved pixel by pixel and converted into image data, which is then recorded on a memory card or other recording medium.
[0003] Digital cameras with multiple image sensors have also been proposed. For example, JP 2001-103466 A discloses a technique that improves the image quality in the center of an image by synthesizing a telephoto image with a wide-angle image.
[0004]
[Problem to Be Solved by the Invention] However, the conventional technology suffers from the problem that the boundary between an image taken with a wide-angle lens and an image taken with a telephoto lens is unnatural, causing a deterioration in image quality.
[0005] In addition, obtaining images at various imaging magnifications requires either a high-magnification zoom lens or an electronic zoom function that uses digital processing to scale up the captured image and thereby realize imaging magnifications not covered by the optical zoom.
[0006] When a high-power zoom lens is used, space for an actuator and the like becomes necessary, and the control system becomes more complex. When an electronic zoom function is used, image quality deteriorates if excessive electronic zooming is applied.
[0007] It is an object of the present invention to solve these problems by providing a digital camera that can prevent unnatural boundary regions even when a telephoto captured image is combined in the center of a wide-angle captured image, and that can obtain images at various magnifications using a simple configuration.
[0008]
[Means for Solving the Problem] In order to achieve this object, the invention according to claim 1 is a digital camera comprising: a first imaging system including a first image sensor for imaging a subject and a first lens for forming an image of the subject on the first image sensor; a second imaging system including a second image sensor for imaging the subject and a second lens, with a focal length longer than that of the first lens, for forming an image of the subject on the second image sensor; a scale-down means for scaling down the second captured image taken with the second image sensor to an image with a shooting angle that is almost the same as that of the first captured image taken with the first image sensor; and a synthesizing means for correcting the continuity of the image data in the peripheral regions based on the image data of a predetermined peripheral region of the image scaled down by the scale-down means and the image data corresponding to the peripheral region of the first captured image, and for synthesizing the first captured image and the image scaled down by the scale-down means.
[0009] The present invention has two imaging systems, and each imaging system has an image sensor and a lens for forming an image of the subject on the image sensor. For example, the second lens can be a telephoto lens with a focal length longer than that of the first lens, and the first lens can be a wide-angle lens. In other words, the two imaging systems can be used to obtain images with different shooting angles. In addition, the first image sensor and the second image sensor are composed of, for example, CCDs with approximately the same number of pixels.
[0010] The scale-down means scales down the second captured image taken with the second image sensor to an image with a shooting angle that is almost the same as that of the first captured image taken with the first image sensor. In other words, an image corresponding to the center of the first image is generated from the second image.
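Since the two CCDs have approximately the same number of pixels, the scale-down factor follows from the ratio of the two focal lengths (the telephoto frame covers the center of the wide frame once reduced by f_wide/f_tele). The following is a minimal sketch of this scale-down step in Python with NumPy; the nearest-neighbor resampling, the function names, and the example focal lengths are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def scale_down_factor(f_wide: float, f_tele: float) -> float:
    """Linear scale factor that maps the telephoto frame onto the
    center of the wide-angle frame (same sensor size, same pixel count).
    Assumption: magnification is proportional to focal length."""
    return f_wide / f_tele  # e.g. 35 mm / 105 mm -> 1/3

def resize_nearest(img: np.ndarray, factor: float) -> np.ndarray:
    """Nearest-neighbor resize; a stand-in for the scale-down means."""
    h, w = img.shape[:2]
    nh, nw = max(1, round(h * factor)), max(1, round(w * factor))
    ys = (np.arange(nh) / factor).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / factor).astype(int).clip(0, w - 1)
    return img[ys][:, xs]

# Example: a 1200x1600 telephoto frame (f = 105 mm) reduced to the
# shooting angle of a 35 mm wide-angle frame with the same pixel count.
tele = np.zeros((1200, 1600, 3), dtype=np.uint8)
small = resize_nearest(tele, scale_down_factor(35.0, 105.0))
print(small.shape)  # (400, 533, 3): covers the center of the wide frame
```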
[0011] The synthesizing means synthesizes the first captured image with the image that has been scaled down by the scale-down means. Because the scaled-down image is generated from an image taken with the telephoto second imaging system, it carries more information than the central portion of the first captured image. This enables the image quality in the center of the image to be improved.
[0012] In addition, there is a risk that the boundary between the first captured image and the image scaled down by the scale-down means will be discontinuous, resulting in a deterioration in image quality. Therefore, during synthesis, the synthesizing means corrects the continuity of the image data in the peripheral regions based on the image data of a predetermined peripheral region of the image scaled down by the scale-down means and the image data corresponding to the peripheral region of the first captured image. This keeps the images from looking unnatural at the boundary.
[0013] As stated in claim 2, the digital camera may be configured so that the synthesizing means comprises a correction data generating means for generating correction data to correct the continuity of the image data in the peripheral regions based on the image data of the predetermined peripheral region of the image scaled down by the scale-down means and the image data corresponding to the peripheral region of the first captured image, and a replacement means for synthesizing the first captured image with the image scaled down by the scale-down means while replacing the image data in the peripheral regions with the correction data.
[0014] As stated in claim 3, the digital camera may be configured to further comprise a means for setting the shooting angle and a scale-up means for scaling up the captured image of the subject taken with the first image sensor to an image at the shooting angle, in which the scale-down means scales down the captured image taken with the second image sensor to an image at the shooting angle, the correction data generating means generates correction data for correcting the continuity of the image data in the peripheral regions based on the image data of the predetermined peripheral region of the scaled-down image scaled down to an image at the shooting angle and the image data corresponding to the peripheral region of the scaled-up image scaled up to an image at the shooting angle, and the replacement means synthesizes the scaled-up image and the scaled-down image while replacing the image data in the peripheral regions with the correction data. This enables images to be obtained at any shooting angle set using the setting means.
[0015] At least one of the first lens and the second lens may be a zoom lens. This provides an effect equivalent to a high-power zoom.
[0016]
[Embodiments of the Invention]
(First Embodiment) The following is a detailed description of an example of the first embodiment of the present invention with reference to the drawings.
[0017] FIG. 1 is a block diagram showing the configuration of a digital camera. This digital camera 10 has two independent imaging systems (a first imaging system 12A and a second imaging system 12B), and subject images are formed on the light-receiving surfaces of CCDs 18A and 18B via imaging optics 14A and 14B, respectively.
[0018] The first imaging system 12A is used to capture images, and the second imaging system 12B is used for autofocus (AF) control.
[0019] Imaging optics 14A are composed of a first lens group 19A, with an imaging lens 15A and a focus lens 16A, and an aperture 17A. Similarly, imaging optics 14B are composed of a second lens group 19B, with an imaging lens 15B and a focus lens 16B, and an aperture 17B.
[0020] Lens group 19A is a wide-angle single focal length lens whose shooting angle α is wider than the shooting angle β of lens group 19B (in other words, its focal length is shorter), for example, as shown in FIG. 2 (A) and (B). Lens group 19B is a telephoto single focal length lens whose shooting angle β is narrower than the shooting angle α of lens group 19A (in other words, its focal length is longer). The lens groups 19A and 19B may be composed of zoom lenses (variable focal length lenses), or either of them may be composed of a single focal length lens.
[0021] In addition, CCD 18A and CCD 18B can be, for example, CCDs of the same size, that is, with the same number of pixels. Therefore, when the image captured with the first imaging system 12A is the captured image 62A shown in FIG. 3 (A), the image captured by the second imaging system 12B is the captured image 62B shown in FIG. 3 (B), which corresponds to a scaled-up view of the center of FIG. 3 (A).
[0022] Note that CCD 18A corresponds to the first image sensor in the present invention, CCD 18B corresponds to the second image sensor, the first lens group 19A corresponds to the first lens, and the second lens group 19B corresponds to the second lens.
[0023] The subject images formed on the light-receiving surfaces of CCD 18A and CCD 18B via their respective imaging optics 14A and 14B are subjected to signal charge conversion by each sensor in the CCD corresponding to the amount of incident light. The accumulated signal charges are retrieved by CCD drive pulses applied by CCD drive circuits 20A and 20B, and are sequentially output from CCD 18A and CCD 18B as voltage signals (analog image signals) corresponding to each signal charge.
[0024] CCD 18A and CCD 18B each have a shutter drain via a shutter gate, and the accumulated signal charges can be swept to the shutter drain by driving the shutter gates with shutter gate pulses. In other words, the CCDs 18 have a so-called electronic shutter function that controls the accumulation time (shutter speed) of the charge accumulated by each sensor via shutter gate pulses.
[0025] The signals retrieved from CCD 18A and CCD 18B are processed by correlated double sampling (CDS) in CDS circuits 22A and 22B and color-separated into R, G, and B color signals, and the signal level of each color signal is adjusted (for example, by pre-white balance processing).
[0026] Image signals that have undergone the prescribed analog signal processing are applied to A/D converters 24A and 24B, converted to R, G, and B digital signals, and then stored in memories 26A and 26B. The memories 26A and 26B may be a single memory or a separate memory for each imaging system. Data other than image data is also stored in memories 26A and 26B.
[0027] A timing signal generating circuit (TG) 28 provides appropriate timing signals to the CCD drive circuits 20A and 20B, the CDS circuits 22A and 22B, and the A/D converters 24A and 24B in response to commands from the CPU 30, and each circuit is driven synchronously by the timing signals applied from the timing signal generating circuit 28.
[0028] The CPU 30 is the control unit that performs overall control of each circuit in the digital camera 10, and is connected via a bus 32 to, for example, a gain adjustment circuit 34, a gamma correction circuit 36, a luminance and color difference signal processing circuit (known as a YC processing circuit) 38, a compression and decompression circuit 40, a card interface 44 for a memory card 42, and a display driver 48 for driving a display unit 46.
[0029] The CPU 30 controls the corresponding circuit blocks based on signals inputted from a control panel 50, controls the zooming operations of the imaging lenses 15A and 15B and the autofocusing (AF) operations of the focus lenses 16A and 16B, and performs automatic exposure adjustment (AE).
[0030] The control panel 50 includes a release button used to give an instruction to start recording images, a means for selecting the camera mode, a zoom control, and other input means. These input means can take a variety of forms, such as switch buttons, dials, and sliding knobs, or they can take the form of items in a settings menu selected on a touch panel or with a cursor on an LCD monitor display screen. The control panel 50 may be located on the camera body, or it can be separated from the camera body in the form of a remote control transmitter.
[0031] The CPU 30 performs various calculations, such as focus evaluation calculations and AE calculations, based on the image signals outputted from CCD 18A and CCD 18B, and controls the driving circuits 52A and 52B of the imaging lenses 15A and 15B, the focus lenses 16A and 16B, and the apertures 17A and 17B based on these calculations. When the lens groups 19A and 19B are composed of zoom lenses, the motors 54A and 54B are driven to zoom the imaging lenses 15A and 15B and change the imaging magnification. These motors 54A and 54B can be omitted when a manual zooming configuration or a single focal length lens is used.
[0032] The motors 56A and 56B are driven to move the focus lenses 16A and 16B to the focus position and to set the apertures 17A and 17B to the appropriate aperture values. The motors 56A and 56B are stepping motors, and the focus lens position is controlled by controlling the number of steps. The motors 56A and 56B are not limited to stepping motors and can be, for example, DC motors. Stepping motors, DC motors, and the like can also be used for motors 54A and 54B.
[0033] A contrast AF system is used for AF control, in which the focus lenses 16A and 16B are moved so that the high-frequency component of the G signals is maximized. In other words, the motors 56A and 56B are driven via the drive circuits 52A and 52B to move the focus lenses 16A and 16B and position them where the contrast value is at its maximum.
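As a rough illustration of contrast AF, a focus evaluation value can be computed as the energy of a high-pass-filtered G channel, and the lens stepped to the position that maximizes it. The following is a minimal sketch in Python with NumPy; the difference-based focus measure and the simulated lens sweep are illustrative assumptions, not the patent's specific calculation.

```python
import numpy as np

def focus_value(g: np.ndarray) -> float:
    """Focus evaluation: sum of squared horizontal differences of the
    G channel; a stand-in for the 'high-frequency component' measure."""
    d = np.diff(g.astype(np.float64), axis=1)
    return float((d * d).sum())

def best_focus_step(frames_by_step: dict) -> int:
    """Pick the motor step whose frame maximizes the contrast value,
    as the CPU does when positioning focus lenses 16A/16B."""
    return max(frames_by_step, key=lambda s: focus_value(frames_by_step[s]))

# Example: pretend frames were captured at motor steps 0..9, with the
# frame at step 6 carrying the most high-frequency detail.
rng = np.random.default_rng(0)
base = rng.random((64, 64))
frames = {s: base if s == 6 else base * (1 - abs(s - 6) / 10) for s in range(10)}
print(best_focus_step(frames))  # 6
```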
[0034] For AE control, the subject luminance (imaging EV value) is determined based on integration of the R, G, and B signals in a single frame by the integration circuits 60A and 60B, and the aperture value and shutter speed are determined based on this imaging EV value. The apertures 17A and 17B are driven via the drive circuits 52A and 52B, and the charge accumulation time of CCD 18A and CCD 18B is controlled using the electronic shutter to achieve the determined shutter speed. As a result, optimal exposure adjustment and focusing are performed simply by pointing the imaging lenses 15A and 15B of the digital camera 10 at the subject.
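The patent does not spell out how the aperture/shutter pair is derived, but as background, the standard exposure relation EV = log2(N²/t) (N the f-number, t the shutter time at the reference sensitivity) lets a shutter time be read off for a chosen aperture. A hedged sketch, not the camera's actual AE program:

```python
def shutter_time(ev: float, f_number: float) -> float:
    """Shutter time t (seconds) satisfying EV = log2(N^2 / t).
    Assumption: EV is referenced to the sensor's base sensitivity."""
    return f_number ** 2 / 2 ** ev

# Example: at imaging EV 12 with the aperture driven to f/4,
# t = 16 / 4096 = 1/256 s; the electronic shutter is set accordingly.
print(shutter_time(12.0, 4.0))  # 0.00390625
```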
[0035] During imaging and recording, the AF operation is performed when the release button is "half-pressed," and the metering operation is repeated several times to obtain the exact imaging EV. The final aperture value and shutter speed for imaging are then determined based on this imaging EV. When the release button is "fully pressed," the apertures 17A and 17B are driven to the final aperture value, and the charge accumulation time is controlled by the electronic shutters at the determined shutter speed. Instead of AE control based on image signals acquired from CCD 18A and CCD 18B, other well-known photometric sensors may be used.
[0036] The digital camera 10 also has a strobe light emitting device 55 and a light-receiving element 57 for light adjustment. A mode is selected in response to operation of the strobe mode setting button on the control panel 50, such as a "low-luminance auto-flash mode" which automatically triggers the strobe light emitting device 55 in low-luminance situations, a "forced-flash mode" which triggers the strobe light emitting device 55 regardless of subject brightness, or a "flash prohibited mode" which prohibits the strobe light emitting device 55 from emitting light.
[0037] The CPU 30 controls the charging of the main capacitor of the strobe light emitting device 55 and the timing of discharge (light emission) to the light emitting tube (such as a xenon tube) based on the strobe mode selected by the user, and stops light emission based on the measurement results from the light-receiving element 57. The light-receiving element 57 receives light reflected from the subject illuminated by the strobe and converts it into electrical signals corresponding to the amount of light received. Signals from the light-receiving element 57 are integrated by an integrating circuit not shown in the figure, and when the integrated level of received light reaches the predetermined appropriate level, the strobe light emitting device 55 stops emitting light.
[0038] Data outputted by the A/D converters 24A and 24B is stored in the memories 26A and 26B and applied to the integration circuits 60A and 60B. The integration circuits 60A and 60B divide the imaging screen into multiple blocks (for example, 64 blocks in an 8 × 8 format) and perform integration operations on the G signals received from each block. A luminance signal (Y signal) may instead be generated from the R, G, and B data to perform a luminance signal integration operation. The integration circuits 60A and 60B can also be used in conjunction with the AF calculation circuit and the AE calculation circuit. The information (calculation results) on the integrated values obtained by the integration circuits 60A and 60B is inputted to the CPU 30.
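The 8 × 8 block integration maps naturally onto an array reshape: split the G plane into 64 tiles and sum each tile. A minimal sketch in Python with NumPy, assuming (illustratively) a G plane whose dimensions divide evenly by the block count:

```python
import numpy as np

def block_integrate(g: np.ndarray, blocks: int = 8) -> np.ndarray:
    """Sum the G signal over a blocks x blocks grid (64 blocks for 8x8),
    as the integration circuits 60A/60B do per block.
    Assumes height and width are divisible by `blocks`."""
    h, w = g.shape
    bh, bw = h // blocks, w // blocks
    tiles = g[: bh * blocks, : bw * blocks].reshape(blocks, bh, blocks, bw)
    return tiles.sum(axis=(1, 3)).astype(np.float64)

# Example: a 480x640 G plane yields an 8x8 table of integrated values
# from which the CPU derives the evaluation value E.
g_plane = np.ones((480, 640))
print(block_integrate(g_plane).shape)  # (8, 8)
```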
[0039] The CPU 30 calculates the evaluation value E for the imaging screen based on the information received from the integration circuits 60A and 60B, and determines the gain value (amplification ratio) for the gain adjustment circuit 34 using the resulting evaluation value E. The CPU 30 controls the amount of gain of the gain adjustment circuit 34 based on the determined gain value.
[0040] The R, G, and B image data stored in the memories 26A and 26B is sent to the gain adjustment circuit 34, where it is amplified. The amplified image data is gamma-corrected by the gamma correction circuit 36 and then sent to the YC processing circuit 38, where it is converted from RGB data to luminance signals (Y signals) and color difference signals (Cr and Cb signals).
[0041] The luminance and color difference signals (abbreviated as YC signals) generated by the YC processing circuit 38 are stored in the memories 26A and 26B. The YC signals stored in the memories 26A and 26B are supplied to the display driver 48, converted to signals of a predetermined format (for example, NTSC-format color composite video signals), and outputted to the display unit 46. A liquid crystal display or other color display device can be used as the display unit 46. The display unit 46 may accept YC signal input or RGB signal input, and a driver corresponding to the display device's input is used.
[0042] The image data is periodically rewritten with the image signals outputted from CCD 18A and CCD 18B, and video signals generated from the image data are supplied to the display unit 46. In this way, images captured by CCD 18A and CCD 18B are displayed on the display unit 46 as moving images (live images) in real time, or as nearly continuous images if not in real time.
[0043] The display unit 46 can be used as an electronic viewfinder, and the photographer can check the shooting angle using the image displayed on the display unit 46 or an electronic viewfinder not shown in the drawings. Image data for recording is captured as soon as a specific recording instruction (start shooting instruction) is received, such as pressing of the release button.
[0044] When the photographer inputs a shooting and recording instruction from the control panel 50, the CPU 30 sends a command to the compression and decompression circuit 40 if necessary, which causes the compression and decompression circuit 40 to compress the YC data in the memories 26A and 26B using the JPEG format or another predetermined format. The compressed image data is recorded on the memory card 42 via the card interface 44.
[0045] When the mode for recording uncompressed image data (uncompressed mode) is selected, the image data is recorded in uncompressed form on the memory card 42 without compression processing by the compression and decompression circuit 40.
[0046] The digital camera 10 in the present embodiment uses a memory card 42 as the means of storing image data; more specifically, recording media such as SmartMedia can be used. The recording medium is not limited to the above: for example, a PC card, Microdrive, MultiMediaCard (MMC), magnetic disk, optical disk, magneto-optical disk, or Memory Stick can be used. The signal processing means and interface are chosen according to the type of medium used.
[0047] In playback mode, image data retrieved from the memory card 42 is decompressed by the compression and decompression circuit 40 and outputted to the display unit 46 via the display driver 48.
[0048] Next, the synthesis processing performed on captured images taken by the two imaging systems will be explained. In the present embodiment, by synthesizing a captured image 62A taken with the first imaging system 12A with a captured image 62B taken with the second imaging system 12B, an image with improved image quality in the center of the captured image is generated.
[0049] For example, when shooting at the widest angle indicated by the zoom operation means, that is, at shooting angle α, a scaled-down image 66, reduced to the same magnification (shooting angle) as the captured image 62A taken by the first imaging system 12A, is generated from the captured image 62B taken by the second imaging system 12B, and is synthesized with the captured image 62A, as shown in FIG. 4 (A) and (B). In other words, the two images are synthesized by replacing the image in the center of the captured image 62A, corresponding to the scaled-down image 66, with the scaled-down image 66. This improves image quality in the center of the image.
[0050] However, simply replacing the image in the center of the captured image 62A with the scaled-down image 66 may result in an unnatural boundary between the two images, which may cause a deterioration in image quality.
[0051] Therefore, in the present embodiment, as shown in FIG. 4 (A), correction data for correcting the image in the peripheral region 68 is generated based on the image data in the peripheral region 68 (shaded region) of the scaled-down image 66 and the image data in the peripheral region 68 of the captured image 62A. This correction data is then used as the image data for the peripheral region 68. This prevents unnaturalness from occurring at the boundary between the captured image 62A and the scaled-down image 66, preventing a deterioration in image quality.
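Putting the steps together: the scaled-down telephoto image is pasted over the center of the wide-angle frame, and the band around the seam is then overwritten with correction data. Below is a minimal sketch in Python with NumPy; the function name and the returned seam rectangle are illustrative assumptions (the correction itself, equation (1), is sketched after paragraph [0057]).

```python
import numpy as np

def synthesize_center(wide: np.ndarray, small: np.ndarray):
    """Replace the center of wide-angle image 62A with scaled-down
    image 66; return the result plus the seam rectangle (top, left,
    height, width) that the peripheral-region correction operates on."""
    out = wide.copy()
    h, w = wide.shape[:2]
    sh, sw = small.shape[:2]
    y0, x0 = (h - sh) // 2, (w - sw) // 2
    out[y0:y0 + sh, x0:x0 + sw] = small
    return out, (y0, x0, sh, sw)

# Example: 1200x1600 wide frame, 400x533 scaled-down telephoto frame.
wide = np.zeros((1200, 1600, 3), dtype=np.uint8)
small = np.full((400, 533, 3), 255, dtype=np.uint8)
merged, seam = synthesize_center(wide, small)
print(seam)  # (400, 533, 400, 533): where peripheral region 68 lies
```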
[0052] Next, the process of generating correction data for the peripheral region 68 will be explained.
[0053] FIG. 5 shows an example of the image data 70 for a specific color (for example, R) between A and B in FIG. 4 (A) within the image data comprising the captured image 62A, together with the corresponding image data 72 of the scaled-down image 66.
[0054] In the generation of correction data, n divided positions (n = 1, 2, 3, ...) are set from position X1, which indicates the outer edge of the peripheral region 68, to position Xn, which indicates the inner edge of the peripheral region 68. Then, correction data D1 to Dn corresponding to each of the n divided positions X1 to Xn is obtained.
[0055] First, the correction data D1 for position X1, the outer edge of the peripheral region 68, uses image data from the image data 70 comprising the captured image 62A, as shown in FIG. 5, and the correction data Dn for position Xn, the inner edge of the peripheral region 68, uses image data from the image data 72 comprising the scaled-down image 66, as shown in FIG. 5. The correction data D2 to Dn-1 for positions X2 to Xn-1 is then obtained using the following equation.
[0056]
Di = D1 - {(Xi - X1) / L} × H … (1)
Here, i = 2, 3, ..., n-1; L is the width of the peripheral region 68, which can be expressed as Xn - X1; and H is the difference between the correction data D1 and Dn.
[0057] In other words, the correction data D1 to Dn is determined so that it transitions from the image data 70 toward the image data 72 between position X1, the left end of the peripheral region 68, and position Xn, its right end. The correction data for the peripheral region 68 is represented by the correction curve indicated by the single-dot line in FIG. 5, and because the image data between A and B is kept almost continuous, the boundary between the captured image 62A and the scaled-down image 66 can be kept from becoming unnatural.
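Equation (1) is a linear blend across the band: reading H as D1 - Dn, Di falls from D1 at the outer edge to Dn at the inner edge. The following is a minimal sketch of one such run of positions in Python with NumPy; the array-based formulation and the function name are illustrative assumptions.

```python
import numpy as np

def correction_data(d1: float, dn: float, xs: np.ndarray,
                    x1: float, xn: float) -> np.ndarray:
    """Equation (1): Di = D1 - ((Xi - X1) / L) * H, with L = Xn - X1
    and H = D1 - Dn. d1 comes from image data 70 (captured image 62A)
    at the outer edge; dn from image data 72 (scaled-down image 66)
    at the inner edge."""
    L = xn - x1
    H = d1 - dn
    return d1 - ((xs - x1) / L) * H

# Example: outer-edge value 200 (from 62A), inner-edge value 120
# (from 66), over a peripheral band of width L = 8 pixels.
xs = np.arange(0, 9, dtype=float)
print(correction_data(200.0, 120.0, xs, x1=0.0, xn=8.0))
# [200. 190. 180. 170. 160. 150. 140. 130. 120.]
```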
[0058] The processing described above is performed on all of the image data 70 and 72 in the peripheral region 68 along radiating lines 76 centered on the center point 74 of the captured image 62A, as shown in FIG. 6, which generates correction data for the entire peripheral region 68.
[0059] This processing is then performed on all of the R, G, and B images to complete the generation of correction data.
[0060] The generation of correction data is not limited to the method described above. Any method may be used as long as the image data between A and B is kept nearly continuous.
[0061] When generating an image at an intermediate magnification, that is, between the magnification of captured image 62A and the magnification of captured image 62B, a scaled-up image in which captured image 62A is scaled up to the intermediate magnification and a scaled-down image in which captured image 62B is scaled down to the intermediate magnification can be generated and synthesized using the method described above. This allows an infinite variety of intermediate-magnification images to be obtained without the use of a zoom lens.
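The two scale factors follow directly from the focal lengths: for a target magnification between the two lenses, the wide image is scaled up by f_target/f_wide and the telephoto image is scaled down by f_target/f_tele, after which synthesis proceeds as in paragraph [0049]. A hedged sketch in Python; the function name and example focal lengths are assumptions, and the resizing itself can reuse the `resize_nearest` helper sketched after paragraph [0010].

```python
def intermediate_zoom_factors(f_wide: float, f_tele: float, f_target: float):
    """Scale factors for an intermediate magnification: scale up the
    wide frame (62A) and scale down the telephoto frame (62B) so both
    match the target shooting angle. Assumes f_wide <= f_target <= f_tele."""
    return f_target / f_wide, f_target / f_tele

# Example: 35 mm wide lens, 105 mm telephoto lens, 70 mm-equivalent target.
up, down = intermediate_zoom_factors(35.0, 105.0, 70.0)
print(up, down)  # 2.0 and ~0.667; then synthesize as in paragraph [0049]
```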
[0062] Next, the control routine executed by the CPU 30 to perform the actions of the present embodiment will be described with reference to the flowchart in FIG. 7.
[0063] In step 100, it is determined whether the release button has been fully pressed. If the release button has not been pressed, the determination in step 100 is NO and the camera goes into standby until the release button is fully pressed.
