(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2013/0070128 A1
     SUZUKI et al.                     (43) Pub. Date: Mar. 21, 2013

(54) IMAGE PROCESSING DEVICE THAT PERFORMS IMAGE PROCESSING
(75) Inventors: Hiroshi SUZUKI, Tokyo (JP); Motoyuki KASHIWAGI, Tokyo (JP)
(73) Assignee: CASIO COMPUTER CO., LTD., Tokyo (JP)
(21) Appl. No.: 13/608,197
(22) Filed: Sep. 10, 2012
(30) Foreign Application Priority Data
     Sep. 20, 2011 (JP) ................................. 2011-205074

Publication Classification
(51) Int. Cl.: H04N 9/64 (2006.01)
(52) U.S. Cl.: USPC ................ 348/246; 348/E05.079

(57) ABSTRACT
A digital camera (1) includes: an imaging unit (16) that has an imaging element including a plurality of pixels and generates a pixel value for each of the plurality of pixels as image data; a position specification unit (53) that specifies a position of a defective pixel among the plurality of pixels, in the image data generated by the imaging unit (16); a region specification unit (54) that specifies a region in the image data in which image noise occurs due to the defective pixel, based on the position specified by the position specification unit (53); and a correction unit (55) that corrects a pixel value of each of a plurality of pixels included in the region in the image data specified by the region specification unit (54), based on a weighted average of pixel values of a plurality of pixels located at a periphery of the region.
[Representative drawing: FIG. 1, a block diagram of the digital camera 1 showing, among others, the optical system, imaging unit, image processing unit, sensor unit, communication unit, operation unit, shutter, and removable media]
[Sheet 1 of 6: FIG. 1, a block diagram of the hardware configuration of the digital camera 1]
[Sheet 2 of 6: FIG. 2, a functional block diagram of the digital camera 1 for executing image capture processing, including the operation unit and sensor unit]
[Sheet 3 of 6: FIG. 3, illustrating the correction of a region in which image noise occurs due to a defective pixel]
[Sheet 4 of 6: FIG. 4, a flowchart of image capture processing (steps S1 to S8): image data acquisition processing; defective pixel corresponding position specification processing; correction target region specification processing; decision on whether an unacquired pixel value is present in the correction target region; pixel value acquisition processing; pixel value correction processing; various image processing; image data storage processing; end of processing]
[Sheet 5 of 6: FIG. 5, a flowchart of another example of image capture processing (steps S11 to S21): image data acquisition processing; defective pixel corresponding position specification processing; correction target region specification processing; decision on whether an unacquired pixel value is present in the correction target region; pixel value acquisition processing; spatial frequency calculation processing; weighted average pixel value correction processing; median filter pixel value correction processing; various image processing; image data storage processing; end of processing]
[Sheet 6 of 6: FIGS. 6A to 6F, illustrating blooming]
IMAGE PROCESSING DEVICE THAT PERFORMS IMAGE PROCESSING
[0001] This application is based on and claims the benefit of priority from Japanese Patent Application No. 2011-205074, filed on 20 Sep. 2011, the content of which is incorporated herein by reference.

BACKGROUND OF THE INVENTION
[0002] 1. Field of the Invention
[0003] The present invention relates to an image processing device, an image processing method and a recording medium.
[0004] 2. Related Art
[0005] In digital cameras, portable telephones having an image capture function, and the like, the light incident from a lens is converted to electrical signals by an imaging element of the CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge Coupled Device) type, and these electrical signals are outputted as image data.
[0006] The aforementioned imaging elements photoelectrically convert incident light and accumulate charge, and have a plurality of pixels that determine the brightness based on the amount of accumulated charge. In this plurality of pixels, there is a possibility for white defects to occur, in which a charge is accumulated that exceeds the amount corresponding to the incident light.
[0007] Japanese Unexamined Patent Application, Publication No. 2000-101925 discloses a method of specifying in advance a pixel of a CCD imaging element at which a white defect is occurring, and correcting a pixel value (image signal) corresponding to this pixel based on the pixel values (image signals) corresponding to the preceding and following pixels.
[0008] However, in CMOS-type imaging elements, there is a possibility for blooming to occur, in which the white defect expands to pixels arranged in a region peripheral to the pixel at which the white defect occurs.
[0009] FIGS. 6A to 6F are diagrams illustrating blooming. In FIGS. 6A to 6F, one square box indicates one pixel. The color of each pixel indicates the amount of charge, meaning that a pixel approaches white as charge accumulates.
[0010] First, in FIG. 6A, a white defect occurs in one pixel (the pixel in the center of the figure). At the initial occurrence, the amount of charge accumulated beyond what is necessary is small, and the degree of the white defect is low. However, when, due to environmental changes such as temperature and humidity or ageing, the amount of charge accumulated beyond what is necessary increases and reaches the permitted value (FIG. 6B), the charge will leak out to adjacent pixels, and a charge greater than necessary will also accumulate in the adjacent pixels (FIG. 6C). Thereafter, such leaking of charge from pixels at which more charge than necessary has accumulated into their adjacent pixels occurs like a chain reaction, and the white defect grows (FIGS. 6D to 6F).
[0011] In order to correct the white defect occurring due to blooming, a digital camera or the like, for example, repeats an image capturing action two times consecutively. In other words, the digital camera or the like captures a normal subject in a first image capturing action, and captures in a darkened state in the second image capturing action. The digital camera or the like specifies a pixel at which a charge has accumulated in the result of such a second image capturing action as a pixel in which a white defect has occurred. Then, in the image data captured in the first image capturing action, the digital camera or the like performs correction of the pixel value (image signal) of the pixel specified in this way. However, the method disclosed in Japanese Unexamined Patent Application, Publication No. 2000-101925 corrects the pixel value of the pixel at which the white defect is occurring based on the pixel values before and after it; therefore, there has been concern over the correction result being unnatural.
[0012] In addition, if a second photography action is performed during photography in order to raise the accuracy in specifying the pixel at which a white defect occurs, there has been concern over the overall photography time becoming too long.
[0013] As a result, an image processing device and image processing method have been desired that can accurately and effectively correct pixel values in image data (image signals).

SUMMARY OF THE INVENTION
[0014] According to one aspect of the present invention,
[0015] an image processing device that performs image processing includes:
[0016] an imaging unit that has an imaging element including a plurality of pixels, and that generates a pixel value for each of the plurality of pixels as image data;
[0017] a position specification unit that specifies a position of a defective pixel among the plurality of pixels, in the image data generated by the imaging unit;
[0018] a region specification unit that specifies a region in the image data in which image noise occurs due to the defective pixel, based on the position specified by the position specification unit; and
[0019] a correction unit that corrects a pixel value of each of a plurality of pixels included in the region in the image data specified by the region specification unit, based on a weighted average of pixel values of a plurality of pixels located at a periphery of the region.
[0020] In addition, according to another aspect of the present invention,
[0021] in an image processing method executed by an image processing device having an imaging element including a plurality of pixels, the method includes the steps of:
[0022] generating, as image data, a pixel value of each of the plurality of pixels;
[0023] specifying a position of a defective pixel among the plurality of pixels, in the image data generated in the step of generating;
[0024] specifying a region in the image data in which image noise occurs due to the defective pixel, based on the position specified in the step of specifying a position; and
[0025] correcting a pixel value of each of the plurality of pixels included in the region of the image data specified in the step of specifying a region, based on a weighted average of pixel values of a plurality of pixels located at a periphery of the region.
[0026] Furthermore, according to yet another aspect of the present invention,
[0027] a computer readable recording medium is encoded with a program that causes a computer of an image processing device having an imaging element including a plurality of pixels to execute the steps of:
[0028] generating, as image data, a pixel value of each of the plurality of pixels;
[0029] specifying a position of a defective pixel among the plurality of pixels, in the image data generated in the step of generating;
[0030] specifying a region in the image data in which image noise occurs due to the defective pixel, based on the position specified in the step of specifying a position; and
[0031] correcting a pixel value of each of the plurality of pixels included in the region of the image data specified in the step of specifying a region, based on a weighted average of pixel values of a plurality of pixels located at a periphery of the region.

BRIEF DESCRIPTION OF THE DRAWINGS
[0032] FIG. 1 is a block diagram showing a hardware configuration of a digital camera as an embodiment of an image capturing device according to the present invention;
[0033] FIG. 2 is a functional block diagram showing a functional configuration for the digital camera of FIG. 1 to execute image capture processing;
[0034] FIG. 3 is a graph illustrating the correction of a region in which image noise occurs due to a defective pixel in image data;
[0035] FIG. 4 is a flowchart showing an example of the flow of image capture processing executed by the digital camera of FIG. 2;
[0036] FIG. 5 is a flowchart showing another example of the flow of image capture processing executed by the digital camera of FIG. 2; and
[0037] FIGS. 6A to 6F are graphs illustrating blooming.

DETAILED DESCRIPTION OF THE INVENTION
[0038] Hereinafter, an embodiment relating to the present invention will be explained while referencing the drawings.
[0039] FIG. 1 shows a hardware configuration diagram for a digital camera 1 as an embodiment of an image signal processing device according to the present invention.
[0040] Referring to FIG. 1, the digital camera 1 includes a CPU (Central Processing Unit) 11, ROM (Read Only Memory) 12, RAM (Random Access Memory) 13, a bus 14, an optical system 15, an imaging unit 16, an image processing unit 17, a storage unit 18, a display unit 19, an operation unit 20, a communication unit 21, a sensor unit 22, and a drive 23.
[0041] The CPU 11 executes various processing in accordance with programs recorded in the ROM 12, or programs loaded from the storage unit 18 into the RAM 13. In addition to programs for the CPU 11 to execute various processing, the ROM 12 stores, as appropriate, the data and the like necessary when the CPU 11 executes various processing.
[0042] For example, programs for realizing the respective functions of the image controller 51 to the correction unit 55 in FIG. 2 described later are stored in the ROM 12 and the storage unit 18 in the present embodiment. Therefore, the CPU 11 can realize the respective functions of the image controller 51 to the correction unit 55 in FIG. 2 described later by executing the processing in accordance with these programs and cooperating as appropriate with the image processing unit 17 described later.
[0043] The CPU 11, ROM 12 and RAM 13 are connected to each other via the bus 14. The optical system 15, imaging unit 16, image processing unit 17, storage unit 18, display unit 19, operation unit 20, communication unit 21, sensor unit 22 and drive 23 are also connected to this bus 14.
[0044] The optical system 15 is configured so as to include a lens that condenses light in order to capture an image of a subject, e.g., a focus lens, a zoom lens, etc. The focus lens is a lens that causes a subject image to form on the light receiving surface of the imaging elements of the imaging unit 16. The zoom lens is a lens that causes the focal length to change freely within a certain range. Peripheral devices that adjust the focus, exposure, etc. can also be provided to the optical system 15 as necessary.
[0045] The imaging unit 16 is configured from a plurality of imaging elements, an AFE (Analog Front End), etc., and generates image data containing pixels obtained from the plurality of imaging elements. In the present embodiment, the imaging elements are configured from photoelectric transducers of the CMOS (Complementary Metal Oxide Semiconductor) sensor type. A color filter such as a Bayer array is installed on the imaging elements. Every fixed time period, the imaging elements photoelectrically convert (capture) the optical signal of a subject image that is incident and accumulated during this period, and sequentially supply the analog electric signals obtained as a result thereof to the AFE.
[0046] The AFE conducts various signal processing such as A/D (Analog/Digital) conversion processing on these analog electric signals, and outputs the digital signals obtained as a result thereof as the output signals of the imaging unit 16. It should be noted that the output signal of the imaging unit 16 will be referred to as "image data" hereinafter. Therefore, the image data is outputted from the imaging unit 16 and supplied as appropriate to the image processing unit 17, etc. In the present embodiment, a unit of image data outputted from the imaging unit 16 is image data of an aggregate of the pixel values (image signals) of each pixel constituting the imaging elements, i.e. of a frame or the like constituting one static image or dynamic image.
[0047] The image processing unit 17 is configured from a DSP (Digital Signal Processor), VRAM (Video Random Access Memory), etc.
[0048] In addition to image processing such as noise reduction and white balance on the image data input from the imaging unit 16, the image processing unit 17 conducts, in cooperation with the CPU 11, the various image processing required to realize the respective functions of the image acquisition unit 52 to the correction unit 55 described later. The image processing unit 17 causes image data on which various image processing has been conducted to be stored in the storage unit 18 or the removable media 31.
[0049] The storage unit 18 is configured by DRAM (Dynamic Random Access Memory), etc., and temporarily stores image data outputted from the image processing unit 17. In addition, the storage unit 18 also stores various data and the like required in various image processing.
[0050] The display unit 19 is configured as a flat display panel consisting of an LCD (Liquid Crystal Device) and an LCD driver, for example. The display unit 19 displays images representative of the image data supplied from the storage unit 18 or the like.
[0051] Although not illustrated, the operation unit 20 has a plurality of switches in addition to the shutter switch 41, such as a power switch, a photography mode switch and a playback switch. When a predetermined switch among this plurality of switches is subjected to a pressing operation, the operation unit 20 supplies a command assigned to the predetermined switch to the CPU 11.
[0052] The communication unit 21 controls communication with other devices (not illustrated) via a network including the Internet.
[0053] The sensor unit 22 measures the ambient temperature of the imaging elements of the imaging unit 16, and provides the measurement result to the CPU 11.
[0054] The removable media 31, made from a magnetic disk, optical disk, magneto-optical disk, semiconductor memory, or the like, is installed in the drive 23 as appropriate. Programs read from the removable media 31 are then installed in the storage unit 18 as necessary. In addition, similarly to the storage unit 18, the removable media 31 can also store various data such as the image data stored in the storage unit 18.
[0055] FIG. 2 is a functional block diagram showing a functional configuration for executing a sequence of processing (hereinafter referred to as "image capture processing"), among the processing executed by the digital camera 1 of FIG. 1, from capturing an image of a subject until recording the image data of the captured image obtained as a result thereof in the removable media 31.
[0056] As shown in FIG. 2, in a case of image capture processing being executed, the image controller 51 functions in the CPU 11, and the image acquisition unit 52, position specification unit 53, region specification unit 54 and correction unit 55 function in the image processing unit 17. It should be noted that the functions of the image controller 51 do not particularly need to be built into the CPU 11 as in the present embodiment, and these functions can also be assigned to the image processing unit 17. Conversely, the respective functions of the image acquisition unit 52 to the correction unit 55 do not particularly need to be built into the image processing unit 17 as in the present embodiment, and at least a part of these functions can also be assigned to the CPU 11.
[0057] The image controller 51 controls the overall execution of image capture processing.
[0058] Herein, in the imaging elements of the imaging unit 16, defective pixels may occur due to damage or the like at a stage during production, for example. Then, at a position corresponding to the defective pixel, a charge greater than necessary will accumulate, thereby causing image noise (a white defect) to occur in which the position corresponding to the defective pixel is displayed as whitening. Furthermore, in imaging elements configured from CMOS-type photoelectric conversion elements, there is a possibility for blooming (refer to FIGS. 6A to 6F) to occur, in which the image noise expands to the pixels adjacent to the defective pixel due to environmental changes such as temperature and humidity and to ageing.
[0059] As a result, in the digital camera 1 according to the present embodiment, the image acquisition unit 52 to the correction unit 55 execute the following processing under the control of the image controller 51.
[0060] FIG. 3 is a graph illustrating the correction of a region in which image noise occurs in the image data due to a defective pixel. It should be noted that FIG. 3 is a graph in which a plurality of pixels are arranged in a box grid. Hereinafter, the processing of the image acquisition unit 52 to the correction unit 55 will be explained while referencing FIG. 3.
[0061] The image acquisition unit 52 receives an acquisition command issued from the image controller 51, acquires the image data generated and outputted from the imaging unit 16, and causes the acquired image data to be stored in the VRAM.
[0062] The position specification unit 53 specifies the position, in the image data generated by the imaging unit 16 and stored in the VRAM, of at least one defective pixel included in the imaging elements. The positions of the defective pixels are stored in list format in the storage unit 18 as a defective pixel position list. In other words, the position information of the defective pixels in the image data is stored in advance in the storage unit 18. For example, in FIG. 3, the position of a defective pixel is specified as a position E by the position specification unit 53.
[0063] The region specification unit 54 specifies a region in which image noise occurs in the image data due to a defective pixel, based on the position specified by the position specification unit 53. More specifically, via the image controller 51, the region specification unit 54 acquires the ambient temperature of the imaging elements of the imaging unit 16 measured by the sensor unit 22 and the exposure time used when capturing an image in the imaging unit 16. Then, the region specification unit 54 sets the acquired ambient temperature of the imaging elements and the exposure time as the state during image data generation by the imaging unit 16. Next, the region specification unit 54 specifies the size of the region in which image noise occurs, based on the position specified by the position specification unit 53 and the state during generation of the image data by the imaging unit 16. For example, in FIG. 3, position A to position I centered around the position E of the defective pixel are specified by the region specification unit 54 as the region in which image noise occurs.
[0064] It should be noted that it may be configured so that the size of the region corresponding to the ambient temperature of the imaging elements and the exposure time is stored in advance in the storage unit 18 as a correspondence table, and the region specification unit 54 specifies the size of a region corresponding to the combination of the ambient temperature of the imaging elements and the exposure time based on this correspondence table. In addition, the region specification unit 54 may fix the size of the region in which image noise occurs to a fixed size in advance.
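The contents of such a correspondence table are not reproduced in this publication. The following Python sketch is only an illustration of how a lookup of the region size from the ambient temperature and exposure time might be organized; the table values, bucketing thresholds and function names are assumptions made for the example, not part of the disclosure.

    # Hypothetical sketch of the region-size lookup described in paragraph [0064].
    # The table values and thresholds below are illustrative assumptions only.
    # (temperature bucket, exposure bucket) -> N, where the specified region
    # around the defective pixel has a size of (2N+1) x (2N+1) pixels.
    REGION_SIZE_TABLE = {
        ("low", "short"): 1,
        ("low", "long"): 2,
        ("high", "short"): 2,
        ("high", "long"): 3,
    }

    def region_size(ambient_temp_c: float, exposure_s: float) -> int:
        """Return N for the region in which image noise occurs, given the state
        during image data generation (ambient temperature and exposure time)."""
        temp_bucket = "high" if ambient_temp_c >= 40.0 else "low"
        expo_bucket = "long" if exposure_s >= 0.5 else "short"
        return REGION_SIZE_TABLE[(temp_bucket, expo_bucket)]

    # Example: a warmer sensor and a longer exposure yield a wider blooming region.
    print(region_size(45.0, 1.0))   # -> 3
    print(region_size(20.0, 0.01))  # -> 1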
[0065] It should be noted that a region specified by the region specification unit 54 will be referred to hereinafter as a correction target region.
[0066] The correction unit 55 corrects the pixel values of a plurality of pixels included in the correction target region, based on the pixel values of a plurality of pixels located at the periphery of the correction target region.
[0067] More specifically, the correction unit 55 specifies a plurality of pixels located at the periphery of the correction target region by selecting pixels located in a predetermined range from the position specified by the position specification unit 53.
[0068] Next, the correction unit 55 selects, for each of the plurality of pixels included in the correction target region, the pixels to be used in the correction of its pixel value, from among the plurality of pixels located at the periphery of the correction target region, depending on the position of each of the pixels to be a correction target.
[0069] Herein, the correction unit 55 preferably selects the pixels to be used in the correction of pixel values uniformly from the plurality of pixels located at the periphery of the correction target region. In addition, the correction unit 55 preferably selects pixels of a similar color to the pixel of the correction target from the plurality of pixels located at the periphery of the region. For example, in FIG. 3, in a case of the pixels corresponding to position A and positions A1 to A8
being the color red, the pixels corresponding to positions A1 to A8 will be selected as the pixels to be used in the correction of the pixel at position A.
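As a rough illustration of the same-color selection described in paragraph [0069], the sketch below keeps only peripheral positions that share the Bayer color of the correction target pixel; in a Bayer array, same-color pixels repeat every two rows and columns, so the check reduces to a parity comparison. The function name and data layout are assumptions for the example.

    # Illustrative sketch: keep peripheral pixels of the same Bayer color as the
    # correction-target pixel. Same-color pixels in a Bayer array share the
    # parity of (row % 2, column % 2). Names and structure are assumptions.
    def same_color_peripheral(target, peripheral_positions):
        """target and peripheral_positions are (row, col) tuples; return only the
        peripheral positions whose Bayer color matches that of the target."""
        tr, tc = target
        return [(r, c) for (r, c) in peripheral_positions
                if r % 2 == tr % 2 and c % 2 == tc % 2]

    # Example: only positions with matching row/column parity are retained.
    print(same_color_peripheral((4, 4), [(2, 2), (2, 3), (2, 4), (3, 2), (6, 6)]))
    # -> [(2, 2), (2, 4), (6, 6)]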
[0070] Next, the correction unit 55 sequentially sets each of the plurality of pixels included in the correction target region as the pixel to be given attention as the target of processing (hereinafter referred to as the "attention pixel"), and corrects the pixel value of the attention pixel based on a weighted average of the pixel values of the pixels selected from the plurality of pixels located at the periphery of the correction target region. Herein, the correction unit 55 preferably causes the weight of each of the pixel values of the plurality of pixels to be used in correction to vary in the weighted average, depending on the position of each of the plurality of pixels included in the correction target region.
[0071] For example, in FIG. 3, when the pixels of positions A1 to A8 are selected as the pixels to be used in correction in the case of the pixel of position A being set as the attention pixel, the correction unit 55 calculates the weighted average by increasing the weighting value as the distance relative to the position A decreases.
[0072] More specifically, the correction unit 55 corrects the pixel value of the attention pixel at position A in accordance with the following formula (1), for example.
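The body of formula (1) did not survive extraction. A plausible form, inferred from the description given in paragraph [0073] below and offered here as a reconstruction rather than the printed formula, is the inverse-distance weighted average

    V_A = \left( \sum_{n=1}^{8} \frac{V_{An}}{L_n} \right) \Big/ \left( \sum_{n=1}^{8} \frac{1}{L_n} \right)    (1)

where L_n denotes the sum of the horizontal distance and the vertical distance from position A to position An.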
[0073] In formula (1), VA indicates the pixel value of the attention pixel at the position A after correction, and VA1 to VA8 respectively indicate the pixel values at positions A1 to A8. In addition, the weighting for each of the pixel values VA1 to VA8 is set based on the inverse of the sum of the distance in the horizontal direction and the distance in the vertical direction from the position A of the attention pixel.
[0074] When formula (1) is generalized, it is rewritten as the following formula (2).
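The bodies of formula (2) and, further below, formula (3) were likewise lost in extraction. A reconstruction consistent with the symbol definitions in paragraphs [0075] to [0086], given as an assumption rather than the printed text, is

    V_{P0} = \frac{1}{L_{all}} \left( \frac{V_{A1}}{L_t + L_l} + \frac{V_{A2}}{L_t} + \frac{V_{A3}}{L_t + L_r} + \frac{V_{A4}}{L_l} + \frac{V_{A5}}{L_r} + \frac{V_{A6}}{L_b + L_l} + \frac{V_{A7}}{L_b} + \frac{V_{A8}}{L_b + L_r} \right)    (2)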
[0075] Formula (2) is a formula that obtains the corrected pixel value VP0 of an attention pixel P0 located at the coordinates (x+i, y+j), in a case of blooming occurring at the peripheral N pixels centered around a pixel P located at the coordinates (x, y), i.e. in a case of the correction target region having a size of (2N+1)x(2N+1).
[0076] In formula (2), Vn (n indicating any one among positions A1 to A8) indicates the pixel value at position n.
[0077] Herein, position A1 is separated from the position of the attention pixel P0 by the distance Lt above in the vertical direction, and by the distance Ll to the left in the horizontal direction.
[0078] The position A2 is separated from the position of the attention pixel P0 by the distance Lt above in the vertical direction.
[0079] The position A3 is separated from the position of the attention pixel P0 by the distance Lt above in the vertical direction, and by the distance Lr to the right in the horizontal direction.
[0080] The position A4 is separated from the position of the attention pixel P0 by the distance Ll to the left in the horizontal direction.
[0081] The position A5 is separated from the position of the attention pixel P0 by the distance Lr to the right in the horizontal direction.
[0082] The position A6 is separated from the position of the attention pixel P0 by the distance Lb below in the vertical direction, and by the distance Ll to the left in the horizontal direction.
[0083] The position A7 is separated from the position of the attention pixel P0 by the distance Lb below in the vertical direction.
[0084] The position A8 is separated from the position of the attention pixel P0 by the distance Lb below in the vertical direction, and by the distance Lr to the right in the horizontal direction.
[0085] Herein, when a is defined as 2 in a case of the imaging element being a Bayer array and as 1 in a case of it not being a Bayer array, for example, the length Lt is N+j+a, the length Lb is N-j+a, the length Ll is N+i+a, and the length Lr is N-i+a.
[0086] In addition, Lall in formula (2) is expressed as the following formula (3).
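Following the same reconstruction as for formula (2), and again as an assumption rather than the printed text, the normalizing term would be

    L_{all} = \frac{1}{L_t + L_l} + \frac{1}{L_t} + \frac{1}{L_t + L_r} + \frac{1}{L_l} + \frac{1}{L_r} + \frac{1}{L_b + L_l} + \frac{1}{L_b} + \frac{1}{L_b + L_r}    (3)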
[0087] It should be noted that the weighting of the weighted average is not particularly limited to the aforementioned example, and any weighting will be sufficient so long as it is based on the distance from the attention pixel.
[0088] For example, in the case of the state shown in FIG. 3, when the length of a side of one box is set to 1, the distance relative to the position A of the attention pixel will be 2 for position A2 and position A4, 2.8 for position A1, 4 for positions A5 and A7, 4.5 for position A3 and position A6, and 5.7 for position A8. Therefore, the correction unit 55 increases the weighting for positions A2 and A4, which are at the closest distance to the position A, and decreases the weighting for position A8, which is at the farthest distance from the position A.
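To make the preceding weighting concrete, the Python sketch below computes an inverse-distance weighted average of peripheral pixel values around an attention pixel; the use of Euclidean distance (matching the 2, 2.8, ..., 5.7 values cited above for FIG. 3) and all names are illustrative assumptions, not taken from the publication.

    import math

    # Illustrative sketch of the weighted-average correction: the attention
    # pixel's value is replaced by an average of peripheral pixel values,
    # each weighted by the inverse of its distance to the attention pixel.
    def correct_attention_pixel(attention_pos, peripheral):
        """attention_pos: (row, col); peripheral: dict mapping (row, col) to a
        pixel value. Returns the corrected value of the attention pixel."""
        ar, ac = attention_pos
        numerator = 0.0
        denominator = 0.0
        for (r, c), value in peripheral.items():
            distance = math.hypot(r - ar, c - ac)  # closer pixels weigh more
            weight = 1.0 / distance
            numerator += weight * value
            denominator += weight
        return numerator / denominator

    # Example: three peripheral pixels around an attention pixel at (2, 2).
    peripheral_values = {(0, 2): 100.0, (2, 0): 110.0, (4, 4): 90.0}
    print(correct_attention_pixel((2, 2), peripheral_values))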
[0089] In addition, in the aforementioned example, only some of the peripheral pixels of the correction target region, namely the pixels present at positions A1 to A8, are used as the pixels for correction. However, this is an exemplification, and it is possible to use any number of pixels at any positions among the peripheral pixels of the correction target region as the pixels to be used in correction. Of course, all of the peripheral pixels of the correction target region can also be used as the pixels for correction.
[0090] In addition, although the correction unit 55 has been configured to perform correction of the pixel values in the correction target region based on the weighted average of a plurality of pixel values, it is not limited thereto, and it may be configured so as to perform correction by taking an intermediate value of the plurality of pixels used in correction as the pixel value. In this case, the correction unit 55 performs a Fourier transform of the pixel values of the plurality of pixels included in the correction target region, and calculates the spatial frequency of the correction target region. Then, in a case of the spatial frequency thus calculated being lower than a predetermined value, the correction unit 55 sequentially sets each of the plurality of pixels included in the correction target region as the attention pixel, and performs correction so as to make the intermediate value of the pixel values of the plurality of pixels to be used in correction (each pixel of positions A1 to A8 in the example of FIG. 3) the pixel value of the attention pixel.
In a case of the spatial frequency being low, it is possible to perform more appropriate correction by performing correction according to a median filter.
[0091] In other words, generally, in the case of many pixels being present at the periphery of a defective pixel that have pixel values close to the correct pixel value of this defective pixel, using the intermediate value according to the median filter makes it possible to correct the pixel value of the defective pixel to a pixel value close to the correct pixel value without blurring the image (while keeping the edges of the image).
[0092] However, the range of the peripheral pixels used in correction also widens if the correction target region widens, and, particularly in the case of the spatial frequency being high, the probability of a pixel having a pixel value close to the correct pixel value of the defective pixel is low, so the median filter will no longer work effectively; therefore, using the median filter is effective only in cases of the spatial frequency being low.
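The publication does not give the spatial-frequency measure or threshold. The Python sketch below shows one way the choice between the median filter and the weighted average could be made; the use of the fraction of non-DC spectral energy as the frequency measure, the threshold value, and the names are assumptions for illustration only.

    import numpy as np

    # Illustrative sketch of the median-filter variant: when the spatial
    # frequency of the correction target region is low, the attention pixel is
    # replaced by the median of the peripheral pixels used in correction;
    # otherwise the weighted-average result is kept.
    def low_spatial_frequency(region: np.ndarray, threshold: float = 0.2) -> bool:
        """Crude spatial-frequency measure: the fraction of spectral energy
        outside the DC term of the region's 2-D Fourier transform."""
        spectrum = np.abs(np.fft.fft2(region)) ** 2
        total = spectrum.sum()
        if total == 0.0:
            return True  # a flat (all-zero) region has no high-frequency content
        non_dc_ratio = (total - spectrum[0, 0]) / total
        return non_dc_ratio < threshold

    def correct_pixel(region, peripheral_values, weighted_value):
        """Median correction for low-frequency regions, weighted average otherwise."""
        if low_spatial_frequency(np.asarray(region, dtype=float)):
            return float(np.median(peripheral_values))
        return weighted_value

    # Example: a nearly uniform 3x3 region is treated as low frequency.
    region = [[100, 101, 100], [99, 250, 100], [100, 100, 101]]
    print(correct_pixel(region, [100, 101, 99, 100], weighted_value=100.2))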
[0093] In addition, the correction unit 55 may be configured so as to acquire, via the image controller 51, the ambient temperature of the imaging elements of the imaging unit 16 measured by the sensor unit 22 and the exposure time used when capturing an image in the imaging unit 16, and to control whether or not to perform correction based on the acquired ambient temperature of the imaging elements and exposure time.
[0094] The functional configuration of the digital camera 1 to which the present invention is applied has been explained
