throbber
(12) United States Patent
`Morgan-Mar et al.
`
`(10) Patent No.:
`(45) Date of Patent:
`
`US 8,989,517 B2
`Mar. 24, 2015
`
`US008989.517B2
`
`BOKEHAMPLIFICATION
`
`(56)
`
`References Cited
`
`(54)
`(71)
`(72)
`
`(73)
`(*)
`
`(21)
`(22)
`(65)
`
`(30)
`
`Applicant: Canon Kabushiki Kaisha, Tokyo (JP)
`
`Inventors: David Peter Morgan-Mar,
`Wollstonecraft (AU); Kieran Gerard
`Larkin, Putney (AU); Matthew
`Raphael Arnison, Umina Beach (AU)
`Assignee: Canon Kabushiki Kaisha, Tokyo (JP)
`
`Notice:
`
`Subject to any disclaimer, the term of this
`patent is extended or adjusted under 35
`U.S.C. 154(b) by 0 days.
`
`Appl. No.: 14/079,481
`
`Filed:
`
`Nov. 13, 2013
`
`Prior Publication Data
`US 2014/O15288.6 A1
`Jun. 5, 2014
`
`Foreign Application Priority Data
`
`Dec. 3, 2012
`
`(AU) ................................ 2O12258467
`
`(51)
`
`(52)
`
`(58)
`
`(2006.01)
`(2006.01)
`(2006.01)
`(2006.01)
`(2006.01)
`(2006.01)
`
`Int. C.
`G6K 9/36
`G06K9/40
`H04N 5/225
`H04N 5/228
`H04N 5/262
`H04N 5/232
`U.S. C.
`CPC ................................. H04N 5/23212 (2013.01)
`USPC ..................... 382/280; 348/207. 1; 348/222.1;
`348/239;382/255; 382/276
`Field of Classification Search
`USPC ............... 348/207.1-207.11, 208.99-208.16,
`348/222.1, 239, 241, 345-357, 362-368
`See application file for complete search history.
`
`U.S. PATENT DOCUMENTS
`
`6/2006 Alon et al.
`7,065,256 B2
`8,422,827 B2 * 4/2013 Ishii et al. ..................... 382,299
`8.498,483 B2 * 7/2013 Noguchi et al. .............. 382, 181
`8,624,986 B2 *
`1/2014 Li ......................
`... 348,208.13
`8,704,909 B2 * 4/2014 Kanaris et al. ............. 348.222.1
`8.737,756 B2 * 5/2014 Daneshpanah et al. ....... 382/255
`(Continued)
`
`FOREIGN PATENT DOCUMENTS
`
`WO
`
`2008. 149363 A2 12/2008
`OTHER PUBLICATIONS
`
`Bae, Soonmin, and Durand, Frédo. “Defocus Magnification.” Com
`puter Graphics Forum: Proceedings of Eurographics 2007, Prague,
`Sep. 3-7, 2007. Ed. Cohen-Or, D and Slavik, P. Oxford, UK:
`Blackwell Publishing, 2007. 26.3:571-579.
`(Continued)
`
`Primary Examiner — Michael Osinski
`(74) Attorney, Agent, or Firm — Canon U.S.A., Inc. IP
`Division
`
`ABSTRACT
`(57)
`A method of modifying the blur in at least a part of an image
`of a scene captures at least two images of the scene with
`different camera parameters to produce a different amount of
`blur in each image. A corresponding patch in each of the
`captured images is selected each having an initial amount of
`blur is used to calculate a set offrequency domain pixel values
`from a function of transforms of the patches. Each of the pixel
`values in the set are raised to a predetermined power, forming
`an amplified set offrequency domain pixel values. The ampli
`fied set of frequency domain pixel values is combined with
`the pixels of the patch in one of the captured images to
`produce an output image patch with blur modified relative to
`the initial amount of blur in the image patch.
`15 Claims, 13 Drawing Sheets
`
`
`
`750
`w
`Ya.
`
`X-----.
`
`03
`|
`30a
`I
`w
`Form spectral
`ratio FF,
`
`Start
`
`)
`
`10:
`--
`
`Fourier tailsfort
`paiches f, and f.
`023
`x
`f 16then r
`3patch is less
`blurre?
`
`030
`---
`
`-
`i. w
`Form spectral
`ratio F, F,
`
`048
`--
`50
`
`-----
`
`~! Modify spectral ratio
`
`Raise spectral ratic
`to power N
`
`6
`f-which N/-/
`atch is less
`Nblurred?
`1070a
`OFO
`s
`rh - i.
`Multiply amplified
`Multiply amplified.-----
`spectral ratio by
`spectal ratic by
`F to give is
`F, to give Fo
`103
`
`
`
`85
`Artificia
`s-/
`bokeh patch
`i093
`----/
`C Eidy
`
`APPL-1009 / Page 1 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`US 8,989,517 B2
`Page 2
`
`(56)
`
`References Cited
`
`U.S. PATENT DOCUMENTS
`
`2001/0008418 A1* 7/2001 Yamanaka et al. ............ 348.222
`2002fO145671 A1* 10, 2002 Alon et al.
`348,241
`2003/0002746 A1
`1/2003 Kusaka ......................... 382/255
`ck
`2007/0O36427 A1
`2, 2007 Nakamura et al. ............ 382,154
`2008/00 13861 A1
`1/2008 Li et al. ..........
`382.286
`2008/0175508 A1* 7/2008 Bando et al. .................. 382/255
`2009/0115860 A1* 5/2009 Nakashima et al. ..... 348,208.99
`2009, O141163 A1
`6, 2009 Attar et al.
`2009/0297.056 A1* 12/2009 Lelescu et al. ................ 382.261
`2011/0033132 A1
`2/2011 Ishii et al. ..................... 382,275
`
`2011/0090352 A1* 4/2011 Wang et al. ................ 348,208.6
`2011/0205382 A1* 8, 2011 Kanaris et al. ..
`348,222.1
`2012/0206630 A1* 8/2012 Nguyen et al. ......
`... 348,241
`2013/0063566 A1* 3/2013 Morgan-Maret al. .......... 348/46
`2013/0266210 A1* 10/2013 Morgan-Maret al. ........ 382,154
`
`
`
`OTHER PUBLICATIONS
`Kubota, Akira, and Aizawa, Kiyoharu. “Reconstructing Arbitrarily
`Focused Images From Two Differently Focused Images. Using Linear
`Filters.” IEEE Transactions on Image Processing 14.11 (2005): 1848
`1859.
`
`* cited by examiner
`
`APPL-1009 / Page 2 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`U.S. Patent
`
`Mar. 24, 2015
`
`Sheet 1 of 13
`
`US 8,989,517 B2
`
`
`
`APPL-1009 / Page 3 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`U.S. Patent
`
`Mar. 24, 2015
`
`Sheet 2 of 13
`
`US 8,989,517 B2
`
`
`
`Y
`
`255
`
`Fig. 2
`
`APPL-1009 / Page 4 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`U.S. Patent
`
`Mar. 24, 2015
`
`Sheet 3 of 13
`
`US 8,989,517 B2
`
`3OO
`
`320
`
`
`
`30
`
`330
`
`Fig. 3A
`
`Fig. 3B
`
`APPL-1009 / Page 5 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`U.S. Patent
`
`Mar. 24, 2015
`
`Sheet 4 of 13
`
`US 8,989,517 B2
`
`N - N
`N
`(Wide-Area)
`Communications
`Network 420
`
`/
`X
`N
`N a1
`
`Yee e
`421
`N/
`
`Printer 45
`
`480
`417
`
`W 4.
`(N',
`424
`v.
`- N -
`(local-Area)
`COmmunications
`Network 422 1
`W
`- 1
`1y
`f - -
`400
`A1
`?
`
`423
`
`Ext.
`i"
`Moden
`
`f
`
`
`
`
`
`Audio-Video
`Interface 407
`
`fC) interfaces
`4.08
`
`Local Net.
`Ifface 411
`
`48
`
`Storage
`Devices
`409
`404
`
`hold 410
`
`41
`9
`
`PrOCeSSOf
`405
`
`fC) interface
`413
`
`Memory
`406
`
`Optical Disk
`Drive 42
`
`Keyboard 402
`
`Camera 427
`
`
`
`403
`
`Disk Storage
`Medium 425
`
`Fig. 4A
`
`APPL-1009 / Page 6 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`U.S. Patent
`
`Mar. 24, 2015
`
`Sheet 5 of 13
`
`US 8,989,517 B2
`
`434
`
`433
`
`Instruction (Part 1) 428
`Instruction (Part 2) 429
`)
`
`48 Data 435
`Data 436
`
`432
`
`431
`
`Instruction 430
`
`Data 437
`
`ROM 449
`POS
`BIOS
`450
`451
`
`Bootstrap
`Loader 452
`
`Operating
`System 453
`
`input Variables 454
`
`419
`
`a
`
`404
`
`
`
`418
`
`APPL-1009 / Page 7 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`U.S. Patent
`
`Mar. 24, 2015
`
`Sheet 6 of 13
`
`US 8,989,517 B2
`
`
`
`Fig. 5B
`
`APPL-1009 / Page 8 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`U.S. Patent
`
`Mar. 24, 2015
`
`Sheet 7 of 13
`
`US 8,989,517 B2
`
`642
`
`600
`
`652
`
`610
`
`N
`
`APPL-1009 / Page 9 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`U.S. Patent
`
`Mar. 24, 2015
`
`Sheet 8 of 13
`
`US 8,989,517 B2
`
`700
`Y
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`Capture two
`images of
`SCere
`
`
`
`Select
`Corresponding
`image patches f
`and f2
`
`Select less
`blurred patch
`
`Form artificial
`bokeh patch
`fon)
`
`More
`patches?
`
`710
`
`720
`
`740
`
`750
`
`760
`
`NO
`Assemble rendered patches
`into rendered image
`
`770
`
`775
`
`Fig. 7
`
`Rendered
`image
`
`End
`
`78O
`
`APPL-1009 / Page 10 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`U.S. Patent
`
`Mar. 24, 2015
`
`Sheet 9 of 13
`
`US 8,989,517 B2
`
`
`
`710
`
`Set up camera
`aimed at Scene
`
`Set focus, Zoom,
`aperture
`
`Take first image
`of Scene
`
`Change focus,
`ZOOm, or
`aperture
`
`Take Second
`image of scene
`
`APPL-1009 / Page 11 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`U.S. Patent
`
`Mar. 24, 2015
`
`Sheet 10 of 13
`
`US 8,989,517 B2
`
`740
`
`
`
`940
`
`Calculate
`variance of of
`patch f
`
`
`
`
`
`Calculate
`variance of of
`patch f.
`
`
`
`Select f, as less
`blurred patch
`
`Select f. as less
`blurred patch
`
`910
`
`92O
`
`930
`
`945
`
`950
`
`APPL-1009 / Page 12 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`U.S. Patent
`
`Mar. 24, 2015
`
`Sheet 11 of 13
`
`US 8,989,517 B2
`
`750
`Y
`
`1030a
`
`
`
`Fourier transform
`patches f and f.
`
`f
`
`Which
`patch is less
`blurred?
`
`Form spectral
`ratio F2/F
`
`
`
`Form spectral
`ratio F/F
`
`O20
`
`O3Ob
`
`1040
`
`
`
`
`
`1070a
`
`Modify spectral ratio
`
`1050
`
`Raise spectral ratio
`to power N
`
`
`
`f
`
`Which
`patch is less
`blurred?
`
`1060
`
`O7Ob
`
`1080
`
`1085
`
`1090
`
`Multiply amplified
`spectral ratio by
`F, to give FN
`
`
`
`Multiply amplified
`spectral ratio by
`F2 to give FN
`
`
`
`inverse Fourier
`transform F(N) to give fin)
`
`Artificial
`bokeh patch
`
`Fig. 10
`
`APPL-1009 / Page 13 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`U.S. Patent
`
`Mar. 24, 2015
`
`Sheet 12 of 13
`
`US 8,989,517 B2
`
`770
`Y
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`1110
`
`1120
`
`1130
`
`1140
`
`Form output image
`
`Select pixel of output
`image
`
`Determine patches
`fy containing
`corresponding pixels
`
`More
`than one
`
`Calculate output pixel
`from Corresponding
`pixel in patch
`
`Calculate output pixel
`from Corresponding
`pixels in all patches
`
`1160
`
`1170
`
`pixels?
`
`No
`
`End
`
`Fig.11
`
`APPL-1009 / Page 14 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`U.S. Patent
`
`Mar. 24, 2015
`
`Sheet 13 of 13
`
`US 8,989,517 B2
`
`1220
`
`1230
`
`1240
`
`1250
`
`1260
`
`1200
`
`Capture two
`images of scene
`
`1210
`
`Segment images into
`foreground and background
`
`Determine object
`boundary regions
`
`Select corresponding image
`patches f and f2 in boundary region
`
`
`
`
`
`
`
`Select less blurred
`patch
`
`Form artificial
`bokeh patch fin)
`
`More
`patches?
`
`1270
`
`NO
`Assemble rendered patches into rendered
`image of boundary region
`
`1280
`
`1285
`
`1292
`
`Artificial bokeh
`rendering
`
`Form composite
`rendered image
`
`Composite
`rendered image
`End
`
`1290
`
`1295
`
`Fig. 12
`
`APPL-1009 / Page 15 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`1.
`BOKEHAMPLIFICATION
`
`US 8,989,517 B2
`
`REFERENCE TO RELATED PATENT
`APPLICATION(S)
`
`This application claims the benefit under 35 U.S.C. S 119 of
`the filing date of Australian Patent Application No.
`2012258467, filed Dec. 3, 2012, hereby incorporated by ref
`erence in its entirety as if fully set forth herein.
`
`10
`
`TECHNICAL FIELD
`
`The current invention relates to digital image processing
`and, in particular, to rendering a photographic image with
`modified blur characteristics.
`
`15
`
`BACKGROUND
`
`2
`tantly, an SLR camera will always be able to achieve a sig
`nificantly Smaller depth of field than a compact camera. The
`depth of field is largely dictated by the size of the camera
`SSO.
`A method of producing artificial bokeh with a compact
`camera, mimicking the amount and quality of background
`blur produced by an SLR camera, would provide a major
`improvement in image quality for compact camera users.
`Camera manufacturers and professional photographers
`have recognised the depth of field limitations of small format
`cameras for decades. With the advent of digital camera tech
`nology, it has become feasible to process camera images after
`capture to modify the appearance of the photo. The genera
`tion of SLR-like bokeh from compact camera images has
`been an early target for research in the field of digital camera
`image processing. However, no solution providing results of
`high (i.e. visually acceptable) aesthetic quality has been dem
`onstrated.
`To accurately mimic Small depth of field given a large
`depth of field photo, objects in the image must be blurred by
`an amount that varies with distance from the camera. The
`most common prior approach tackles this problem in two
`steps:
`(1a). Estimate the distance of regions in the image from the
`camera to produce a depth map.
`(1b). Apply a blurring operation using a blur kernel size
`that varies with the estimated distance.
`Step (1a) is a difficult problem in itself, and the subject of
`active research by many groups. The three main methods of
`depth map estimation from camera images (i.e. excluding
`active illumination methods) are:
`(i) Stereo: taking photos from different camera positions
`and extracting depth from parallax. A major disadvantage of
`this approach is the requirement to take photos from multiple
`viewpoints, making it impractical for compact cameras.
`(ii) Depth from focus (DFF): taking a series of many
`images focused at different distances and measuring in
`patches which photo corresponds to a best focus at that patch,
`usually using maximal contrast as the best focus criterion. A
`major disadvantage of this approach is that many exposures
`are required, necessitating a long elapsed time. During the
`exposures the camera or Subject may inadvertently move,
`potentially blurring the Subject and introducing additional
`problems caused by image misalignment.
`(iii) Depth from defocus (DFD): quantifying the difference
`in amount of blur between two images taken with different
`focus and equating the blur difference to a distance. This is the
`most Suitable approach for implementation in a compact cam
`era, as it does not require stereo camera hardware and can be
`performed with as few as two photos. However, it has the
`disadvantages that accuracy is typically relatively low, par
`ticularly around the boundaries of objects in the scene, and
`that consistency is adversely affected by differing object tex
`tures in the scene. Some DFD methods show better accuracy
`around object edges, at the cost of using computationally
`expensive algorithms unsuited to implementation in camera
`hardware.
`Step (1b) is computationally expensive for optically real
`istic blur kernel shapes. A fallback is to use a Gaussian blur
`kernel, which produces a blur that looks optically unrealistic,
`making the resulting image aesthetically unpleasing.
`To more easily approach artificial bokeh, many prior meth
`ods use a simplified version of the above two-step method,
`being:
`(2a). Segment the image into a foreground region and a
`background region.
`(2b). Apply a constant blurring operation to the back
`ground region only.
`
`25
`
`30
`
`35
`
`50
`
`Single-lens reflex (SLR) and digital single-lens reflex
`(DSLR) cameras have large aperture optics which can pro
`duce a narrow depth of field. Depth of field measures the
`distance from the nearest object to the camera which is in
`focus, to the farthest object from the camera which is in focus.
`(D)SLR cameras typically have a depth of filed of order
`significantly less than 1 meter for a typical portrait scenario of
`a subject a few meters from the camera. This allows the
`foreground Subject of a photo to be rendered in sharp focus,
`while the background is blurred by defocus. The result is
`visually pleasing as it provides a separation between the
`Subject and any distracting elements in the background. The
`aesthetic quality of background blur (encompassing both the
`quantity and “look” of the blur) is known as bokeh. Bokeh is
`especially important for photos of people, or portraits.
`Compact digital cameras are more popular than DSLRs
`with consumers because of their Smaller size, lighter weight,
`and lower cost. However, the Smaller optics on a compact
`camera produce a large depth of field, of order greater than
`approximately 1 meter for the same typical portrait scenario,
`which renders the background in typical portrait shots as
`sharp and distracting.
`40
`Depth of field varies significantly depending on the geom
`etry of the photographic scene. The following examples are
`for taking a photo of a person about 3 meters from the camera:
`(i) the depth of field for a full frame SLR camera at 50 mm
`focal length and aperture f2.8 is about 0.5 meters. For a
`45
`portrait scenario, a photographer would typically want to use
`a depth of field this size, or even smaller, maybe 0.2 meters or
`even 0.1 meters. An SLR camera can also be configured with
`a smalleraperture to achieve very large depth of field, though
`this is not usually done for portraits.
`(ii) the depth of field for a small compact camera (e.g.
`CanonTM IXUSTM model) at 50 mm full-frame equivalent
`focal length and aperture f/2.8, is 6 meters.
`(iii) a large compact camera (e.g. CanonTM G12) at 50 mm
`full-frame equivalent focal length and aperture f74 is 1.6
`55
`meters. (This camera cannot achieve f(2.8 aperture if it
`could, its depth of field would be 1.2 meters.) It is practically
`impossible for a camera with a compact form factor to achieve
`a depth of field under about 1 meter, for a subject at 3 meters
`distance. Technically, such is possible, but would require very
`60
`large and expensive lenses. Depth of field for compact cam
`eras under normal conditions can easily be tens of meters or
`even infinity, meaning that everything from the Subject to the
`far distance is in focus.
`If the person is closer to the camera than 3 meters, all the
`depth of field distances discussed above will be smaller, and
`if the person is further away, they will all be larger. Impor
`
`65
`
`APPL-1009 / Page 16 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`US 8,989,517 B2
`
`3
`Assuming step (2a) is done correctly, step (2b) is straight
`forward. However, step (2a) is still difficult and has not been
`achieved satisfactorily within the constraints of a compact
`camera. In particular, the accuracy of segmentation around
`the edges of objects at different depths in the scene is poor.
`Even if this simplified method can be achieved without error,
`the resulting images can look artificial, since intermediate
`levels of blur between the foreground and background will be
`absent.
`An alternative approach to artificial bokeh is to:
`(3a). Estimate the amount of blur at different places in an
`image, compared to a blur-free representation of the
`Subject scene.
`(3.b). Apply a blurring operation using a blur kernel size
`that varies with the estimated blur amount.
`A compact camera does not have an infinite depth of field,
`so the background will show a small amount of blurring
`relative to an in-focus foreground object. If such blurred
`regions can be identified accurately, they can be blurred more,
`producing increased blur in the background.
`Step (3a) can be performed with a single image, or by using
`multiple images of the scene captured with different camera
`parameters. Estimating blur from a single image is under
`constrained and can only be achieved under certain assump
`tions. For example, one assumption is that edges detected in
`the image are step function edges in the scene, blurred by the
`camera optics, and that regions away from edges may be
`accurately infilled from the edge blur estimates. These
`assumptions are often false, resulting in poor blur estimates.
`Estimating blur from multiple images is akin to DFF or DFD,
`because blur amount is directly related to depth, and shares
`the same problems.
`
`SUMMARY
`
`4
`a tiling Substantially covering the area of the captured images,
`and the output image is formed by tiling the output image
`patches. Generally the plurality of corresponding image
`patches in each of the captured images overlap, and the output
`image is formed by combining the pixel values of the output
`image patches.
`In a specific implementation the plurality of corresponding
`image patches in each of the captured images coverpart of the
`area of the captured images; and the output image patches are
`combined with the area of at least one of the captured images
`not covered by the plurality of corresponding image patches
`to produce an output image. Desirably at least part of the area
`of the at least one of the captured images not covered by the
`plurality of corresponding image patches is blurred by con
`volution with a blur kernel.
`According to another aspect, disclosed is a camera com
`prising an image capture system coupled to memory in which
`captured images are stored, a processor, and a program
`executable by the processor to modify the blur in at least apart
`of an image of a scene, said program comprising: code for
`causing the capture system to capture at least two images of
`the scene, said images being captured with different camera
`parameters to produce a different amount of blur in each of
`the captured images; code for selecting a corresponding
`image patch in each of the captured images, each of the
`selected image patches having an initial amount of blur, code
`for calculating a set of frequency domain pixel values from a
`combined function of Fourier transforms of two of the
`selected image patches; code for raising each of the pixel
`values in the set of frequency domain pixel values to a pre
`determined power, thereby forming an amplified set of fre
`quency domain pixel values; and code for combining the
`amplified set of frequency domain pixel values with the pixels
`of the selected image patch in one of the captured images to
`produce an output image patch with blur modified with
`respect to the initial amount of blur in the image patch,
`wherein the amount of modification with respect to blur var
`ies across different regions of the image patch.
`Another aspect is a camera system comprising: a lens
`formed of optics producing a relatively large depth of field; a
`sensor configured capture an image of a scene focussed
`through the lens; a memory in which images captured by the
`sensor are stored; a capture mechanism configured to capture
`at least two images of the scene with different capture param
`eters and to store the images in the memory; a processor, a
`program stored in the memory and executable by the proces
`sor to modify blur in at least a part of one of the captured
`images of the scene, said program comprising: code for caus
`ing the capture system to capture at least two images of the
`scene with different camera parameters to produce a different
`amount of blur in each of the captured images; code for
`selecting a corresponding image patch in each of the captured
`images, each of the selected image patches having an initial
`amount of blur, code for calculating a set of frequency
`domain pixel values from a combined function of Fourier
`transforms of two of the selected image patches; code for
`raising each of the pixel values in the set of frequency domain
`pixel values to a predetermined power, thereby forming an
`amplified set of frequency domain pixel values; and code for
`combining the amplified set of frequency domain pixel values
`with the pixels of the selected image patch in one of the
`captured images to produce an output image patch with blur
`modified with respect to the initial amount of blur in the
`image patch, wherein the amount of modification with respect
`to blur varies across different regions of the image patch.
`In another aspect disclosed is a computer readable storage
`medium having a program recorded thereon, the program
`being executable by a processor to modify blur in at least a
`part of an image of a scene, the program comprising: code for
`receiving at least two images of the scene, said images being
`
`5
`
`10
`
`15
`
`25
`
`30
`
`35
`
`40
`
`50
`
`55
`
`60
`
`65
`
`According to the present disclosure there is provided a
`method of modifying the blur in at least a part of an image of
`a scene, sad method comprising: capturing at least two
`images of the scene, said images being captured with differ
`ent camera parameters to produce a different amount of blur
`in each of the captured images; selecting a corresponding
`image patch in each of the captured images, each of the
`selected image patches having an initial amount of blur, cal
`culating a set of frequency domain pixel values from a com
`bined function of Fourier transforms of two of the selected
`45
`image patches; raising each of the pixel values in the set of
`frequency domain pixel values to a predetermined power,
`thereby forming an amplified set of frequency domain pixel
`values; and combining the amplified set of frequency domain
`pixel values with the pixels of the selected image patch in one
`of the captured images to produce an output image patch with
`blur modified with respect to the initial amount of blur in the
`image patch, wherein the amount of modification with respect
`to blur varies across different regions of the image patch.
`Preferably, the set of frequency domain pixel values are
`modified before being raised to the predetermined power.
`Generally the modification includes a median filtering opera
`tion. Alternatively the modification may include a smoothing
`filtering operation. The modification may include a normali
`sation operation and/or a weighting operation. The weights
`for the weighting operation are determined by the phases of
`the set of frequency domain pixel values.
`Typically the at least two images of the scene are divided
`into a plurality of corresponding image patches in each of the
`captured images; and the output image patches are combined
`to produce an output image. Desirably the plurality of corre
`sponding image patches in each of the captured images form
`
`APPL-1009 / Page 17 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`5
`captured with different camera parameters to produce a dif
`ferent amount of blur in each of the captured images; code for
`selecting a corresponding image patch in each of the captured
`images, each of the selected image patches having an initial
`amount of blur, code for calculating a set of frequency
`domain pixel values from a combined function of Fourier
`transforms of two of the selected image patches; code for
`raising each of the pixel values in the set of frequency domain
`pixel values to a predetermined power, thereby forming an
`amplified set of frequency domain pixel values; and code for
`combining the amplified set offrequency domain pixel values
`with the pixels of the selected image patch in one of the
`captured images to produce an output image patch with blur
`modified with respect to the initial amount of blur in the
`image patch, wherein the amount of modification with respect
`to blur varies across different regions of the image patch.
`Other aspects are disclosed.
`
`5
`
`10
`
`15
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`25
`
`30
`
`35
`
`At least one embodiment of the invention will now be
`described with reference to the following drawings, in which:
`FIG. 1 is a schematic diagram of a scene and an image
`capture device positioned to capture an image of the scene;
`FIG. 2 is a schematic diagram illustrating the geometry of
`a lens forming two different images at two different focal
`planes;
`FIGS. 3A and 3B illustrate a two-dimensional Gaussian
`function and a two-dimensional pillbox function, and one
`dimensional cross-sections thereof;
`FIGS. 4A and 4B collectively form a schematic block
`diagram of a general purpose computer on which various
`implementations may be practised;
`FIGS. 5A, 5B, and 5C illustrate example images upon
`which artificial bokeh processing according to the present
`disclosure may be performed;
`FIG. 6 is a diagram illustrating the correspondence
`between pixels and image patches within a first image and a
`second image of a scene;
`FIG. 7 is a schematic flow diagram illustrating an exem
`40
`plary method of determining an artificial bokeh image from
`two images of a scene, according to the present disclosure;
`FIG. 8 is a schematic flow diagram illustrating one example
`of a method of capturing two images as used in the method of
`FIG.7;
`45
`FIG.9 is a schematic flow diagram illustrating one example
`of a method of asymmetrical patch selection as used in the
`method of FIG. 7;
`FIG. 10 is a schematic flow diagram illustrating one
`example of a method of determining an artificial bokeh image
`patch from two corresponding patches of two images of a
`scene as used in the method of FIG. 7:
`FIG. 11 is a schematic flow diagram illustrating one
`example of a method of assembling artificial bokeh patches
`into an artificial bokeh image as used in the method of FIG.7:
`and
`FIG. 12 is a schematic flow diagram illustrating a second
`exemplary method of determining an artificial bokeh image
`from two images of a scene, according to the present disclo
`SUC.
`
`50
`
`55
`
`60
`
`DETAILED DESCRIPTION INCLUDING BEST
`MODE
`
`Introduction
`The present disclosure is directed to providing methods of
`rendering a photographic image taken with large depth of
`
`65
`
`US 8,989,517 B2
`
`6
`field so as to mimic a photo taken with a smaller depth of field
`by modifying blur already present in the image taken with a
`large depth of field. The methods seek to offer one or more of
`improved accuracy, improved tolerance to imaging noise,
`improved tolerance to differences of object texture in the
`image, and improved aesthetic appearance of the final image,
`all of these particularly in regions at and near the boundaries
`of objects in the scene.
`Context
`
`Thin Lens Equation, Basic Geometry
`
`The technical details of accurately rendering artificial
`bokeh rely on key aspects of the geometry and optics of
`imaging devices. Most scenes that are captured using an
`imaging device. Such as a camera, contain multiple objects,
`which are located at various distances from the lens of the
`device. Commonly, the imaging device is focused on an
`object of interest in the scene. The object of interest shall be
`referred to as the subject of the scene. Otherwise, objects in
`the scene, which may include the Subject, shall simply be
`referred to as objects.
`FIG. 1 is a schematic diagram showing the geometrical
`relationships between key parts of an imaging device and
`objects in a scene to be captured. FIG. 1 shows an imaging
`device or system (e.g. a camera) 100 which includes a lens
`110, and a sensor 115. For the purposes of this description, the
`camera 100 is typically a compact digital camera and the lens
`110 has relatively small optics producing a large depth of
`field, particularly in comparison to an SLR camera. FIG. 1
`also shows an in-focus plane 130 and a general object 140
`formed by sphere positioned upon a rectangular prism, form
`ing part of the scene but not necessarily the Subject of the
`scene to be captured. The image plane 120 of the imaging
`device 100, also referred to as the focal plane, is defined to be
`at the location of the sensor 115. When projected through the
`lens 110, the image plane 120 forms the in-focus plane 130,
`which can be considered to be a virtual plane in the geometri
`cal region of the object 140. A distance 150 from the lens 110
`to the image plane 120 is related to a distance 160 from the
`lens 110 to the in-focus plane 130, by the thin lens law
`according to the equation
`
`(1)
`
`where f is the focal length of the lens 110, Z, is the lens-to
`sensor distance 150, and Z is the distance 160 from the lens
`110 to the in-focus plane 130. The general scene object 140 is
`located at a distance 170 from the lens 110 and at a distance
`180 from the in-focus plane 130. This distance 170 is referred
`to as Z. The distance 180 from the object 140 to the in-focus
`plane 130 is given by Z-Z and may be positive, Zero, or
`negative. If the object 140 is focused onto the image plane
`120, then ZZ and the object 140 is located in the in-focus
`plane 130. IfZ is less than or greater than Z, then the object
`140 is located behind or in front of the in-focus plane 130
`respectively, and the image of the object 140 will appear
`blurred on the image plane 120.
`FIG. 1 illustrates a relatively simple geometrical optics
`model of imaging. This model relies on approximations
`including the thin lens approximation, paraxial imaging rays,
`and a lens free of aberrations. These approximations ignore
`Some aspects of the optics that are inherent in actual imaging
`
`APPL-1009 / Page 18 of 30
`APPLE INC. v. COREPHOTONICS LTD.
`
`

`

`US 8,989,517 B2
`
`7
`systems, but are sufficient for general understanding of imag
`ing behaviour, as is understood by those skilled in the art.
`Focusing is carried out either manually by the user or by
`using an autofocus mechanism that is built into the imaging
`device 100. Focusing typically manipulates the lens-to-sen
`sor distance 150 in order to place the in-focus plane 130 such
`that the distance Z. 160 is equal to the distance Z. 170 to a
`specific object of interest, i.e. to place the Subject in the
`in-focus plane 130. Other objects in the scene that have a
`distance Z from the lens 110 that is different from that of the
`subject are located either behind or in front of the in-focus
`plane 130. These other objects will appear blurred to some
`degree on the image plane 120 and thus in the image captured
`on the sensor 115. This blur is referred to as defocus blur.
`
`10
`
`15
`
`8
`Smaller size that has been blurred by Some unknown amount,
`or by an object in the scene that resembles a blurred disc,
`rendered in sharp focus. Given this ambiguity, it is impossible
`to determine the blur radius O. Thus, in terms of equation (2),
`even if the parameters Z, f, and Aare known, it is not possible
`to determine depth from a single image of an unconstrained
`SCCC.
`In the majority of circumstances, Scenes a

This document is available on Docket Alarm but you must sign up to view it.


Or .

Accessing this document will incur an additional charge of $.

After purchase, you can access this document again without charge.

Accept $ Charge
throbber

Still Working On It

This document is taking longer than usual to download. This can happen if we need to contact the court directly to obtain the document and their servers are running slowly.

Give it another minute or two to complete, and then try the refresh button.

throbber

A few More Minutes ... Still Working

It can take up to 5 minutes for us to download a document if the court servers are running slowly.

Thank you for your continued patience.

This document could not be displayed.

We could not find this document within its docket. Please go back to the docket page and check the link. If that does not work, go back to the docket and refresh it to pull the newest information.

Your account does not support viewing this document.

You need a Paid Account to view this document. Click here to change your account type.

Your account does not support viewing this document.

Set your membership status to view this document.

With a Docket Alarm membership, you'll get a whole lot more, including:

  • Up-to-date information for this case.
  • Email alerts whenever there is an update.
  • Full text search for other cases.
  • Get email alerts whenever a new case matches your search.

Become a Member

One Moment Please

The filing “” is large (MB) and is being downloaded.

Please refresh this page in a few minutes to see if the filing has been downloaded. The filing will also be emailed to you when the download completes.

Your document is on its way!

If you do not receive the document in five minutes, contact support at support@docketalarm.com.

Sealed Document

We are unable to display this document, it may be under a court ordered seal.

If you have proper credentials to access the file, you may proceed directly to the court's system using your government issued username and password.


Access Government Site

We are redirecting you
to a mobile optimized page.





Document Unreadable or Corrupt

Refresh this Document
Go to the Docket

We are unable to display this document.

Refresh this Document
Go to the Docket