(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2010/0321511 A1
     Koskinen et al.                   (43) Pub. Date: Dec. 23, 2010

(54) LENSLET CAMERA WITH ROTATED SENSORS

(75) Inventors: Samu T. Koskinen, Tampere (FI);
                Juha H. Alakarhu, Helsinki (FI);
                Eero Salmelin, Tampere (FI)

     Correspondence Address:
     HARRINGTON & SMITH
     4 RESEARCH DRIVE, Suite 202
     SHELTON, CT 06484-6212 (US)

(73) Assignee: Nokia Corporation

(21) Appl. No.: 12/456,543

(22) Filed: Jun. 18, 2009

Publication Classification

(51) Int. Cl.
     H04N 5/225  (2006.01)
     H04N 5/335  (2006.01)
(52) U.S. Cl. ..... 348/218.1; 348/E05.024; 348/311; 348/E05.091

(57) ABSTRACT

At a digital imaging system, a first set of samples of a scene is digitally captured with a first array of image sensing nodes while simultaneously a second set of samples of the scene is digitally captured with a second array of image sensing nodes. Image sensing nodes of the second array are oriented in a rotated position relative to the image sensing nodes of the first array. The first and second sets of samples are integrated with one another while correcting for the rotated orientation of the image sensing nodes of the second array relative to the image sensing nodes of the first array, and from that integration a high resolution image is output. In specific embodiments, there may be additional arrays of image sensing nodes, and different arrays may sample different size portions of the scene and may also be rotated relative to other sensing node arrays.
[FIG. 1: 102, read-out circuit; 104, image sensing nodes (pixels); 106, lenslets; 108, scene]
[FIG. 2: schematic of parallel camera modules; labels 204, 206; BLUE and GREEN color channels indicated]
[FIGS. 3A-3C: grids of + center-points illustrating aligned and mis-aligned sampling; FIG. 3B labeled]
[FIGS. 4A-4B: center-point patterns of four arrays (graphic not recoverable). FIG. 5: 502, array of image sensing nodes; 504, scene to be sampled; 506, outlying areas; 508, column read-out]
[FIG. 6: four sensor arrays and the scene portions sampled by corresponding nodes (graphic not recoverable)]
[FIG. 7: block diagram of a user equipment embodying the camera (graphic not recoverable)]
[FIG. 8: logic flow diagram]
802: DIGITALLY CAPTURING A FIRST SET OF SAMPLES OF A SCENE WITH A FIRST ARRAY OF IMAGE SENSING NODES WHILE SIMULTANEOUSLY DIGITALLY CAPTURING A SECOND SET OF SAMPLES OF THE SCENE WITH A SECOND ARRAY OF IMAGE SENSING NODES (THE IMAGE SENSING NODES OF THE SECOND ARRAY ARE ORIENTED IN A ROTATED POSITION RELATIVE TO THE IMAGE SENSING NODES OF THE FIRST ARRAY)
804: INTEGRATING THE FIRST AND SECOND SETS OF SAMPLES WITH ONE ANOTHER WHILE CORRECTING FOR THE ROTATED ORIENTATION OF THE IMAGE SENSING NODES OF THE SECOND ARRAY RELATIVE TO THE IMAGE SENSING NODES OF THE FIRST ARRAY (e.g., SUPER RESOLUTION ALGORITHM)
LENSLET CAMERA WITH ROTATED SENSORS

TECHNICAL FIELD

0001. The exemplary and non-limiting embodiments of this invention relate generally to digital imaging devices such as digital cameras having one or more arrays of image sensors and corresponding lenslets.

BACKGROUND

0002. This section is intended to provide a background or context to the invention that is recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.
0003. Digital imaging can be broken into two main categories. Complementary metal-oxide semiconductor (CMOS) technology uses an array of pixels whose outputs are read out by an integrated circuit. Often this read-out circuit and the pixel array are made as one semiconductor device, and together they are termed an image sensor. Each pixel contains a photodetector and possibly an amplifier. There are many types of such active pixel sensors, and CMOS is the technology commonly used in mobile phone cameras, web cameras, and some digital single-lens reflex camera systems. The other main category is the charge coupled device (CCD), which uses an array of diodes, typically embodied as an array of p-n junctions on a semiconductor chip. Analog signals at these diodes are integrated at one capacitor or at chains of capacitors, the signal is also processed by a read-out circuit, and the capacitor arrangements may be within the read-out circuit. Often the term pixel is used generically, referring both to an active sensor pixel of CMOS devices and to a diode junction of a CCD system. The term image sensing node is used herein generically for an individual image capturing element, and includes a CMOS pixel, a CCD diode junction, and individual image capture nodes of other technologies now available or yet to be developed.
0004. Digital imaging systems (cameras) use an array of image sensing nodes aligned behind an array of lenslets which focus light onto the image sensing nodes. Each image sensing node has a corresponding lenslet, though this does not always mean a one-to-one correspondence of lenslet to image sensing node. It is known to use multiple arrays of image sensing nodes to improve resolution. For example, some commercial embodiments of this include multiple charge coupled devices (CCDs), in which each CCD is a separate diode array and the multiple CCDs each image the scene simultaneously. For CMOS implementations, the different image sensing node arrays may be different image sensors disposed adjacent to one another behind the system aperture. There may be a single lenslet array for the multiple sensor arrays or a separate lenslet array corresponding to each sensor array.
0005. Whatever the underlying image-capture technology, the lower resolution output of each of these different arrays is integrated into a higher resolution image by a super resolution algorithm. For example, one particular imaging system may have four arrays of image sensing nodes, each with a 2 MP (mega-pixel) resolution capacity, so that ideally the super resolution algorithm can generate a single 8 MP image from the four lower resolution images that are input to it.
0006. Note that the above characterization is the ideal. Super resolution algorithms work well if the arrays of image sensing nodes are aligned so that the system nodes are sampling at as high a frequency as possible; they rely on perfectly aligned sets of image sensing nodes. But where the nodes are not correctly aligned with one another, the portions of the scene that the mis-aligned nodes capture overlap, and the extent of the overlap represents oversampling of the scene and a reduction from the theoretical maximum resolution that the super resolution algorithm might otherwise generate. For example, if a CMOS system had two pixel arrays of 2 MP each and they were perfectly aligned, the resultant image from the super resolution algorithm would be 4 MP. With a 25% overlap due to pixel array mis-alignment, the final image would have a resolution of 3.5 MP (since 25% of the image captured by one pixel array is identical to that captured by the other array and so cannot add to resolution). This alone understates the true amount of resolution reduction. If we assume that perfect alignment would have each pixel imaging a separate and non-overlapping portion of the scene, the 25% overlap necessarily means that 25% of the scene imaged by one of the pixel arrays, or 12.5% of the entire scene, is never captured. This is because the amount of the sample overlap with other pixels directly diminishes what is captured from the scene itself, since perfect alignment would have no overlap in the image captured by individual pixels. Thus while the resultant image may in fact be 3.5 MP, there are 12.5% discontinuities at the pixel level which occur during scene capture. Typically these are resolved by smoothing software before the final image is output for viewing by a user.
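
For illustration only, the two helper functions below restate this paragraph's arithmetic; they are not from the patent, and they assume overlap is expressed as the fraction of one array's samples that duplicate the other's:

```python
def effective_resolution_mp(per_array_mp: float, overlap: float) -> float:
    """Resolution after integrating two equal arrays.

    A duplicated sample adds nothing, so the second array contributes
    only its non-overlapping fraction: 2 + 2 * (1 - 0.25) = 3.5 MP.
    """
    return per_array_mp + per_array_mp * (1.0 - overlap)


def unsampled_scene_fraction(overlap: float) -> float:
    """Fraction of the whole scene never captured by either array.

    If perfect alignment would tile the scene with no overlap, every
    sample one array spends re-imaging the other's coverage is a patch
    of scene that nobody images: 25% of one array = 12.5% of the scene.
    """
    return overlap / 2.0


print(effective_resolution_mp(2.0, 0.25))  # 3.5 (MP)
print(unsampled_scene_fraction(0.25))      # 0.125, i.e. 12.5%
```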
0007. The above problem arises because it is very difficult to achieve perfectly accurate sub-pixel alignment due to mechanical tolerances. In practical implementation, during mass production of a population of end-user imaging systems there would be a distribution of image resolution output by the different units of that population. To take an extreme example, assume a population of four-camera lenslet systems in which the best units have perfect alignment and the worst units have 100% overlap of all four lenslet systems. Those best units then have four times higher resolution than the worst systems, even though their underlying design and manufacturing processes are identical.

SUMMARY

0008. The foregoing and other problems are overcome, and other advantages are realized, by the use of the exemplary embodiments of this invention.
0009. In a first exemplary and non-limiting aspect of this invention there is a method which comprises: digitally capturing a first set of samples of a scene with a first array of image sensing nodes while simultaneously digitally capturing a second set of samples of the scene with a second array of image sensing nodes. The image sensing nodes of the second array are oriented in a rotated position relative to the image sensing nodes of the first array. Further in the method, the first and second sets of samples are integrated with one another while correcting for the rotated orientation of the image sensing nodes of the second array relative to the image sensing nodes of the first array, and a high resolution image is output.
0010. In a second exemplary and non-limiting aspect of this invention there is an apparatus comprising: at least a first array of image sensing nodes and a second array of image sensing nodes. The second array of image sensing nodes is oriented in a rotated position relative to the image sensing nodes of the first array. The apparatus further comprises at least one array of lenslets disposed to direct light from external of the apparatus toward the first and second arrays. The apparatus also comprises a memory storing a program that integrates outputs of the first and second arrays to a high resolution image while correcting for the rotated orientation of the image sensing nodes of the second array relative to the image sensing nodes of the first array. And this particular embodiment of the apparatus comprises also at least one processor that is configured to execute the stored program on outputs of the first and second arrays.
0011. In a third exemplary and non-limiting aspect of this invention there is a computer readable memory storing a program of executable instructions. When executed by a processor, the program results in actions comprising: digitally capturing a first set of samples of a scene with a first array of image sensing nodes while simultaneously digitally capturing a second set of samples of the scene with a second array of image sensing nodes. The image sensing nodes of the second array are oriented in a rotated position relative to the image sensing nodes of the first array. The actions further comprise integrating the first and second sets of samples with one another while correcting for the rotated orientation of the image sensing nodes of the second array relative to the image sensing nodes of the first array, and outputting a high resolution image.

BRIEF DESCRIPTION OF THE DRAWINGS

0012. FIG. 1 is a high level schematic diagram showing arrangement of read-out circuit, pixels and lenslets with respect to a scene being imaged.

0013. FIG. 2 is a schematic diagram illustrating different color photosensors within a pixel array.

0014. FIGS. 3A-C illustrate conceptually the pixel mis-alignment problem.

0015. FIGS. 4A-4B illustrate conceptually two example embodiments of the present invention which minimize the mis-alignment problem of the prior art.

0016. FIG. 5 is a schematic diagram of a pixel array relative to a scene being imaged according to an example embodiment of the invention.

0017. FIG. 6 is a schematic diagram showing relative orientation of four pixel arrays in a host device, and exaggerated portions of a scene captured by a single pixel of each of the arrays.

0018. FIG. 7 shows a more particularized block diagram of a user equipment embodying a camera with pixel arrays arranged according to an exemplary embodiment of the invention and a corresponding super resolution algorithm stored in a memory of the user equipment.

0019. FIG. 8 is a logic flow diagram that illustrates the operation of a method, and a result of execution of computer program instructions embodied on a computer readable memory, in accordance with the exemplary embodiments of this invention.

DETAILED DESCRIPTION

0020. FIG. 1 is a high level schematic diagram showing arrangement of read-out circuit 102, one row of an array of image sensing nodes 104 (specifically, pixels of a CMOS array) and a corresponding row of an array of lenslets 106. The lenslets define the system aperture, and focus light from the scene 108 being imaged to the surface of the photoconducting pixels 104. Typically the array of image sensing nodes 104 and the array of lenslets 106 are rectilinear, each being arranged in rows and columns. FIG. 1 illustrates one lenslet 106 corresponding to one pixel 104, but in some embodiments one lenslet may correspond to more than one pixel. The array of image sensing nodes 104 and/or the array of lenslets 106 may be planar as shown or curved to account for optical effects.
0021. FIG. 2 shows an example embodiment of an imaging system in which there are four parallel cameras 202, 204, 206, 208 which image red, blue and green from the target scene. These parallel cameras each have an array of lenslets and an array of image sensing nodes. In some embodiments there may be a single read-out circuit on which each of the four image sensing node arrays is disposed (e.g., a common CMOS substrate), or each of the image sensing node arrays may have its own read-out circuit, in which case the four cameras are each stand-alone imaging systems. Their individual outputs are combined and integrated via software, such as a super resolution algorithm, to result in a single higher resolution image which is output to a computer readable memory or to a graphical display for viewing by a user.
0022. FIGS. 3A-C illustrate the alignment problem for imaging systems with multiple arrays of image sensing nodes such as is shown at FIG. 2. Each + at FIGS. 3A-C represents a center of focus of a portion of the scene imaged by an individual image sensing node. For example, the scene 108 of FIG. 1 is shown in side view, but FIG. 3A shows the scene as viewed from the array of lenslets 106. One might consider that a generally circular area centered on each + at FIG. 3A is the portion of the overall scene which is captured by any individual image sensing node that corresponds to that portion. It is known in the visible wavelength imaging arts that the size of such a circular area which can be reliably captured is limited, as a function of focal distance. Physical spacing of the individual image sensing nodes in the array is therefore a function of focal length as well as economics (more image sensing nodes are more costly to manufacture).
0023. FIG. 3B is similar to FIG. 3A but illustrates a two-array imaging system. Center-points corresponding to individual image sensing nodes of one array are shown as a solid-line +, and center-points corresponding to individual image sensing nodes of the other array are shown as a dashed-line +. The center-points, and thus the image sensing nodes that define them, are perfectly aligned at FIG. 3B in that there is a center-point corresponding to a node of one array exactly centered among four adjacent center-points corresponding to four nodes of the other array. Overlap of the portion of the scene captured by adjacent center-points is minimized, and therefore the resolution which can be achieved by the super resolution software that integrates all the information captured by both arrays is maximized. This is the ideal.
0024. FIG. 3C is similar to FIG. 3B but illustrates a practical commercial imaging system that is subject to manufacturing error. The center-points of image sensing nodes of one array are not centrally disposed among adjacent center-points corresponding to nodes of the other array. If one were to image circles of focus centered on each of the solid + center-points, there would be substantial overlap with circles centered on each of the dashed + center-points. This overlap represents generally the loss in resolution caused by the mis-aligned arrays of image sensing nodes, as compared to the resolution which could be achieved by the super resolution software if the arrays were perfectly aligned as in FIG. 3B.
0025. FIGS. 4A-B illustrate two different embodiments of the invention, showing center-points on a scene corresponding to image sensing nodes of four different arrays. FIG. 4A illustrates an embodiment in which the four arrays are rotated relative to one another. FIG. 4B illustrates an embodiment in which the size of the portion imaged by the image sensing nodes differs for each of the four arrays. These embodiments can be combined (e.g., different size and rotated as compared to another array of image sensing nodes), and the number of image sensing arrays may be any number greater than one.
0026. Consider FIG. 4A. Center-points corresponding to image sensing nodes of the four different arrays are distinguished by solid + marks, dashed + marks, dotted + marks, and double-line + marks. Consider the solid + marks as corresponding to a first array of image sensing nodes, which we conveniently use as a reference orientation. A second array of image sensing nodes, designated by the dashed + marks, is rotated clockwise approximately 30 degrees as compared to the first array. A third array of image sensing nodes, designated by the double-line + marks, is rotated counter-clockwise approximately 45 degrees as compared to the first array.
0027. A fourth array of image sensing nodes, designated by the dotted + marks, is oriented the same as the first array (rows and columns are parallel as between those two arrays), and the pixel sizes as illustrated appear to be the same (the size of the circle which is the portion of the scene that one pixel captures). In fact, they differ slightly, as will be appreciated from an example below in which one pixel size is 0.9 units and another pixel size is 1.1 units. Note that the dotted + center-points of the fourth array in the FIG. 4A embodiment are not perfectly aligned (e.g., aligned to maximize their combined resolution, see FIG. 3B) with the solid + center-points of the first array. While similar mis-alignment at FIG. 3C was an unintended consequence of manufacturing imprecision, at FIG. 4A it is a purposeful mis-alignment as between the first array and the fourth array, because whether the first and fourth arrays are perfectly aligned or mis-aligned, both the second array (dashed + center-points) and the third array (double-line + center-points) are rotated relative to them both.
0028. Now consider FIG. 4B. Center-points of the four arrays of image sensing nodes are similarly distinguished by solid, dashed, dotted and double-line + marks. The image sensing nodes of these four arrays are not rotated relative to one another; instead, FIG. 4B illustrates that the individual image sensing nodes of the different arrays capture a different size of the scene being imaged, as compared to nodes in other arrays.
0029. Using the same convention as in FIG. 4A for first, second, third and fourth arrays, individual image sensing nodes corresponding to the solid + marks of the first array capture a portion of the scene that is a first size. As is known in the photography arts, this size may be objectively measured using a circle of confusion (CoC) diameter limit, which is often used to calculate depth of field. There are different ways to find the CoC diameter limit, for example the Zeiss formula d/1730, or: anticipated viewing distance (cm) / desired resolution (lp/mm) for a 25 cm viewing distance / anticipated enlargement factor / 25, where 25 cm is used as the standard for the closest comfortable viewing distance at which a person with good vision can usually distinguish an image resolution of 5 line pairs per millimeter (equivalent to a CoC diameter limit of 0.2 mm in the final image).
0030. Individual image sensing nodes of the second array, which correspond to the dashed + marks at FIG. 4B, capture a portion of the scene that is a second size, in this example smaller than the first size. Individual image sensing nodes of the third array, which correspond to the double-line + marks at FIG. 4B, capture a portion of the scene that is a third size, in this example larger than the first size. Finally, individual image sensing nodes of the fourth array, which correspond to the dotted + marks at FIG. 4B, capture a portion of the scene that is a fourth size, in this example nearly the same as the first size. Image sensing nodes of each of the first array (solid + center-points), the second array (dashed + center-points), the third array (double-line + center-points), and the fourth array (dotted + center-points) of FIG. 4B capture a different size portion of the image as compared to nodes in any of the other arrays in FIG. 4B.
0031. From the examples at FIGS. 4A-B it is clear that embodiments of the invention set the image sensing nodes so that the sampling of the scene being imaged is "randomized" regardless of the alignment of the individual sensors. In one implementation the image sensing nodes (or the entire array of them) are rotated. This rotation can be slight or significant, and the 30 and 45 degree rotations at FIG. 4A are both considered significant rotations. This rotation changes the orientation of the image sensing node so that its corresponding sampling of the scene being imaged occurs within a desired rectangular area even though the image sensing node is rotated.
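
To see why rotation keeps two sampling lattices from landing on the same positions, consider the sketch below. It is illustrative only: the grid size, unit pitch and exact angles are assumptions, and sample_centers is a hypothetical helper, not anything from the disclosure.

```python
import math

def sample_centers(n: int, pitch: float, angle_deg: float):
    """Center-points (the + marks of FIGS. 3-4) of an n x n node array,
    rotated about the array center by angle_deg."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    mid = (n - 1) / 2.0
    pts = []
    for row in range(n):
        for col in range(n):
            x, y = (col - mid) * pitch, (row - mid) * pitch  # grid offsets
            pts.append((x * cos_a - y * sin_a,               # standard 2-D rotation
                        x * sin_a + y * cos_a))
    return pts

ref   = sample_centers(4, 1.0, 0.0)    # first array, reference orientation
rot30 = sample_centers(4, 1.0, -30.0)  # second array, ~30 degrees clockwise
rot45 = sample_centers(4, 1.0, 45.0)   # third array, ~45 degrees counter-clockwise

# Count rotated samples that coincide exactly with reference samples.
# However manufacturing shifts the arrays relative to one another, a
# rotated lattice rarely falls onto the reference lattice, so the
# combined sampling stays "randomized" rather than coincident.
hits = sum(1 for p in rot30 if any(math.dist(p, q) < 1e-9 for q in ref))
print(hits)  # 0: no rotated sample repeats a reference sample position
```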
0032. As shown at FIG. 4B, different pixel sizes can also be used to randomize the sampling of the scene. These different sized pixels, which capture different size portions of the scene being imaged, may be disposed on different arrays or on the same array of image sensing nodes. To better randomize with different-size pixels, the resolution from the super resolution algorithm that integrates the information from the different size pixels is maximized when the larger pixels are not simply integer multiples of the size of the smaller pixels. The following examples make this point clearly; for simplicity of explanation they assume that there are only two different size pixels in the imaging system, where pixels of a first size are in a first array and pixels of a second size are in a second array.
0033. Assume that the first size is 1 arbitrary unit. If the second size is 2 units, there are many integer values which yield the same results for both arrays: e.g., 2*1 = 1*2; 4*1 = 2*2; 6*1 = 3*2; etc. This is a poor design from the perspective of increasing resolution regardless of nodes that are not perfectly aligned.
0034. If instead the first size is 1.5 arbitrary units while the second size is 2 units, then there are still many integer values which yield the same result for the two arrays: e.g., 4*1.5 = 3*2; 8*1.5 = 6*2; 12*1.5 = 9*2; etc. There are fewer solutions than in the first example so the design is a bit improved, but not yet sufficiently good.
0035. For a third example, assume that the first size is 0.9 arbitrary units while the second size is 1.1 units. In this case there are very few integer values which yield the same result for both arrays. One example is 33*0.9 = 27*1.1, but since there are relatively few occurrences for this size disparity, this design is a better choice than either of the previous two examples.
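
The pattern in these three examples is easy to verify mechanically. The sketch below is illustrative (the function and the 100-unit span are assumptions); it counts positions where whole numbers of the two pixel sizes coincide, which is exactly when the two arrays' sampling grids repeat each other:

```python
from math import isclose

def coincidences(size_a: float, size_b: float, span: float = 100.0) -> int:
    """Count positions x <= span where m*size_a == n*size_b for integers m, n >= 1.

    Fewer coincidences means the two arrays' sampling grids line up less
    often, i.e. the combined sampling is better randomized.
    """
    hits = 0
    m = 1
    while m * size_a <= span:
        x = m * size_a
        n = round(x / size_b)
        if n >= 1 and isclose(n * size_b, x, abs_tol=1e-9):
            hits += 1
        m += 1
    return hits

print(coincidences(1.0, 2.0))  # 50: grids coincide every 2 units (2*1 = 1*2)
print(coincidences(1.5, 2.0))  # 16: grids coincide every 6 units (4*1.5 = 3*2)
print(coincidences(0.9, 1.1))  # 10: grids coincide every 9.9 units (11*0.9 = 9*1.1)
```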
0036. As noted above, the rotation of image sensing nodes may also be used in combination with nodes that capture different size portions of the scene being imaged. This varies the sampling even more than either option alone.
0037. Once the sensors are properly disposed, whether rotated relative to one another and/or of different pixel sizes, then by calibrating the super resolution algorithm the end result is an improved resolution for the final image as compared to what the prior art would produce under conditions of manufacturing mis-alignment of the pixels in different arrays. This calibration computes the exact location of each sampling pixel, which is simply done because the randomized sampling arises from the rotation or the different size pixels to begin with. Clearly, in some embodiments individual pixels that correspond to one another but belong to different arrays will still sometimes overlap. While this causes a localized drop in resolution (as compared to idealized alignment) at that point of overlap, it is an acceptable outcome because the randomized sampling mitigates the likelihood that portions of the scene will not be imaged at all. In the prior art, such un-sampled portions are filled in by smoothing software, but in fact it remains that there is no actual sampling at those smoothed-over discontinuities. Embodiments of this invention accept a reasonable likelihood of overlap for a much reduced likelihood that there will be un-sampled portions of the scene.
0038. This randomness of sampling can be readily implemented for digital film. A digital film system can be implemented as a lenslet camera utilizing the same approach as detailed above for image capture on a CMOS or CCD hardware array. For the digital film embodiment, the individual nodes of the digital film to which individual lenslets image corresponding portions of the scene stand in the position of the hardware nodes of the above CMOS or CCD implementations.
0039. FIG. 5 illustrates two examples of how a rotated array embodiment of the invention can be implemented in practice. At FIG. 5 there is shown one sensor array 502, which by example is rotated relative to another array. For convenience we assume that other array is aligned with the Cartesian x-y axes of the overall FIG. 5 drawing (e.g., the illustrated array 502 has about a 30 degree clockwise rotation). The area to be imaged, the scene, is shown at FIG. 5 as 504. In a first implementation, the entire sensor array is active but less than all image sensing nodes of the array actually capture any portion of the scene 504. Any information captured at pixels within the outlying sections 506 is filtered out, and only those pixels that capture a portion of the scene itself are integrated with information captured by the other array. If the other sensor (not shown) is matched equally to the scene 504, then clearly the sensor 502 shown at FIG. 5 has a larger optical format than the one not shown.
0040. In another implementation shown at FIG. 5, the parallel slanted lines shown within the outline of the scene 504 represent all the pixels which are active in the overall sensor array 502. In this implementation the sensor 502 is considered to have varying line length; individual pixels/sensor nodes of any individual row can be selectively made active or not active for capturing a particular image 504. Of course, entire rows or columns can be shut off also. In this implementation, the read-out circuitry for the columns, shown as 508 in FIG. 5, receives information only from the active pixels, which are shown in FIG. 5 by the parallel slanted lines. All pixels in the outlying areas 506 are not active and so provide no signal that needs to be filtered.
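
A minimal sketch of that second implementation follows, assuming a square array, unit pixel pitch and the roughly 30 degree rotation from paragraph 0039; the scene bounds and the helper name are illustrative, not from the disclosure. Nodes whose rotated centers fall outside the scene rectangle are simply never activated, so each row ends up with a varying number of active pixels:

```python
import math

def active_pixel_mask(n: int, pitch: float, angle_deg: float,
                      scene_w: float, scene_h: float):
    """Boolean mask over an n x n rotated array: True where the node's
    rotated center lies inside the axis-aligned scene rectangle, so
    nodes in the outlying areas (506) are never read out at all."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    mid = (n - 1) / 2.0
    mask = []
    for row in range(n):
        line = []
        for col in range(n):
            x, y = (col - mid) * pitch, (row - mid) * pitch
            xr, yr = x * cos_a - y * sin_a, x * sin_a + y * cos_a
            line.append(abs(xr) <= scene_w / 2 and abs(yr) <= scene_h / 2)
        mask.append(line)
    return mask

# A rotated array whose footprint is larger than the scene: rows have
# varying numbers of active pixels, the "varying line length" that the
# column read-out circuitry (508) sees.
mask = active_pixel_mask(12, 1.0, 30.0, scene_w=8.0, scene_h=8.0)
for line in mask:
    print("".join("#" if on else "." for on in line))
```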
0041. FIG. 6 illustrates an overview of four sensor arrays disposed according to an exemplary embodiment of these teachings, and a scene for which corresponding nodes of the arrays sample corresponding portions. The four sensor arrays or modules A, B, C and D are arranged in the host device/imaging system substantially as shown at FIG. 6: adjacent to and separate from one another. Each square in each array is an individual image sensing node. There is also an array of lenslets (not shown) between the illustrated arrays A, B, C, D and the viewer, so that each lenslet directs light to one (or more) of the image sensing nodes. In this example embodiment there is a different lenslet array for each sensor array.
0042. Nodes in array A are oriented with the page and are used as an arbitrary reference orientation and size. Nodes in array B are rotated about 50 degrees relative to nodes in array A. Nodes in array C are rotated about 25 degrees relative to nodes in array A. Nodes in array D capture a larger size portion of the scene (larger pixel size) as compared to nodes in array A. Optionally, array D may also be rotated with respect to array A.
0043. For convenience each rotated or different size node is shown as lying in a physically distinct substrate as compared to nodes of array A, but in an embodiment the different nodes may be disposed on the same substrate, and nodes of one like-size or like-orientation array may be interspersed with nodes of a different like-size or like-orientation array. While this is relatively straightforward to do with different size nodes on a common substrate, from a manufacturing perspective it is anticipated to be a bit more costly to manufacture nodes with different rotation orientations on the same substrate, particularly for pixels in a CMOS implementation.
0044. Shown are individual image sensing nodes A1 of array A, B1 of array B, C1 of array C and D1 of array D. These are corresponding nodes because they are positioned so as to capture a similar (at least partially overlapping) portion of the overall scene. At the lower portion of FIG. 6 is the scene which the imaging system captures simultaneously with its four sensor arrays. It is the same scene repeated four times for clarity of description; when the aperture is opened (power applied to the CMOS or removed from the diodes), array A sees scene 602A, array B sees scene 602B, array C sees scene 602C and array D sees scene 602D. Node A1 captures the blanked portion of scene 602A; node B1 captures the blanked portion of scene 602B; node C1 captures the blanked portion of scene 602C; and node D1 captures the blanked portion of scene 602D. It is understood these portions being captured by the single image sensing nodes are exaggerated in size for purposes of describing the image capture. While there is overlap among the portions
