In some embodiments, the digital camera apparatus provides the user with the ability to manually adjust the exposure time directly, similar to adjusting an iris on a conventional film camera.

In some embodiments, the digital camera apparatus employs relative movement between an optics portion (or one or more portions thereof) and a sensor array (or one or more portions thereof) to provide a mechanical iris for use in auto exposure control and/or manual exposure control. As stated above, such movement may be provided, for example, using actuators, e.g., MEMS actuators, by applying appropriate control signal(s) to one or more of the actuators to cause the one or more actuators to move, expand and/or contract and thereby move the associated optics portion.
As with each of the embodiments disclosed herein, the above embodiments may be employed alone or in combination with one or more other embodiments disclosed herein, or portions thereof.

In addition, it should also be understood that the embodiments disclosed herein may also be used in combination with one or more other methods and/or apparatus, now known or later developed.
As mentioned above, the inventions described and illustrated in U.S. Provisional Application Serial No. 60/695,946, entitled "Method and Apparatus for use in Camera and Systems Employing Same", filed July 1, 2005, may be employed in conjunction with the present inventions. For the sake of brevity, those discussions will not be repeated. It is expressly noted that the entire contents of the aforementioned U.S. Provisional Application, including, for example, the features, attributes, alternatives, materials, techniques and/or advantages of all of the inventions/embodiments thereof, are incorporated by reference herein.
The output of the exposure control is supplied to the Auto/Manual focus control portion, which helps make the objects (e.g., the target(s) of an image) that are within the field of view appear in focus. Generally, objects in an image appear blurred if the image is over focused or under focused. The image may have peak sharpness when the lens is at the focus point. In some embodiments, the auto focus control portion detects the amount of blurriness of an image, e.g., while the digital camera apparatus is in a preview mode, and provides control signals that cause the lens assembly to move back and forth until the auto focus control portion determines that the lens is at the focus point. Many of the digital still cameras available today utilize this type of mechanism.
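The preview-mode search described above can be sketched as a contrast-based auto focus loop. This is an illustrative sketch only; the frames, the gradient-energy sharpness metric and the function names are assumptions for illustration, not details taken from the text.

```python
def sharpness(frame):
    """Score a frame by gradient energy: the sum of squared differences
    between horizontally adjacent pixels. Blurred frames score low."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in frame
               for i in range(len(row) - 1))

def best_focus(frames_by_position):
    """Given {lens_position: 2-D pixel list}, return the position whose
    preview frame is sharpest, i.e. the estimated focus point."""
    return max(frames_by_position,
               key=lambda pos: sharpness(frames_by_position[pos]))

# Simulated preview frames: position 2 contains the strongest edge.
frames = {
    1: [[10, 12, 14, 16]],    # heavily blurred ramp
    2: [[0, 0, 255, 255]],    # sharp edge
    3: [[0, 60, 190, 255]],   # partially blurred edge
}
print(best_focus(frames))  # -> 2
```

In a real loop the control signals would step the actuators, capture a fresh preview frame at each position, and stop once the sharpness score peaks.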
Page 158
Ex.1030 / Page 456 of 1435
TESLA, INC.
In some embodiments, the auto/manual focus portion is adapted to help increase the Depth of Focus of the digital camera apparatus. Depth of Focus can be viewed as a measure of how much an object that is in focus within a field of view can be moved forward or backward before the object becomes "out of focus". Depth of Focus is based at least in part on the lens employed in the optical portion. Some embodiments employ one or more optical filters in combination with one or more algorithms to increase the Depth of Focus. The optical filter or filters may be conventional optical filters for increasing Depth of Focus and may be disposed superjacent (on or above) the top of the lens, although this is not required. Any type of optical filter and positioning thereof may be employed. Similarly, the algorithm or algorithms may be a conventional wavefront encoding algorithm, although this is not required. Any type of algorithm or algorithms may be employed. In some embodiments, the auto focus mechanism increases the Depth of Focus by a factor of ten (e.g., the Depth of Focus provided with the auto focus mechanism is ten times as large as the Depth of Focus of the lens alone, without the auto focus mechanism), to make the system less sensitive or insensitive to the position of objects within a field of view. In some embodiments, the auto focus mechanism increases the Depth of Focus by a factor of twenty or more (e.g., the Depth of Focus provided with the auto focus mechanism is twenty times as large as the Depth of Focus of the lens alone, without the auto focus mechanism), to further decrease the sensitivity to the position of the object within a field of view and/or to make the system insensitive to the position of objects within a field of view.
In some embodiments, the digital camera apparatus may provide the user with the ability to manually adjust the focus.
In some embodiments, the digital camera apparatus employs relative movement between an optics portion (or one or more portions thereof) and a sensor array (or one or more portions thereof) to help provide an auto focus and/or manual focus. As stated above, such movement may be provided, for example, using actuators, e.g., MEMS actuators, by applying appropriate control signal(s) to one or more of the actuators to cause the one or more actuators to move, expand and/or contract to thereby move the associated optics portion. (See, for example, U.S. Provisional Application Serial No. 60/695,946, entitled "Method and Apparatus for use in Camera and Systems Employing Same", filed July 1, 2005, which is again incorporated by reference.)

The auto/manual focus is not limited to the above embodiments. Indeed, any other type of auto/manual focus now known or later developed may be employed.
In addition, as with each of the embodiments disclosed herein, the above embodiments may be employed alone or in combination with one or more other embodiments disclosed herein, or portions thereof.

It should be understood that each of the embodiments disclosed herein may also be used in combination with one or more other methods and/or apparatus, now known or later developed.

It should also be understood that auto focus and manual focus are not required. Further, the focus portion may provide auto focus without regard to whether the ability to manually focus is provided. Similarly, the focus portion may provide manual focus without regard to whether the ability to auto focus is provided.

The output of the auto focus control is supplied to the zoom controller.
Figure 111S is a schematic block diagram of one embodiment of the zoom controller, which may, for example, help provide "optical zoom" and/or "digital zoom" capability. The optical zoom may be any type of optical zoom now known or later developed. An example of conventional optical zoom (which moves the one or more lens elements backward and forward) is described hereinabove. Similarly, the digital zoom may be any type of digital zoom now known or later developed. Note that the determination of the desired zoom window may be predetermined, processor controlled and/or user controlled.

One drawback to digital zooming is a phenomenon referred to as aliasing. For example, when a television anchor on a news channel wears a striped tie, the television image of the striped tie sometimes includes color phenomena that do not appear on the actual tie. Aliasing of this type is common when a system does not have sufficient resolution to accurately represent one or more features of an object within the field of view. In the above example, the television camera does not have enough resolution to accurately capture the striped pattern on the tie.
In some embodiments, the digital camera apparatus employs relative movement between an optics portion (or one or more portions thereof) and a sensor array (or one or more portions thereof) to help increase resolution, thereby helping to reduce and/or minimize aliasing that might otherwise occur as a result of digital zooming. As stated above, such relative movement may be provided, for example, using actuators, e.g., MEMS actuators, by applying appropriate control signal(s) to one or more of the actuators to cause the one or more actuators to move, expand and/or contract to thereby move the associated optics portion.
In some embodiments, for example, an image is captured and an optics portion is thereafter moved in the x direction by a distance equal to ½ of the width of a pixel. An image is captured with the optics in the new position. The captured images may be combined to increase the effective resolution. In some embodiments, the optics portion is moved in the y direction instead of the x direction. In some other embodiments, the optics portion is moved in the x direction and the y direction and an image is captured at such position. In further embodiments, an image is captured at all four positions (i.e., no movement, moved in the x direction, moved in the y direction, moved in the x direction and the y direction) and the images are then combined to further increase the resolution and further help reduce, minimize and/or eliminate aliasing as a result of zooming. For example, by doubling the resolution, it may be possible to zoom in by a factor of two without significantly increasing the aliasing.
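The four-position capture can be combined by interleaving the half-pixel-shifted frames into a grid with twice the pixel density in each direction. The function below is a sketch of that interleaving (its name is assumed); practical reconstruction may be more elaborate.

```python
def combine_half_pixel_shifts(base, x_shift, y_shift, xy_shift):
    """Interleave four equally sized captures (no shift, x shift, y shift,
    x+y shift, each by half a pixel) into one image with 2x resolution."""
    rows, cols = len(base), len(base[0])
    out = [[0] * (2 * cols) for _ in range(2 * rows)]
    for j in range(rows):
        for i in range(cols):
            out[2 * j][2 * i] = base[j][i]               # original grid
            out[2 * j][2 * i + 1] = x_shift[j][i]        # between columns
            out[2 * j + 1][2 * i] = y_shift[j][i]        # between rows
            out[2 * j + 1][2 * i + 1] = xy_shift[j][i]   # diagonal
    return out

print(combine_half_pixel_shifts([[1]], [[2]], [[3]], [[4]]))  # -> [[1, 2], [3, 4]]
```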
In some embodiments, the relative movement is in the form of a 1/3 pixel x 1/3 pixel pitch shift in a 3 x 3 format. In some embodiments, it may be desirable to employ a reduced optical fill factor.
In some embodiments, one or more of the sensor arrays provides enough resolution to allow the digital camera apparatus to perform digital zoom without excessive aliasing. For example, if an embodiment requires 640x480 pixels for each image, with or without zoom, one or more of the sensor arrays may be provided with 1280x1024 pixels. In such embodiment, such sensor portion(s) have enough pixels to provide the digital camera apparatus with the resolution needed to zoom in on ¼ of the image and yet still provide the required resolution of 640x480 pixels (e.g., ½ x 1280 = 640, ½ x 1024 = 512).
Figures 111T-111V are explanatory views of a process carried out by a zoom portion of a digital camera apparatus in accordance with one such embodiment of the present invention. In some embodiments, the subsystem may use only ¼ of the pixels (e.g., ½ x 1280 = 640, ½ x 1024 = 512) when not in zoom mode, or may employ downsampling to reduce the number of pixels. In some other of such embodiments, the digital camera apparatus outputs all of the pixels, e.g., 1280x1024, even when not in zoom mode. The determination as to how many pixels to use and the number of pixels to output when not in zoom mode may be predetermined, processor controlled and/or user controlled.
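One way to realize the quarter-image digital zoom from the example above is to crop a centered window out of the higher-resolution frame. The helper below is a sketch (its name is assumed) using the 1280x1024 and 640x512 figures from the text.

```python
def crop_center(image, out_h, out_w):
    """Return the centered out_h x out_w window of a 2-D pixel list,
    e.g. the central quarter-area of a higher-resolution sensor frame."""
    h, w = len(image), len(image[0])
    top, left = (h - out_h) // 2, (w - out_w) // 2
    return [row[left:left + out_w] for row in image[top:top + out_h]]

# A 1280x1024 frame zoomed to its central quarter still yields >= 640x480.
sensor = [[r * 1280 + c for c in range(1280)] for r in range(1024)]
window = crop_center(sensor, 512, 640)
print(len(window), len(window[0]))  # -> 512 640
```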
The output of the zoom controller is supplied to the gamma correction portion, which helps to map the values received from the camera channels into values that more closely match the dynamic range characteristics of a display device (e.g., a liquid crystal display or cathode ray tube device). The values from the camera channels are based, at least in part, on the dynamic range characteristics of the sensor, which often do not match the dynamic range characteristics of the display device. The mapping provided by the gamma correction portion helps to compensate for the mismatch between the dynamic ranges.

Figure 111W is a graphical representation showing an example of the operation of the gamma correction portion.
Figure 111X shows one embodiment of the gamma correction portion. In this embodiment, the gamma correction portion employs a conventional transfer function to provide gamma correction. The transfer function may be any type of transfer function including a linear transfer function, a non-linear transfer function and/or combinations thereof. The transfer function may have any suitable form including but not limited to one or more equations, lookup tables and/or combinations thereof. The transfer function may be predetermined, adaptively determined and/or combinations thereof.
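As a concrete instance of a lookup-table transfer function, the sketch below precomputes a power-law curve for 8-bit values and applies it per pixel. The 1/2.2 exponent is an assumption for illustration; the text does not fix a particular curve.

```python
GAMMA = 2.2  # assumed display gamma; not specified in the text
LUT = [round(255 * (v / 255) ** (1 / GAMMA)) for v in range(256)]

def gamma_correct(pixels):
    """Map each 8-bit value through the precomputed transfer function."""
    return [LUT[v] for v in pixels]

print(gamma_correct([0, 255]))  # -> [0, 255] (endpoints are preserved)
```

A table lookup per pixel is cheap in hardware, which is one reason the text mentions lookup tables alongside equations as valid forms of the transfer function.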
The output of the gamma correction portion is supplied to the color correction portion, which helps to map the output of the camera into a form that matches the color preferences of a user.

In this embodiment, the color correction portion generates corrected color values using a correction matrix that contains a plurality of reference values to implement color preferences as follows (the correction matrix contains sets of parameters that are defined, for example, by the user and/or the manufacturer of the digital camera):

    Rc     Rr Gr Br     R
    Gc  =  Rg Gg Bg  x  G
    Bc     Rb Gb Bb     B
such that:

    Rcorrected = (Rr x Run-corrected) + (Gr x Gun-corrected) + (Br x Bun-corrected),
    Gcorrected = (Rg x Run-corrected) + (Gg x Gun-corrected) + (Bg x Bun-corrected), and
    Bcorrected = (Rb x Run-corrected) + (Gb x Gun-corrected) + (Bb x Bun-corrected)

where
Rr is a value indicating the relationship between the output values from the red camera channel and the amount of red light desired from the display device in response thereto,

Gr is a value indicating the relationship between the output values from the green camera channel and the amount of red light desired from the display device in response thereto,

Br is a value indicating the relationship between the output values from the blue camera channel and the amount of red light desired from the display device in response thereto,

Rg is a value indicating the relationship between the output values from the red camera channel and the amount of green light desired from the display device in response thereto,

Gg is a value indicating the relationship between the output values from the green camera channel and the amount of green light desired from the display device in response thereto,

Bg is a value indicating the relationship between the output values from the blue camera channel and the amount of green light desired from the display device in response thereto,

Rb is a value indicating the relationship between the output values from the red camera channel and the amount of blue light desired from the display device in response thereto,

Gb is a value indicating the relationship between the output values from the green camera channel and the amount of blue light desired from the display device in response thereto, and

Bb is a value indicating the relationship between the output values from the blue camera channel and the amount of blue light desired from the display device in response thereto.
Figure 111Y shows one embodiment of the color correction portion. In this embodiment, the color correction portion includes a red color correction circuit, a green color correction circuit and a blue color correction circuit.
The red color correction circuit includes three multipliers. The first multiplier receives the red value (e.g., PAn) and the transfer characteristic Rr and generates a first signal indicative of the product thereof. The second multiplier receives the green value (e.g., PBn) and the transfer characteristic Gr and generates a second signal indicative of the product thereof. The third multiplier receives the blue value (e.g., PCn) and the transfer characteristic Br and generates a third signal indicative of the product thereof. The first, second and third signals are supplied to an adder which produces a sum that is indicative of a corrected red value (e.g., PAn corrected).

The green color correction circuit includes three multipliers. The first multiplier receives the red value (e.g., PAn) and the transfer characteristic Rg and generates a first signal indicative of the product thereof. The second multiplier receives the green value (e.g., PBn) and the transfer characteristic Gg and generates a second signal indicative of the product thereof. The third multiplier receives the blue value (e.g., PCn) and the transfer characteristic Bg and generates a third signal indicative of the product thereof. The first, second and third signals are supplied to an adder which produces a sum indicative of a corrected green value (e.g., PBn corrected).

The blue color correction circuit includes three multipliers. The first multiplier receives the red value (e.g., PAn) and the transfer characteristic Rb and generates a first signal indicative of the product thereof. The second multiplier receives the green value (e.g., PBn) and the transfer characteristic Gb and generates a second signal indicative of the product thereof. The third multiplier receives the blue value (e.g., PCn) and the transfer characteristic Bb and generates a third signal indicative of the product thereof. The first, second and third signals are supplied to an adder which produces a sum indicative of a corrected blue value (e.g., PCn corrected).
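Each of the three circuits computes the dot product of the pixel's (R, G, B) values with one row of the correction matrix: three multiplies feeding an adder. A per-pixel sketch (the function name is assumed):

```python
def correct_pixel(r, g, b, m):
    """Apply a 3x3 correction matrix m = [[Rr, Gr, Br],
                                          [Rg, Gg, Bg],
                                          [Rb, Gb, Bb]].
    Each output channel is three products summed, mirroring the
    three-multiplier-plus-adder circuits described above."""
    rc = m[0][0] * r + m[0][1] * g + m[0][2] * b
    gc = m[1][0] * r + m[1][1] * g + m[1][2] * b
    bc = m[2][0] * r + m[2][1] * g + m[2][2] * b
    return rc, gc, bc

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(correct_pixel(10, 20, 30, identity))  # -> (10, 20, 30)
```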
The output of the color corrector is supplied to the edge enhancer/sharpener, the purpose of which is to help enhance features that may appear in an image.

Figure 111Z shows one embodiment of the edge enhancer/sharpener. In this embodiment, the edge enhancer/sharpener comprises a high pass filter that is applied to extract the details and edges, and the extracted information is applied back to the original image.
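A common realization of this scheme is unsharp masking: a high-pass copy (the original minus a blurred version) is scaled and added back. The 1-D, 3-tap sketch below is illustrative only, not the circuit of Figure 111Z.

```python
def sharpen(signal, amount=1.0):
    """Extract detail with a high-pass filter (signal minus a 3-tap mean
    blur) and add the scaled detail back onto the original samples."""
    blurred = list(signal)
    for i in range(1, len(signal) - 1):
        blurred[i] = (signal[i - 1] + signal[i] + signal[i + 1]) / 3
    detail = [s - b for s, b in zip(signal, blurred)]
    return [s + amount * d for s, d in zip(signal, detail)]

# The step edge gains overshoot on both sides, so it reads as sharper.
print(sharpen([0, 0, 90, 90]))  # -> [0.0, -30.0, 120.0, 90.0]
```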
The output of the edge enhancer/sharpener is supplied to a random noise reduction portion, which reduces random noise in the image. Random noise reduction may include, for example, a linear or non-linear low pass filter with adaptive and edge preserving features. Such noise reduction may look at the local neighborhood of the pixel under consideration. In the vicinity of edges, the low pass filtering may be carried out in the direction of the edge so as to prevent blurring of such edge. Some embodiments may apply an adaptive scheme. For example, a low pass filter (linear and/or non-linear) with a neighborhood of relatively large size may be employed for smooth regions. In the vicinity of edges, a low pass filter (linear and/or non-linear) and a neighborhood of smaller size may be employed, for example, so as not to blur such edges.

Other random noise reduction may also be employed, if desired, alone or in combination with one or more embodiments disclosed herein. In some embodiments, random noise reduction is carried out in the channel processor, for example, after deviant pixel correction. Such noise reduction may be in lieu of, or in addition to, any random noise reduction that may be carried out in the image pipeline.
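A minimal form of the adaptive, edge-preserving idea: smooth each sample with its neighbours only where the local gradient is small, and leave it untouched near an edge. The 1-D form and the threshold value are assumptions chosen purely for illustration.

```python
def denoise(signal, edge_threshold=40):
    """Edge-preserving low pass: replace a sample with a 3-tap mean only
    when its neighbours differ by less than edge_threshold, so genuine
    edges stay sharp while flat-region noise is averaged away."""
    out = list(signal)
    for i in range(1, len(signal) - 1):
        if abs(signal[i + 1] - signal[i - 1]) < edge_threshold:
            out[i] = (signal[i - 1] + signal[i] + signal[i + 1]) / 3
    return out

print(denoise([10, 13, 10]))      # -> [10, 11.0, 10] (noise smoothed)
print(denoise([0, 0, 100, 100]))  # -> [0, 0, 100, 100] (edge preserved)
```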
The output of the random noise reduction portion is supplied to the chroma noise reduction portion, the purpose of which is to reduce color noise.

Figure 111AA shows one embodiment of the chroma noise reduction portion. In this embodiment, the chroma noise reduction portion includes an RGB to YUV converter, first and second low pass filters and a YUV to RGB converter. The output of the random noise reduction portion, which is a signal in the form of RGB values, is supplied to the RGB to YUV converter, which generates a sequence of YUV values in response thereto, each YUV value being indicative of a respective one of the RGB values.

The Y values or components (which indicate the brightness of an image) are supplied to the YUV to RGB converter. The U and V values or components (which indicate the color components of the image) are supplied to the first and second low pass filters, respectively, which reduce the color noise on the U and V components, respectively. The outputs of the filters are supplied to the YUV to RGB converter, which generates a sequence of RGB values in response thereto, each RGB value being indicative of a respective one of the YUV values.
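A sketch of that structure: convert RGB to a luma/chroma form, mean-filter only the two chroma sequences, and convert back. The BT.601-style luma weights and the unscaled U = B - Y, V = R - Y differences are assumptions; the text only requires some RGB/YUV converter and a pair of low pass filters.

```python
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # brightness (luma)
    return y, b - y, r - y                  # U = B - Y, V = R - Y

def yuv_to_rgb(y, u, v):
    # Exact algebraic inverse of rgb_to_yuv for these weights.
    return v + y, y - (0.114 * u + 0.299 * v) / 0.587, u + y

def smooth(xs):
    """3-tap mean low pass filter; endpoints pass through."""
    mid = [(xs[i - 1] + xs[i] + xs[i + 1]) / 3 for i in range(1, len(xs) - 1)]
    return [xs[0]] + mid + [xs[-1]]

def reduce_chroma_noise(pixels):
    """Low-pass the U and V sequences while leaving Y (brightness) alone."""
    ys, us, vs = zip(*(rgb_to_yuv(*p) for p in pixels))
    us, vs = smooth(list(us)), smooth(list(vs))
    return [yuv_to_rgb(y, u, v) for y, u, v in zip(ys, us, vs)]
```

Filtering only U and V blurs color speckle without softening luminance detail, which is why the Y path bypasses the filters in Figure 111AA.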
The output of the chroma noise reduction portion is supplied to the Auto/Manual white balance portion, the purpose of which is to help make sure that a white colored target appears as a white colored target, rather than reddish, greenish, or bluish.
Figure 111AB is an explanatory view showing a representation of a process carried out by the white balance portion in one embodiment. More particularly, Figure 111AB depicts a rectangular coordinate plane having an R/G axis and a B/G axis. The rectangular coordinate plane has three regions, i.e., a reddish region, a white region and a bluish region. A first reference line defines a color temperature that separates the reddish region from the white region. A second reference line defines a color temperature that separates the white region from the bluish region. The first reference line is disposed, for example, at a color temperature of 4700 Kelvin. The second reference line is disposed, for example, at a color temperature of 7000 Kelvin.
In this embodiment, the automatic white balance portion determines the positions, in the rectangular coordinate plane defined by the R/G axis and the B/G axis, of a plurality of pixels that define the original image. The positions of the plurality of pixels are treated as representing a cluster of points in the rectangular coordinate plane. The automatic white balance portion determines a center of the cluster of points and the changes that could be applied to the R, G, B pixel values of the original image to effectively translate the center of the cluster into the white region of the coordinate plane, e.g., to a color temperature of 6500 Kelvin. The output of the automatic white balance portion is an output image in which each pixel value is based on the corresponding pixel value of the original image and the changes that had been determined to translate the center of the cluster for the original image into the white region, such that the center of a cluster for the output image is disposed in the white region of the coordinate plane, e.g., at a color temperature of 6500 Kelvin.

The desired color temperature may be predetermined, processor controlled and/or user controlled. In some embodiments, for example, a reference value indicative of a desired color temperature is supplied by the user so that images provided by the digital camera apparatus will have color temperature characteristics desired by the user. In such embodiments, manual white balance may be performed by determining the changes that could be applied to translate the center of the cluster for the original image to a color temperature corresponding to a reference value provided by the user.
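A gray-world sketch of this cluster-translation idea: take the mean R/G and B/G ratios as the cluster centre and scale the R and B channels so that centre moves to the neutral point R/G = B/G = 1. Mapping the target to a specific color temperature such as 6500 Kelvin would additionally require a sensor characterization, which this sketch omits.

```python
def white_balance(pixels, target=1.0):
    """Scale R and B so the mean R/G and B/G ratios (the cluster centre
    in the R/G-B/G plane) land on the neutral point `target`."""
    rg = sum(r / g for r, g, b in pixels) / len(pixels)
    bg = sum(b / g for r, g, b in pixels) / len(pixels)
    return [(r * target / rg, g, b * target / bg) for r, g, b in pixels]

# A reddish cast (R/G = 2.0, B/G = 0.5) is pulled back to neutral grey.
print(white_balance([(200, 100, 50), (220, 110, 55)]))
# -> [(100.0, 100, 100.0), (110.0, 110, 110.0)]
```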
The white balance strategy may use, for example, one or more conventional color enhancement algorithms, now known or later developed.

It should be understood that the white balance portion is not limited to the techniques set forth above. Indeed, the white balance portion may employ any white balance technique now known or later developed. It should also be understood that white balance is not required.

The output of the white balance portion is supplied to the Auto/Manual color enhancement portion.
Figure 111AC is a block diagram of one embodiment of the color enhancement portion. In this embodiment, the color enhancement portion adjusts the brightness, contrast and/or saturation to enhance the color appearance in accordance with one or more enhancement strategies. This process is similar in some respects to adjusting the color settings of a TV or computer monitor. Some embodiments may also adjust the hue. The enhancement strategy may use, for example, one or more conventional color enhancement algorithms, now known or later developed.
Referring to Figure 111AC, data indicative of the image is supplied to the brightness enhancement portion, which further receives an adjustment value and generates output data indicative of an image adjusted for brightness in accordance therewith. In this embodiment, each pixel value in the output image is equal to the sum of an adjustment value and a corresponding pixel in the input image. The adjustment value may be predetermined, processor controlled and/or user controlled. In some embodiments, for example, the adjustment value is supplied by the user so that images provided by the digital camera apparatus will have the characteristics desired by the user. In some embodiments, an adjustment value having a positive magnitude makes the output image appear brighter than the input image. An adjustment value having a negative magnitude may make the output image appear darker than the input image.
The output of the brightness enhancement portion is supplied to the contrast enhancement portion, which further receives an adjustment value and generates an output image adjusted for contrast in accordance therewith. In this embodiment, contrast adjustment can be viewed as "stretching" the distance between dark (e.g., indicated by a pixel value having a small magnitude) and light (e.g., indicated by a pixel value having a large magnitude). An adjustment value having a positive magnitude makes dark areas in the input image appear darker in the output image and makes light areas in the input image appear lighter in the output image. An adjustment value having a negative magnitude may have the opposite effect. One or more conventional algorithms, for example, now known or later developed may be employed. The adjustment value may be predetermined, processor controlled and/or user controlled. In some embodiments, for example, the adjustment value is supplied by the user so that images provided by the digital camera apparatus will have the characteristics desired by the user.
The output of the contrast enhancement portion is supplied to the saturation enhancement portion, which further receives an adjustment value and generates an output image adjusted for saturation in accordance therewith. In this embodiment, saturation adjustment can be viewed as "stretching" the distance between the R, G, B components of a pixel (which is similar in some respects to contrast adjustment). An adjustment value having a positive magnitude makes dark areas in the input image appear darker in the output image and makes light areas in the input image appear lighter in the output image. An adjustment value having a negative magnitude may have the opposite effect. One or more conventional techniques, for example, now known or later developed may be employed. The technique may employ a color correction matrix, for example, similar to that employed by the color correction portion described hereinabove. The adjustment value may be predetermined, processor controlled and/or user controlled. In some embodiments, for example, the adjustment value is supplied by the user so that images provided by the digital camera apparatus will have the characteristics desired by the user.
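The three adjustments can be sketched per pixel as follows. The mid-grey pivot of 128, the amount/128 scaling of the adjustment value and the clamp to the 8-bit range are assumptions for illustration; the text leaves those details open.

```python
def clamp(v):
    """Keep a channel value inside the 8-bit range."""
    return max(0, min(255, v))

def adjust_brightness(pixel, amount):
    """Add the adjustment value to every channel (negative darkens)."""
    return tuple(clamp(c + amount) for c in pixel)

def adjust_contrast(pixel, amount, pivot=128):
    """'Stretch' channel values away from a mid-grey pivot."""
    scale = 1 + amount / 128
    return tuple(clamp(round(pivot + (c - pivot) * scale)) for c in pixel)

def adjust_saturation(pixel, amount):
    """'Stretch' the R, G, B components away from the pixel's own mean."""
    mean = sum(pixel) / 3
    scale = 1 + amount / 128
    return tuple(clamp(round(mean + (c - mean) * scale)) for c in pixel)

print(adjust_brightness((100, 100, 100), 20))   # -> (120, 120, 120)
print(adjust_contrast((64, 192, 128), 64))      # -> (32, 224, 128)
print(adjust_saturation((100, 150, 200), 64))   # -> (75, 150, 225)
```

Note how saturation uses the pixel's own mean as the pivot where contrast uses a fixed grey level, which is the "similar in some respects" relationship the text points out.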
It should be understood that the color enhancement portion is not limited to the enhancement techniques set forth above. Indeed, the color enhancement portion may employ any enhancement technique now known or later developed. It should also be understood that color enhancement is not required.
The output of the Auto/Manual color enhancement portion is supplied to the image scaling portion, the purpose of which is to reduce or enlarge the image, for example, by removing or adding pixels to adjust the size of an image.
The image scaling portion receives data indicative of an image to be scaled (e.g., enlarged or reduced). The magnitude of the scaling may be predetermined or preset, processor controlled or manually controlled. In some embodiments, a signal indicative of the magnitude of the scaling, if any, is received. If the signal indicative of the desired scaling magnitude indicates that the image is to be enlarged, then the scaling portion performs upscaling. If the signal indicative of the desired scaling magnitude indicates that the image is to be reduced, then the scaling portion performs downscaling.
Figures 111AD-111AE are a schematic block diagram and an explanatory view, respectively, showing a representation of upscaling in accordance with one embodiment. More particularly, Figure 111AE depicts a portion of an image to be enlarged and a portion of the image to be formed therefrom. In this example, the portion of the image to be enlarged includes nine pixels, indicated for purposes of explanation as P11-P33, shown arranged in an array having three rows and three columns. The portion of the image to be formed therefrom includes twenty-five pixels, indicated for purposes of explanation as A-Y, shown arranged in an array having five rows and five columns. (Note that the portion of the image to be formed could alternatively be represented as P11-P55.)
In this embodiment, the image scaling portion employs an upscaling strategy in which the pixel values at the intersection of an odd numbered column and an odd numbered row, i.e., A, C, E, K, M, O, U, W and Y, are taken from the pixel values in the image to be enlarged. For example,

    A = P11
    C = P21
    E = P31
    K = P12
    M = P22
    O = P32
    U = P13
    W = P23
    Y = P33
The other pixel values, i.e., pixel values disposed in either an even numbered column or an even numbered row, i.e., B, D, F, G, H, I, J, L, N, P, Q, R, S, T, V and X, are generated by interpolation. Each pixel value is generated based on two or more adjacent pixel values, for example,

    B = (A + C)/2
    D = (C + E)/2
    F = (A + K)/2
    H = (C + M)/2
    J = (E + O)/2
    L = (K + M)/2
    N = (M + O)/2
    P = (K + U)/2
    R = (M + W)/2
    T = (O + Y)/2
    V = (U + W)/2
    X = (W + Y)/2

    G = (B + L)/2
    I = (D + N)/2
    Q = (L + V)/2
    S = (N + X)/2
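The copy-and-interpolate scheme above, including the second pass that fills the remaining centres G, I, Q and S from already-interpolated neighbours, translates directly into code (the function name is assumed):

```python
def upscale_3x3_to_5x5(src):
    """Upscale a 3x3 pixel block to 5x5 per the scheme above: copy source
    pixels to every other position, then average the gaps."""
    out = [[0.0] * 5 for _ in range(5)]
    # Odd-row/odd-column positions (A, C, E, K, M, O, U, W, Y) are taken
    # directly from the source, e.g. A = P11.
    for r in range(3):
        for c in range(3):
            out[2 * r][2 * c] = src[r][c]
    # Horizontal gaps on the copied rows, e.g. B = (A + C)/2.
    for r in (0, 2, 4):
        for c in (1, 3):
            out[r][c] = (out[r][c - 1] + out[r][c + 1]) / 2
    # Whole in-between rows, e.g. F = (A + K)/2 and G = (B + L)/2.
    for r in (1, 3):
        for c in range(5):
            out[r][c] = (out[r - 1][c] + out[r + 1][c]) / 2
    return out

out = upscale_3x3_to_5x5([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(out[0])  # -> [1, 1.5, 2, 2.5, 3]
```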
In some embodiments, upscaling increases the number of pixels from 640x480 pixels to 1280x1024 pixels; however, any magnitude of upscaling may be employed. In some embodiments, the digital camera apparatus provides the user with the ability to determine whether upscaling is to be performed and, if so, the magnitude of the upscaling.

In some embodiments, the scaling portion employs one or more of the techniques described herein for the zoom controller, with or without cropping.

It should be understood that the scaling portion is not limited to the upscaling strategy set forth above. Indeed, the scaling portion may employ any upscaling technique now known or later developed. It should also be understood that upscaling is not required.

The scaling portion may have the ability to downscale, without regard to whether the scaling portion has the ability to upscale. In some embodiments, downscaling decreases the number of pixels from 1280x1024 pixels to 640x480 pixels; however, any magnitude of downscaling may be employed. In some embodiments, the digital camera apparatus provides the user with the ability to determine whether downscaling is to be performed and, if so, the magnitude of the downscaling.

It should be understood that any downscaling technique now known or later developed may be employed. It should also be understood that downscaling is not required.
The output of the image scaling portion is supplied to the color space conversion portion, the purpose of which is to convert the color format from RGB to YCrCb or YUV for compression. In this embodiment, the conversion is accomplished using the following equations:

    Y      =  (0.257 * R) + (0.504 * G) + (0.098 * B) + 16
    Cr = V =  (0.439 * R) - (0.368 * G) - (0.071 * B) + 128
    Cb = U = -(0.148 * R) - (0.291 * G) + (0.439 * B) + 128
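The equations translate directly into code; the rounding and the sample values below are illustrative additions.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel using the equations above.
    Returns (Y, Cb, Cr)."""
    y = (0.257 * r) + (0.504 * g) + (0.098 * b) + 16
    cr = (0.439 * r) - (0.368 * g) - (0.071 * b) + 128
    cb = -(0.148 * r) - (0.291 * g) + (0.439 * b) + 128
    return round(y), round(cb), round(cr)

print(rgb_to_ycbcr(255, 255, 255))  # -> (235, 128, 128): white is peak luma, neutral chroma
print(rgb_to_ycbcr(0, 0, 0))        # -> (16, 128, 128): black is floor luma, neutral chroma
```

Separating luma from chroma this way is what lets the downstream compressor allocate fewer bits to the color components, to which the eye is less sensitive.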
The output of the color space conversion portion is supplied to the image compression portion of the post processor. The purpose of the image compression portion is to reduce the size of the image file. This may be accomplished, for example, using an off the
