(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2004/0218086 A1
     VOSS et al.                       (43) Pub. Date: Nov. 4, 2004

(54) SYSTEM AND METHOD FOR PROVIDING CAMERA FOCUS FEEDBACK

(76) Inventors: James S. Voss, Fort Collins, CO (US); Jim Owens, Fort Collins, CO (US)

     Correspondence Address:
     HEWLETT-PACKARD DEVELOPMENT COMPANY
     Intellectual Property Administration
     P.O. BOX 272400
     Fort Collins, CO 80527-2400 (US)

(21) Appl. No.: 10/428,243

(22) Filed: May 2, 2003

     Publication Classification

(51) Int. Cl.7 ..................................................... H04N 5/232
(52) U.S. Cl. .............................................................. 348/345

(57) ABSTRACT

Disclosed are systems and methods for providing feedback to a user. In one embodiment, a system and a method pertain to analyzing levels of focus of discrete portions of an image, evaluating a relative focus of the image portions, and identifying to a user the image portions having the highest level of focus.

[Representative front-page drawing: block diagram of camera 100 showing lens system 200, image sensor(s), sensor drivers 204, user interface 213, camera control interface 210, autofocus system 218 with focusing algorithm(s) 220, focus feedback system 222, storage memory 224, and device interface 226.]

[FIG. 1 (Sheet 1 of 8): rear perspective view of an embodiment of an example camera that provides focus feedback; compartment 116 is labeled.]

[FIG. 2 (Sheet 2 of 8): block diagram of the camera architecture, printed in rotated orientation; legible labels include lens system 200, image sensor(s), sensor drivers 204, motor(s) 212, and device interface.]

[FIG. 3 (Sheet 3 of 8): flow diagram of the focus feedback system 222 — analyze focus of discrete portions of image; evaluate relative focus of the discrete image portions; identify to user the image portions of greatest focus.]

[FIG. 4A (Sheet 4 of 8): flow diagram (autofocus system 218 / focus feedback system 222) — start; focus on viewed scene to form a focused image (400); identify discrete portions of the focused image (402); analyze level of focus of each discrete portion of the focused image (404); evaluate relative focus of discrete portions of the focused image (406); generate graphical indicia for display that identify the image portions having the greatest focus (408); image captured? (decision block).]

[FIG. 4B (Sheet 5 of 8): flow diagram continued — identify discrete portions of the captured image (412); analyze level of focus of each discrete portion of the captured image (414); evaluate relative focus of discrete portions of the captured image (416); generate graphical indicia for display that identify the image portions having the greatest focus (418).]

[FIG. 5A and FIG. 5B (Sheet 6 of 8): schematic views of a displayed image and of the same image overlaid with an array or grid 514 of rectangles 512; person 504 is labeled.]

[FIG. 5C and FIG. 5D (Sheet 7 of 8): the image 502 overlaid with grid 514, with dots 516 (FIG. 5C) and shading 518 (FIG. 5D) marking the rectangles 512 having the greatest focus.]

[FIG. 5E (Sheet 8 of 8): the image in display 500 with graduated dot indicia over the rectangles 512.]

SYSTEM AND METHOD FOR PROVIDING CAMERA FOCUS FEEDBACK

BACKGROUND

0001 Most cameras, including digital cameras, comprise an autofocus feature with which objects in a viewed scene can be automatically focused by the camera. The autofocus functionality can either be continuous, wherein the camera continually adjusts the camera focus as the viewed scene changes, or single, wherein autofocusing only occurs when a user depresses (e.g., halfway depresses) a shutter button.

0002 Irrespective of the autofocus mode that is used, focusing is typically achieved by analyzing the viewed scene with a focusing algorithm. In particular, discrete portions of the viewed scene are analyzed independently and values are assigned to each as to the degree of focus that is observed. These portions may comprise portions of the entire scene, or only a part of it (e.g., the center of the scene). The values are assigned to the various analyzed portions by evaluating the perceived sharpness of objects in each portion. After the analysis has been conducted and values assigned, the lens system is manipulated to alter the focus, and the analysis is conducted again to generate new values for the various portions. The new values for the portions are then compared to those previously assigned to the respective portions to determine whether the focus improved or got worse. This process continues until the optimum focus has been determined.

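By way of illustration only, the search just described amounts to repeatedly scoring the scene at different lens positions and keeping the position whose score is best. The Python sketch below illustrates that idea under stated assumptions: `capture_frame` and `set_lens_position` are hypothetical camera hooks, gradient variance merely stands in for whatever sharpness metric an actual focusing algorithm uses, and the iterative compare-and-adjust loop of the preceding paragraph is simplified into a sweep over candidate lens positions.

```python
import numpy as np

def sharpness_score(frame):
    """Stand-in contrast metric: variance of neighboring-pixel
    differences. Higher values suggest sharper edges."""
    frame = np.asarray(frame, dtype=float)
    return float(np.var(np.diff(frame, axis=0)) + np.var(np.diff(frame, axis=1)))

def autofocus(capture_frame, set_lens_position, candidate_positions):
    """Try each candidate lens position, score the resulting frame,
    and settle on the position that produced the sharpest frame."""
    best_position, best_score = None, float("-inf")
    for position in candidate_positions:
        set_lens_position(position)   # hypothetical lens-motor call
        frame = capture_frame()       # hypothetical sensor readout
        score = sharpness_score(frame)
        if score > best_score:
            best_position, best_score = position, score
    set_lens_position(best_position)
    return best_position, best_score
```
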
0003 The autofocus method described above works well in most conditions. Sometimes, however, unintended results can occur. For example, in situations in which the subject (e.g., a person) is in the foreground of a viewed scene, but higher-contrast objects are in the background, the camera may, contrary to the user's intent, focus on the background instead of the subject. To cite another example, if the subject is to the side within a viewed scene, the background (which occupies the center of the framed scene) may be used as the object of interest by the autofocus system. Therefore, if the user's friend stands before a mountain range but is not in the center of the composed shot, it is likely that the mountain range, and not the friend, will be in focus.

0004 Although such problems can typically be avoided by first focusing only on the subject, locking the focus (e.g., by pressing the shutter button halfway), and then composing the picture before capturing an image, most casual camera users are not that savvy. Therefore, many users capture images in which objects are out of focus.

0005 One benefit of digital cameras is that they allow the user to immediately view a captured shot. Despite this capability, the user is not likely to detect an out-of-focus condition because the displays of most cameras are too small, and their resolutions are too low, for the user to readily identify this condition. The situation is even worse when the display is used to compose the shot. Because the live-view images shown in the display while a picture is being composed are typically very low-resolution images (to enable images to be shown in real time), it is very difficult for the user to tell whether the subject is or is not in focus.

SUMMARY

0006 Disclosed are systems and methods for providing feedback to a user. In one embodiment, a system and a method pertain to analyzing levels of focus of discrete portions of an image, evaluating a relative focus of the image portions, and identifying to a user the image portions having the highest level of focus.

BRIEF DESCRIPTION OF THE DRAWINGS

0007 FIG. 1 is a rear perspective view of an embodiment of an example camera that provides focus feedback.

0008 FIG. 2 is an embodiment of an architecture of the camera shown in FIG. 1.

0009 FIG. 3 is a flow diagram of a first embodiment of a method for providing focus feedback.

0010 FIGS. 4A and 4B provide a flow diagram of a second embodiment of a method for providing focus feedback.

0011 FIG. 5A is a schematic view of a camera display that is displaying a viewed or captured image.

0012 FIG. 5B is a schematic view of the image of FIG. 5A overlaid with an array or grid in which discrete portions of the image are identified.

0013 FIGS. 5C-5E are schematic views that illustrate focus feedback provided with respect to the image of FIG. 5A.

DETAILED DESCRIPTION

0014 As identified in the foregoing, camera users often do not realize that an object in an image they have captured or are about to capture is out of focus. In such situations, the user is likely to miss a desired shot. This is true even when, as with most digital cameras, the camera includes a display that shows the composed and/or captured image, because such displays are typically too small and/or their resolution is too low to provide such feedback.

0015 As is described below, however, feedback as to the focus of a composed or captured image can be provided to the user by evaluating discrete portions of the focused or captured image and then displaying indicia on the display that convey to the user which aspects of the image are most in focus. With such feedback, the user can determine whether the intended subject is or is not in focus and, depending upon when the feedback is provided, either recompose or recapture the image until the desired result is achieved.

0016 Described below are systems and methods that provide focus feedback to the user. Although particular embodiments are identified in an effort to fully describe the disclosed systems and methods, these embodiments are provided for purposes of example only. Referring now to the drawings, in which like numerals indicate corresponding parts throughout the several views, FIG. 1 illustrates an embodiment of a camera 100 that provides focus feedback to users. In the example of FIG. 1, the camera 100 is a digital still camera. Although a digital camera implementation is shown in the figures and described herein, the camera can, alternatively, comprise any camera that provides visual feedback relative to a composed or captured image.

0017 As indicated in FIG. 1, the camera 100 includes a body 102 that is defined by an outer housing 104. The top portion of the camera 100 comprises a shutter button 106 that is used to open the camera shutter (not visible in FIG. 1). Formed with the camera body 102 is a viewfinder 108, such as an electronic viewfinder (EVF), which includes a view window 110. The back panel of the camera 100 may include a flat panel display 112 that, for example, comprises a liquid crystal display (LCD) or light-emitting diode (LED) display.

0018 Various control buttons 114 are also provided on the back panel of the camera 100. These buttons 114 can be used to, for instance, scroll through captured images shown in the display 112, make selections from camera menus, etc. Also shown in FIG. 1 is a compartment 116 that is used to house a battery and/or a memory card.

0019 FIG. 2 illustrates an example architecture for the camera 100. As indicated in this figure, the camera 100 includes a lens system 200 that conveys images of viewed scenes to one or more image sensors 202. By way of example, the image sensors 202 comprise charge-coupled devices (CCDs) that are driven by one or more sensor drivers 204. The analog image signals captured by the sensors 202 are then provided to an analog-to-digital (A/D) converter 206 for conversion into binary code that can be processed by a processor 208.

0020 Operation of the sensor drivers 204 is controlled through a camera controller 210 that is in bi-directional communication with the processor 208. Also controlled through the controller 210 are one or more motors 212 that are used to drive the lens system 200 (e.g., to adjust focus and zoom). Operation of the camera controller 210 may be adjusted through manipulation of the user interface 213. The user interface 213 comprises the various components used to enter selections and commands into the camera 100 and therefore at least includes the shutter button 106 and the control buttons 114 identified in FIG. 1.

0021 The digital image signals are processed in accordance with instructions from the camera controller 210 and the image processing system(s) 216 stored in permanent (non-volatile) device memory 214. Processed images may then be stored in storage memory 224, such as that contained within a removable solid-state memory card (e.g., a Flash memory card). In addition to the image processing system(s) 216, the device memory 214 further comprises an autofocus system 218 that includes at least one focusing algorithm 220. Furthermore, the device memory 214 includes a focus feedback system 222 that, as is described in greater detail below, is used to provide feedback to the user in a camera display (either a viewfinder display or rear panel display) as to focus attributes of a composed or captured image. Although the autofocus system 218 and the focus feedback system 222 are illustrated as separate modules, the two systems may be combined and/or components of one system may be integrated into or shared with the other system. For example, a focusing algorithm 220 of the autofocus system 218 may be incorporated into the focus feedback system 222, if desired.

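By way of illustration only, that sharing could be arranged by having the feedback module hold a reference to the same focusing-algorithm object the autofocus module uses. The class names below are hypothetical stand-ins, not the described firmware, and the scoring metric is a placeholder.

```python
import numpy as np

class FocusingAlgorithm:
    """Placeholder focusing algorithm: scores a region's sharpness."""
    def score(self, region):
        region = np.asarray(region, dtype=float)
        return float(np.var(np.diff(region, axis=0)) + np.var(np.diff(region, axis=1)))

class AutofocusSystem:
    """Rough analogue of an autofocus module holding a focusing algorithm."""
    def __init__(self, algorithm: FocusingAlgorithm):
        self.algorithm = algorithm

class FocusFeedbackSystem:
    """Rough analogue of a focus feedback module; here it reuses the
    autofocus module's algorithm rather than carrying its own."""
    def __init__(self, autofocus: AutofocusSystem):
        self.algorithm = autofocus.algorithm
```
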
0022 The camera embodiment shown in FIG. 2 further includes a device interface 226, such as a universal serial bus (USB) connector, that is used to download images from the camera to another device such as a personal computer (PC) or a printer, and which likewise can be used to upload images or other information.

0023 FIG. 3 is a flow diagram of a first embodiment of a method for providing focus feedback to a camera user. Any process steps or blocks described in this or any other flow diagram of this disclosure may represent modules, segments, or portions of program code that includes one or more executable instructions for implementing specific logical functions or steps in the process. Although particular example process steps are described, alternative implementations are feasible. Moreover, steps may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved.

0024 Beginning with block 300 of FIG. 3, the focus feedback system 222 analyzes the focus of discrete portions of an image. This image can comprise a composed image of a viewed scene (i.e., a focused image) prior to capture, or a captured image. Regardless, the discrete portions of the image are analyzed to determine the level of focus that has been obtained. This determination may be made, for instance, by applying the focusing algorithm 220 that was used to focus the viewed scene during an autofocusing procedure of the camera 100. Alternatively, the determination may be made by applying the focusing algorithm 220 to an image that was manually focused by the user. In either case, the focus feedback system 222 evaluates the relative focus of the discrete portions of the image, as indicated in block 302. In particular, the system 222 determines the level of focus of each discrete portion to identify which portions are most in focus.

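A minimal sketch of the analyze-and-evaluate steps (blocks 300 and 302) follows, assuming the image is a 2-D numpy array of grayscale values; gradient variance again stands in for the camera's actual focusing algorithm, and the 6 x 8 grid size is an arbitrary example.

```python
import numpy as np

def cell_sharpness(cell):
    """Stand-in focus metric for one image portion."""
    cell = cell.astype(float)
    return float(np.var(np.diff(cell, axis=0)) + np.var(np.diff(cell, axis=1)))

def analyze_focus(image, rows=6, cols=8):
    """Split the image into a rows x cols grid and score each cell,
    returning a (rows, cols) array of focus scores."""
    height, width = image.shape
    scores = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            cell = image[r * height // rows:(r + 1) * height // rows,
                         c * width // cols:(c + 1) * width // cols]
            scores[r, c] = cell_sharpness(cell)
    return scores
```
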
0025 Once the relative focus of the discrete portions has been evaluated, the system 222 identifies to the user the portions of the image having the greatest focus, as indicated in block 304. By way of example, the system 222 generates graphical indicia to be displayed in the camera display that identify the portions of the image having the greatest focus. After these high-focus portions of the image have been identified, flow for this session of the system 222 is terminated.

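Continuing the sketch above, the identification step (block 304) might simply pick the cells whose scores are close to the maximum; the 80% cut-off here is an arbitrary illustrative threshold, not something specified by the described embodiments.

```python
import numpy as np

def most_focused_cells(scores, fraction=0.8):
    """Return (row, col) indices of grid cells scoring within `fraction`
    of the best-scoring cell; these are the portions to highlight."""
    threshold = fraction * float(np.max(scores))
    rows, cols = np.where(scores >= threshold)
    return list(zip(rows.tolist(), cols.tolist()))
```
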
0026 FIGS. 4A and 4B illustrate a second embodiment of a method for providing focus feedback to a camera user. In this embodiment, the autofocus system 218 and the focus feedback system 222 of the camera 100 work in conjunction with each other to indicate to the user which portions of an image are in focus and, therefore, which portions may be out of focus. Beginning with block 400 of FIG. 4A, a viewed scene is focused by the autofocus system 218. By way of example, this focusing occurs in response to the user having halfway depressed the shutter button of the camera, after having composed a shot of which the user would like to capture an image, while the camera is in an autofocus mode. Accordingly, the autofocus system 218 analyzes the viewed scene using a focusing algorithm 220 and assigns focus values to discrete portions of the viewed scene to determine the degree of focus of the discrete portions. By way of example, the focusing algorithm analyzes rectangular sections of the viewed scene and focus values are assigned to each section.

0027 Once that analysis has been conducted and the various focus values have been assigned, the lens system 200 is manipulated to adjust the camera's focus. This procedure is repeated to generate new values for the various identified scene sections, and the new values for these sections are then compared to the previous values to determine whether the focus improved or got worse. This process continues until what is determined to be an optimum focus has been achieved.

0028 At this point, a focused image has been generated and, if desired, that image can be captured to store it in camera memory (e.g., storage memory 224). It is noted that, although the image is "focused," the intended subject of the image may not be in proper focus. An example focused image is depicted in FIG. 5A. In particular, illustrated is a focused image 502 that is presented in a display 500, for instance the back panel display 112 (FIG. 1). As shown in FIG. 5A, the scene in this example image 502 comprises two persons 504 and 506 that together comprise the intended subject of the shot and, therefore, are desired to be in focus. As is further indicated in FIG. 5A, background information, including a window 508 and a lattice-work fence 510, is also visible in the focused image.

0029 With reference back to FIG. 4A, discrete portions of the focused image are next identified by the focus feedback system 222, as indicated in block 402. These portions can be the same portions that were identified during the focusing process described above. Alternatively, however, these portions may be portions other than those identified during the focusing process. FIG. 5B illustrates portions of the focused image from FIG. 5A that have been identified. As shown in FIG. 5B, the portions may comprise rectangles 512 of an array or grid 514 into which the focused image 502 has been divided.

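For illustration only, the pixel bounds of such a grid of rectangles can be computed directly from the image dimensions; the 6 x 8 split below is an arbitrary example and not a dimension taken from FIG. 5B.

```python
def grid_rectangles(width, height, rows=6, cols=8):
    """Return (x0, y0, x1, y1) pixel bounds for each cell of a rows x cols
    grid laid over an image of the given width and height."""
    rectangles = []
    for r in range(rows):
        for c in range(cols):
            rectangles.append((c * width // cols, r * height // rows,
                               (c + 1) * width // cols, (r + 1) * height // rows))
    return rectangles
```
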
0030 Again returning to FIG. 4A, the focus feedback system 222 analyzes the level of focus of each discrete portion (e.g., rectangle 512, FIG. 5B) of the focused image, as indicated in block 404. By way of example, the analysis may be conducted by a focusing algorithm 220 of the autofocus system 218. In such a case, the focus feedback system 222 leverages this resource of the autofocus system 218 to conduct the focus analysis. In another example, the focus feedback system 222 uses its own focusing algorithm. In any case, the discrete portions are analyzed to determine how much each is in focus. Notably, the more focused portions of the image will be those that the autofocus system 218 relied upon to focus the image. This analysis may, for instance, yield focus values on a given scale (e.g., 1 to 10) that are indicative of the level of focus that has been achieved.

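One way such a 1-to-10 scale could be produced is by normalizing whatever raw values the focusing algorithm yields; the sketch below shows that normalization and is not the actual scoring used by the described embodiments.

```python
import numpy as np

def to_focus_scale(raw_scores, low=1, high=10):
    """Linearly map raw sharpness scores onto an integer low..high scale."""
    raw = np.asarray(raw_scores, dtype=float)
    span = raw.max() - raw.min()
    if span == 0:
        # Uniform focus everywhere: report the middle of the scale.
        return np.full(raw.shape, (low + high) // 2, dtype=int)
    normalized = (raw - raw.min()) / span
    return np.rint(low + normalized * (high - low)).astype(int)
```
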
0031 Next, with reference to block 406, the focus feedback system 222 evaluates the relative focus of the discrete image portions of the focused image. In situations in which the various portions have been assigned numerical values through the analysis conducted in block 404, this evaluation comprises comparing the determined values to see which portions have the highest values and, therefore, the greatest level of focus.

0032 Once the relative focus of the discrete portions has been determined, the focus feedback system 222 generates graphical indicia for display to the user that identify the portions of the image having the greatest focus, as indicated in block 408. These indicia can take several different forms. Generally speaking, however, the indicia are highly intuitive, such that the user may easily determine which portions of the focused image are most in focus. Various example indicia are illustrated in FIGS. 5C-5E. In these figures, the array or grid 514 is displayed so as to convey the bounds of each identified image portion. Alternatively, however, the grid can be hidden from the user. Beginning with FIG. 5C, the indicia comprise dots 516 that overlie the discrete portions having the highest level of focus. In the example of FIG. 5C, the rectangles 512 that overlie the window 508 and the lattice-work fence 510 are provided with the dots 516 because, due to their high contrast, the autofocus system 218 used these features to focus the image 502.

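One way to render dot indicia of this kind is sketched below, assuming an RGB image stored as a (height, width, 3) numpy array and the grid-rectangle helper sketched earlier; the marker size and color are arbitrary illustrative choices.

```python
import numpy as np

def overlay_dots(image_rgb, rectangles, highlighted, radius=4, color=(255, 255, 0)):
    """Draw a filled dot at the center of each highlighted grid rectangle.

    `rectangles` lists (x0, y0, x1, y1) bounds for every cell and
    `highlighted` lists the indices into `rectangles` to mark."""
    out = image_rgb.copy()
    height, width = out.shape[:2]
    for index in highlighted:
        x0, y0, x1, y1 = rectangles[index]
        cx, cy = (x0 + x1) // 2, (y0 + y1) // 2
        for y in range(max(cy - radius, 0), min(cy + radius + 1, height)):
            for x in range(max(cx - radius, 0), min(cx + radius + 1, width)):
                if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                    out[y, x] = color
    return out
```
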
0033 With reference next to FIG. 5D, the high-focus portions of the focused image 502 are identified with shading 518 that fills the various rectangles 512 of the image having the greatest focus. Next, referring to FIG. 5E, focus graduations are indicated by providing varying indicia over the focused image 502. In particular, three dots 520 are provided in rectangles 512 having the greatest focus, two dots are provided in rectangles having a medium level of focus, and one dot is provided in rectangles having a relatively low level of focus.

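The graduated indicia of FIG. 5E amount to mapping each cell's focus level to a dot count; a minimal sketch follows, with the cut-offs being arbitrary example thresholds on the 1-to-10 scale mentioned earlier.

```python
def dot_count(focus_value, high_threshold=8, medium_threshold=5):
    """Map a 1..10 focus value to 3, 2, or 1 dots, mirroring the
    three graduations described for FIG. 5E."""
    if focus_value >= high_threshold:
        return 3
    if focus_value >= medium_threshold:
        return 2
    return 1
```
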
0034 From the examples of FIGS. 5C-5E, the user can easily appreciate that the intended subject of the focused image 502, i.e., persons 504 and 506, is not well focused relative to other aspects of the image. Therefore, the user will realize that the image should be recomposed and/or the focus adjusted in some manner so that the persons 504 and 506 are brought into the desired degree of focus. Accordingly, before wasting a shot, the user is notified that the intended result will not be achieved unless some action is taken on the user's part. Notably, although the indicia are described as a tool to ensure high focus of an intended subject, these indicia could be used to ensure other levels of focus, e.g., where the user wishes the subject in the foreground to be in "soft" focus relative to something in the background.

0035 After being provided with the focus feedback described above, the user may wish to recompose the shot or adjust the focus of the camera. In the former case, the user may, for example, zoom in on the persons 504 and 506 that the user wishes to capture. In the latter case, the user may lock the focus on one of the persons 504 and 506 before capturing an image. Alternatively, the user may reinitiate the autofocusing process by releasing the shutter button and depressing it again to the halfway point. In yet another alternative, the user may adjust the autofocus settings of the camera such that the reference points used by the autofocus system 218 to focus the image coincide with the positions of the persons 504 and 506. In another alternative, the user may switch the camera to a manual focus mode.

0036 In any case, it is determined whether an image is to be captured, as indicated in decision block 410. This determination is made, for example, in regard to whether the shutter button is fully depressed or whether the user does something else (e.g., recomposes the shot, reinitiates the autofocusing process). If an image is not to be captured, flow returns to block 400 and the focusing process and analysis/evaluation process described above begin again. If, on the other hand, an image is captured, the captured image is displayed in the camera display (e.g., display 112, FIG. 1) and the analysis/evaluation process is practiced again, this time on the captured image. Accordingly, with reference to block 412 of FIG. 4B, discrete portions of the captured image are identified by the focus feedback system 222. Again, these portions may comprise the rectangles of an array or grid such as that illustrated in FIG. 5B.

0037 Next, with reference to block 414, the focus feedback system 222 analyzes the level of focus of each discrete portion of the captured image and, as indicated in block 416, evaluates the relative focus of the discrete portions of the captured image. Once the relative focus of the discrete portions has been determined, the focus feedback system 222 generates graphical indicia for display that identify the portions of the image having the greatest focus, as indicated in block 418. Again, these indicia can take several different forms, and examples include those illustrated in FIGS. 5C-5E.

0038 In view of the above, the user can determine, from the indicia provided by the focus feedback system 222, that the intended subject of the captured image, i.e., persons 504 and 506, is not well focused relative to other aspects of the image. Therefore, the user will realize that the image should be recomposed and/or the focus adjusted in some manner, and the image recaptured. With such notification, the user will immediately know whether he or she got the shot the user wanted and, therefore, will have the opportunity to try again if not.

0039 While particular embodiments of the invention have been disclosed in detail in the foregoing description and drawings for purposes of example, it will be understood by those skilled in the art that variations and modifications thereof can be made without departing from the scope of the invention as set forth in the claims.

0040 Various programs (software and/or firmware) have been identified above. These programs can be stored on any computer-readable medium for use by or in connection with any computer-related system or method. In the context of this document, a computer-readable medium is an electronic, magnetic, optical, or other physical device or means that can contain or store programs for use by or in connection with a computer-related system or method. The programs can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. The term "computer-readable medium" encompasses any means that can store, communicate, propagate, or transport the code for use by or in connection with the instruction execution system, apparatus, or device.

0041 The computer-readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a nonexhaustive list) of computer-readable media include an electrical connection having one or more wires, a portable computer diskette, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory), an optical fiber, and a portable compact disc read-only memory (CD-ROM). Note that the computer-readable medium can even be paper or another suitable medium upon which a program is printed, as the program can be electronically captured via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.

What is claimed is:

1. A method for providing feedback to a user, comprising:
analyzing levels of focus of discrete portions of an image;
evaluating a relative focus of the image portions; and
identifying to a user the image portions having the highest level of focus.

2. The method of claim 1, wherein analyzing levels of focus comprises analyzing levels of focus of discrete portions of a composed image prior to image capture, so as to provide feedback to the user before the user captures an image.

3. The method of claim 1, wherein analyzing levels of focus comprises analyzing levels of focus of discrete portions of a captured image.

4. The method of claim 1, wherein analyzing levels of focus comprises analyzing image portions using a focusing algorithm.

5. The method of claim 1, wherein analyzing levels of focus comprises assigning numerical values to image portions, each numerical value being indicative of the level of focus of an image portion.

6. The method of claim 5, wherein evaluating a relative focus comprises comparing the assigned numerical values.

7. The method of claim 1, wherein identifying image portions comprises displaying graphical indicia in a camera display.

8. The method of claim 7, wherein displaying graphical indicia comprises displaying at least one dot over an image portion in the display.

9. The method of claim 7, wherein displaying graphical indicia comprises displaying a number of dots over image portions having the highest level of focus and a smaller number of dots over image portions having a lower level of focus.

10. The method of claim 7, wherein displaying graphical indicia comprises displaying shading over image portions in the display.

11. A method for providing a camera user with feedback as to which portions of an image are most focused, comprising:
focusing on a viewed scene to form a focused image;
identifying discrete portions of the focused image;
analyzing a level of focus as to each identified image portion;
evaluating a relative focus of the image portions to determine which are most focused; and
generating graphical indicia to be displayed in a camera display, the graphical indicia identifying the image portions which are most focused.

12. The method of claim 11, wherein analyzing a level of focus comprises assigning numerical values to the image portions.

13. The method of claim 12, wherein evaluating a relative focus comprises comparing the assigned numerical values.

14. The method of claim 11, further comprising:
identifying discrete portions of an image captured by the camera;
analyzing a level of focus as to each identified captured image portion;
evaluating a relative focus of the captured image portions to determine which are most focused; and
generating graphical indicia to be displayed in the camera display, the graphical indicia identifying the captured image portions which are most focused.

15. A system for providing focus feedback, comprising:
means for determining the amount of focus each of several discrete portions of an image has; and
means for generating graphical indicia for display, the graphical indicia being indicative of an amount to which at least one image portion is focused.

16. The system of claim 15, wherein the means for determining comprises a focusing algorithm that is configured to assign focus values to the image portions.

17. The system of claim 15, wherein the means for generating comprises a focus feedback system that is configured to generate graphical indicia to be displayed in a camera display.

18. A camera, comprising:
a display that is configured to display an image;
a processor that controls operation of the display; and
a memory comprising a focus feedback system that is configured to generate graphical indicia to be shown in the display, the graphical indicia being indicative of a level of focus of at least one discrete portion of the displayed image.

19. The camera of claim 18, wherein the memory further comprises a focusing algorithm that is configured to identify discrete portions of an image, analyze the image portions, and determine which image portions are most in focus.
20. The camera of claim 19, wherein the focusing algorithm is configured to assign numerical values to the image portions that are indicative of the level of focus of each image portion.
