`
`Paul E. Debevec
`
`Jitendra Malik
`
`University of California at Berkeley
`
`ABSTRACT
`We present a method of recovering high dynamic range radiance
`maps from photographs taken with conventional imaging equip-
`ment. In our method, multiple photographs of the scene are taken
`with different amounts of exposure. Our algorithm uses these dif-
`ferently exposed photographs to recover the response function of the
imaging process, up to a factor of scale, using the assumption of reciprocity. With the known response function, the algorithm can fuse
`the multiple photographs into a single, high dynamic range radiance
`map whose pixel values are proportional to the true radiance values
`in the scene. We demonstrate our method on images acquired with
`both photochemical and digital imaging processes. We discuss how
`this work is applicable in many areas of computer graphics involv-
`ing digitized photographs, including image-based modeling, image
`compositing, and image processing. Lastly, we demonstrate a few
`applications of having high dynamic range radiance maps, such as
`synthesizing realistic motion blur and simulating the response of the
`human visual system.
`
`CR Descriptors: I.2.10 [Artificial Intelligence]: Vision and
`Scene Understanding - Intensity, color, photometry and threshold-
`ing; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and
`Realism - Color, shading, shadowing, and texture; I.4.1 [Image
`Processing]: Digitization - Scanning; I.4.8 [Image Processing]:
`Scene Analysis - Photometry, Sensor Fusion.
`
`1 Introduction
`Digitized photographs are becoming increasingly important in com-
`puter graphics. More than ever, scanned images are used as texture
`maps for geometric models, and recent work in image-based mod-
`eling and rendering uses images as the fundamental modeling prim-
`itive. Furthermore, many of today’s graphics applications require
`computer-generated images to mesh seamlessly with real photo-
`graphic imagery. Properly using photographically acquired imagery
`in these applications can greatly benefit from an accurate model of
`the photographic process.
`When we photograph a scene, either with film or an elec-
`tronic imaging array, and digitize the photograph to obtain a two-
`dimensional array of “brightness” values, these values are rarely
`
`1Computer Science Division, University of California at Berkeley,
`Berkeley, CA 94720-1776.
Email: debevec@cs.berkeley.edu, malik@cs.berkeley.edu. More information and additional results may be found at: http://www.cs.berkeley.edu/~debevec/Research
`
`true measurements of relative radiance in the scene. For example, if
`one pixel has twice the value of another, it is unlikely that it observed
`twice the radiance. Instead, there is usually an unknown, nonlinear
`mapping that determines how radiance in the scene becomes pixel
`values in the image.
`This nonlinear mapping is hard to know beforehand because it is
`actually the composition of several nonlinear mappings that occur
`in the photographic process. In a conventional camera (see Fig. 1),
`the film is first exposed to light to form a latent image. The film is
`then developed to change this latent image into variations in trans-
`parency, or density, on the film. The film can then be digitized using
`a film scanner, which projects light through the film onto an elec-
`tronic light-sensitive array, converting the image to electrical volt-
`ages. These voltages are digitized, and then manipulated before fi-
`nally being written to the storage medium. If prints of the film are
`scanned rather than the film itself, then the printing process can also
`introduce nonlinear mappings.
In the first stage of the process, the film response to variations
in exposure X (where X = E Δt, the product of the irradiance E the
film receives and the exposure time Δt) is a nonlinear function,
called the “characteristic curve” of the film. Noteworthy in the typ-
`ical characteristic curve is the presence of a small response with no
`exposure and saturation at high exposures. The development, scan-
`ning and digitization processes usually introduce their own nonlin-
`earities which compose to give the aggregate nonlinear relationship
`between the image pixel exposures X and their values Z.
`Digital cameras, which use charge coupled device (CCD) arrays
`to image the scene, are prone to the same difficulties. Although the
`charge collected by a CCD element is proportional to its irradiance,
`most digital cameras apply a nonlinear mapping to the CCD outputs
`before they are written to the storage medium. This nonlinear map-
`ping is used in various ways to mimic the response characteristics of
`film, anticipate nonlinear responses in the display device, and often
`to convert 12-bit output from the CCD’s analog-to-digital convert-
`ers to 8-bit values commonly used to store images. As with film,
`the most significant nonlinearity in the response curve is at its sat-
`uration point, where any pixel with a radiance above a certain level
`is mapped to the same maximum image value.
`Why is this any problem at all? The most obvious difficulty,
`as any amateur or professional photographer knows, is that of lim-
`ited dynamic range—one has to choose the range of radiance values
`that are of interest and determine the exposure time suitably. Sunlit
`scenes, and scenes with shiny materials and artificial light sources,
`often have extreme differences in radiance values that are impossi-
`ble to capture without either under-exposing or saturating the film.
`To cover the full dynamic range in such a scene, one can take a series
`of photographs with different exposures. This then poses a prob-
`lem: how can we combine these separate images into a composite
`radiance map? Here the fact that the mapping from scene radiance
`to pixel values is unknown and nonlinear begins to haunt us. The
`purpose of this paper is to present a simple technique for recover-
`ing this response function, up to a scale factor, using nothing more
`than a set of photographs taken with varying, known exposure du-
`rations. With this mapping, we then use the pixel values from all
`available photographs to construct an accurate map of the radiance
`in the scene, up to a factor of scale. This radiance map will cover
`
`
`
[Figure 1 diagram: scene radiance (L) → Lens → sensor irradiance (E) → Shutter → sensor exposure (X); in a film camera, Film → latent image → Development → film density; in a digital camera, CCD → analog voltages → ADC → digital values → Remapping; both paths end in the final digital values (Z).]
`
Figure 1: Image Acquisition Pipeline shows how scene radiance becomes pixel values for both film and digital cameras. Unknown nonlinear mappings can occur during exposure, development, scanning, digitization, and remapping. The algorithm in this paper determines the aggregate mapping from scene radiance L to pixel values Z from a set of differently exposed images.
`
`the entire dynamic range captured by the original photographs.
`
`1.1 Applications
`
`Our technique of deriving imaging response functions and recover-
`ing high dynamic range radiance maps has many possible applica-
`tions in computer graphics:
`
`Image-based modeling and rendering
`
`Image-based modeling and rendering systems to date (e.g. [11, 15,
`2, 3, 12, 6, 17]) make the assumption that all the images are taken
`with the same exposure settings and film response functions. How-
`ever, almost any large-scale environment will have some areas that
`are much brighter than others, making it impossible to adequately
`photograph the scene using a single exposure setting.
`In indoor
`scenes with windows, this situation often arises within the field of
`view of a single photograph, since the areas visible through the win-
`dows can be far brighter than the areas inside the building.
`By determining the response functions of the imaging device, the
`method presented here allows one to correctly fuse pixel data from
`photographs taken at different exposure settings. As a result, one
`can properly photograph outdoor areas with short exposures, and in-
`door areas with longer exposures, without creating inconsistencies
`in the data set. Furthermore, knowing the response functions can
`be helpful in merging photographs taken with different imaging sys-
`tems, such as video cameras, digital cameras, and film cameras with
`various film stocks and digitization processes.
`The area of image-based modeling and rendering is working to-
`ward recovering more advanced reflection models (up to complete
`BRDF’s) of the surfaces in the scene (e.g.
`[21]). These meth-
`ods, which involve observing surface radiance in various directions
`under various lighting conditions, require absolute radiance values
`rather than the nonlinearly mapped pixel values found in conven-
`tional images. Just as important, the recovery of high dynamic range
`images will allow these methods to obtain accurate radiance val-
`ues from surface specularities and from incident light sources. Such
`higher radiance values usually become clamped in conventional im-
`ages.
`
`Image processing
`
`Most image processing operations, such as blurring, edge detection,
`color correction, and image correspondence, expect pixel values to
`be proportional to the scene radiance. Because of nonlinear image
`response, especially at the point of saturation, these operations can
`produce incorrect results for conventional images.
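As a toy illustration of this point (ours, not from the paper; the gamma curve below is a made-up stand-in for a nonlinear response), consider blurring by averaging two pixels: averaging nonlinearly encoded values gives a visibly different answer than averaging the underlying radiances.

```python
import numpy as np

# Hypothetical nonlinear response: a simple gamma encoding (illustrative only).
f = lambda x: x ** (1.0 / 2.2)

radiances = np.array([0.05, 0.95])        # one dark pixel, one bright pixel

blur_then_encode = f(radiances.mean())    # blur performed on radiance values
encode_then_blur = f(radiances).mean()    # blur performed on encoded pixel values

# The two disagree noticeably: averaging encoded pixels is not the same
# operation as averaging the radiances they came from.
print(blur_then_encode, encode_then_blur)
```

The mismatch grows with the scene's dynamic range, which is why the saturation region of the response is the worst offender.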
`In computer graphics, one common image processing operation
`is the application of synthetic motion blur to images.
`In our re-
`sults (Section 3), we will show that using true radiance maps pro-
`duces significantly more realistic motion blur effects for high dy-
`namic range scenes.
`
`Image compositing
`Many applications in computer graphics involve compositing im-
`age data from images obtained by different processes. For exam-
`ple, a background matte might be shot with a still camera, live
`action might be shot with a different film stock or scanning pro-
`cess, and CG elements would be produced by rendering algorithms.
`When there are significant differences in the response curves of
`these imaging processes, the composite image can be visually un-
`convincing. The technique presented in this paper provides a conve-
`nient and robust method of determining the overall response curve
`of any imaging process, allowing images from different processes to
`be used consistently as radiance maps. Furthermore, the recovered
`response curves can be inverted to render the composite radiance
`map as if it had been photographed with any of the original imaging
`processes, or a different imaging process entirely.
`
`A research tool
`One goal of computer graphics is to simulate the image formation
`process in a way that produces results that are consistent with what
`happens in the real world. Recovering radiance maps of real-world
`scenes should allow more quantitative evaluations of rendering al-
`gorithms to be made in addition to the qualitative scrutiny they tra-
`ditionally receive. In particular, the method should be useful for de-
`veloping reflectance and illumination models, and comparing global
`illumination solutions against ground truth data.
`Rendering high dynamic range scenes on conventional display
`devices is the subject of considerable previous work, including [20,
`16, 5, 23]. The work presented in this paper will allow such meth-
`ods to be tested on real radiance maps in addition to synthetically
`computed radiance solutions.
`
`1.2 Background
`The photochemical processes involved in silver halide photography
`have been the subject of continued innovation and research ever
since the invention of the daguerreotype in 1839. [18] and [8] provide
a comprehensive treatment of the theory and mechanisms involved.
For the newer technology of solid-state imaging with charge
`coupled devices, [19] is an excellent reference. The technical and
`artistic problem of representing the dynamic range of a natural scene
`on the limited range of film has concerned photographers from the
`early days – [1] presents one of the best known systems to choose
`shutter speeds, lens apertures, and developing conditions to best co-
`erce the dynamic range of a scene to fit into what is possible on a
`print. In scientific applications of photography, such as in astron-
`omy, the nonlinear film response has been addressed by suitable cal-
`ibration procedures. It is our objective instead to develop a simple
`self-calibrating procedure not requiring calibration charts or photo-
`metric measuring devices.
`In previous work, [13] used multiple flux integration times of a
`CCD array to acquire extended dynamic range images. Since direct
`CCD outputs were available, the work did not need to deal with the
`
`
`
`quantities we will be dealing with are weighted by the spectral re-
`sponse at the sensor site. For color photography, the color channels
`may be treated separately.
The input to our algorithm is a number of digitized photographs
taken from the same vantage point with different known exposure
durations Δt_j.⁴ We will assume that the scene is static and that this
process is completed quickly enough that lighting changes can be
safely ignored. It can then be assumed that the film irradiance values
E_i for each pixel i are constant. We will denote pixel values by Z_ij,
where i is a spatial index over pixels and j indexes over exposure
durations Δt_j. We may now write down the film reciprocity equation
as:
`
Z_ij = f(E_i Δt_j)    (1)

Since we assume f is monotonic, it is invertible, and we can rewrite
(1) as:

f^{-1}(Z_ij) = E_i Δt_j

Taking the natural logarithm of both sides, we have:

ln f^{-1}(Z_ij) = ln E_i + ln Δt_j

To simplify notation, let us define the function g = ln f^{-1}. We then
have the set of equations:

g(Z_ij) = ln E_i + ln Δt_j    (2)
`
where i ranges over pixels and j ranges over exposure durations. In
this set of equations, the Z_ij are known, as are the Δt_j. The un-
knowns are the irradiances E_i, as well as the function g, although
we assume that g is smooth and monotonic.
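The role of these equations can be checked numerically. In the sketch below (an idealized setup of our own, not the recovery algorithm itself), we pick a known, invertible, un-quantized response f so that g = ln f^{-1} is available in closed form, and verify that g(Z_ij) decomposes into ln E_i + ln Δt_j:

```python
import numpy as np

# Idealized response chosen for illustration: f(x) = x^(1/2.2).
# Its inverse is f_inv(z) = z^2.2, so g(z) = ln f_inv(z) = 2.2 * ln z.
def f(x):
    return x ** (1.0 / 2.2)

def g(z):
    return 2.2 * np.log(z)

E = np.array([0.5, 2.0, 8.0])                  # irradiances at three pixels (made up)
dt = np.array([1 / 60, 1 / 15, 1 / 4, 1.0])    # four exposure durations

Z = f(np.outer(E, dt))                         # Z[i, j] = f(E_i * dt_j)

# The reciprocity equation: g(Z_ij) = ln E_i + ln dt_j, exactly, since f is known here.
lhs = g(Z)
rhs = np.log(E)[:, None] + np.log(dt)[None, :]
print(np.allclose(lhs, rhs))                   # → True
```

In a real camera, f is unknown, quantized to integers, and clipped at saturation, which is precisely why g must be estimated rather than written down.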
We wish to recover the function g and the irradiances E_i that best
satisfy the set of equations arising from Equation 2 in a least-squared
error sense. We note that recovering g only requires recovering the
finite number of values that g(z) can take, since the domain of Z,
pixel brightness values, is finite. Letting Z_min and Z_max be the
least and greatest pixel values (integers), N be the number of pixel
locations and P be the number of photographs, we formulate the
problem as one of finding the (Z_max − Z_min + 1) values of g(z)
and the N values of ln E_i that minimize the following quadratic ob-
jective function:
`
O = Σ_{i=1}^{N} Σ_{j=1}^{P} [ g(Z_ij) − ln E_i − ln Δt_j ]² + λ Σ_{z=Z_min+1}^{Z_max−1} g''(z)²    (3)
The first term ensures that the solution satisfies the set of equa-
tions arising from Equation 2 in a least squares sense. The second
term is a smoothness term on the sum of squared values of the sec-
ond derivative of g to ensure that the function g is smooth; in this
discrete setting we use g''(z) = g(z − 1) − 2g(z) + g(z + 1). This
smoothness term is essential to the formulation in that it provides
coupling between the values g(z) in the minimization. The scalar
λ weights the smoothness term relative to the data fitting term, and
should be chosen appropriately for the amount of noise expected in
the Z_ij measurements.
Because it is quadratic in the E_i's and g(z)'s, minimizing O is
a straightforward linear least squares problem. The overdetermined
`
⁴Most modern SLR cameras have electronically controlled shutters
which give extremely accurate and reproducible exposure times. We tested
our Canon EOS Elan camera by using a Macintosh to make digital audio
recordings of the shutter. By analyzing these recordings we were able to
verify the accuracy of the exposure times to within a thousandth of a sec-
ond. Conveniently, we determined that the actual exposure times varied by
powers of two between stops (1/64, 1/32, 1/16, 1/8, 1/4, 1/2, 1, 2, 4, 8, 16, 32), rather
than the rounded numbers displayed on the camera readout (1/60, 1/30, 1/15, 1/8, 1/4,
1/2, 1, 2, 4, 8, 15, 30). Because of problems associated with vignetting,
varying the aperture is not recommended.
`
`
`
`problem of nonlinear pixel value response. [14] addressed the prob-
`lem of nonlinear response but provide a rather limited method of re-
`covering the response curve. Specifically, a parametric form of the
`response curve is arbitrarily assumed, there is no satisfactory treat-
`ment of image noise, and the recovery process makes only partial
`use of the available data.
`
`2 The Algorithm
`This section presents our algorithm for recovering the film response
`function, and then presents our method of reconstructing the high
`dynamic range radiance image from the multiple photographs. We
`describe the algorithm assuming a grayscale imaging device. We
`discuss how to deal with color in Section 2.6.
`
`2.1 Film Response Recovery
`
`Our algorithm is based on exploiting a physical property of imaging
`systems, both photochemical and electronic, known as reciprocity.
`Let us consider photographic film first. The response of a film
`to variations in exposure is summarized by the characteristic curve
`(or Hurter-Driffield curve). This is a graph of the optical density
`D of the processed film against the logarithm of the exposure X
`to which it has been subjected. The exposure X is defined as the
product of the irradiance E at the film and exposure time, Δt, so
that its units are J·m⁻². Key to the very concept of the characteris-
tic curve is the assumption that only the product E Δt is important,
that halving E and doubling Δt will not change the resulting optical
density D. Under extreme conditions (very large or very low Δt),
the reciprocity assumption can break down, a situation described as
reciprocity failure. In typical print films, reciprocity holds to within
1/3 stop¹ for exposure times of 10 seconds to 1/10,000 of a second.²
In the case of charge coupled arrays, reciprocity holds under the as-
sumption that each site measures the total number of photons it ab-
sorbs during the integration time.
`After the development, scanning and digitization processes, we
`obtain a digital number Z, which is a nonlinear function of the orig-
`inal exposure X at the pixel. Let us call this function f , which is the
`composition of the characteristic curve of the film as well as all the
nonlinearities introduced by the later processing steps. Our first goal
will be to recover this function f. Once we have that, we can com-
pute the exposure X at each pixel as X = f^{-1}(Z). We make the
reasonable assumption that the function f is monotonically increas-
ing, so its inverse f^{-1} is well defined. Knowing the exposure X and
the exposure time Δt, the irradiance E is recovered as E = X/Δt,
which we will take to be proportional to the radiance L in the scene.³
`Before proceeding further, we should discuss the consequences
of the spectral response of the sensor. The exposure X should be
thought of as a function of wavelength, X(λ), and the abscissa on the
characteristic curve should be the integral ∫ X(λ) R(λ) dλ, where
R(λ) is the spectral response of the sensing element at the pixel lo-
cation. Strictly speaking, our use of irradiance, a radiometric quan-
tity, is not justified. However, the spectral response of the sensor site
may not be the photopic luminosity function V(λ), so the photomet-
ric term illuminance is not justified either. In what follows, we will
use the term irradiance, while urging the reader to remember that the
`
`
¹1 stop is a photographic term for a factor of two; 1/3 stop is thus a factor of 2^(1/3).
²An even larger dynamic range can be covered by using neutral density
filters to lessen the amount of light reaching the film for a given exposure time.
A discussion of the modes of reciprocity failure may be found in [18], ch. 4.
³L is proportional to E for any particular pixel, but it is possible for the
proportionality factor to be different at different places on the sensor. One
formula for this variance, given in [7], is E = L (π/4)(d/f)² cos⁴ α, where α
measures the pixel's angle from the lens' optical axis. However, most mod-
ern camera lenses are designed to compensate for this effect, and provide a
nearly constant mapping between radiance and irradiance at f/8 and smaller
apertures. See also [10].
`
`
`
`
`
`
2.2 Constructing the High Dynamic Range Radiance Map
`
Once the response curve g is recovered, it can be used to quickly
convert pixel values to relative radiance values, assuming the expo-
sure Δt_j is known. Note that the curve can be used to determine ra-
diance values in any image(s) acquired by the imaging process asso-
ciated with g, not just the images used to recover the response func-
tion.
From Equation 2, we obtain:

ln E_i = g(Z_ij) − ln Δt_j    (5)
`
For robustness, and to recover high dynamic range radiance val-
ues, we should use all the available exposures for a particular pixel
to compute its radiance. For this, we reuse the weighting function in
Equation 4 to give higher weight to exposures in which the pixel's
value is closer to the middle of the response function:

ln E_i = ( Σ_{j=1}^{P} w(Z_ij) ( g(Z_ij) − ln Δt_j ) ) / ( Σ_{j=1}^{P} w(Z_ij) )    (6)

Combining the multiple exposures has the effect of reducing
noise in the recovered radiance values. It also reduces the effects
of imaging artifacts such as film grain. Since the weighting func-
tion ignores saturated pixel values, “blooming” artifacts⁵ have little
impact on the reconstructed radiance values.
`
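The weighted fusion step is compact enough to transcribe directly. The sketch below is our own Python rendering under assumed conventions (8-bit pixel values, g supplied as a 256-entry lookup table indexed by pixel value); the function names are ours, not the paper's:

```python
import numpy as np

Zmin, Zmax = 0, 255

def w(z):
    """Hat weighting function over the pixel value range."""
    z = np.asarray(z, dtype=float)
    return np.where(z <= (Zmin + Zmax) / 2.0, z - Zmin, Zmax - z)

def fuse_exposures(g, Z, log_dt):
    """Weighted-average log irradiance ln E_i for each pixel.

    g      -- recovered response samples, indexed by pixel value (length 256)
    Z      -- integer pixel values, shape (num_pixels, num_exposures)
    log_dt -- ln of the exposure durations, shape (num_exposures,)

    Pixels that are under- or over-exposed in every image (all weights
    zero) would divide by zero here; real code must handle that case.
    """
    W = w(Z)
    return (W * (g[Z] - log_dt)).sum(axis=1) / W.sum(axis=1)
```

Because saturated and near-black values receive zero weight, a pixel clipped in one exposure is simply filled in from the exposures where it falls in the usable range.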
`2.2.1 Storage
`In our implementation the recovered radiance map is computed as
`an array of single-precision floating point values. For efficiency, the
`map can be converted to the image format used in the RADIANCE
`[22] simulation and rendering system, which uses just eight bits for
`each of the mantissa and exponent. This format is particularly com-
`pact for color radiance maps, since it stores just one exponent value
`for all three color values at each pixel. Thus, in this format, a high
`dynamic range radiance map requires just one third more storage
`than a conventional RGB image.
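The shared-exponent idea can be sketched as follows. This is our own illustrative encoder, not code from RADIANCE; the real file format also has conventions, such as run-length encoding, that are omitted here:

```python
import math

def float_to_rgbe(r, g, b):
    """Pack an RGB radiance triple into four bytes: three 8-bit mantissas
    plus one 8-bit exponent (excess-128) shared by all channels."""
    m = max(r, g, b)
    if m < 1e-32:
        return (0, 0, 0, 0)
    frac, exp = math.frexp(m)             # m = frac * 2**exp, 0.5 <= frac < 1
    scale = frac * 256.0 / m              # maps the largest channel into [128, 256)
    return (int(r * scale), int(g * scale), int(b * scale), exp + 128)

def rgbe_to_float(r, g, b, e):
    """Inverse mapping back to floating-point radiance values."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    scale = math.ldexp(1.0, e - 128 - 8)  # 2**(e - 128) / 256
    return (r * scale, g * scale, b * scale)
```

The largest channel keeps close to eight bits of precision; the smaller channels lose precision relative to their own magnitude, which is the price paid for sharing one exponent across all three.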
`
`2.3 How many images are necessary?
`
`To decide on the number of images needed for the technique, it is
`convenient to consider the two aspects of the process:
`
`1. Recovering the film response curve: This requires a minimum
`of two photographs. Whether two photographs are enough
`can be understood in terms of the heuristic explanation of the
`process of film response curve recovery shown in Fig. 2.
`If the scene has sufficiently many different radiance values,
`the entire curve can, in principle, be assembled by sliding to-
`gether the sampled curve segments, each with only two sam-
`ples. Note that the photos must be similar enough in their ex-
`posure amounts that some pixels fall into the working range6
`of the film in both images; otherwise, there is no information
`to relate the exposures to each other. Obviously, using more
`than two images with differing exposure times improves per-
`formance with respect to noise sensitivity.
`2. Recovering a radiance map given the film response curve: The
`number of photographs needed here is a function of the dy-
`namic range of radiance values in the scene. Suppose the
`range of maximum to minimum radiance values that we are
`
`5Blooming occurs when charge or light at highly saturated sites on the
`imaging surface spills over and affects values at neighboring sites.
`6The working range of the film corresponds to the middle section of the
`response curve. The ends of the curve, in which large changes in exposure
`cause only small changes in density (or pixel value), are called the toe and
`the shoulder.
`
`system of linear equations is robustly solved using the singular value
`decomposition (SVD) method. An intuitive explanation of the pro-
`cedure may be found in Fig. 2.
`We need to make three additional points to complete our descrip-
`tion of the algorithm:
First, the solution for the g(z) and E_i values can only be up to
a single scale factor α. If each log irradiance value ln E_i were re-
placed by ln E_i + α, and the function g replaced by g + α, the sys-
tem of equations 2 and also the objective function O would remain
unchanged. To establish a scale factor, we introduce the additional
constraint g(Z_mid) = 0, where Z_mid = ½(Z_min + Z_max), simply
by adding this as an equation in the linear system. The meaning of
this constraint is that a pixel with value midway between Z_min and
Z_max will be assumed to have unit exposure.
Second, the solution can be made to have a much better fit by an-
ticipating the basic shape of the response function. Since g(z) will
typically have a steep slope near Z_min and Z_max, we should ex-
pect that g(z) will be less smooth and will fit the data more poorly
near these extremes. To recognize this, we can introduce a weight-
ing function w(z) to emphasize the smoothness and fitting terms to-
ward the middle of the curve. A sensible choice of w is a simple hat
function:
w(z) = z − Z_min    for z ≤ ½(Z_min + Z_max)
w(z) = Z_max − z    for z > ½(Z_min + Z_max)    (4)
`
Equation 3 now becomes:

O = Σ_{i=1}^{N} Σ_{j=1}^{P} { w(Z_ij) [ g(Z_ij) − ln E_i − ln Δt_j ] }² + λ Σ_{z=Z_min+1}^{Z_max−1} [ w(z) g''(z) ]²
`
Finally, we need not use every available pixel site in this solu-
tion procedure. Given measurements of N pixels in P photographs,
we have to solve for N values of ln E_i and (Z_max − Z_min + 1) sam-
ples of g. To ensure a sufficiently overdetermined system, we want
N(P − 1) > (Z_max − Z_min). For the pixel value range (Z_max −
Z_min) = 255 and P = 11 photographs, a choice of N on the or-
der of 50 pixels is more than adequate. Since the size of the sys-
tem of linear equations arising from Equation 3 is on the order of
N × P + Z_max − Z_min, computational complexity considera-
tions make it impractical to use every pixel location in this algo-
rithm. Clearly, the pixel locations should be chosen so that they have
a reasonably even distribution of pixel values from Z_min to Z_max,
and so that they are spatially well distributed in the image. Further-
more, the pixels are best sampled from regions of the image with
low intensity variance so that radiance can be assumed to be con-
stant across the area of the pixel, and the effect of optical blur of the
imaging system is minimized. So far we have performed this task
by hand, though it could easily be automated.
`Note that we have not explicitly enforced the constraint that g
`must be a monotonic function. If desired, this can be done by trans-
`forming the problem to a non-negative least squares problem. We
`have not found it necessary because, in our experience, the smooth-
`ness penalty term is enough to make the estimated g monotonic in
`addition to being smooth.
To show its simplicity, the MATLAB routine we used to minimize
Equation 3 is included in the Appendix. Running times are on the
order of a few seconds.
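For readers without MATLAB, the same solve can be sketched in Python with NumPy. This is our own transcription of the system described above, using np.linalg.lstsq rather than an explicit SVD; the variable names are ours. It assumes 8-bit pixel values, the hat weights of Equation 4, and the g(Z_mid) = 0 scale constraint:

```python
import numpy as np

Zmin, Zmax = 0, 255
n = Zmax - Zmin + 1                        # number of g(z) samples

def w(z):
    """Hat weighting function of Equation 4."""
    return z - Zmin if z <= (Zmin + Zmax) // 2 else Zmax - z

def solve_response(Z, log_dt, lam):
    """Least-squares solve for the g(z) samples and the ln E_i values.

    Z      -- integer pixel values, shape (N, P)
    log_dt -- ln of the P exposure durations
    lam    -- the smoothness weight (lambda)
    """
    N, P = Z.shape
    A = np.zeros((N * P + 1 + (n - 2), n + N))
    b = np.zeros(A.shape[0])
    k = 0
    for i in range(N):                     # data-fitting rows
        for j in range(P):
            wij = w(Z[i, j])
            A[k, Z[i, j] - Zmin] = wij     # coefficient of g(Z_ij)
            A[k, n + i] = -wij             # coefficient of ln E_i
            b[k] = wij * log_dt[j]
            k += 1
    A[k, n // 2] = 1.0                     # scale constraint: g(Z_mid) = 0
    k += 1
    for z in range(Zmin + 1, Zmax):        # smoothness rows: lam * w(z) * g''(z)
        lw = lam * w(z)
        A[k, z - 1 - Zmin] = lw
        A[k, z - Zmin] = -2.0 * lw
        A[k, z + 1 - Zmin] = lw
        k += 1
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    return x[:n], x[n:]                    # g(z) samples, ln E_i values
```

Rows whose weight is zero (pixel values at the extremes) drop out of the fit automatically, and λ trades data fidelity against smoothness of the recovered curve.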
`
`
`
[Figure 2 plots: left, “plot of g(Zij) from three pixels observed in five images, assuming unit radiance at each pixel”; right, “normalized plot of g(Zij) after determining pixel exposures”; both with pixel value (Zij) from 0 to 300 on the horizontal axis and log exposure (Ei · Δtj) from −6 to 6 on the vertical axis.]

Figure 2: In the figure on the left, the symbols represent samples of the g curve derived from the digital values at one pixel for 5 different
known exposures using Equation 2. The unknown log irradiance ln Ei has been arbitrarily assumed to be 0. Note that the shape of the g curve
is correct, though its position on the vertical scale is arbitrary, corresponding to the unknown ln Ei. Two further sets of marker symbols show
samples of g curve segments derived by consideration of two other pixels; again the vertical position of each segment is arbitrary. Essentially,
what we want to achieve in the optimization process is to slide the 3 sampled curve segments up and down (by adjusting their ln Ei's) until
they “line up” into a single smooth, monotonic curve, as shown in the right figure. The vertical position of the composite curve will remain
arbitrary.
`
interested in recovering accurately is R, and the film is capa-
ble of representing in its working range a dynamic range of F.
Then the minimum number of photographs needed is ⌈R/F⌉ to
ensure that every part of the scene is imaged in at least one
photograph at an exposure duration that puts it in the work-
ing range of the film response curve. As in recovering the re-
sponse curve, using more photographs than strictly necessary
will result in better noise sensitivity.
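Measured in stops (log₂ of the respective max/min ratios), the photograph count above reduces to a one-line ceiling; this tiny helper, and its interpretation of R and F as ranges expressed in stops, are ours:

```python
import math

def min_photographs(scene_range_stops, film_range_stops):
    """Ceiling of R/F: how many film working ranges are needed to tile
    the scene's dynamic range, with both ranges expressed in stops."""
    return math.ceil(scene_range_stops / film_range_stops)
```

For example, a 10-stop scene on film with a 5-stop working range needs 2 exposures, while an 11-stop scene needs 3.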
`
`If one wanted to use as few photographs as possible, one might
`first recover the response curve of the imaging process by pho-
`tographing a scene containing a diverse range of radiance values at
`three or four different exposures, differing by perhaps one or two
`stops. This response curve could be used to determine the working
`range of the imaging process, which for the processes we have seen
`would be as many as five or six stops. For the remainder of the shoot,
`the photographer could decide for any particular scene the number
`of shots necessary to cover its entire dynamic range. For diffuse in-
`door scenes, only one exposure might be necessary; for scenes with
`high dynamic range, several would be necessary. By recording the
`exposure amount for each shot, the images could then be converted
`to radiance maps using the pre-computed response curve.
`
2.4 Recovering extended dynamic range from single exposures
`
Most commercially available film scanners can detect reasonably
`close to the full range of useful densities present in film. However,
`many of these scanners (as well as the Kodak PhotoCD process) pro-
`duce 8-bit-per-channel images designed to be viewed on a screen or
`printed on paper. Print film, however, records a significantly greater
`dynamic range than can be displayed with either of these media. As
`a result, such scanners deliver only a portion of the detected dynamic
`range of print film in a single scan, discarding information in either
`high or low density regions. The portion of the detected dynamic
`range that is delivered can usually be influenced by “brightness” or
`“density adjustment” controls.
`The method presented in this paper enables two methods for re-
`covering the full dynamic range of print film which we will briefly
`
`outline7. In the first method, the print negative is scanned with the
`scanner set to scan slide film. Most scanners will then record the
`entire detectable dynamic range of the film in the resulting image.
`As before, a series of differently exposed images of the same scene
`can be used to recover the response function of the imaging system
`with each of these scanner settings. This response function can then
`be used to convert individual exposures to radiance maps. Unfortu-
`nately, since the resulting image is still 8-bits-per-channel, this re-
`sults in increased quantization.
`In the second method, the film can be scanned twice with the
`scanner set to different density adjustment settings. A series of dif-
`ferently exposed images of the same scene can then be used to re-
`cover the response function of the imaging system at each of these
`density adjustment settings. These two response functions can then
`be used to combine two scans of any single negative using a similar
`technique as in Section 2.2.
`
`2.5 Obtaining Absolute Radiance
`
`For many applications, such as image processing and image com-
`positing, the relative radiance values computed by our method are
`all that are necessary. If needed, an approximation to the scaling
`term necessary to convert to absolute radiance can be derived using
`the ASA of the film8 and the shutter speeds and exposure amounts in
`the photographs. With these numbers, formulas that give an approx-
`imate prediction of film response can be found in [9]. Such an ap-
`proximation can be adequate for simulating visual artifacts such as
`glare, and predicting areas of scotopic retinal response. If desired,
`one could recover the scaling factor precisely by photographing a
`calibration luminaire of known radiance, and scaling the radiance
`values to agree with the known radiance of the luminaire.
`
`2.6 Color
`
`Color images, consisting of red, green, and blue channels, can be
`processed by reconstructing the imaging system response curve for
`
`7This work was done in collaboration with Gregory Ward Larson
`8Conveniently, most digital cameras also specify their sensitivity in terms
`of ASA.
`
`
`
Figure 3: (a) Eleven grayscale photographs of an indoor scene ac-
quired with a Kodak DCS460 digital camera, with shutter speeds
progressing in 1-stop increments from 1/30 of a second to 30 seconds.
`
[Figure 4 plot: pixel value Z from 0 to 250 on the vertical axis against log exposure X from −10 to 0 on the horizontal axis.]

Figure 4: The response function of the DCS460 recovered by our al-
gorithm, with the underlying Ei Δtj