`
`1
METHODS, APPARATUS, AND DEVICES FOR
`NOISE REDUCTION
`
`RELATED APPLICATIONS
`
This application claims benefit of U.S. Provisional Patent Application No. 60/681,429, entitled “METHODS, APPARATUS, AND DEVICES FOR NOISE REDUCTION,” filed May 17, 2005.
`
`FIELD OF THE INVENTION
`
This invention relates to image display.
`
`BACKGROUND
`
Image noise is an important parameter in the quality of medical diagnosis. Several scientific studies have indicated that even a slight increase of noise in medical images can have a significant negative impact on the accuracy and quality of medical diagnosis. In a typical medical imaging system there are several phases, and in each of these phases unwanted noise can be introduced. The first phase is the actual modality or source that produces the medical image. Examples of such modalities include X-ray machines, computed tomography (CT) scanners, ultrasound scanners, magnetic resonance imaging (MRI) scanners, and positron emission tomography (PET) scanners. As for any sensor system or measurement device, there is always some amount of measurement noise present due to imperfections of the device or even due to physical limitations (such as statistical uncertainty). A lot of effort has been put into devices that produce low-noise images or image data. For example, images from digital detectors (very similar to CCDs in digital cameras) used for X-rays are post-processed to remove noise by means of flat field correction and dark field correction.
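As background, the flat field and dark field correction mentioned here can be sketched as follows. This is a minimal illustration of the standard technique, not part of the disclosure; the array values and function name are hypothetical:

```python
import numpy as np

def flat_dark_correct(raw, dark, flat):
    """Classic flat-field/dark-field correction for a digital detector.

    raw  -- image acquired with the object in the beam
    dark -- image acquired with no illumination (fixed-pattern offset)
    flat -- image acquired under uniform illumination (per-pixel gain)
    """
    gain = flat.astype(float) - dark
    gain[gain == 0] = np.finfo(float).eps  # guard against dead pixels
    return (raw.astype(float) - dark) / gain

# Two detector pixels with different gains and a common offset of 10,
# viewing the same uniform scene, map to the same corrected value:
raw = np.array([[30.0, 50.0]])
dark = np.array([[10.0, 10.0]])
flat = np.array([[20.0, 30.0]])
print(flat_dark_correct(raw, dark, flat))  # both pixels -> 2.0
```

The dark frame removes the offset noise and the flat frame normalizes the per-pixel gain, which is why both pixels agree after correction.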
`
Once the medical image is available, this image is to be viewed by a radiologist. Traditionally, light boxes were used in combination with film, but nowadays more and more display systems (first CRT-based and afterwards LCD-based) are used for this task. The introduction of those digital display systems not only improved workflow efficiency considerably but also opened new possibilities to improve medical diagnosis. For example, with display systems it becomes possible for the radiologist to perform image processing operations such as zoom, contrast enhancement, and computer assistance (computer-aided diagnosis or CAD). However, significant disadvantages of medical display systems cannot be neglected.
Contrary to extremely low noise film, display systems suffer from significant noise. Matrix based or matrix addressed displays are composed of individual image forming elements, called pixels (Picture Elements), that can be driven (or addressed) individually by proper driving electronics. The driving signals can switch a pixel to a first state, the on-state (luminance emitted, transmitted or reflected), or to a second state, the off-state (no luminance emitted, transmitted or reflected). For some displays, one stable intermediate state between the first and the second state is used—see EP 462 619, which describes an LCD. For still other displays, one or more intermediate states between the first and the second state (modulation of the amount of luminance emitted, transmitted or reflected) are used. A modification of these designs attempts to improve uniformity by using pixels made up of individually driven sub-pixel areas and to have most of the sub-pixels driven either in the on- or off-state—see EP 478 043, which also describes an LCD. One sub-pixel is driven to
`
`10
`
`15
`
`25
`
`30
`
`35
`
`45
`
`2
provide intermediate states. Because this sub-pixel only modulates the grey-scale values determined by selection of the binary-driven sub-pixels, the luminosity variation over the display is reduced.
A known image quality deficiency existing with these matrix based technologies is the unequal light-output response of the pixels that make up the matrix addressed display consisting of a multitude of such pixels. More specifically, identical electric drive signals to various pixels may lead to different light output of these pixels. Current state of the art displays have pixel arrays ranging from a few hundred to millions of pixels. The observed light-output differences between (even neighboring) pixels are as high as 30% (as obtained from the formula (maximum luminance - minimum luminance)/minimum luminance).
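By way of illustration only, that non-uniformity figure is the relative spread of luminance values of equally driven pixels (taking the magnitude of the spread); the sample luminances below are hypothetical:

```python
import numpy as np

def non_uniformity(luminances):
    """Relative luminance spread of a group of equally driven pixels:
    (maximum luminance - minimum luminance) / minimum luminance."""
    lo, hi = float(np.min(luminances)), float(np.max(luminances))
    return (hi - lo) / lo

# Two neighboring pixels at 100 and 130 cd/m^2 differ by 30%:
print(non_uniformity([100.0, 130.0]))  # 0.3
```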
These differences in behavior are caused by various production processes involved in the manufacturing of the displays, and/or by the physical construction of these displays, each of them being different depending on the type of technology of the electronic display under consideration. As an example, for liquid crystal displays (LCDs), the application of rubbing for the alignment of the liquid crystal (LC) molecules, and the color filters used, are large contributors to the different luminance behavior of various pixels. The problem of lack of uniformity of OLED displays is discussed in US 20020047568. Such lack of uniformity may arise from differences in the thin film transistors used to switch the pixel elements.
`
`
`
EP 0755042 (U.S. Pat. No. 5,708,451) describes a method and device for providing uniform luminosity of a field emission display (FED). Non-uniformities of luminance characteristics in a FED are compensated pixel by pixel. This is done by storing a matrix of correction values, one value for each pixel. These correction values are determined by a previously measured emission efficiency of the corresponding pixels. These correction values are used for correcting the level of the signal that drives the corresponding pixel.
It is a disadvantage of the method described in EP 0755042 that a linear approach is applied, i.e. that a same correction value is applied to a drive signal of a given pixel, independent of whether a high or a low luminance has to be provided. However, pixel luminance for different drive signals of a pixel depends on physical features of the pixel, and those physical features may not be the same for high or low luminance levels. Therefore, pixel non-uniformity is different at high or low levels of luminance, and if corrected by applying to a pixel drive signal a same correction value independent of whether the drive value corresponds to a high or to a low luminance level, non-uniformities in the luminance are still observed.
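The level-dependence argument can be illustrated numerically. In this hypothetical sketch, a pixel's gain error differs between a low and a high drive level; a single correction value calibrated at one level (a linear, EP 0755042-style approach) leaves a residual error at the other level, while per-level correction values remove it. All numbers are invented for illustration:

```python
import numpy as np

# Hypothetical pixel whose gain error depends on drive level:
# +10% at low drive, +30% at high drive.
drive_levels = np.array([0.2, 0.8])
target = 100.0 * drive_levels              # desired luminance at each level
actual = target * np.array([1.10, 1.30])   # what this pixel really emits

# Single correction value, calibrated at the high level only:
single_gain = target[1] / actual[1]
residual_single = actual * single_gain - target

# One correction value per characterized drive level:
per_level_gain = target / actual
residual_per_level = actual * per_level_gain - target

print(residual_single)     # low level remains visibly off
print(residual_per_level)  # both levels corrected
```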
`
`SUMMARY
`
A method of image processing according to one embodiment includes, for each of a plurality of pixels of a display, obtaining a measure of a light-output response of at least a portion of the pixel at each of a plurality of driving levels. The method includes modifying a map that is based on the obtained measures, to increase a visibility of a characteristic of a displayed image during a use of the display. The method also includes obtaining a display signal based on the modified map and an image signal.
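A minimal sketch of this kind of per-pixel, multi-level correction map follows. The function names, array shapes, and the use of linear interpolation are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def build_correction_map(drive_levels, measured, targets):
    """Invert each pixel's measured light-output response so that driving
    with the corrected level yields a target (uniform) light output.

    drive_levels -- shape (L,): characterization driving levels
    measured     -- shape (H, W, L): per-pixel measured light output
    targets      -- shape (L,): desired light output at each level
    Returns corrected drive levels, shape (H, W, L).
    """
    h, w, _ = measured.shape
    corrected = np.empty_like(measured, dtype=float)
    for y in range(h):
        for x in range(w):
            # Interpolate the inverse response: light output -> drive level.
            corrected[y, x] = np.interp(targets, measured[y, x], drive_levels)
    return corrected

def apply_map(image, drive_levels, corrected):
    """Map an image (in drive-level units) through the per-pixel correction."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.interp(image[y, x], drive_levels, corrected[y, x])
    return out

# Hypothetical 1x2 display: pixel (0,0) emits 50% more than pixel (0,1).
levels = np.array([0.0, 1.0])
measured = np.array([[[0.0, 1.2], [0.0, 0.8]]])  # shape (1, 2, 2)
targets = np.array([0.0, 0.6])                   # uniform goal per level
corr = build_correction_map(levels, measured, targets)
driven = apply_map(np.array([[1.0, 1.0]]), levels, corr)
print(driven)  # unequal drive levels that yield equal light output
```

With these assumed linear responses, both pixels end up emitting 0.6 despite their different gains.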
An image processing apparatus according to an embodiment includes an array of storage elements configured to store, for each of a plurality of pixels of a display, a measure of a light-output response of at least a portion of the pixel at each of a plurality of driving levels. The apparatus also includes an array of logic elements configured to modify a
`
`
`
`US 7,639,849 B2
`
`3
`map based on the stored measures and to obtain a display
`signal based on the modified map and an image signal. The
`array of logic elements is configured to modify the map to
`increase a visibility of a characteristic of a displayed image
`during a use of the display.
The scope of disclosed embodiments also includes a system for characterizing the luminance response of each individual pixel of a matrix display, and using this characterization to pre-correct the driving signals to that display in order to compensate for the expected (characterized) unequal luminance between different pixels.
`These and other characteristics, features and potential
`advantages of various disclosed embodiments will become
`apparent from the following detailed description, taken in
`conjunction with the accompanying drawings, which illus-
`trate, by way of example, principles of the invention. This
`description is given for the sake of example only, without
`limiting the scope of the invention. The reference figures
`quoted below refer to the attached drawings.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
FIG. 1 illustrates a matrix display having greyscale pixels with equal luminance.
FIG. 2 illustrates a matrix display having greyscale pixels with unequal luminance.
FIG. 3 illustrates a greyscale LCD based matrix display having unequal luminance in subpixels.
FIG. 4 illustrates a first embodiment of an image capturing device, the image capturing device comprising a flatbed scanner.
FIG. 5 illustrates a second embodiment of an image capturing device, the image capturing device comprising a CCD camera and a movement device.
FIG. 6 schematically illustrates an embodiment of an algorithm to identify matrix display pixel locations.
FIG. 7 shows an example of a luminance response curve of an individual pixel, the curve being constructed using eleven characterization points.
FIG. 8 is a block-schematic diagram of signal transformation according to an embodiment.
FIG. 9 illustrates the signal transformation of the diagram of FIG. 8.
FIG. 10 is a graph showing different examples of pixel response curves.
FIG. 11 illustrates an embodiment of a correction circuit.
`
`10
`
`15
`
`20
`
`25
`
`30
`
`35
`
`40
`
`45
`
`FIG. 12 shows an example of a contrast sensitivity func-
`tion.
`
`50
`
`FIGS. 13-20 show examples ofneighborhoods ofa pixel or
`subpixel.
FIG. 21 shows a flowchart of a method M100 according to an embodiment.
FIG. 22 shows a flow chart of an implementation M110 of method M100.
FIG. 23 shows a flow chart of an implementation M120 of method M100.
FIG. 24 shows a block diagram of an apparatus 100 according to an embodiment.
FIG. 25 shows a block diagram of a system 200 according to an embodiment.
FIG. 26 shows a block diagram of an implementation 102 of apparatus 100.
In the different figures, the same reference figures refer to the same or analogous elements.
`
`35
`
`60
`
`65
`
`4
`
`
`DETAILED DESCRIPTION
`
The scope of disclosed embodiments includes a system and a method for noise reduction in medical imaging, in particular for medical images being viewed on display systems. At least some embodiments may be applied to overcome one or more disadvantages of the prior art as mentioned above.
Various embodiments will be described with respect to particular embodiments and with reference to certain drawings, but the invention is not limited thereto but only by the claims. The drawings described are only schematic and are non-limiting. In the drawings, the size of some of the elements may be exaggerated and not drawn on scale for illustrative purposes. Where the term “comprising” is used in the present description and claims, it does not exclude other elements or steps. Unless expressly limited by its context, the term “obtaining” is used to indicate any of its ordinary meanings, such as sensing, measuring, recording, receiving (e.g. from a sensor or external device), and retrieving (e.g. from a storage element).
In the present description, the terms “horizontal” and “vertical” are used to provide a co-ordinate system and for ease of explanation only. They do not need to, but may, refer to an actual physical direction of the device.
Embodiments relate to a system and method for noise reduction, for example in real-time, in medical imaging, and in particular of the non-uniformity of pixel luminance behavior present in matrix addressed electronic display devices such as plasma displays, liquid crystal displays, LED and OLED displays used in projection or direct viewing concepts. Embodiments may be applied to emissive, transmissive, reflective and trans-reflective display technologies fulfilling the feature that each pixel is individually addressable.
A matrix addressed display comprises individual display elements. In the present description, the term “display elements” is to be understood to comprise any form of element which emits light or through which light is passed or from which light is reflected. A display element may therefore be an individually addressable element of an emissive, transmissive, reflective or trans-reflective display. Display elements may be pixels, e.g. in a greyscale LCD, as well as sub-pixels, a plurality of sub-pixels forming one pixel. For example, three sub-pixels with a different color, such as a red sub-pixel, a green sub-pixel and a blue sub-pixel, may together form one pixel in a color LCD. A subpixel arrangement may also be used in a greyscale (or “monochrome”) display. Whenever the word “pixel” is used, it is to be understood that the same may hold for sub-pixels, unless the contrary is explicitly mentioned.
`
Embodiments will be described with reference to flat panel displays, but the range of embodiments is not limited thereto. It is understood that a flat panel display does not have to be exactly flat but includes shaped or bent panels. A flat panel display differs from a display such as a cathode ray tube in that it comprises a matrix or array of “cells” or “pixels” each producing or controlling light over a small area. Arrays of this kind are called fixed format arrays. There is a relationship between the pixel of an image to be displayed and a cell of the display. Usually this is a one-to-one relationship. Each cell may be addressed and driven separately.
The range of embodiments includes embodiments that may be applied to flat panel displays that are active matrix devices, embodiments that may be applied to flat panel displays that are passive matrix devices, and embodiments that may be applied to both types of matrix device. The array of cells is usually in rows and columns, but the range of embodiments includes applications to any arrangement, e.g. polar or hexagonal. Although embodiments will mainly be described with respect to liquid crystal displays, the principles disclosed herein are more widely applicable to flat panel displays of different types, such as plasma displays, field emission displays, electroluminescent (EL) displays, organic light-emitting diode (OLED) displays, polymeric light-emitting diode (PLED) displays, etc. In particular, the range of embodiments includes application not only to displays having an array of light emitting elements but also to displays having arrays of light emitting devices, whereby each device is made up of a number of individual elements. The displays may be emissive, transmissive, reflective, or trans-reflective displays, and the light-output behavior may be caused by any optical process affecting visual light or electrical process indirectly defining an optical response of the system.
Further, the method of addressing and driving the pixel elements of an array is not considered a limitation on the application of these principles. Typically, each pixel element is addressed by means of wiring, but other methods are known and are useful with appropriate embodiments, e.g. plasma discharge addressing (as disclosed in U.S. Pat. No. 6,089,739) or CRT addressing.
A matrix addressed display 2 comprises individual pixels. These pixels 4 can take all kinds of shapes, e.g. they can take the forms of characters. The examples of matrix displays 2 given in FIG. 1 to FIG. 3 have rectangular or square pixels 4 arranged in rows and columns. FIG. 1 illustrates an image of a perfect display 2 having equal luminance response in all pixels 4 when equally driven. Every pixel 4 driven with the same signal renders the same luminance. In contrast, FIG. 2 and FIG. 3 illustrate different cases where the pixels 4 of the displays 2 are also driven by equal signals but where the pixels 4 render a different luminance, as can be seen by the different grey values in the different drawings. The spatial distribution of the luminance differences of the pixels 4 can be arbitrary. It is also found that with many technologies, this distribution changes as a function of the applied drive to the pixels. For a low drive signal leading to low luminance, the spatial distribution pattern can differ from the pattern at a higher driving signal.
The phenomenon of non-uniform light-output response of a plurality of pixels is disturbing in applications where image fidelity is required to be high, such as for example in medical applications, where luminance differences of about 1% may have a clinical significance. The unequal light-output response of the pixels superimposes an additional, disturbing and unwanted random image on the required or desired image, thus reducing the signal-to-noise ratio (SNR) of the resulting image.
Moreover, in the end the only goal is to increase the accuracy and quality of the medical diagnosis, and noise reduction is a means to accomplish this goal. Therefore, noise reduction does not necessarily have the same meaning as correction for non-uniformities. In other words, if the non-uniformities do not interfere with the medical diagnosis, then there is no advantage to correcting them. In some cases correcting those non-uniformities can even result in lower accuracy of diagnosis, as will be explained in detail later in this text. This also means that the noise reduction algorithms in the ideal case are matched with the type of medical image being looked at, as will be explained later.
In order to be able to correct matrix display pixel non-uniformities, it is desirable that the light-output of each individual pixel is known, and thus has been detected.
The range of embodiments includes a characterizing device such as a vision measurement system, a set-up for
`10
`
`15
`
`20
`
`25
`
`30
`
`35
`
`40
`
`45
`
`50
`
`35
`
`60
`
`65
`
`6
automated, electronic vision of the individual pixels of the matrix addressed display, i.e. for measuring the light-output, e.g. luminance, emitted or reflected (depending on the type of display) by individual pixels 4, using a vision measurement set-up. The vision measurement system comprises an image capturing device 6, 12 and possibly a movement device 5 for moving the image capturing device 6, 12 and/or the display 2 with respect to each other. Two embodiments are given as an example, although other electronic vision implementations may be possible reaching the same result: an electronic image of the pixels.
According to a first embodiment, as represented in FIG. 4, the matrix addressed display 2 is placed with its light emitting side against an image capturing device, for example is placed face down on a flat bed scanner 6. The flat bed scanner 6 may be a suitably modified document or film scanner. The spatial resolution of the scanner 6 is such as to allow for adequate vision of the individual pixels 4 of the display 2 under test. The sensor 8 and image processing hardware of the flat bed scanner 6 also have enough luminance sensitivity and resolution in order to give a precise quantization of the luminance emitted by the pixels 4. For an emissive display 2, the light source 10 or lamp of the scanner 6 is switched off: the luminance measured is emitted by the display 2 itself. For a reflective type of display 2, the light source 10 or lamp of the scanner 6 is switched on: the light emitted by the display 2 is light from the scanner's light source 10, modulated by the reflective properties of the display 2, and reflected, and is subsequently measured by the sensor 8 of the scanner 6.
The output file of the image capturing device (in the embodiment described, scanner 6) is an electronic image file giving a detailed picture of the pixels 4 of the complete electronic display 2.
According to a second embodiment of the vision measurement system, as illustrated in FIG. 5, an image capturing device, such as e.g. a high resolution CCD camera 12, is used to take a picture of the pixels 4 of the display 2. The resolution of the CCD camera 12 is such as to allow adequate definition of the individual pixels 4 of the display 2 to be characterized. A typical LCD panel may have a diagonal dimension of from 12 or 14 to 19 or 21 inches or more. In the current state of the art of CCD cameras, it may not be possible to image large matrix displays 2 at once. As an example, high resolution electronic displays 2 with an image diagonal of more than 20" may require that the CCD camera 12 and the display 2 are moved with respect to each other, e.g. the CCD camera 12 is scanned (in X-Y position) over the image surface of the display 2, or vice versa: the display 2 is scanned over the sensor area of the CCD camera 12, in order to take several pictures of different parts of the display area 2. The pictures obtained in this way are thereafter preferably stitched to obtain one image of the complete active image surface of the display 2.
Again, the resulting electronic image file, i.e. the output file of the image capturing device, which is in the embodiment described a CCD camera 12, gives a detailed picture of the pixels 4 of the display 2 that needs to be characterized. An example of an image 13 of the pixels 4 of a matrix display 2 is visualized in FIG. 6a.
`
Once an image 13 of the pixels 4 of the display 2 has been obtained, a process is run to extract pixel characterization data from the electronic image 13 obtained from the image capturing device 6, 12.
In the image 13 obtained, algorithms will be used to assign one luminance value to each pixel 4. One embodiment of such an algorithm includes two tasks. In a first task, the actual
`
`
location of the matrix display pixels 4 is identified and related to the pixels of the electronic image, for example of the CCD or scanner image.
In matrix displays 2, individual pixels 4 can be separated by a black matrix raster 14 that does not emit light. Therefore, in the image 13, a black raster 15 can be distinguished. This characteristic can be used in the algorithms to clearly separate and distinguish the matrix display pixels 4. The luminance distribution on an imaginary line in a first direction, e.g. vertical line 16 in a Y-direction, and across an imaginary line in a second direction, e.g. horizontal line 18 in an X-direction, through a pixel 4 can be extracted using imaging software, as illustrated in FIG. 6a to FIG. 6c. Methods of extracting features from images are well known, e.g. as described in “Intelligent Vision Systems for Industry”, B. G. Batchelor and P. F. Whelan, Springer-Verlag, 1997; “Traitement de l'Image sur Micro-ordinateur”, Toumazet, Sybex Press, 1987; “Computer Vision”, Reinhard Klette and Karsten Schlüns, Springer Singapore, 1998; “Image Processing: Analysis and Machine Vision”, Milan Sonka, Vaclav Hlavac and Roger Boyle, 1998.
Suppose that the image generated by the matrix display 2 when the image was acquired by the image capturing device 6, 12 was set on all pixels 4 having a first value, e.g. all white pixels 4 or all pixels 4 fully on. Then the luminance distribution across vertical line 16 and horizontal line 18, in the image 13 acquired by the image capturing device 6, 12, shows peaks 19 and valleys 21 that correspond with the actual location of the matrix display pixels 4, as shown in FIG. 6b and FIG. 6c respectively. As noted before, the spatial resolution of the image capturing device, e.g. the scanner 6 or the CCD camera 12, needs to be high enough, i.e. higher than the resolution of the matrix display 2, e.g. ten times higher than the resolution of the matrix display 2 (10x over-sampling). Because of the over-sampling, it will be possible to express the horizontal and vertical distance of the matrix display pixels 4 precisely in units of pixels of the image capturing device 6, 12 (not necessarily integer numbers).
A threshold luminance level 20 is constructed that is located at a suitable value between the maximum luminance level measured at the peaks 19 and the minimum luminance level measured at the valleys 21 across the vertical lines 16 and the horizontal lines 18, e.g. approximately in the middle. All pixels of the image capturing device 6, 12 with luminance below the threshold level 20 indicate the location of the black raster 15 in the image, and thus of a corresponding black matrix raster 14 in the display 2. These locations are called in the present description “black matrix locations” 22. The most robust algorithm will consider a pixel location of the image capturing device 6, 12 which is located in the middle between two black matrix locations 22 as the center of a matrix display pixel 4. Such locations are called “matrix pixel center locations” 24. Depending on the amount of over-sampling, an amount of image capturing device pixels located around the matrix pixel center locations 24, across vertical line 16 and horizontal line 18, can be expected to represent the luminance of one matrix display pixel 4. In FIG. 6a, these image capturing device pixels, e.g. CCD pixels, are located in the hatched area 26 and are indicated with numbers 1 to 7 in FIG. 6b. These CCD pixels are called “matrix pixel locators” 28 in the following. The matrix pixel locators 28 are defined for one luminance level of the acquired image 13. To make the influence of noise minimal, the luminance level is preferably maximized (white flat field when acquiring the image).
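The thresholding procedure just described can be sketched for one scan line of the captured image; the function name and the run-midpoint convention are illustrative assumptions:

```python
import numpy as np

def matrix_pixel_centers(profile):
    """Locate display-pixel centers along one oversampled scan line.

    Thresholds the luminance profile midway between its extrema; samples
    below threshold are black-matrix locations, and each pixel center is
    taken as the midpoint of a bright run (fractional positions allowed,
    since the over-sampling factor need not be an integer).
    """
    profile = np.asarray(profile, dtype=float)
    threshold = (profile.max() + profile.min()) / 2.0
    dark = profile < threshold  # black matrix locations
    centers, run_start = [], None
    for i, is_dark in enumerate(dark):
        if not is_dark and run_start is None:
            run_start = i          # a bright (pixel) run begins
        elif is_dark and run_start is not None:
            centers.append((run_start + i - 1) / 2.0)  # run ended
            run_start = None
    if run_start is not None:      # bright run reaches the profile's end
        centers.append((run_start + len(dark) - 1) / 2.0)
    return centers

# Two display pixels separated by black matrix, imaged with over-sampling:
profile = [0, 0, 9, 9, 9, 0, 0, 8, 8, 8, 0, 0]
print(matrix_pixel_centers(profile))  # [3.0, 8.0]
```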
Other algorithms to determine the exact location of the matrix display pixels 4 are included within the scope of the present invention. By means of example, a second embodiment, which describes an alternative using markers, is discussed below.
A limited number of marker pixels (i.e. matrix display pixels 4 with a driving signal which is different from the driving signal of the other matrix pixels 4 of which an electronic image is being taken), for instance four, is used to allow precise localization of the matrix display pixels 4. For example, four matrix display pixels 4 ordered in a rectangular shape can be driven with a higher driving level than the other matrix display pixels 4. When taking an electronic image 13 of this display area, it is easy to determine precisely the location of those four marker pixels 4 in the electronic image 13. This can be done for instance by finding the four areas in the electronic image 13 that have the highest local luminance value. The centre of each marker pixel can then be defined as the centre of the local area with higher luminance. Once those four marker pixels have been determined, interpolation can be used to determine the location of the other matrix display pixels present in the electronic image. This can be done easily since the location of the other matrix display pixels is known relative to the marker pixels a priori (defined by the matrix display pixel structure). Note that more advanced techniques can be used (for instance, correction for lens distortion, e.g. of the imaging device) to calculate an exact location of the pixels relative to each other. Other test images or patterns may also be used to drive the display under test during characterization.
A potential advantage of this algorithm compared to one according to the previous embodiment is that a lower degree of over-sampling may be sufficient. For example, such an algorithm may be implemented without including a task of isolating the black matrix in the electronic image. Therefore, lower resolution image capturing devices 6, 12 can be used. The algorithm can also be used for matrix displays where no black matrix structure is present, or for matrix displays that also have black matrix between sub-pixels or parts of sub-pixels, such as a color pixel for example.
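The marker-based interpolation can be sketched as a bilinear mapping between the four marker centers found in the captured image and their known positions in the display grid. The coordinate conventions and function name are illustrative assumptions; lens-distortion correction is omitted:

```python
import numpy as np

def grid_from_markers(marker_img_xy, marker_grid_xy, grid_x, grid_y):
    """Interpolate the capture-image coordinates of any display pixel from
    four marker pixels arranged in a rectangle.

    marker_img_xy  -- (4, 2) marker centers found in the captured image,
                      ordered top-left, top-right, bottom-left, bottom-right
    marker_grid_xy -- (4, 2) the same markers' (column, row) on the display
    grid_x, grid_y -- display pixel (column, row) to locate
    """
    tl, tr, bl, br = np.asarray(marker_img_xy, dtype=float)
    g_tl, g_tr, g_bl, _ = np.asarray(marker_grid_xy, dtype=float)
    u = (grid_x - g_tl[0]) / (g_tr[0] - g_tl[0])  # horizontal fraction
    v = (grid_y - g_tl[1]) / (g_bl[1] - g_tl[1])  # vertical fraction
    top = tl + u * (tr - tl)       # interpolate along the top edge
    bottom = bl + u * (br - bl)    # interpolate along the bottom edge
    return top + v * (bottom - top)

# Markers 100 display columns / 50 rows apart, found at these image coords:
img_xy = [(10, 10), (110, 10), (10, 60), (110, 60)]
grid_xy = [(0, 0), (100, 0), (0, 50), (100, 50)]
print(grid_from_markers(img_xy, grid_xy, 50, 25))  # center of the rectangle
```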
Instead of (or in addition to) luminance, color can also be measured. The vision set-up may then be slightly different, to comprise a color measurement device, such as a colorimetric camera or a scanning spectrograph for example. The underlying principle, however, is the same: a location of a pixel and its color are determined.
`
In a second task of the algorithm to assign one light-output value to each pixel 4, after having determined the location of each individual matrix pixel 4, its light-output is calculated. This is explained for a luminance measurement. The luminance values of the matrix pixel locators 28 across the X-direction and Y-direction that describe one pixel location are averaged to one luminance value using a suitable calculation method, e.g. the standard formula for calculation of a mean. As a result, every pixel 4 of the matrix display 2 that is to be characterized is assigned a pixel value (a representative or averaged luminance value). Other more complex formulae are included within the scope of the present invention: e.g. a harmonic mean can be used, or a number of pixel values from the image 13 can be rejected from the mean formula as outliers or noisy image capturing device pixels. Thus the measured values may be filtered to remove noise of the imaging device. A similar method may be applied for assigning a color value to each individual matrix pixel.
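The outlier-rejecting average mentioned above can be sketched as a trimmed mean; the function name and the trim count are illustrative choices, not prescribed by the disclosure:

```python
import numpy as np

def pixel_value(locator_luminances, trim=1):
    """Collapse the capture-device samples covering one display pixel to a
    single representative luminance, discarding the `trim` lowest and
    highest samples as outliers or noisy sensor pixels before averaging."""
    v = np.sort(np.asarray(locator_luminances, dtype=float))
    if trim:
        v = v[trim:-trim]
    return float(v.mean())

# Seven locator samples for one pixel, with one hot and one dead sensor pixel:
print(pixel_value([100, 101, 99, 250, 100, 100, 5]))  # 100.0
```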
Note that it is also possible to use techniques known as “super-resolution” to create a high-resolution image of the display surface (and display pixels) with a lower-resolution capture device. This technique combines multiple lower-resolution images to generate a higher-resolution resulting image. In some implementations the super-resolution technique makes use of the fact that the object being imaged is slightly vibrating, so that the relative orientation between the object to be imaged and the capture device is changing. In other implementations this vibration is actually enforced by means of mechanical devices. Also note that techniques exist to avoid problems with moiré effects by combining images of the object to be imaged that are slightly shifted relative to each other. Such a technique may be implemented to allow the use of one or more lower-resolution imaging devices without the risk of problems with moiré effects.
`
The response function may be represented by a number of suitable means for storage and retrieval, e.g. in the form of an analytical function, in the form of a look-up table, or in the form of a curve. An example of such a luminance response curve 30 is illustrated in FIG. 7. The luminance response function can be constructed with as many points as desired or required. The curve 30 in the example of FIG. 7 is constructed using eleven characterization points 32, which result from the display and acquisition of images, and the calculation of luminance levels for a given pixel 4. Interpolation between those characterization points 32 can then be carried out in order to obtain a light-output response of a pixel 4 corresponding to a driving level which is different from that of any of the characterization points 32. Different interpolation methods exist, and are within the skills of a person skilled in the art.
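Interpolating between characterization points can be sketched as follows; the eleven drive levels, the gamma-like response, and the choice of linear interpolation are assumptions for illustration only:

```python
import numpy as np

# Hypothetical characterization: eleven drive levels 0%, 10%, ..., 100%
# and the luminance measured for one pixel at each level.
drive = np.linspace(0.0, 1.0, 11)
luminance = 400.0 * drive**2.2  # assumed gamma-like response, in cd/m^2

def response(level):
    """Light output at an arbitrary drive level, linearly interpolated
    between the stored characterization points."""
    return float(np.interp(level, drive, luminance))

print(response(0.5))   # exactly at a characterization point
print(response(0.55))  # interpolated between two neighboring points
```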
`
`10
`array). Thus nine million or more functions may be obtained,
`each defined by a set of e.g. sixteen values (light-output in
`functionof drive level).
`A pixel’s response function is the result ofvarious physical
`processes, each of which may define the luminance genera-
`tion process to a certain extent. A few of the processes and
`parameters that influence an individual pixel’s light-output
`response and can causethe responseto bedifferent frompixel
`to pixel are set forth in the following non-exhaustive list:
`the cell gap (in case of LCD displays),
`the driver integrated circuit (IC),
`electronic circuitry preceding the driver IC,
`It will be well understood by people skilled in the art that
`LCD material alignment defined by rubbing,
`the light-output values, i.e. luminance values and/or color
`the backlight intensity (drive voltage or current),
`values, of the individual pixels 4 can be calculated in any of
`temperature,
`the described waysor any other wayforvarious test images or
`spatial (e.g. over the area of the display 2) non-uniformity
`light-outputs, 1.e. for a plurality of test images in which the
`of any of the above mentioned parameters or processes.
`pixels are driven by different driving levels. Supposing that,
`The light-output response ofthe individual pixels 4 may be
`in order to obtain a test image, all pixels are driven with the
`assumed to completely describe that pixel’s light-output
`same information,1.c. with the same drive signal or the same
`20
`behavior as a function of the applied drive signal. This behav-
`driving level, then the displayed image representsaflat field
`ior is individual and may differ from pixel to pixel.
`with light-output ofthe pixels ranging from 0% to 100%(e.g.
`A next task of an algorithm according to an embodiment
`black to white) depending on the drive signal. For each per-
`defines a drive function, e.g. a drive curve, which ensuresthat
`centage of drive between 0% (zero drive, black field) and
`a predefined light-output response (fromelectrical input sig-
`100%(full drive or white field) a complete image 13 of the
`nal to light-output output