CCD and CMOS Imaging Array Technologies:
Technology Review

Stuart A. Taylor

Technical Report EPC-1998-106

Copyright © Xerox Limited 1998

Xerox Research Centre Europe
Cambridge Laboratory
61 Regent Street
Cambridge CB2 1AB

Tel: +44 1223 341500
Fax: +44 1223 341510

CCD and CMOS Imaging Array Technologies
- Technology Review -

Stuart A Taylor
Xerox Research Centre Europe
Cambridge, UK.
staylor@xrce.xerox.com

Abstract
This paper provides an overview of both CCD (charge-coupled device) and CMOS (complementary metal oxide semiconductor) imaging array technologies. CCDs have been in existence for nearly 30 years and the technology has matured to the point where very large, consistent (low numbers of defects) devices can now be produced. However, CCDs suffer from a number of drawbacks, including cost and the need for complex power supplies and support electronics. CMOS imaging arrays, on the other hand, are still in their infancy, but are set to develop rapidly and offer a number of potential benefits over CCDs. This review provides an overview of both CCD and CMOS imaging technology, and includes explanations of how images are captured and read out from the imaging arrays. Also covered are issues such as performance characteristics, cost considerations and the future of imaging arrays. This review does not provide details of colour sensors, colour filter arrays, colour interpolation, etc., as these will be the subject of a separate report.

Introduction to CCDs
The Charge-Coupled Device (CCD) was invented in 1970 by Willard Boyle and George Smith at Bell Laboratories, USA [Sharma97]. The idea originated from research into magnetic bubble memories, and as with many great inventions, Smith is quoted as saying “[we] invented charge-coupled devices in an hour” [Lucent96]. In the intervening twenty-eight years, CCDs have found their way into a huge range of products including fax machines, photocopiers, cameras, scanners and even children’s toys.

CCDs consist of thousands (or millions) of light sensitive cells or pixels that are capable of producing an electrical charge proportional to the amount of light they receive. Typically, the pixels are arranged in either a single line (linear array CCDs) or in a two-dimensional grid (area array CCDs). The particular application will, in general, dictate the type of CCD that is used. Flatbed scanners, for example, use linear array CCDs and, in this case, it is necessary to progressively move the CCD over the object being imaged (or vice versa) while capturing multiple one-dimensional images in order to build up the final two-dimensional image. Digital cameras, on the other hand, normally use area array CCDs, thus allowing the full two-dimensional image to be captured within a single exposure.

One of the fundamental parameters of a CCD is resolution, which is equal to the total number of pixels that make up the light sensitive area of the device. One of the first area array CCDs, manufactured by Fairchild in 1974, had a resolution of 100x100 [Oregon97]. Today, the largest commercially available device is approximately 9000x7000, or roughly 63 million pixels [Pixelv97]. Other parameters that characterise CCDs will be discussed in more detail later.

CCDs are in essence integrated circuits (ICs) and are hence rather like computer chips. However, to allow light to fall on the silicon chip (or die), a small glass window is inserted in front of the chip. Conventional ICs are usually encapsulated in a black plastic body, primarily to provide mechanical strength, but this also shields them from light, which can affect their normal operation. CCDs are manufactured using metal-oxide-semiconductor (MOS) fabrication techniques, and each pixel can be thought of as a MOS capacitor that converts photons (light) into electrical charge, and stores the charge prior to readout.

Examples of CCD Arrays
Before moving on to explain the operation of CCDs in more detail, it is useful to illustrate what CCDs actually look like. The following table shows three different devices for comparison, along with their respective pixel counts. The transfer method will be explained in a subsequent section.

Manufacturer   Type           Pixel count   Transfer method
Kodak          area array     4096x4096     full frame transfer
Dalsa          area array     256x256       frame transfer
Kodak          linear array   5000          full frame transfer

Table 1 – Examples of CCD Arrays [Kodak98, Dalsa98]

CCD Fundamentals
As mentioned above, each pixel that makes up a CCD is essentially a MOS capacitor, of which there are two types: surface channel and buried channel. The two differ only slightly in their fabrication; however, buried channel capacitors offer major advantages, and because of this, nearly all CCDs manufactured today use this preferred structure.

A schematic cross section of a buried channel capacitor is shown in Figure 1 [Sharma97, SITe94]. The device is typically built on a p-type silicon substrate (approx. 300 µm thick) with an n-type layer (approx. 1 µm thick) formed on the surface. Next, a thin silicon dioxide layer (approx. 0.1 µm thick) is grown, followed by a metal electrode (or gate). The application of a
positive voltage to the electrode reverse biases the p-n junction and this causes a potential well to form in the n-type silicon directly below the electrode. Incident light generates electron-hole pairs in the depletion region, and due to the applied voltage, the electrons migrate upwards into the n-type silicon layer and are trapped in the potential well [Muncaster85]. The build up of negative charge is thus directly proportional to the level of incident light.

Once the exposure time (also known as the integration time) has elapsed, the charge trapped in the potential well is transferred out of the CCD before being converted to an equivalent digital value.

[Figure: schematic cross section showing the +Ve metal electrode, oxide layer, n-type silicon, depletion region with a photo-generated electron-hole pair, p-type silicon substrate and earth connection.]
Figure 1 - Buried Channel Capacitor CCD Pixel

An illustration of a practical buried channel capacitor is shown in Figure 2 [SITe94]. This diagram also shows the channel stops, which are usually created by heavily doping these regions to form p-type semiconductor. In addition, a thick layer of oxide, the field oxide, is applied over these regions. The purpose of the channel stops is to minimise the diffusion of electrons from one pixel to another.

Figure 2 – Practical Buried Channel Capacitor (From [SITe94])

The Charge Readout Process
The charge readout process takes place in two stages. The first involves moving the pixel charges across the surface of the array. The second involves reading out the pixel charges into a register prior to being digitised.

The charge transfer process can be explained as follows. Each pixel is divided into a number of distinct areas known as phases [Gallagher]. Three-phase sensors tend to be the most common form, due principally to high yields and high process tolerances, although one, two and four phase formats do exist [SITe94]. Taking a three-phase sensor as an example (see Figure 3), during the integration period, phases 1 and 2 will be in charge holding mode, and phase 3 will be in charge blocking mode (a). At the end of the integration period, when it is time to transfer the captured image out of the array, the following process takes place. Phase 1 is placed in charge blocking mode, which has the effect of transferring the total charge of phases 1 and 2 into only phase 2 (b). Phase 3 is then placed in charge holding mode, which allows the charge in phase 2 to distribute itself evenly between phases 2 and 3 (c). Next, phase 2 is placed in charge blocking mode, forcing the charge into phase 3 (d). This process repeats until, as illustrated by (g), the charge from pixel two has been moved into pixel one. An alternative representation [Oregon97] of the charge readout process is illustrated in Figure 4.

[Figure: pixels 1 and 2, each divided into phases Ø1, Ø2 and Ø3, shown at transfer steps (a) to (g).]
Figure 3 - Charge Transfer

Figure 4 - Charge Transfer; Alternative Representation (From [Oregon97])

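The phase-clocking sequence above can be sketched as a small simulation. The Python code below is an illustrative toy model rather than a description of any of the referenced devices: each phase is either holding or blocking, charge trapped in a newly blocked phase is pushed into an adjacent holding phase, and charge then spreads evenly within each contiguous run of holding phases. One full clock cycle (steps (b) to (g)) moves a packet one whole pixel towards the readout register.

# Toy model of three-phase charge transfer (illustrative only). Pixel k
# occupies phases (phi3, phi2, phi1) at flat indices 3k..3k+2, and the readout
# register is assumed to sit beyond pixel 0.

def clock(charge, holding):
    """Push charge out of blocking phases into an adjacent holding phase, then
    share it equally within each contiguous run of holding phases."""
    c = list(charge)
    for i, h in enumerate(holding):
        if not h and c[i]:
            if i > 0 and holding[i - 1]:
                c[i - 1] += c[i]
            elif i + 1 < len(c) and holding[i + 1]:
                c[i + 1] += c[i]
            c[i] = 0.0
    out = [0.0] * len(c)
    i = 0
    while i < len(c):
        if holding[i]:
            j = i
            while j < len(c) and holding[j]:
                j += 1
            share = sum(c[i:j]) / (j - i)
            for k in range(i, j):
                out[k] = share
            i = j
        else:
            i += 1
    return out

n_pixels = 3
PHI3, PHI2, PHI1 = 0, 1, 2
state = {PHI1: True, PHI2: True, PHI3: False}        # state (a): integrating

def holding_mask():
    return [state[j] for _ in range(n_pixels) for j in (PHI3, PHI2, PHI1)]

def per_pixel(c):
    return [round(sum(c[3 * k:3 * k + 3])) for k in range(n_pixels)]

# One packet of 600 electrons collected in pixel 1.
charge = [0.0] * (3 * n_pixels)
charge[3 * 1 + 2] = 600.0
charge = clock(charge, holding_mask())
print("before:", per_pixel(charge))                  # [0, 600, 0]

# One full clock cycle, steps (b) to (g) in the text.
for phase, mode in [(PHI1, False), (PHI3, True), (PHI2, False),
                    (PHI1, True), (PHI3, False), (PHI2, True)]:
    state[phase] = mode
    charge = clock(charge, holding_mask())

print("after: ", per_pixel(charge))                  # [600, 0, 0]
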
The second stage of the readout process occurs each time the rows of pixels have been transferred up by one complete row. Located adjacent to the top row of pixels is one additional row of pixels called the readout register. As the charge from each row of pixels is moved up one row, the charge from the top row will be moved into the readout register. At this point, the charge values in the readout register are transferred horizontally into the readout stage, from which they are then sent to an analogue-to-digital converter (ADC) before being stored in memory. This process is illustrated in Figure 5.

[Figure: the readout register and readout stage, with arrows showing (1) row transfer, (2) readout register transfer and (3) transfer into the readout stage.]
Figure 5 - Charge Readout

The transfer and readout processes described above apply to both linear array and area array CCDs. In the former case, a single transfer followed by the readout stage is all that is necessary to transfer the image out of the array. For area array CCDs, the transfer and readout stages must, in general, be repeated for each row of pixels, until the complete image has been read out. This generic description of the readout process for area arrays will vary, however, depending on the overall architecture of the device.

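To make the row-by-row sequence concrete, the following sketch (a simplification, not code from any cited source) reads out a small area array by shifting rows into the readout register one at a time and then clocking the register out serially through the readout stage and an assumed ideal ADC.

import random

def read_out_area_array(image, adc_bits=8, full_scale=1.0):
    """Toy full-frame readout of an area array: each loop iteration shifts the
    rows one step so the row nearest the readout register enters it, then the
    register is clocked out serially through the readout stage and an (assumed
    ideal) analogue-to-digital converter."""
    levels = 2 ** adc_bits - 1
    digitised = []
    for row in image:                      # row transfer, nearest row first
        readout_register = list(row)       # charge moved into the readout register
        line = []
        for charge in readout_register:    # serial transfer to the readout stage
            code = min(levels, max(0, round(charge / full_scale * levels)))
            line.append(code)
        digitised.append(line)
    return digitised

# Example: a 4x5 frame of charges between 0 (dark) and 1 (full well).
random.seed(0)
frame = [[random.random() for _ in range(5)] for _ in range(4)]
for line in read_out_area_array(frame):
    print(line)
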
Area Array CCD Architectures
The architecture of area array CCDs generally falls into one of four categories: full frame, frame transfer, split frame transfer and interline transfer. These are illustrated in Figure 6.

[Figure: four schematics. The full frame sensor has an imaging section feeding a readout register. The frame transfer sensor has an imaging section, a storage section and a readout register. The split frame transfer sensor has an imaging section with upper and lower storage sections, each with its own readout register. The interline transfer sensor has columns of imaging CCD elements interleaved with light shielded vertical CCDs feeding a readout register.]
Figure 6 - Area Array CCD Architectures

Full Frame – Here, the image is transferred directly from the imaging region of the sensor to the readout register. However, since only a single row can be transferred to the readout register at a time (the contents of which must then be transferred to the readout stage), the rest of the imaging pixels must wait. During this period, pixels that have not yet been read out of the array are still capable of recording image information. The problem with this is that the imaged information will now be offset from the original scene recorded, and this can lead to smearing and blurring of the final image. Another problem is encountered in high-speed applications. In these situations, the integration time will be only a small percentage of the total time required to record and read out the resultant image from the array. The effect of this is that the image will have a lower contrast, since very little time is spent actually recording the image. One solution to these problems is to use a mechanical shutter, whereby the array is shielded from light once the image has been captured.

Frame Transfer – A frame transfer device utilises a light shielded storage section of at least the same size as the imaging section of the array. Following the integration period, the captured image is quickly transferred to the adjacent storage section. Whilst the next scene is being captured, the previous scene, now held in the storage section, is transferred to the readout register as described above. Using this technique effectively allows both the image capture and readout processes to run in parallel. This greatly increases the time available for integration whilst still maintaining a sufficiently high frame rate. However, since the captured image still has to be transferred across the surface of the imaging section of the array, a mechanical shutter may still be required to avoid the problems described above.

Split Frame Transfer – This type of device is essentially the same as a frame transfer device, except that the storage section is split in half, with one half located above and the other below the imaging section. The advantage of this is that it allows the image to be transferred out of the imaging section in half the time required for a frame transfer device.

Interline Transfer – An interline transfer device has columns of photosensitive elements separated by columns of light shielded registers. At the end of the integration period, all of the photosensitive elements simultaneously transfer their accumulated charge to the adjacent storage registers. The light shielded registers then transfer the charge to the readout register as previously described, during which time the imaging elements begin capturing the next scene. Although not requiring a mechanical shutter, this type of device has the significant drawback that a large proportion (typically 40%) of the imaging section is not sensitive to light. To minimise the effects of this, microlenses are often placed directly over the imaging section of the array. The lenses cover both the light sensitive and light shielded portions of each array element, and have the effect of focusing the incoming light onto just the light sensitive areas of each element.

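A rough way to compare the four architectures is to estimate how long a captured image spends being shifted across, or out of, the light-sensitive area, since this is what drives the smearing problem and the need for a mechanical shutter. The figures used below (array size, row shift time and pixel clock) are assumptions chosen only to show the relative scale; they are not taken from any of the cited devices.

# Back-of-envelope comparison of how long an image is exposed to smearing
# under each architecture. All timing values are illustrative assumptions.

ROWS, COLS = 1024, 1024
ROW_SHIFT = 5e-6          # assumed time to shift every row up by one (s)
PIXEL_CLOCK = 20e6        # assumed serial readout rate of the register (pixels/s)

serial_row = COLS / PIXEL_CLOCK               # time to empty the readout register

# Full frame: every row shift must wait for the register to be emptied, and the
# imaging area stays exposed throughout (hence the mechanical shutter).
full_frame = ROWS * (ROW_SHIFT + serial_row)

# Frame transfer: only the fast shift into the shielded storage section happens
# over the exposed imaging area.
frame_transfer_smear_window = ROWS * ROW_SHIFT

# Split frame transfer: the two halves move into their storage sections at the
# same time, halving the shift across the imaging area.
split_frame_smear_window = (ROWS / 2) * ROW_SHIFT

# Interline transfer: one simultaneous sideways shift into the shielded columns.
interline_smear_window = ROW_SHIFT

print(f"full frame readout time     : {full_frame * 1e3:.1f} ms")
print(f"frame transfer smear window : {frame_transfer_smear_window * 1e3:.2f} ms")
print(f"split frame smear window    : {split_frame_smear_window * 1e3:.2f} ms")
print(f"interline smear window      : {interline_smear_window * 1e6:.1f} us")
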
Interlaced and Progressive Scan
The image collection mode of an area array sensor can be either interlaced or progressive scan (non-interlaced). The interlaced technique is used by the major broadcast standards PAL and NTSC and is done to reduce the bandwidth of the image for transmission. In this mode, the frame is divided into two fields: an odd field consisting of all the odd numbered rows, and an even field consisting of all the even numbered rows. Half of the frame is recorded by the odd field at time T1, and the other half of the frame is recorded by the even field at time T2. This means that it takes two cycles to build up a complete image, so for PAL, which operates at 50Hz, images are captured at a rate of 25 per second. A major drawback of the interlaced method is encountered when the subject is moving. Since the two fields are separated in time
by 20ms, the position of a moving object will have changed between the two fields, resulting in blur when the two fields are combined to produce the final image. Figure 7 shows a picture of a moving hand which illustrates this problem (note that the effect has been emphasised in this image to more clearly demonstrate the problem). A progressive scan sensor reads out the complete frame in one go, thus making it possible to capture images of moving objects without the blurring effects associated with interlaced mode [Pulnix97].

Figure 7 - Moving Hand Captured With an Interlaced Camera

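The field timing can be illustrated with a short sketch. The scene model below (a bright bar moving horizontally) and the chosen speed are invented purely for illustration; the point is that the odd and even fields sample the scene 20 ms apart, so a moving object appears displaced on alternate rows of the assembled frame.

# PAL-style interlaced capture: a frame is assembled from an odd field and an
# even field recorded 20 ms apart, so anything that moves between the two
# exposures ends up displaced on alternate rows of the final frame.

FIELD_RATE_HZ = 50                      # PAL: 50 fields per second
FIELD_PERIOD = 1 / FIELD_RATE_HZ        # 20 ms between fields
ROWS, COLS = 8, 16

def scene_at(t, speed_px_per_s=200):
    """A 3-pixel-wide bright bar whose left edge moves with time."""
    left = int(t * speed_px_per_s) % COLS
    return [[1 if left <= c < left + 3 else 0 for c in range(COLS)]
            for _ in range(ROWS)]

odd_field = scene_at(0.0)               # rows 1, 3, 5, ... recorded at T1
even_field = scene_at(FIELD_PERIOD)     # rows 0, 2, 4, ... recorded at T2

frame = [even_field[r] if r % 2 == 0 else odd_field[r] for r in range(ROWS)]
for row in frame:
    print("".join("#" if v else "." for v in row))

# Two fields per frame, so a 50 Hz field rate gives 25 frames per second.
print("frames per second:", FIELD_RATE_HZ / 2)
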
CCD Performance Characteristics
In an ideal world, CCDs would exhibit perfect scene-to-digital image conversion. In reality, however, they suffer from a number of problems and limitations [Oregon97, SITe94], some of which are discussed in this section.

Fill factor – The fill factor is the percentage of each pixel’s area that is sensitive to light. Ideally, the fill factor should be 100%; however, in reality, it is often less than this. Features to control blooming (see below) and, for CMOS sensors, additional control electronics, occupy space within each pixel, and these areas are insensitive to light. The net effect of reducing the fill factor is to lower the sensitivity of the array.

Dark current noise – Dark current can be defined as the unwanted charge that accumulates in the CCD pixels due to natural thermal processes that occur while the device operates at any temperature above absolute zero. At any temperature, electron-hole pairs are randomly generated and recombine within the silicon and at the silicon-silicon dioxide interface. Depending on where they are generated, some of these electrons will be collected in the CCD wells and will appear as unwanted signal charges (i.e. noise) at the output.

The principal sources of dark current, in order of importance, are: generation at the silicon-silicon dioxide interface, electrons generated in the CCD depletion region, and electrons that diffuse to the CCD wells from the neutral bulk. The first two sources usually dominate the dark current. In addition, the generation rate can vary spatially over the array, leading to fixed pattern noise.

For applications requiring very low noise levels, for example astro-photography, dark current can be reduced by cooling the CCD, since the generation processes are strongly temperature dependent. The level of cooling required is largely dependent on the longest integration time desired and the minimum acceptable signal-to-noise ratio.

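As an illustration of why cooling is effective, dark current is often described as roughly doubling for every few degrees Celsius of temperature rise. The sketch below assumes a doubling interval of 7 °C and an arbitrary reference dark current; both numbers are assumptions used to show the trend, not measurements from any cited device.

# Rough illustration of the benefit of cooling. The reference dark current,
# doubling interval and integration time are assumed figures chosen only to
# show the trend.

def dark_electrons(temp_c, integration_s,
                   ref_rate_e_per_s=25.0, ref_temp_c=20.0, doubling_c=7.0):
    """Expected dark charge (electrons per pixel) accumulated during an exposure."""
    rate = ref_rate_e_per_s * 2 ** ((temp_c - ref_temp_c) / doubling_c)
    return rate * integration_s

for temp in (20, 0, -20, -40):
    e = dark_electrons(temp, integration_s=60.0)
    print(f"{temp:>4} degC: ~{e:8.1f} e-/pixel in a 60 s exposure")
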
Quantum efficiency (QE) – Quantum efficiency is a measure of the efficiency with which incident photons are detected. Some incident photons may not be absorbed due to reflection, or may be absorbed where the electrons cannot be collected. The quantum efficiency is the number of detected electrons divided by the product of the number of incident photons and the number of electrons each photon can be expected to generate. Visible wavelength photons generate one electron-hole pair each; thus, for visible light, the QE is simply the number of detected electrons divided by the number of incident photons.

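The definition can be written as a one-line calculation; the photon and electron counts below are made-up values used only to show the arithmetic.

# QE as defined above. The counts are invented for illustration.

def quantum_efficiency(detected_electrons, incident_photons,
                       electrons_per_photon=1):
    """QE = detected electrons / (incident photons * expected electrons per photon).
    For visible light electrons_per_photon is 1, so this reduces to
    detected electrons / incident photons."""
    return detected_electrons / (incident_photons * electrons_per_photon)

print(quantum_efficiency(detected_electrons=4200, incident_photons=10000))  # 0.42
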
There are a number of techniques used to improve the QE of CCDs, one of which is to illuminate the CCD from the back, as opposed to the front. In front-illuminated devices, incident photons must pass through the gate structure in order to generate signal electrons. Photons will be absorbed in these layers and thus won’t contribute to the final signal. The absorption is also wavelength dependent, with short wavelength photons being absorbed more than long wavelength photons. This effect results in poor blue and UV spectral responses. In order to increase the short wavelength response, a technique of thinning the silicon substrate has been developed (typically from 300 µm down to 15 µm). In this case, the CCD is illuminated from the back and thus photons do not have to pass through the front gate structure. It has taken almost a decade to perfect the thinning process, the main problem being non-uniform thinning, i.e. the corners were thinner than the centre, which leads to a non-uniform response, the ‘potato chip factor’ [Oregon97].

Blooming – Blooming is an effect that occurs when, during the integration period, a potential well becomes full of electrons; this is usually caused by the presence of a bright object in the scene being imaged (assuming that the overall exposure is correctly set). When a potential well overflows, the electrons flow into surrounding potential wells, thus creating an area of saturated pixels. If blooming isn’t controlled, the resultant image will suffer from large over-exposed regions.

Many techniques have been developed to combat blooming, but one common method is to use lateral overflow drains (LODs) [Parulski96], which are illustrated in Figure 8. Conceptually, they work in a similar way to an overflow in a sink: when the potential well fills to a certain level, any further electrons that accumulate are allowed to drain away without affecting surrounding pixels. One of the drawbacks of using anti-blooming systems is that the fill factor is often reduced, typically from near 100% down to around 70% [Kodak98].

[Figure: a pixel whose accumulated electrons spill into an adjacent lateral overflow drain (LOD).]
Figure 8 – Schematic of LODs

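The effect of an LOD can be sketched with a toy model of a single column of wells. The full well capacity and exposure values below are arbitrary; without the drain the excess charge spills into, and saturates, neighbouring wells, while with the drain it is simply thrown away.

# Toy model of blooming and a lateral overflow drain along one column of
# pixels. FULL_WELL and the exposure values are illustrative assumptions.

FULL_WELL = 1000  # electrons

def expose(generated, use_lod):
    wells = list(generated)
    overflowing = True
    while overflowing:
        overflowing = False
        for i, q in enumerate(wells):
            excess = q - FULL_WELL
            if excess <= 0:
                continue
            wells[i] = FULL_WELL
            if use_lod:
                continue                 # excess charge flows into the drain
            overflowing = True
            # No LOD: the excess spills into the adjacent wells (blooming).
            if i > 0:
                wells[i - 1] += excess // 2
            if i + 1 < len(wells):
                wells[i + 1] += excess // 2
    return wells

column = [200, 300, 5000, 250, 180]      # a very bright object at index 2
print("no anti-blooming:", expose(column, use_lod=False))
print("with LODs       :", expose(column, use_lod=True))   # [200, 300, 1000, 250, 180]
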
Charge transfer efficiency (CTE) – The CTE is a measure of the percentage of charge that is successfully moved at each stage of the charge transfer process. Modern buried channel CCDs have CTE values in excess of 99.999%. Another aspect of the charge transfer mechanism is that the rows of pixels closest to the readout register will undergo fewer transfers than those on the opposite side of the array. The net effect is that the image quality will vary across the array. CTE effects and this non-uniformity of image quality across the array are contributing factors to the upper size limit of CCD arrays.

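Because the fraction of a charge packet that survives N transfers is CTE raised to the power N, the loss grows with the distance a pixel must travel to reach the readout register. The array sizes and CTE values below are illustrative.

# Fraction of charge retained after N transfers is CTE**N, so larger arrays
# need higher CTE to keep the far edge of the image usable.

def signal_retained(cte, transfers):
    return cte ** transfers

for transfers in (512, 4096, 9000):
    for cte in (0.99999, 0.999999):
        kept = signal_retained(cte, transfers)
        print(f"{transfers:>5} transfers, CTE={cte}: {kept * 100:.2f}% of charge retained")
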
Introduction to CMOS Imaging Arrays
CMOS, or complementary metal oxide semiconductor, image sensors have been around for nearly as long as CCDs [Dyson97], but it is only recently that commercial sensor chips have become available. These devices were made possible through research carried out at the Jet Propulsion Laboratory (JPL), which in 1993 produced a CMOS sensor with a performance comparable to scientific-grade CCDs [Photobit97b]. This section provides an overview of CMOS image sensors, and highlights some of the advantages and disadvantages they offer over CCDs.

CMOS sensors, like CCDs, are formed from a grid of light sensitive elements, each capable of producing an electrical signal/charge proportional to the incident light. However, the process of achieving this is very different for each of the technologies. As previously explained, a CCD pixel is formed from a biased p-n junction that creates a potential well in which charge accumulates during the integration period. Each CMOS pixel, on the other hand, employs a photodiode, a capacitor and up to three transistors. Prior to the start of the integration period, the capacitor will be charged to some known voltage. When the integration period begins, the charge on the capacitor is allowed to slowly drain away through the photodiode, the rate of drain being directly proportional to the level of incident light. At the end of the integration period, the charge remaining in the capacitor is read out and digitised. Figure 9 shows an example of an active pixel (see next section) along with graphs of voltage vs. time taken from various points within the pixel [Hurwitz97]. It is worth noting that other circuit arrangements are possible, for example ones in which the capacitor is charged during the integration period rather than discharged.

Figure 9 - Active CMOS Pixel Structure (From [Hurwitz97])

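The reset-then-discharge behaviour described above can be captured in a very simple model. The component values and the mapping from illumination to photocurrent below are assumptions chosen to give readable numbers; they are not taken from [Hurwitz97] or any other cited design.

# Minimal model of the CMOS pixel behaviour described above: the capacitor is
# reset to a known voltage, discharges through the photodiode at a rate
# proportional to the incident light, and the remaining voltage is sampled at
# the end of the integration period. All values are illustrative assumptions.

RESET_VOLTAGE = 3.3        # V, capacitor voltage at the start of integration
CAPACITANCE = 20e-15       # F, assumed pixel capacitance
DARK_CURRENT = 5e-15       # A, small leakage even with no light

def sampled_voltage(illumination, integration_s, amps_per_unit_light=3e-12):
    """Voltage left on the capacitor after the integration period."""
    photocurrent = DARK_CURRENT + illumination * amps_per_unit_light
    drop = photocurrent * integration_s / CAPACITANCE
    return max(0.0, RESET_VOLTAGE - drop)

for light in (0.0, 0.2, 0.5, 1.0):
    v = sampled_voltage(light, integration_s=20e-3)
    print(f"illumination {light:.1f}: sampled voltage {v:.2f} V")
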
CMOS Detector Types
CMOS image sensors typically come in two forms: passive pixel and active pixel. Passive pixel devices have charge amplifiers at the bottom of each column of pixels, with each pixel having just a single transistor (in addition to the photodiode and capacitor). This transistor is used as a charge gate and switches the contents of each pixel’s capacitor to the charge amplifier. Active pixel arrays implement an amplifier in every pixel (as shown in Figure 9). The two different detector types are illustrated in Figure 10.

[Figure: a passive pixel array, with a photo-detector and switching transistor in each pixel, row selectors, and charge amplifiers at the column outputs; and an active pixel array, with a photo-detector and active charge amplifier in each pixel, row selectors and column outputs.]
Figure 10 - Passive and Active Pixel CMOS Arrays

Although only a schematic, this diagram highlights the fact that, even for the passive pixel type, a proportion of each pixel is occupied by additional components which are not themselves sensitive to light. The effect of this is to greatly reduce each pixel’s fill factor; Hurwitz quotes a figure of only 26% [Hurwitz97]. To overcome this problem, micro-lenses are sometimes mounted directly in front of each pixel to focus the incoming light onto the sensitive region of the pixels [Pixelv97].

The Readout Process
Both passive and active pixel arrays use the same technique for reading the image out of the array. Each of the row selectors is sequentially clocked. This in turn causes the switching transistor or charge amplifier for that row of pixels to activate and thus transfer each pixel’s charge to the column outputs. A readout register then serially transfers the column output values to an analogue to digital converter in a similar way to a CCD.

Clocking the row selectors sequentially allows the full image to be progressively read out from the array. However, it is also possible to clock a limited number of row selectors, which has the effect of reading out a small horizontal strip from the array. By discarding pixel values from the beginning and end of each of these rows, a specific area of the image can be obtained. Sampling a small number of relevant pixels in this way dramatically increases the readout speed; using this technique, some manufacturers claim speeds of a million frames per second [Graydon97].

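The windowed readout can be sketched as follows. The timing figures are assumptions used only to show the scale of the speed-up from reading a small region of interest; they are not vendor data.

# Sketch of windowed (region of interest) readout. Timing values are assumed.

ROWS, COLS = 1024, 1280
ROW_SELECT_TIME = 1e-6        # assumed time to clock one row selector (s)
PIXEL_READ_TIME = 25e-9       # assumed time to read one column value (s)

def frame_time(row_slice, col_slice):
    rows = row_slice.stop - row_slice.start
    cols = col_slice.stop - col_slice.start
    return rows * (ROW_SELECT_TIME + cols * PIXEL_READ_TIME)

def read_window(frame, row_slice, col_slice):
    """Clock only the selected rows and keep only the wanted part of each row."""
    return [row[col_slice] for row in frame[row_slice]]

full_slices = (slice(0, ROWS), slice(0, COLS))
roi_slices = (slice(500, 532), slice(600, 664))          # a 32 x 64 window

frame = [[0] * COLS for _ in range(ROWS)]
window = read_window(frame, *roi_slices)
print("window size :", len(window), "x", len(window[0]), "pixels")

t_full, t_roi = frame_time(*full_slices), frame_time(*roi_slices)
print(f"full frame  : {t_full * 1e3:.2f} ms (~{1 / t_full:.0f} frames/s)")
print(f"32x64 window: {t_roi * 1e6:.1f} us (~{1 / t_roi:.0f} frames/s)")
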
Product Integration
Another advantage of CMOS arrays over CCDs lies in the potential for high levels of product integration. It is possible to include timing logic, exposure control, analogue-to-digital conversion and image compression circuitry on-chip with the sensor to make a complete single-chip camera [WhatDC97]. It is technically feasible, but not economic, to use the CCD process to integrate these functions, and thus most CCD based cameras are formed from several chips. This can result in the need for up to 5 different supply voltages, which leads to high power consumption. By using a single chip, coupled with the inherent low power consumption of CMOS devices, power savings of 100 times can be achieved over CCDs [Photobit97b]. Another advantage of integrating functionality is that CMOS sensors can be tested ‘on the wafer’; that is, rejects can be found before the costly production steps of cutting and mounting each individual device [Dyson97].

However, there are a number of drawbacks to producing multi-functional CMOS sensors. Firstly, there is the cost of producing the very large dies needed to accommodate the sensors and associated components (see the next section for a more detailed examination of cost considerations). Also, there is the difficulty of embedding leading edge intellectual property; embedding new technology requires a new chip to be produced, and therefore multi-chip solutions remain popular. Finally, each function of the image sensor technology requires a different manufacturing process. For example, companies producing memory chips use a dramatically different process to those making analogue-to-digital converters. Consequently, multi-function CMOS sensor production must employ a hybrid approach, and this can sometimes be difficult to optimise [Pixelv97].

Cost Considerations
It is estimated that between 90% and 95% of all chips found in computers and other electronic goods today are manufactured using CMOS technology [Graydon97, Photobit97a]. Chip manufacturing plants cost many millions of dollars to set up, but provided they produce chips in sufficient quantities, the price per chip is very low, particularly when compared with other technologies.

CMOS wafer production lines are already making 8-inch wafers, and are heading towards 12-inch in the near future. Also, the feature size of CMOS is already 0.4 microns, with 0.2 on the horizon; wafer sizes of between 4 and 6 inches are typical for CCDs, and their feature sizes are typically 0.6 micron. Larger wafers with smaller features mean more devices per production cycle, and that leads to lower costs [Dunn97].

Performance Issues
CMOS imaging is still a relatively immature technology and, although there are many potential benefits to be gained over CCDs, there remain a number of problems yet to be addressed. It has already been mentioned that the fill factor of CMOS devices is typically around the 25% mark, and that micro-lenses are used to help improve sensitivity. However, there are other more serious problems that have to be addressed.

Photodiodes, which are the light sensing component of every pixel, are easy to make, but have an exceedingly non-linear response to light. It has also proved very difficult to control the fabrication process well enough to obtain comparable responses from a million individual diodes across the chip [Dyson97]. Another related problem is that of pixel defects, where individual pixels, or groups of pixels, are insensitive to light; it is worth noting that this is also a problem found in CCDs. Devices with high defect levels can still be used for non-critical
applications such as toys, and such devices tend to be very much cheaper than near-perfect devices. Currently, the main solution to these problems is to use image processing software; i.e., the defects and non-linearities are still present in the sensors, but software is used to reduce their effects. For example, a light shielded (dark) image can be captured and subtracted from the actual image to help reduce the non-linearity effects. Another technique used to overcome defects is a ‘blemish map’. Using this map, software can look at neighbouring pixels and calculate a likely value for each defective pixel [Dunn97].

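The two software corrections mentioned above, dark frame subtraction and blemish map interpolation, can be sketched as follows. The tiny frames and the defect location are invented for illustration; a real implementation would work on full-sized images and calibrated maps.

# Sketch of dark frame subtraction and blemish map correction. The images,
# dark frame and blemish map are invented example data.

def subtract_dark_frame(image, dark):
    return [[max(0, p - d) for p, d in zip(row, drow)]
            for row, drow in zip(image, dark)]

def correct_blemishes(image, blemish_map):
    """Replace each pixel flagged in blemish_map with the mean of its
    non-defective 4-connected neighbours."""
    rows, cols = len(image), len(image[0])
    fixed = [row[:] for row in image]
    for r in range(rows):
        for c in range(cols):
            if blemish_map[r][c]:
                neighbours = [image[nr][nc]
                              for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                              if 0 <= nr < rows and 0 <= nc < cols
                              and not blemish_map[nr][nc]]
                if neighbours:
                    fixed[r][c] = sum(neighbours) // len(neighbours)
    return fixed

image   = [[52, 55, 53], [54,  0, 56], [53, 57, 55]]   # dead pixel at (1, 1)
dark    = [[ 2,  3,  2], [ 3,  2,  3], [ 2,  3,  2]]
blemish = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]

clean = correct_blemishes(subtract_dark_frame(image, dark), blemish)
for row in clean:
    print(row)
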
On the plus side, CMOS sensors consume very little power. In addition, they do not suffer from problems of blooming. Since the pixels do not accumulate charge in the way a CCD does, they cannot ‘overflow’ and affect neighbouring pixels. Another advantage is image consistency (ignoring the effects of defects, etc.). Transferring the image directly out of the array, as opposed to having to move the image from one pixel to the next, removes the charge transfer efficiency (CTE) problems found in CCDs.

The Future of Imaging Arrays
CCD imaging sensors have matured over the last 30 years to the point where they are now available in very high-resolution formats with very low defect rates. Because of this, they remain the sensor of choice for nearly all scientific and professional imaging applications. However, for the low-end consumer market, where cost is often more important than quality, CCDs are beginning to lose out to a new generation of CMOS devices. Over the next 5 to 10 years, it is likely that CMOS imaging technologies will evolve to the point where they offer all of the benefits of CCDs, along with low cost, low power consumption and high levels of product integration.

The field of digital photography has seen huge growth over the last few years, and it is believed that this growth will increase once the current problems with CMOS sensors have been overcome. In short, the next few years will see CMOS sensors having a dramatic impact on the field of digital imaging.

References

[Dalsa98]      Dalsa web site; CCD Image Capture Technology; http://www.dalsa.com

[Dunn97]       Dunn, James F.; A New Digital Camera Startup Busts Price/Performance Standards with CMOS Sensor; Advanced Imaging; Jan 1997.

[Dyson97]      Dyson, P.E., Rossello, R.; CMOS Challenges CCD in Vivitar’s 3000; PMA ’97: new technology and applications for digital photography; The Seybold Report on Publishing Systems; April 1997, Vol. 26, Num. 13.

[Gallagher]    Gallagher, P.; Considerations For Selecting CCD Sensors and Cameras for Vision Applications; EG&G Application Note.

[Graydon97]    Graydon, O.; Selected pixels deliver pictures by the million; http://www.vvl.co.uk/whycmos/text/olejune.htm; Extracted from the full text in Opto & Laser Europe, June 1997.

[Hurwitz97]    Hurwitz, J.E.D., Denyer, P.B., Baxter, D.J.; An 800K-Pixel Colour CMOS Sensor for Consumer Still Cameras; Presented as part of Electronic Imaging at SPIE Photonics West, 1997; Obtained from http://www.vvl.co.uk/whycmos/text/spie800.htm

[Kodak98]      Kodak web site; Full frame sensors, KAF-0400(L); http://www.kodak.com/daiHome/genInfo/sensors/kaf0400.shtml

[Lucent96]     Lucent Technologies web site; Invented Here: Charged-Coupled Devices; 1996; http://www.lucent.com/ideas2/discoveries/telescope/docs/ccd1.html

[Muncaster85]  Muncaster, R.; A-Level Physics; Stanley Thornes (Publishers) Ltd; 1985; pp. 769-778.

[Oregon97]     The University of Oregon Physics Department web site; Evolving Towards The Perfect CCD; 1997; http://zebu.uoregon.edu/ccd.html

[Parulski96]   Parulski, K. and Jamerson, P.; Enabling Technologies for a family of digital cameras; SPIE – The International Society for Optical Engineering; Solid State Sensor Arrays and CCD Cameras; Volume 2654; 1996.

[Photobit97a]  Photobit web site; Advantages Over the CCD; http://www.photobit.com/advant1t.htm

[Photobit97b]  Photobit web site; Technology, CMOS Active Pixel Sensor Technology;