EXHIBIT 1051

P. B. DENYER, D. RENSHAW, W. GUOYU and L. MINGYING,
"CMOS image sensors for multimedia applications,"
Proceedings of the IEEE 1993 CICC, pp. 11.5.1-11.5.4 (1993)

TRW Automotive U.S. LLC: EXHIBIT 1051
PETITION FOR INTER PARTES REVIEW
OF U.S. PATENT NUMBER 8,599,001
IPR2015-00436

CMOS IMAGE SENSORS FOR MULTIMEDIA APPLICATIONS

P. B. Denyer, D. Renshaw, Wang Guoyu, Lu Mingying

VLSI Vision Ltd. & University of Edinburgh
The King's Buildings,
Edinburgh, UK

ABSTRACT

Building on earlier work we confirm the feasibility of integrating image sensors with other system functions on single chips, fabricated on standard CMOS ASIC processes. We report novel circuit techniques and present results from successful implementations.

1. INTRODUCTION

We have previously reported [1] that quality images may be obtained by carefully designed sensors implemented in standard ASIC CMOS processes. This leads to several powerful advantages:

- greatly reduced size, through a very high level of integration. This may extend to encompass other elements of the imaging system on the same chip as the sensor (e.g. A/D conversion, compression logic);

- greatly reduced power consumption, and operation from a single 5 V rail;

- lower cost than comparable systems constructed with discrete camera technologies. System cost reductions of 90% are often feasible.

These advantages are relevant to many applications in the vision marketplace, but nowhere so much as in personal computing and telecommunications, where multimedia applications are growing but the advantages conferred by the capability of sight are as yet barely recognised. Integrated CMOS-based sensors and sighted peripherals are made feasible at economic cost by the technology reviewed and presented in this paper.

2. FUNDAMENTALS

Figure 1 shows an architecture for an array image sensor which is realisable in a conventional ASIC CMOS process. The entire function may be likened to a single-transistor DRAM module, except that data is written optically, the cell (pixel) array is constructed in a regular two-dimensional format, and the sensing operation is analogue.

The pixel contains a single access transistor, as in the DRAM analogy, but the source region is physically extended to form a reverse-biased photodiode. The efficiency of the sensor is improved by maximising the fraction of each pixel occupied by the exposed diode. At the same time we have an interest in keeping the pixel pitch as small as possible to maximise the sensor resolution achievable within a given cost (silicon area). Typically we achieve a fill factor better than 50% at a pixel pitch of around 10 microns using a 1 micron double-metal CMOS technology.

Figure 1. CMOS Sensor Architecture (pixel array addressed by a vertical shift register, with column amplifiers feeding a horizontal shift register and output stage).

The major circuit problem is to achieve rapid, low-noise, linear detection and output of the charge detected in each pixel. For typical sensor responsivity and operation down to gloomy lighting conditions at normal video rates, the minimum detectable charge may need to be less than 1 fC. We have found that a useful performance may be obtained by the architecture shown in Fig. 1. We provide charge-integrating amplifiers at the top of each column of pixels. Reading whole rows of pixels in parallel removes much of the speed constraint on these amplifiers and allows them to be designed with regard to sensitivity and size (the need to realize this function within the pixel pitch is a major layout challenge). For this reason we prefer to use a simple single-ended amplifier, and for extreme sensitivity the integrating capacitor may be purely parasitic (around 10 fF). Once a row has been sensed, it is stored on an array of capacitors before a second, fast charge sensing operation is used to scan the row of values out through a single output amplifier. As there is
only one of these, a more complex design is permitted to achieve the necessary speed (up to 14 MHz for full-resolution video).
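To put these requirements in perspective, the short sketch below works out the charge-to-voltage conversion gain implied by the figures quoted above; only the 1 fC minimum charge and the roughly 10 fF parasitic integrating capacitance come from the text, the rest is elementary arithmetic.

```python
# Rough arithmetic from the figures quoted in the text (illustrative only).
Q_MIN = 1e-15            # minimum detectable charge, ~1 fC (from the text)
C_INT = 10e-15           # parasitic integrating capacitance, ~10 fF (from the text)
Q_ELECTRON = 1.602e-19   # elementary charge in coulombs

electrons = Q_MIN / Q_ELECTRON                        # ~6,200 electrons per 1 fC
conversion_gain = 1.0 / C_INT                         # volts per coulomb at the integrator
v_per_fc = conversion_gain * 1e-15                    # ~0.1 V of column swing per fC
uv_per_electron = conversion_gain * Q_ELECTRON * 1e6  # ~16 uV per electron

print(f"1 fC corresponds to roughly {electrons:.0f} electrons")
print(f"A 10 fF integrator gives about {v_per_fc * 1e3:.0f} mV per fC "
      f"({uv_per_electron:.0f} uV per electron)")
```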
3. OUTPUT STAGE

It is most useful to augment the output stage with some form of digital gain control. This assists in implementing automatic gain control (AGC), and may be adapted for A/D conversion. Ideally, this stage should be free of any analogue offset (or else the applied gain value will also cause variation of the offset). Fig. 2 shows a simple self-calibrating digitally-controlled gain stage.

Figure 2. Self-calibrating output stage including digital gain control.
The video signal is first converted to a current through M1. M2 provides a constant current source Imax, of value equal to the greatest current expected from M1. For an arbitrary value of Vin, which converts to Iin through M1, the residual current Iv = (Imax - Iin) flows out of this stage, through the common gate device M4 and to ground through load devices M5, M6, etc. These load devices operate in their linear regions and form a binary-ratioed series. The total conductance of the load is G.D, where G is the conductance of the smallest device in the binary series (say M5) and D is the digital word formed by the control bits applied to the series of gates. So the output becomes:

Vout = Iv / (G.D)

Now, if the input to this stage is inverted video (Vin = Vblack - Vv) and we approximate M1 as a linear transconductor of value Gt and Imax = Gt.Vblack, then:

Vout = Vv / (G'.D)

where G' = G/Gt is a scaling constant and D is the digital input word. This stage is therefore a Dividing DAC (DDAC). This is useful enough to control gain incrementally within a feedback loop that uses comparator comp1 to monitor the image quality (judged by the fraction of pixels with values above Va).
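As a quick numerical check of this result, the sketch below evaluates the stage for a few values of D. The relations Iin = Gt.Vin, Imax = Gt.Vblack and Vout = Iv/(G.D) are those given above; the component values themselves are arbitrary illustrative choices, not figures from the paper.

```python
# Numerical check of the dividing-DAC relation derived above.
# Component values are illustrative, not taken from the paper.
Gt = 100e-6           # M1 approximated as a linear transconductor, 100 uA/V (assumed)
G = 20e-6             # conductance of the smallest load device M5 (assumed)
V_BLACK = 2.0         # black reference level in volts (assumed)
I_MAX = Gt * V_BLACK  # calibrated constant current from M2

def ddac_vout(v_v, d):
    """Output of the gain stage for video amplitude v_v and digital word d."""
    v_in = V_BLACK - v_v          # inverted video applied to M1
    i_in = Gt * v_in              # M1 converts voltage to current
    i_v = I_MAX - i_in            # residual current through M4
    return i_v / (G * d)          # total load conductance is G*D

G_prime = G / Gt
v_v = 0.5
for d in (1, 2, 4, 8):
    assert abs(ddac_vout(v_v, d) - v_v / (G_prime * d)) < 1e-12
    print(f"D={d:2d}: Vout = {ddac_vout(v_v, d):.3f} V  (= Vv/(G'.D))")
```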
Although this stage is far from perfectly linear, it is suitable for most video applications. Indeed, where the resulting video is to be displayed directly on a monitor, some nonlinearity is called for to provide gamma correction.
An accurate calibration of Imax is obtained by turning M3 on and M4 off during a known period of video corresponding to black, for example during the readout of a reference line at the bottom of the array which is deliberately shielded from light. The resulting value is held via C1 for one video field until the calibration process can be repeated.
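The calibration and AGC behaviour described above can be summarised behaviourally as follows. This is a sketch only: the shielded reference line and the comparator comp1 criterion come from the text, whereas the held-value model and the one-step-per-field update rule are assumptions made for illustration.

```python
# Behavioural sketch of the Imax calibration and the AGC loop built around the
# DDAC stage.  Only the shielded reference line and the comp1 criterion (fraction
# of pixels above Va) come from the text; the update rule and target are assumed.

def calibrate_imax(gt, v_black):
    """During the shielded reference line M3 is on and M4 is off, so the stage
    samples the current corresponding to black video and holds it on C1 for one
    field.  Modelled here simply as Imax = Gt * Vblack."""
    return gt * v_black

def agc_update(d, pixel_values, v_a, target_fraction=0.5):
    """Adjust the DDAC word D once per field.  Since Vout = Iv/(G.D), a smaller
    D gives more gain, so D is decreased when too few pixels exceed Va."""
    fraction_above = sum(v > v_a for v in pixel_values) / len(pixel_values)
    if fraction_above < target_fraction and d > 1:
        return d - 1    # scene too dim at this gain: increase gain
    if fraction_above > target_fraction:
        return d + 1    # scene too bright: reduce gain
    return d
```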
4. OPTICAL RESPONSE

The overall device efficiency is a function of the optical path, the physical structure of the photodiode, and the gain of the detection path between the photodiode and the output. Optoelectronic conversion in the photodiode plays a key role in determining the response of the sensor. Photons are converted to charge by splitting electron-hole pairs; where this occurs within the depletion region beneath the diode the resulting free carriers form a photocurrent under the influence of the depletion region field. Unfortunately some photocharge is lost by recombination if the conversion occurs in the heavily doped surface diffusion layer. Additionally, charge created below the depletion region may still be collected by diffusion, as the recombination time within the lightly doped substrate is relatively long. The depth of conversion is a statistical function of the penetrating wavelength, and therefore the spectral response is not uniform. Fig. 3 shows typical photodiode responses from two commercial CMOS processes. The response to blue light (shorter wavelengths) is attenuated by the surface effect, and the response to red light by the bulk effect. There is good coverage of the visible spectrum in both cases and significant response at near-infra-red wavelengths above the visible spectrum. The attenuated blue response compromises colour camera realisation, but the technology trend towards shallower diffusion mitigates this problem. The near-IR response is good enough to enable covert camera applications using IR illumination.

Figure 3. Predicted spectral responses of two CMOS processes (arbitrary vertical scale).
(a) 2 micron single tub epitaxial
(b) 1 micron twin tub non-epitaxial
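The qualitative shape of these curves can be reproduced with a very simple one-dimensional collection model, sketched below. It is illustrative only: the layer depths and absorption lengths are round numbers chosen for demonstration and are not parameters of either process; the model merely encodes the mechanisms described above (surface charge lost, depletion charge collected, deep charge partially collected by diffusion).

```python
import math

# Toy 1-D collection model for the mechanisms described above (illustrative only;
# layer depths and absorption lengths are assumed round numbers, not process data).
X_SURFACE = 0.3       # heavily doped surface diffusion depth, um (assumed)
X_DEPLETION = 2.0     # depletion width beneath the diode, um (assumed)
ETA_DIFFUSION = 0.5   # fraction of deep (bulk) charge collected by diffusion (assumed)

# Very rough absorption lengths in silicon, um (order-of-magnitude values only).
ABSORPTION_UM = {"blue (450 nm)": 0.4, "green (550 nm)": 1.5,
                 "red (650 nm)": 4.0, "near-IR (850 nm)": 20.0}

def collected_fraction(l_abs):
    """Fraction of absorbed photons whose charge is collected: surface layer
    lost to recombination, depletion region collected, bulk partially collected."""
    in_surface = 1 - math.exp(-X_SURFACE / l_abs)
    in_depletion = math.exp(-X_SURFACE / l_abs) - math.exp(-(X_SURFACE + X_DEPLETION) / l_abs)
    in_bulk = math.exp(-(X_SURFACE + X_DEPLETION) / l_abs)
    return in_depletion + ETA_DIFFUSION * in_bulk

for band, l_abs in ABSORPTION_UM.items():
    print(f"{band:>16}: collected fraction ~ {collected_fraction(l_abs):.2f}")
```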
5. SECOND ORDER EFFECTS

As a rule-of-thumb, for a visually acceptable image imperfections must be contained to better than 1%. A sensor of the above architecture, built in a standard CMOS technology, may easily suffer fixed-pattern noise effects worse than this criterion. Specific problems induced by threshold and transconductance mismatches are spatially random pixel offsets, caused by threshold variation in the pixel access transistors, giving a measles-like effect, and random column offsets, giving vertical stripes, caused by threshold offsets in the column sense amplifiers. Both of these effects may be cured or compensated, however. The pixel effect is easily cured by ensuring that the common reset potential is lower than the gate-voltage-minus-threshold limited maximum. This removes the primary Vt-dependence and hence the source of noise. As for the column sense amplifier offsets, any of several common compensation schemes may be adopted. Thus the desired analogue performance may be achieved by careful circuit design whilst retaining the primary advantage of working within a standard CMOS ASIC process. Figure 4 shows examples of the effects of these sources, and their elimination, taken from the fully-integrated CMOS camera chip shown in Figure 5.

Figure 4. Typical image artefacts and their elimination.
(a) measles effect of Vt variation in pixel reset potential
(b) vertical striping caused by offsets in column sense amplifiers
(c) defect-free image after compensation
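One generic compensation scheme for the column offsets, sketched below, is to subtract each column's readout of a light-shielded reference line. The text does not specify which of the several common schemes is used on the devices shown, so this is an illustration rather than the circuit actually employed.

```python
import numpy as np

# Illustrative column fixed-pattern-noise compensation by dark-line subtraction.
# One generic scheme only; the paper does not specify which scheme its chips use.
rng = np.random.default_rng(0)

ROWS, COLS = 64, 64
scene = rng.uniform(0.2, 0.8, size=(ROWS, COLS))       # ideal image
column_offsets = rng.normal(0.0, 0.05, size=COLS)      # column amplifier offsets

raw = scene + column_offsets                           # vertical striping artefact
dark_line = column_offsets.copy()                      # readout of a shielded row
compensated = raw - dark_line                          # offsets cancel column-wise

print("residual error after compensation:",
      float(np.max(np.abs(compensated - scene))))      # ~0
```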
6. EXAMPLE CAMERA

The device of Fig. 5 includes a 312 x 287 pixel array, formatted to accord with the 4:3 aspect ratio which is a TV standard, timing logic to achieve the CCIR video standard, exposure control logic covering a range of 40,000:1, and AGC for up to 20 dB of gain in poor lighting conditions. The chip integrates all of the camera function except the lens, a 5 V regulator, a clock crystal and a few decoupling capacitors. All of these may be provided within a space of approximately 1 cubic inch, and the resulting camera module consumes less than 40 mA whilst continuously driving a 50 ohm video load (e.g. a TV monitor).
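For reference, simple arithmetic on the stated specifications gives the following figures (the 5 V supply is the single rail mentioned in the introduction; everything else is quoted above).

```python
import math

# Simple arithmetic on the camera specifications quoted above.
exposure_range = 40_000             # exposure control range, from the text
agc_gain_db = 20                    # AGC range in dB, from the text
supply_v, current_a = 5.0, 0.040    # single 5 V rail, < 40 mA (from the text)

print(f"exposure range ~ {20 * math.log10(exposure_range):.0f} dB")              # ~92 dB
print(f"20 dB of AGC is a x{10 ** (agc_gain_db / 20):.0f} voltage gain")          # x10
print(f"power while driving the video load < {supply_v * current_a * 1e3:.0f} mW")  # < 200 mW
```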
Figure 5. An example CCIR camera fully integrated using ASIC CMOS technology.

The resulting camera provides subjectively excellent video at a fraction of the cost, size and power consumption of contemporary technologies, exemplifying the advantages cited in the introduction.
7. DIGITAL APPLICATIONS

Many multimedia and related applications require video signals in digital form. It is clearly attractive to provide A/D conversion on-chip, and again the use of CMOS technology makes this feasible. One approach, using the same output stage as in Fig. 2, is to implement a successive-approximation technique. For this purpose we apply a suitable reference voltage Va to comparator comp1. The output of the dividing DAC is then successively compared to this value whilst a binary search proceeds for the closest digital representation. At the conclusion of this process, Vout is close to Va and so:

D = Iv / (Va.G)

where D is the digital word applied to the DAC. Thus the output stage can be used to form a linear digital conversion of the video current. The logic overhead to implement the successive approximation process is only about 400 gates, and it is ideal for applications requiring digital pixel rates up to 1 MHz. This encompasses the CIF image standard for video compression at frame rates up to approximately 12 frames per second. For faster digital rates, other A/D techniques may be equally well applied in CMOS on the same chip as the sensor.
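Behaviourally, the conversion is a binary search over the DDAC word D until Vout crosses the reference Va, as sketched below. The 8-bit word width and the component values are illustrative assumptions; the search itself and the relation D = Iv/(Va.G) follow the description above.

```python
# Behavioural sketch of successive-approximation A/D conversion using the
# dividing DAC of Fig. 2.  Word width and component values are assumed.
N_BITS = 8
G = 20e-6            # conductance of the smallest load device (assumed)
V_A = 1.0            # reference voltage applied to comparator comp1 (assumed)

def ddac_vout(i_v, d):
    """Output of the gain stage: Vout = Iv / (G * D)."""
    return i_v / (G * d) if d else float("inf")

def sar_convert(i_v):
    """Binary search for the largest word D whose output still meets Va,
    so that D approximates Iv / (Va * G)."""
    d = 0
    for bit in reversed(range(N_BITS)):
        trial = d | (1 << bit)
        if ddac_vout(i_v, trial) >= V_A:   # comparator comp1 decision
            d = trial
    return d

i_v = 2.466e-3                             # example residual video current (assumed)
d = sar_convert(i_v)
print(f"D = {d}, Iv/(Va.G) = {i_v / (V_A * G):.1f}")   # D = 123, Iv/(Va.G) = 123.3
```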
8. MINIATURE OPTICS

The camera function is always completed by the addition of a lens, and this threatens to reduce the cost and size advantages given above if conventional lenses are used. We have experimented with two miniature lens forms. Fig. 6 shows in cross section a miniature two-part glass lens which is bonded to the sensor surface.
Figure 6. Cross section of a miniature lens assembly directly bonded to the silicon surface (labelled: package substrate, silicon die, high index block, low index hemisphere; 1 mm scale indicated).

Its dimensions and mass are comparable with the sensor die itself; the example shown comprises a glass block with a high refractive index approximately 1600 microns thick, to which is cemented a hemisphere of lower index and radius approximately 650 microns. Its low mass and solid construction make this optical system extremely robust against physical shock. Lenses of this type are also capable of a wide field of view. Fig. 7 shows an image resulting from the use of this lens to provide a 90 degree field of view over a CMOS array of 100 x 156 pixels.

Figure 7. Image obtained with chip-mounted lens and on-chip digitisation.

A somewhat more conventional approach is to use a single element aspheric lens moulded in plastic (typically acrylic). This is also small (although larger than the lens of Fig. 6) and of low mass, but is only good for fields-of-view up to around 65 degrees. Either lens technique can yield camera implementations which are remarkably small and economical compared with existing camera technologies.
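The quoted fields of view are consistent with simple pinhole geometry. The sketch below back-solves the effective focal length implied by a 90 degree field over the 100 x 156 pixel array, assuming the roughly 10 micron pixel pitch quoted in Section 2 and an ideal thin lens; these intermediate values do not appear in the paper.

```python
import math

# Back-of-envelope field-of-view geometry for the chip-mounted lens (illustrative).
PIXEL_PITCH_UM = 10.0          # typical pitch from Section 2 (assumed for this array)
ROWS, COLS = 100, 156          # array used with the miniature lens (Fig. 7)
FOV_DEG = 90.0                 # quoted field of view

width_um = COLS * PIXEL_PITCH_UM
height_um = ROWS * PIXEL_PITCH_UM
half_diag_um = math.hypot(width_um, height_um) / 2.0

# For an ideal thin lens, tan(FOV/2) = (half image diagonal) / focal length.
focal_um = half_diag_um / math.tan(math.radians(FOV_DEG / 2.0))
print(f"half diagonal ~ {half_diag_um:.0f} um, implied focal length ~ {focal_um:.0f} um")
```

The implied focal length of a little under 1 mm is consistent with the sub-millimetre scale of the glass assembly described above.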
9. EXAMPLE APPLICATIONS

We conclude by examining two multimedia case studies: a PC stills camera peripheral, and a fully-integrated sensor/compressor videophone device. These both illustrate the potential for all-CMOS video systems. In addition to integrating the camera function, most of the remainder of the system may in principle be included on-chip. The only off-chip components need be those that are not physically realisable in an ASIC CMOS process (e.g. clock crystal) or quantities of RAM which are more economically provided by commodity parts.

System: PC camera peripheral
  Potential on-chip functions: Sensor, Exposure Control, A/D, RAM Control, RS232 Format
  Off-chip functions: 5 V Regulator, Crystal, RAM, Decoupling, RS232 Driver

System: Videophone
  Potential on-chip functions: Sensor, Exposure Control, A/D, Block Format, Compress/Decompress, Display Format, Comm. Format
  Off-chip functions: 5 V Regulator, Crystal, (RAM), Decoupling, Comm. Driver, Disp. Driver, Display

10. CONCLUSION

A range of multimedia applications which require adequate performance at minimum power, size and cost may be satisfied by combining the sensor function on-chip with the remaining system functions, using standard CMOS ASIC technology. The required performance may be achieved by careful analogue design, obviating the expense and bulk of solutions using discrete camera technologies.

ACKNOWLEDGEMENTS

We gratefully acknowledge the support of the UK Science and Engineering Research Council, who supported this work under Grant GR/F 36538.

REFERENCES

[1] Renshaw, D., Denyer, P. B., Wang, G., and Lu, M., "ASIC Vision", Proceedings of the IEEE 1990 CICC.
