Smart sensor with multiresolution edge extraction capability

M. Tremblay et al.

1. Introduction
Visual perception must be developed in relation with the needs of the recognition processes in order to define efficient and adaptive automation tasks or mobile robot applications. A large part of the computational effort in computer vision is related to low-level repetitive processing, which can best be implemented at the sensor level [12]. It is well known that biological visual systems include aspects of image pre-processing in the first layers of neurons within the retina. It has been suggested that this low-level image processing is related to multiresolution edge extraction [9]. This natural organization reduces the amount of data to be routed to the visual cortex for further recognition [3].
Computational sensing is a novel research area which targets the integration of such image processing by merging transduction devices and analog signal processing modules. Interesting solutions exploit natural properties of semiconductor devices in order to implement simple but powerful computational functions on a very small silicon area [4]. The bidimensional implementation of computational sensors is confronted with a trade-off between the spatial resolution of the sensor and the complexity of the embedded electronics. This paradox will persist until microelectronic technology offers higher integration levels or 3D structures, which would remove the limitations on fully parallel implementation of analog processing at the sensor level. In many cases, it is not advantageous to simultaneously compute a large number of edge data if sequential (slow) scanning is required for the extraction of these primary results.
Previous work on smart sensing has considered the design of complex photo-sensitive elements [14] with emphasis on the communication between neighbors [7], [8]. Other approaches use the implicit access of parallel row data at one end of the photo-sensitive array for SIMD computation [5], or a complete sequential processor implemented on the same substrate [1]. A common goal of these approaches, and of the one discussed in this paper, is to integrate photo-sensitive elements on CMOS or CCD technologies [14,16].

Denis Poussart received the Ph.D. in Electrical Engineering from MIT and joined the Department of Electrical Engineering of Laval University in 1968. He leads the Vision and Computer Systems Laboratory, which is a node of a network of Canadian universities and industries which have recently initiated the Institute for Robotics and Intelligent Systems. He is a member of IEEE and l'Ordre des ingénieurs du Québec. He has done research in biophysics and instrumentation and is involved in 3D vision and its applications in biomedicine and industry.
The sensor architecture described here uses a serial-parallel approach in order to yield a balance of resolution and computational capabilities. Emphasis is put on an efficient communication strategy in order to extract a local description of focal plane illuminance and exploit it in an external analog processing module. The main interest of this architecture is the parallel analog filtering made possible by using several different operators which are driven in common by the set of primary outputs of the sensor. It is thus possible to generate, in a single scan period, a multiresolution edge description of a scene using band-pass filters at different scales. This type of satellite analog processing is discussed in Section 2, together with the proposed open architecture for dedicated post-processing of primary edge data. The Multi-port Access photo-Receptor (MAR) architecture is presented in Section 3, with its pixel electronics and basic operating mode. Section 4 describes the parallel analog module which implements multiresolution Laplacian-of-Gaussian operators followed by a zero-crossing evaluation module. The microcoded edge tracking algorithm is described in Section 5. The paper concludes with experimental results obtained from a current prototype of 256 × 256 pixels.
2. The system architecture
2.1. Satellite analog processing
The implementation of focal plane processing implies a delicate balance between pixel complexity, spatial resolution, and data flow. It is clear that the small cell size of a 2D photo-sensing array leaves only a limited area available for computation. In the case of large arrays, until technology allows much denser circuits (or 3D structures), most of the non-photosensitive area
will have to be dedicated to communication, with little built-in processing. Furthermore, there exist serious I/O limitations. For instance, even if edge information could be computed in parallel within each pixel, a simultaneous read-out of such data over large image regions would remain challenging. Current technologies favor simple operators with great spatial homogeneity. The MAR architecture recognizes such a trade-off between the simplicity of the basic pixel design and the complexity and penalty of rapid communication by making use of external, but tightly coupled, processing support.

This so-called satellite processing is illustrated in Fig. 1, which shows the conceptual representation of a device with relatively high resolution and its associated off-chip parallel analog processing. The dark circle on the sensor delineates a region of interest (ROI) centered on the pixel of interest (POI), from which illuminance data is retrieved and routed to a conditioning module. These channels are commonly used by a set of N different analog filters which may implement multiresolution and/or directional edge extractors, Gaussian filters, etc.

Fig. 1. Satellite processing approach. The set of primary analog outputs which represents the illuminance of the region of interest addressed within the sensor is used by several analog filters in order to implement the extraction of multiresolution primitives. The pixel of interest itself is also routed to an A/D converter in order to output illuminance data.
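As a rough digital sketch of the satellite arrangement just described, the snippet below drives N software filters from the same vector of kernel-point illuminances, so that each scan position yields all N responses at once. The weight matrix and the dimensions are illustrative assumptions, not the chip's resistor values.

```python
import numpy as np

# Digital emulation of satellite processing: the K analog outputs of
# the extraction kernel drive N filters simultaneously, so one scan
# position yields N responses in a single step.
K = 91                    # kernel points read in parallel (cf. Section 4.1)
N = 16                    # number of satellite filters

rng = np.random.default_rng(0)
weights = rng.normal(size=(N, K))   # placeholder taps standing in for
                                    # the resistor-network LOG weights

def scan_position(roi_illuminance: np.ndarray) -> np.ndarray:
    """Return the N filter responses for one POI; roi_illuminance is
    the K-vector of kernel-point illuminances read from the sensor."""
    return weights @ roi_illuminance

responses = scan_position(rng.uniform(size=K))
print(responses.shape)    # (16,) -- one response per filter, per POI
```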
2.2. Post-processing module integration
While computational sensor development remains our main research topic, several other projects are in progress as VLSI co-processing modules which will operate in the immediate periphery of the sensor. These modules share a single memory block with the sensor and the other co-processing modules. Such processing could include scale-space integration, shape from shading, stereo disparity evaluation using two MAR sensors, image calibration for further photometric use of the sensor, and others. A main digital controller is designed for driving specific protocol signals, data flow and memory addressing on a hexagonal tessellation. It also implements a microcoded instruction set which defines complex displacement sequences for the POI within the sensing array. Finally, the controller is responsible for interfacing the MAR system with the host computer and provides bidirectional interrupt capabilities which offer interactive feature extraction, especially during the edge tracking process. This open and modular architecture facilitates the development of other co-processing elements because none has direct functional consequences on the others.

3. Basic organization of the hexagonal MAR sensor

In conjunction with the satellite processing approach, the present sensor development has been oriented towards the following objectives: (1) the highest possible spatial resolution using current VLSI technology, (2) the possibility of using multiresolution edge analysis in order to extract relevant characteristics from a scene, (3) access to custom video rates and formats for automatic light adaptation, without constraints inherited from historical video standards, and (4) emphasis on a data-base description of early primitives rather than real-time display of illuminance images: it is not an imaging sensor. This section explains how these objectives have been met by using a hexagonal multi-port addressing strategy and presents its associated design and operating constraints.

3.1. Multi-port access on a hexagonal tessellation
The pixel architecture is based on a multi-port addressing strategy. The selection circuitry is similar to that of a multi-port memory, except that the selection busses are routed geometrically in order to define the shape of the kernel, and that the retrieved data are analog and represent pixel illuminance. The implementation of computational sensors is not restricted to a conventional rectangular grid. Even if low level algorithms in computer vision are often designed for a Cartesian tessellation, the
hexagonal pattern was chosen for three main reasons: (1) immediate neighbors of any pixel of interest are located at the same radial distance, which facilitates the implementation of circularly symmetric operators, (2) multi-port addressing of hexagonal pixels is naturally implemented using a set of colinear data busses, and (3) a hexagonal tessellation is highly regular and facilitates the representation of curved lines or surfaces [6].
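These properties are easy to check numerically. A minimal sketch follows, using axial hexagonal coordinates; the coordinate convention is an assumption made for illustration and is not taken from the paper.

```python
# Axial (q, r) coordinates for a hexagonal tessellation: the six
# neighbors of any pixel lie along three colinear axes, matching the
# three selection busses (Y, X1, X2) of the MAR sensor.
AXES = {
    "Y":  (0, 1),     # one bus direction per axis of symmetry
    "X1": (1, 0),
    "X2": (1, -1),
}

def neighbors(q: int, r: int):
    """Six equidistant neighbors: one step forward and one step
    backward along each of the three axes."""
    result = []
    for dq, dr in AXES.values():
        result.append((q + dq, r + dr))
        result.append((q - dq, r - dr))
    return result

print(neighbors(0, 0))   # six pixels, all at the same radial distance
```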
Fig. 2 shows an overall block diagram of the MAR sensor. The core area is composed of a 2D matrix of multi-port access pixels where the POI is addressed by three concurrent selection lines. The white star represents the addressed pixels which are simultaneously extracted from the sensor. The topology allows access to the illuminance data of the POI together with the illuminances of all neighbors located on the three axes of symmetry of the array (the corners of concentric hexagons). Each illuminance signal is routed from the sensor on an individual channel and is fed to the external analog module for spatial filtering operations. Each set of selection busses (named Y, X1 and X2) is activated by a bidirectional shift register. A fourth register (named T) is used to control the parallel analog multiplexor on the upper region of the sensor. Unlike in conventional video sensors, the POI may be moved along any of the axes of the underlying hexagonal structure, thus allowing very flexible scanning strategies. A direction code of 3 bits is internally decoded in order to control the shift registers for both motion and direction. Reg_Set is used to initialize the four shift registers, Reset initializes the integration process which is discussed in Section 3.2, while the Grab signal is used to stop the integration process during scanning.

Fig. 2. Block diagram of the MAR sensor. A set of four bidirectional shift registers drives selection lines which converge upon the POI. A decoder module uses a direction code of three bits in order to drive each shift register. The analog data path is shown in the shadowed area until it reaches the analog multiplexor.

Fig. 3. Multi-port addressing architecture of each pixel, with photo-transduction, analog buffering and non-destructive read-out of illuminance data. The photo-diode drives the gate voltage Vg of transistor M1, which is then translated into a proportional output current Is.
The circuit organization of each pixel is presented in Fig. 3. It can be divided into three main parts: (1) photo-transduction, (2) signal buffering, and (3) multi-port addressing. The signal current I_s, which is generated by the integration of the photo-current I_E on the gate capacitance of transistor M1, is retrieved through a set of three N transistors (M_Y, M_X1 and M_X2) according to the status of the selection lines. It may be shown [15] that the output current I_s may be expressed as

I_s = (β/2) (V_Reset - V_SN - K E_in t_i / (2 C_g) - V_γ - V_t)^2,   (1)

where K is a CMOS process constant, β is the current gain of transistor M1, V_SN is the voltage drop of an N transistor due to the threshold effect, V_γ is the forward voltage drop of a PIN diode (approx. 0.6 V), V_t is the threshold voltage of transistor M1, t_i is the photo-current integration time and E_in is the input illuminance of the pixel.
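A minimal numerical sketch of Eq. (1) follows. All component values except V_γ are illustrative assumptions chosen only to show the qualitative behavior: maximum output current for a dark pixel, zero at white saturation.

```python
# Numerical sketch of Eq. (1): the pixel output current falls as the
# integrated illuminance discharges the gate of M1, reaching zero at
# saturation. All constants below are assumed, not measured.
beta  = 2e-4     # current gain of M1 [A/V^2], assumed
K     = 1e-16    # CMOS process constant, assumed
C_g   = 50e-15   # gate capacitance of M1 [F], assumed
V_res = 5.0      # reset voltage applied on the gate [V], assumed
V_sn  = 0.8      # threshold drop of the N select transistor [V], assumed
V_gam = 0.6      # forward drop of the PIN photo-diode [V] (see text)
V_t   = 0.7      # threshold voltage of M1 [V], assumed

def output_current(E_in: float, t_i: float) -> float:
    """I_s for input illuminance E_in and integration time t_i (Eq. 1)."""
    v_eff = V_res - V_sn - K * E_in * t_i / (2 * C_g) - V_gam - V_t
    return 0.5 * beta * max(v_eff, 0.0) ** 2   # clipped at saturation

print(output_current(E_in=0.0, t_i=1/30))   # dark pixel: maximum current
print(output_current(E_in=1e5, t_i=1/30))   # bright pixel: zero (saturated)
```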
3.2. Non-destructive read-out of pixel information
Transistor M1 operates as an analog transconductance buffer and is required to ensure a non-destructive read-out of the illuminance data during pixel access.
This property is critical because each pixel is addressed several times due to the parallel analog nature of the MAR architecture. The MAR sensor has a global Reset signal for the entire array which applies voltage V_Reset on the gate of transistor M1. The integration process is thus uniform for every pixel of the sensor and the reading of the output current I_s is non-destructive, with voltage V_g remaining unchanged irrespective of the frequency or duration of the access to a pixel element. This property is essential since, after a complete scan of the sensor, each pixel is addressed as often as the number of pixels in the extraction kernel.
3.3. The MAR sensor operation
The typical operating mode of the MAR sensor is presented in Fig. 4 for the two extreme values of scene illuminance, in a bright and in a dark region, when only one pixel is selected during the scanning window (for proper biasing of output transistor M1). For the dark case, the photo-current I_E is limited to the reverse leakage current of the photo-diode, which causes only a small deviation ΔV_g on the gate of M1. In this condition the output current is maximum. The bright case is illustrated for an unsaturated pixel, which sinks a small current (near zero) until a minimum threshold voltage V_t is applied at the gate of the output transistor. If an isolated region has a very high level of illuminance (caused by a light source in the scene or specular reflections, for instance), pixels in this region will saturate.
Fig. 4. The timing diagram shows a typical illuminance integration cycle and its associated scan (read-out) cycle for the two extreme cases of scene illuminance. The maximum output current is associated with a dark region while a null output corresponds to a highlighted pixel.
In this case (see the dotted line in Fig. 4), the gate voltage decreases rapidly, resulting in a zero output current over the entire saturated region without any effect on the unsaturated neighboring pixels.

The global Grab signal is used to interrupt the integration process during the scan period, especially under strong illumination, in order to avoid the integration time being longer for the last visited pixels than for the first ones. It is clear that the integration process of the MAR sensor is not limited to proceed at a standard video rate (1/30 sec). This is a very useful characteristic. The integration duration t_i may be adjusted by the operating system depending on the illumination conditions of the scene. This parameter then acts as an equivalent aperture control (or, equivalently, a control of the light source intensity). A simple procedure which uses the histogram of a previous image could dynamically adjust the level of white saturation of pixels by changing the integration time t_i. The range of adjustment for this parameter is only limited by the voltage deviation due to the dark current of the back-biased PN junction.
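A small sketch of such a histogram-driven adjustment is given below. The saturation target, bin count and update gains are assumptions made for illustration, not values from the paper.

```python
import numpy as np

# Sketch of the suggested exposure-control loop: the histogram of the
# previous frame adjusts the integration time t_i so that only a small
# fraction of pixels reaches white saturation.
def adjust_integration_time(prev_frame: np.ndarray, t_i: float,
                            target_sat: float = 0.01) -> float:
    """Return an updated integration time from the previous image
    (pixel values normalized to the range 0..1)."""
    hist, _ = np.histogram(prev_frame, bins=256, range=(0.0, 1.0))
    saturated = hist[-1] / prev_frame.size   # fraction of white pixels
    if saturated > target_sat:
        t_i *= 0.8        # too many saturated pixels: shorten exposure
    elif saturated < target_sat / 4:
        t_i *= 1.2        # scene too dark: lengthen exposure
    return t_i

frame = np.clip(np.random.default_rng(1).normal(0.6, 0.2, (256, 256)), 0, 1)
print(adjust_integration_time(frame, t_i=1/30))
```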
4. Analog image processing for multiresolution edge extraction
Marr's theory suggests analyzing edges using a multiresolution strategy [9,10,11] in order to discriminate, from noisy (but accurate) edges, those which represent relevant features in the scene. In this approach, edge extraction refers to locating the zero-crossings in the filtered image that results from convolution with the Laplacian-of-Gaussian (LOG) operator. A multiresolution analysis simply varies the standard deviation σ of the Gaussian. A relevant property of the MAR sensor is its parallel analog filtering capability, which allows a very fine sampling, in the scale domain, of the zero-crossing maps from the highest frequency filters to the lowest ones. The kernel has a sufficiently large diameter to implement low frequency filters for significant feature extraction and high frequency rejection. The current version of the analog multiresolution edge extraction implements 16 different LOG filters as a resistor network, from σ = 0.5 (the Laplacian operator) to σ = 6.9. This section presents some details on the VLSI analog implementation of the Marr operator and a threshold-free zero-crossing detector. An edge encoding strategy is also presented. Implementation details of the analog computing module are available in [2].
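For experimenting off-chip, a digital stand-in for this filter bank can be built as sketched below. The sensor itself realizes these operators as analog resistor networks, and the exact spacing of the sixteen σ values is not stated in the text, so a geometric progression between the two published endpoints is assumed.

```python
import numpy as np

# Digital sketch of the 16-filter LOG bank of Section 4.
sigmas = np.geomspace(0.5, 6.9, 16)   # spacing assumed, endpoints published

def log_kernel(sigma: float, radius: int = 16) -> np.ndarray:
    """Sampled Laplacian-of-Gaussian kernel on a square grid
    (proportional form; overall gain is irrelevant for zero-crossings)."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r2 = x**2 + y**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()               # zero net weight, as for a band-pass

bank = [log_kernel(s) for s in sigmas]
print(len(bank), bank[0].shape)       # 16 kernels of 33 x 33 samples
```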
4.1. Effect of the spatial sub-sampling of the convolution kernel
The star shape of the convolution kernel which is drawn in Fig. 2 represents a sub-sampling of the complete mask for the LOG operator.
Fig. 5. Fourier transform of a complete LOG kernel (a) and of its approximation in the MAR implementation (b). High frequencies are not rejected in the region where a large number of pixels remains unread. The MAR kernel is shown in (c).
Ninety-one pixels are read simultaneously out of a possible count of 721 pixels (for a maximum radius of 16 pixels). The spatial Fourier transform is presented in Fig. 5 for the complete LOG operator in (a) and for the one approximated in the MAR architecture in (b). Some oriented high frequency regions are related to the unsampled pixels of the MAR sensor. This means that some high frequency patterns (texture-like) are not correctly rejected by coarse filters, as they would be by a complete LOG kernel. In the scale space domain, this means that some zero-crossings (or edges) may appear at coarse resolutions even if they are not detected by finer filters. This problem is solved in part by applying a large number of filters with very fine increments of σ.
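This effect can be reproduced numerically: zeroing the unread kernel points and comparing the spectra shows the oriented high-frequency leakage described above. The mask below keeps only three axes through the POI and is a simplified stand-in for the actual 91-point MAR pattern, which the paper does not specify point by point.

```python
import numpy as np

# Spectrum of a fully sampled LOG kernel vs. a star-subsampled one.
radius = 16
y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
r2 = x.astype(float)**2 + y**2
sigma = 4.0
log_full = (r2 - 2 * sigma**2) * np.exp(-r2 / (2 * sigma**2))

star = (x == 0) | (y == 0) | (x == y)     # three axes through the POI
log_star = np.where(star, log_full, 0.0)  # unread pixels contribute zero

spec_full = np.abs(np.fft.fftshift(np.fft.fft2(log_full)))
spec_star = np.abs(np.fft.fftshift(np.fft.fft2(log_star)))
# High-frequency energy that the subsampled kernel fails to reject:
print((spec_star - spec_full).max())
```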
4.2. Zero-crossing detection and digital edge encoding
The zero-crossing extraction must be executed dynamically within the camera by the analog processing module, in order to avoid a large number of A/D converters and excessive data storage and digital processing. The procedure is illustrated in Fig. 6 for a simple example, while a 1D cut view of image (a) is shown in Fig. 7. The analog response from each LOG filter is routed to three inverters with programmable input thresholds (V_S+, V_S-, and 0 V) in order to derive two digital signals: (1) the sign of the response, S(x, y), and (2) the thresholded value of the magnitude of the response, H(x, y) [2]. A zero-crossing is detected and located at a change of sign within the active window of H. Since some pixels near a zero-crossing may have a small magnitude, a morphological dilatation must be applied to the signal H(x, y). This is done by growing its active region (H = 1) until a change in the signal S is reached, which defines the derived signal H*(x, y) (see Fig. 6c). This ensures that only zero-crossings with active H on both sides are extracted as edges. This situation is shown over the shaded area in Fig. 7, where a noisy pixel of H does not generate a false edge detection.
Fig. 6. Visualization of the zero-crossing extraction procedure. The synthetic illuminance image (a) consists of a dark circle on a white background with small noisy patterns. The sign (S) and hysteresis (H) bits are shown in (b), while (c) represents the regions where the zero-crossing is relevant (H* = 1) in order to derive the final edge map in (d). A cross-section of the edge in (a) is shown in Fig. 7 for analog and digital values of a filter response.
Fig. 7. Side view of the example of Fig. 6, showing the filter response F(x, y) = I(x, y) * ∇²G(x, y). The zero-crossing detection consists of extracting the sign S(x, y) of the convolved image and the thresholded value H(x, y) of the magnitude of the response. After a morphological operation applied on the H signal (giving H*), a zero-crossing is defined as a change of sign within the active modified hysteresis value H*(x, y).
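The same decision logic is easy to emulate in one dimension. The sketch below derives S and H from a filter response, performs the sign-bounded dilatation that yields H*, and reports inter-pixel edge positions; the threshold value is an assumption (the hardware thresholds V_S+ and V_S- are programmable).

```python
import numpy as np

# 1D sketch of the threshold-free zero-crossing detector of Section 4.2.
def zero_crossings(response: np.ndarray, vs: float = 0.1) -> np.ndarray:
    s = response >= 0.0                 # sign bit S(x)
    h = np.abs(response) >= vs          # magnitude bit H(x)
    hstar = h.copy()
    for i in np.flatnonzero(h):         # grow H sideways until the sign
        for j in (i - 1, i + 1):        # S flips (morphological dilatation)
            while 0 <= j < len(s) and s[j] == s[i] and not hstar[j]:
                hstar[j] = True
                j += 1 if j > i else -1
    # An edge sits *between* samples where S changes and H* is active
    # on both sides (inter-pixel edge representation).
    return np.flatnonzero((s[:-1] != s[1:]) & hstar[:-1] & hstar[1:])

resp = np.array([0.5, 0.3, 0.05, -0.04, -0.3, -0.02, 0.03, 0.01])
print(zero_crossings(resp))   # one edge, between samples 2 and 3; the
                              # noisy sign change at 5-6 is rejected
```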
The final format for edge representation includes two quantities for each pixel location: S(x, y) and H*(x, y). The resulting raw data consist of blocks of 32 bits per pixel for a 16-filter system. An additional 8 bits from a single A/D converter are added in order to memorize the illuminance of the POI (the illuminance image). Another particularity of the MAR system is that edge location is defined between two pixels. This inter-pixel edge representation is very useful since the zero-crossing approach tends to yield closed-loop edges. Using such a representation, a single black pixel on a white background (or a thin line, as in fingerprints) is extracted as a small circle with a diameter of one pixel (see the results in Section 6).
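The raw per-pixel record can be pictured as a 32-bit word plus an illuminance byte, as in this sketch; the bit ordering is an assumption, since the paper specifies only the bit budget.

```python
import numpy as np

# Packing sketch for the raw edge format: per pixel, 16 sign bits S and
# 16 hysteresis bits H* (32 bits total) plus 8 bits of POI illuminance.
def pack_pixel(s_bits, hstar_bits, illuminance_8bit: int) -> tuple:
    """s_bits, hstar_bits: sequences of 16 booleans (one per filter)."""
    word = 0
    for k in range(16):
        word |= int(s_bits[k]) << k             # bits 0..15: S
        word |= int(hstar_bits[k]) << (16 + k)  # bits 16..31: H*
    return np.uint32(word), np.uint8(illuminance_8bit)

edges, illum = pack_pixel([True] * 16, [False] * 16, 200)
print(hex(int(edges)), int(illum))   # 0xffff 200
```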
5. Scene representation and feature extraction
A main goal of computational sensing is to design sensors which process images at the focal plane level. But it is imperative to define a proper data format in order to accommodate the subsequent segmentation and recognition processes. This section presents the two main post-processing digital modules which allow such a data-base description. The proposed scale-space integration approach is first summarized, followed by the microcoded edge tracking, which is dedicated to a hexagonal tessellation. An overview of the primary scene description, which is the effective output of the MAR system, is also introduced.
Fig. 8. 2D representation (a) of the multiresolution analysis using zero-crossing maps in the scale-space domain. The Depth Of Scale (DOS) approach (b) shows the transposition from a continuous representation to an oversampled one in the scale domain. Some DOS values are traced for this example with the level of visibility of an edge in the space domain.
Fig. 9. Typical execution of the edge tracking algorithm on a hexagonal tessellation. The algorithm starts by finding the first zero-crossing [move #4] (which is located between two pixels) and starts tracking to the left in this example [#5 and #6]. The regular edge tracking is made by crossing the POI from side to side [#7, #8 and #9] until the zero-crossing detection is lost [#10]. At this point, the triangle is closed [#11] and tracking continues in the new direction [#12 and #13], etc. (Legend: real edge location; undetected edge; detected edge; interpreted edge location.)
5.1. Real-time scale-space integration
The scale-space integration procedure must proceed at the sensor level because it is not appropriate to deliver 16 individual edge maps as sensor outputs without any hierarchical interpretation or pyramidal analysis [13]. An example of a typical scale-space representation is shown in Fig. 8a as a 3D set of data: a low value of the parameter σ (narrow filter) gives an accurate but very dense edge map, while coarse filters extract only the relevant structures of the scene without good localization. The proposed approach visits the scale-space domain in a line-by-line way and counts the number of levels which can be visited while the edge remains detectable. The oversampling of the scale domain ensures that the displacement of the edge is never more than one pixel between consecutive filters. A typical example is shown in Fig. 8b, where a sub-set of the accurate edges is labelled with the Depth Of Scale value, which ranges from 1 (noisy, high frequency or very low contrast edges) to 16 (low frequency and relevant contrast in the scene).
Fig. 10. Results for a photographic enlarger from the 256 × 256 MAR sensor. The hexagonal illuminance image (a) is shown along with four of its sixteen edge maps at different spatial resolutions (b, c, d and e; panel (d) is the edge map for σ = 2.4), which are generated simultaneously in a single frame period. The resulting representation of the scene after scale-space integration is shown in (f) (see Section 5.1).
The scale-space integration algorithm is intended for a co-processing module within the MAR architecture. It is implemented in a pipelined processor which reads the 16 edge maps (32 bits per pixel) and generates the corresponding DOS value for every edge pixel detected by the finest filter. The DOS value is accumulated over three consecutive image scans (from memory to memory) using the three main directions of the hexagonal structure. The sixteen edge images are used as the input of the scale-space integration algorithm in order to track edges in scale space. The resulting DOS image for the example of Fig. 10a is presented in Fig. 10f. A single threshold was used for the printed representation but, in fact, every edge pixel has its own weight which represents the local DOS value, as shown in Fig. 8b. This final image is used for the edge tracking procedure, where DOS values are integrated locally along each edge segment.
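A one-row sketch of this DOS accumulation follows. The one-pixel search tolerance mirrors the guarantee quoted above; the array layout and the tie-breaking rule are assumptions made for brevity.

```python
import numpy as np

# Depth Of Scale (DOS) sketch: for every edge pixel seen by the finest
# filter, count how many consecutive scale levels still detect an edge
# within one pixel of the current position.
def dos_row(edge_maps: np.ndarray) -> np.ndarray:
    """edge_maps: (16, width) booleans, finest filter first."""
    n_scales, width = edge_maps.shape
    dos = np.zeros(width, dtype=int)
    for x in np.flatnonzero(edge_maps[0]):   # edges of the finest filter
        pos, depth = x, 0
        for s in range(n_scales):
            hits = [p for p in (pos - 1, pos, pos + 1)
                    if 0 <= p < width and edge_maps[s, p]]
            if not hits:
                break                        # edge vanished at this scale
            pos = hits[0]                    # follow the drifting edge
            depth += 1
        dos[x] = depth                       # 1 = noise .. 16 = relevant
    return dos

maps = np.zeros((16, 8), dtype=bool)
maps[:10, 3] = True                          # an edge surviving 10 scales
maps[0, 6] = True                            # a noisy, fine-only edge
print(dos_row(maps))                         # DOS 10 at x=3, DOS 1 at x=6
```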
5.2. Microcoded edge tracking algorithm on a hexagonal tessellation
As mentioned at the beginning of the paper, a hexagonal edge tracking algorithm has been implemented as a microcoded component of the direction controller. This interactive procedure extracts linear edge segments from the scene and transfers them as a line drawing to the host computer. An example of the algorithm is illustrated in Fig. 9, where the continuous curve represents the real edge (or zero-crossing) location and the pixel path is traced using arrows. The basis of the algorithm is to cross from side to side, in a zigzag mode, while the zero-crossing is validated. When the edge stops being detected (or changes orientation), the algorithm closes the triangle and restarts in that new direction. The interest of this approach is that the connectivity of edge segments is naturally recorded during edge tracking. A simple visit flag is set during edge tracking, while a basic scan of the sensor data, with the break condition set to unvisited pixels, ensures that every edge is extracted from the focal plane.

This step in the data acquisition process follows the nature of the scene. A large number of short and unstructured contrasts (edges) generated by textured surfaces will imply a longer procedure than for a scene with smooth surfaces and polyhedral shapes. Some interesting values may be computed during the edge tracking as line segment properties. The DOS value may be integrated along the visited pixels, as well as the edge length.
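Reduced to its decision logic, the zigzag tracker can be sketched as below. The zc and step oracles, the direction arithmetic and the termination rule are illustrative assumptions standing in for the actual microcode and hexagonal addressing.

```python
# Skeleton of the zigzag tracker. zc(a, b) reports whether a
# zero-crossing lies between pixels a and b; step(p, d) moves one pixel
# along hexagonal direction d (0..5). Both are supplied by the caller;
# the visit flag of the text is modelled by a set.
def track_edge(start, d, zc, step, max_moves=1000):
    path, visited = [start], {start}
    pos = start
    for _ in range(max_moves):
        nxt = step(pos, d)
        if zc(pos, nxt):          # crossing validated: zigzag across it
            pos = nxt
            d = (d + 4) % 6       # reverse and turn: illustrative rule
        else:                     # detection lost: close the triangle
            d = (d + 1) % 6       # and retry in the new direction
            nxt = step(pos, d)
            if not zc(pos, nxt):
                break             # no edge on either side: segment ends
            pos = nxt
        visited.add(pos)
        path.append(pos)
    return path, visited

# Toy demo: an "edge" that is validated five times, then disappears.
hits = iter([True] * 5 + [False] * 10)
path, _ = track_edge((0, 0), 0,
                     zc=lambda a, b: next(hits),
                     step=lambda p, d: (p[0] + 1, p[1]))
print(len(path))   # 6 positions visited before the track terminates
```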
Fig. 10. (Continued): (e) edge map (σ = 6.9); (f) DOS image.
5.3. Towards a data-base representation for robot vision applications
The information extracted from a computational sensor must be formatted in order to accommodate the requirements of the application for which it is designed. Our strategy is oriented towards robot vision and machine intelligence. This is why a token description of the scene is preferred to a conventional raster data format. A data-base representation of the line segments may significantly compress the amount of data to be extracted from the sensing element. It may also increase the efficiency of the recognition process if this data-base includes some pre-computed properties. The primary scene representation consists of a sequential list of extracted basic linear edge segments which are oriented along one of the three main diagonals of the hexagonal tessellation (see Fig. 10). The basic description of a simple edge segment includes global DOS information from the multiresolution analysis, line length and orientation, 3D coordinates (using stereo vision) and the natural connectivity of line segments. The software which runs on the host computer will modify this primary data-base into a more compact and significant one. This is done by merging consecutive line segments and replacing them by single features such as longer line segments or arcs. This advanced scene description is improved by creating cross-references such as junction pointers (for proximity, vertices, T-junctions), symmetries (or parallelism) and direct 2D access to primitives from a multi-scale localization map. Each vertex may include a small illuminance sub-image on which a sophisticated algorithm may proceed in order to compute an accurate junction localization.
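One plausible shape for such a segment record is sketched below. The field names and types are assumptions: the paper fixes the contents of the description but not a concrete layout.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Sketch of one record of the primary scene data-base of Section 5.3.
@dataclass
class EdgeSegment:
    dos: float                      # DOS integrated along the segment
    length: float                   # segment length in pixels
    orientation: int                # one of the 3 hexagonal diagonals (0-2)
    endpoints: Tuple[Tuple[int, int], Tuple[int, int]]
    xyz: Optional[Tuple[float, float, float]] = None  # from stereo vision
    linked: List[int] = field(default_factory=list)   # connected segment ids

scene: List[EdgeSegment] = [
    EdgeSegment(dos=12.0, length=41.0, orientation=1,
                endpoints=((10, 12), (45, 30)), linked=[1, 2]),
]
print(scene[0].orientation)
```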
6. Experimental results
A 256 × 256 pixel version of the MAR sensor is currently installed in a custom camera case with an opto-electric shutter. This version implements sixteen different filters for multiresolution spatial edge detection. Image acquisitions have been performed on different scenes as a proof of concept. A typical result is shown in Fig. 10 for a photographic enlarger scene, along with the resulting edge data. Four of the sixteen edge maps are presented (b, c, d and e), as well as the digitized illuminance data (a) of the enlarger (on a hexagonal grid). The high frequency rejection is particularly visible on the small vertical slots, which are correctly detected by the high resolution filters (σ = 0.8 and σ = 1.1) but are almost completely eliminated and grouped into a single feature by the low resolution filters (σ = 2.4 to σ = 6.9). It may also be observed that every extracted edge segment is oriented according to one of the three main diagonals of the hexagonal structure. Another relevant property, which is related to Marr's theory [10], is the poor localization of edges for low resolution filters. This is particularly visible on the control knobs of the enlarger.
