and stores data concerning the wingspan 22 of the aircraft and other details concerning the size, shape and ICAO category (A to G) of the aircraft. The image processing system 8 classifies the aircraft on the basis of the size, which can be used subsequently when determining the registration markings on the port wing 20. The data obtained can also be used for evaluation of the aircraft during landing and/or take-off.

Alternatively, a pyroelectric sensor 27 can be used with a signal processing wing detection unit 29 to provide a tracking system 1 which also generates the acquisition signal using the trigger logic circuit 39, as shown in Figure 4 and described later.
Detecting moving aircraft in the field of view of the sensor 3 or 27 is based on forming a profile or signature of the aircraft, P(y,t), that depends on a spatial coordinate y and time t. To eliminate features in the field of view that are secondary or slowly moving, a difference profile ΔP(y,t) is formed. The profile or signature can be differenced in time or in space because these differences are equivalent for moving objects. If the intensity of the light or thermal radiation from the object is not changing, then the time derivative of the profile obtained from this radiation is zero. A time derivative of a moving field can be written as a convective derivative involving partial derivatives, which gives the equation
$$\frac{dP(y,t)}{dt} = \frac{\partial P(y,t)}{\partial t} + v\,\frac{\partial P(y,t)}{\partial y} = 0 \qquad (3)$$
where v is the speed of the object as observed in the profile. Rearranging equation (3) gives

$$\frac{\partial P(y,t)}{\partial t} = -v\,\frac{\partial P(y,t)}{\partial y} \qquad (4)$$
which shows that the difference in the profile in time is equivalent to the difference in the profile in space. This only holds for moving objects, when v ≠ 0. Equation (4) also follows from the simple fact that if the profile has a given value P(y_0, t_0) at the
coordinate (y_0, t_0), then it will have this same value along the line

$$y = y_0 - v\,(t - t_0) \qquad (5)$$
To detect and locate a moving feature that forms an extremum in the profile, such as an aircraft wing, the profile can be differenced in space, ΔP(y,t). Then an extremum in the profile P(y,t) will correspond to a point where the difference profile ΔP(y,t) crosses zero.
In one method for detecting a feature on the aircraft, a profile P(y,t) is formed and a difference profile ΔP(y,t) is obtained by differencing in time, as described below. According to equation (4) this is equivalent to a profile of a moving object that is differenced in space. Therefore the position y_p of the zero crossing point of the time-differenced profile at time t is also the position of the zero crossing point of the space-differenced profile, which locates an extremum in P(y,t).
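The zero-crossing test can be illustrated with a short sketch. The following Python fragment is a minimal illustration only, not the system's implementation; the synthetic Gaussian profile, its width and its shift are assumptions invented for the example.

```python
import numpy as np

def zero_crossings(dp):
    """Indices where a difference profile crosses zero.

    For a moving object the time-differenced profile is equivalent
    to the space-differenced profile (equation (4)), so each zero
    crossing marks an extremum of P(y, t), e.g. a wing peak.
    """
    s = np.sign(dp)
    # A crossing occurs where adjacent samples have opposite signs.
    return np.where(s[:-1] * s[1:] < 0)[0]

# Synthetic example: a Gaussian 'wing' peak that moves 10 pixels
# between two frames; the zero crossing lands near the peak.
y = np.arange(512, dtype=float)
p_prev = np.exp(-((y - 250.0) / 20.0) ** 2)
p_curr = np.exp(-((y - 260.0) / 20.0) ** 2)
dp = p_curr - p_prev                      # time-differenced profile
print(zero_crossings(dp))                 # approximately [255]
```

Multiplying the signs of adjacent samples keeps the test insensitive to samples that are exactly zero in the flat parts of the profile.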
In another method for detecting a feature on the aircraft, the difference between the radiation received by a sensor 27 from two points in space is obtained as a function of time, Δy_s(t), as described below. If there are no moving features in the field of view, then the difference is constant. If any object in the field of view is moving, then the position of a point on the object is related to time using equation (5). This allows a profile or signature differenced in space to be constructed

(6)

and, as described above, allows an extremum corresponding to an aircraft wing to be located in the profile from the zero crossing point in the differential signature.
The image acquisition system 10 includes at least one high resolution camera 7 to obtain images of the aircraft when triggered. The images are of sufficient resolution to enable automatic character recognition of the registration code on the port wing 20 or elsewhere. The illumination unit 16 is also triggered simultaneously to provide illumination of the aircraft during adverse lighting conditions, such as at night
or during inclement weather.
The acquired images are passed to the analysis system 12, which performs Optical Character Recognition (OCR) on the images to obtain the registration code. The registration code corresponds to aircraft type, and therefore the aircraft classification determined by the image processing system 8 can be used to assist the recognition process, particularly when characters of the code are obscured in an acquired image. The registration code extracted and any other information concerning the aircraft can then be passed to other systems via a network connection 24.
Once signals received from the pyroelectric sensors 21 indicate the aircraft 28 is within the field of view of the sensors 3 of the tracking sensor system 6, the tracking system 1 is activated by the proximity detector 4. The proximity detector 4 is usually the first stage detection system to determine when the aircraft is in the proximity of the more precise tracking system 1. The tracking system 1 includes the tracking sensor system 6 and the image processing system 8 and, according to one embodiment, the images from the detection cameras 3 of the sensor system 6 are used by the image processing system 8 to provide a trigger for the image acquisition system when some point in the image of the aircraft reaches a predetermined pixel position. One or more detection cameras 3 are placed in appropriate locations near the airport runway such that the aircraft passes within the field of view of the cameras 3. A tracking camera 3 provides a sequence of images, {I_n}. The image processing system 8 subtracts a background image from each image I_n of the sequence. The background image represents an average of a number of preceding images. This yields an image ΔI_n that contains only those objects that have moved during the time interval between images. The image ΔI_n is thresholded at appropriate values to yield a binary image, i.e. one that contains only two levels of brightness, such that the pixels comprising the edges of the aircraft are clearly distinguishable. The pixels at the extremes of the aircraft in the direction perpendicular to the motion of the aircraft will correspond to the edges 18 of the wings of the aircraft. After further processing, described below, when it is determined the pixels comprising the port edge pass a certain position in the image
corresponding to the acquisition point, the acquisition system 10 is triggered, thereby obtaining an image of the registration code beneath the wing 20 of the aircraft.
Imaging the aircraft using thermal infrared wavelengths and detecting the aircraft by its thermal radiation renders the aircraft self-luminous, so that it can be imaged both during the day and night, primarily without supplementary illumination.

Infrared (IR) detectors are classified as either photon detectors (termed cooled sensors herein) or thermal detectors (termed uncooled sensors herein). Photon detectors (photoconductors or photodiodes) produce an electrical response directly as the result of absorbing IR radiation. These detectors are very sensitive, but are subject to noise due to ambient operating temperatures. It is usually necessary to cryogenically cool (80 K) these detectors to maintain high sensitivity. Thermal detectors experience a temperature change when they absorb IR radiation, and an electrical response results from the temperature dependence of a material property. Thermal detectors are not generally as sensitive as photon detectors, but perform well at room temperature.
Typically, the cooled sensing devices are formed from Mercury Cadmium Telluride and offer far greater sensitivity than uncooled devices, which may be formed from Barium Strontium Titanate. Their Noise Equivalent Temperature Difference (NETD) is also superior. However, with the uncooled sensor a chopper can be used to provide temporal modulation of the scene. This permits AC coupling of the output of each pixel to remove the average background. This minimises the dynamic range requirements for the processing electronics and amplifies only the temperature differences. This is an advantage for resolving differences between cloud, the sun, the aircraft and the background. The advantage of differentiation between objects is that it reduces the load on subsequent image processing tasks for segmenting the aircraft from the background and other moving objects such as the clouds.
Both cooled and uncooled thermal infrared imaging systems 6 have been used during day, night and foggy conditions. The system 6 produced consistent images of
the aircraft in all these conditions, as shown in Figures 8 and 9. In particular, the sun in the field of view produced no saturation artefacts or flaring in the lens. At night, the entire aircraft was observable, not just the lights.
The image processing system 8 uses a background subtraction method in an attempt to eliminate slowly moving or stationary objects from the image, leaving only the fast moving objects. This is achieved by maintaining a background image that is updated after a certain time interval elapses. The update is an incremental one based on the difference between the current image and the background. The incremental change is such that the background image can adapt to small intensity variations in the scene but takes some time to respond to large variations. The background image is subtracted from the current image, a modulus is taken and a threshold applied. The result is a binary image containing only those differences from the background that exceed the threshold.
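A minimal sketch of the two steps just described, the incremental background update and the subtraction with modulus and threshold, is given below in Python; the update gain and the threshold value are illustrative assumptions, not values taken from the specification.

```python
import numpy as np

def update_background(background, frame, gain=0.05):
    """Incrementally adapt the background toward the current frame.

    A small gain lets the background absorb small intensity
    variations while responding only slowly to large ones."""
    return background + gain * (frame - background)

def moving_object_mask(frame, background, threshold=20.0):
    """Subtract the background, take the modulus and apply a
    threshold, yielding a binary image of the fast-moving objects."""
    diff = np.abs(frame.astype(float) - background)
    return (diff > threshold).astype(np.uint8)

# Usage: the background starts as the first frame and is refreshed
# at a chosen interval; each new frame yields a binary motion image.
frames = [np.random.randint(0, 256, (480, 640)).astype(float)
          for _ in range(3)]
bg = frames[0].copy()
for f in frames[1:]:
    binary = moving_object_mask(f, bg)
    bg = update_background(bg, f)
```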
One problem with this method is that some slow moving features, such as clouds, still appear in the binary image. The reason for this is that the method does not select on velocity but on a combination of velocity and intensity gradients. If the intensity in the image is represented by I(x,y,t), where x and y represent the position in rows and columns, respectively, and t represents the image frame number (time), and if the variation in the intensity due to ambient conditions is very small, then it can be shown that the time variation of the intensity in the image due to a feature moving with velocity v is given by
$$\frac{\partial I(x,y,t)}{\partial t} = -\mathbf{v} \cdot \nabla I(x,y,t) \qquad (7)$$
In practice, the time derivative in equation (7) is performed by taking the difference between the intensity at (x,y) at different times. Equation (7) shows that the value of this difference depends on the velocity v of the feature at (x,y) and the intensity gradient. Thus a fast moving feature with low contrast relative to the background is identical to a slow moving feature with a large contrast. This is the
situation with slowly moving clouds, which often have very bright edges and therefore large intensity gradients there, and are not eliminated by this method. Since features in a binary image have the same intensity gradients, better velocity selection is obtained using the same method but applied to the binary image. In this sense, the background-subtraction method is applied twice, once to the original grey-scale image to produce a binary image as described above, and again to the subsequent binary image, as described below.
The output from the initial image processing hardware is a binary image B(x,y,t), where B(x,y,t) = 1 if a feature is located at (x,y) at time t, and B(x,y,t) = 0 represents the background. Within this image the fast moving features belong to the aircraft. To deduce the aircraft wing position, the two-dimensional binary image can be compressed into one dimension by summing along each pixel row of the binary image,
$$P(y,t) = \int B(x,y,t)\,dx \qquad (8)$$
where the aircraft image moves in the direction of the image columns. This row-sum profile is easily analysed in real time to determine the location of the aircraft. An example of a profile is shown in Figure 10, where the two peaks 30 and 31 of the aircraft profile correspond to the main wings (large peak 30) and the tail wings (smaller peak 31).
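Equation (8) reduces to a single array reduction on the binary image. The sketch below assumes the document's convention that x indexes rows and y indexes columns, with the aircraft moving along the columns; the synthetic wing band is invented for the example.

```python
import numpy as np

def row_sum_profile(binary_image):
    """Compress the binary image B(x, y, t) to one dimension
    (equation (8)): sum over x for each y, with the array indexed
    as [x, y]."""
    return binary_image.sum(axis=0)

# A 'wing' band extended across x and narrow in y produces the
# dominant peak of the profile (cf. peak 30 in Figure 10; the tail
# wings give the smaller peak 31).
binary = np.zeros((480, 640), dtype=np.uint8)
binary[100:400, 200:210] = 1
profile = row_sum_profile(binary)
print(int(profile.argmax()))  # y position of the largest peak
```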
In general, there are other features present, such as clouds, that must be identified or filtered from the profile. To do this, differences between profiles from successive frames are taken, which is equivalent to a time derivative of the profile. Letting A(x,y,t) represent the aircraft, where A(x,y,t) = 1 if (x,y) lies within the aircraft and 0 otherwise, and letting C(x,y,t) represent clouds or other slowly moving objects, then it can be shown that the time derivative of the profile is given by
$$\frac{\partial P(y,t)}{\partial t} = \int \frac{\partial A(x,y,t)}{\partial t}\,dx + \int \frac{\partial C(x,y,t)}{\partial t}\,dx - \int \frac{\partial}{\partial t}\big[A(x,y,t)\,C(x,y,t)\big]\,dx = \int \frac{\partial A(x,y,t)}{\partial t}\big[1 - C(x,y,t)\big]\,dx + \epsilon(C) \qquad (9)$$
where ε(C) ≈ 0 is a small error term due to the small velocity of the clouds. Equation (9) demonstrates an obvious fact: the time derivative of a profile gives information on the changes (such as motion) of feature A only when the changes in A do not overlap features C. In order to obtain the best measure of the location of a feature, the overlap between features must be minimised. This means that C(x,y,t) must cover as small an area as possible. If the clouds are present but do not overlap the aircraft, then apart from a small error term, the time difference between profiles gives the motion of the aircraft. The difference profile corresponding to Figure 10 is shown in Figure 11, where the slow moving clouds have been eliminated. The wing positions occur at the zero-crossing points 33 and 34. Note that the clouds have been removed, apart from small error terms.
The method is implemented using a programmable logic circuit of the image processing system 8, which is programmed to perform the row sums on the binary image and to output these as a set of integers after each video field. When taking the difference between successive profiles, the best results were obtained using differences between like fields of the video image, i.e. even-even and odd-odd fields.
The difference profile is analysed to locate valid zero crossing points corresponding to the aircraft wing positions. A valid zero crossing is one in which the difference profile initially rises above a threshold I_T for a minimum distance y_T and falls through zero to below -I_T for a minimum distance y_T. The magnitude of the threshold I_T is chosen to be greater than the error term ε(C), which is done to discount the effect produced by slow moving features, such as clouds.
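The validity test can be sketched as follows; this is an illustration only. The threshold i_t, the minimum run length y_t and the search window are placeholder parameters, and dp is assumed to be a difference of like (even-even or odd-odd) field profiles as described above.

```python
import numpy as np

def run_length(mask):
    """Longest run of consecutive True values in a boolean array."""
    best = cur = 0
    for m in mask:
        cur = cur + 1 if m else 0
        best = max(best, cur)
    return best

def valid_zero_crossings(dp, i_t, y_t, window=40):
    """Zero crossings where the difference profile rises above +i_t
    for at least y_t samples and then falls through zero to below
    -i_t for at least y_t samples; i_t is chosen to exceed the
    cloud error term so slow features are discounted."""
    found = []
    dp = np.asarray(dp, dtype=float)
    for i in range(window, len(dp) - window - 1):
        if dp[i] > 0 and dp[i + 1] <= 0:  # falling zero crossing
            before = dp[i - window:i + 1]
            after = dp[i + 1:i + 1 + window]
            if (run_length(before > i_t) >= y_t
                    and run_length(after < -i_t) >= y_t):
                found.append(i)
    return found
```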
In addition, the peak value of the profile, corresponding to the aircraft wing, can be obtained by summing the difference values, when they are valid, up to the zero crossing point. This method removes the contributions to the peak from the non-overlapping clouds. It can be used as a guide to the wing span of the aircraft.

The changes in position of the aircraft in the row-sum profile are used to
determine a velocity for the aircraft that can be used for determining the image acquisition or trigger time, even if the aircraft is not in view. This situation may occur if the aircraft image moves into a region on the sensor that is saturated, or if the trigger point is not in the field of view of the camera 3. However, to obtain a reliable estimate of the velocity, geometric corrections to the aircraft position are required to account for the distortions in the image introduced by the camera lens. These are described below using the coordinate systems (x,y,z) for the image and (X,Y,Z) for the aircraft, as shown in Figures 12 and 13, respectively.
For an aircraft at distance Z and at a constant altitude Y_0, the angle from the horizontal to the aircraft in the vertical plane is
$$\tan\theta_y = \frac{Y_0}{Z} \qquad (10)$$
Since Y_0 is approximately constant, a normalised variable Z_N = Z/Y_0 can be used. If y_0 is the coordinate of the centre of the image, f is the focal length of the lens and θ_c is the angle of the camera from the horizontal in the vertical plane, then
$$\frac{y_0 - y}{f} = \tan(\theta_y - \theta_c) = \frac{\tan\theta_y - \tan\theta_c}{1 + \tan\theta_y\,\tan\theta_c} \qquad (11)$$
where the tangent has been expanded using a standard trigonometric identity. Using (10) and (11), an expression for the normalised distance Z_N is obtained
$$Z_N(y) = \frac{1 + \beta\,(y - y_0)\tan\theta_c}{\tan\theta_c - \beta\,(y - y_0)} \qquad (12)$$
where β = 1/f. This equation allows a point in the image at y to be mapped onto a true distance scale, Z_N. Since the aircraft altitude is unknown, the actual distance cannot be determined. Instead, all points in the image profile are scaled to be equivalent to a specific point, y_1, in the profile. This point is chosen to be the trigger line or image acquisition line. The change in the normalised distance Z_N(y_1) at y_1 due to an increment in pixel value Δy_1 is ΔZ_N(y_1) = Z_N(y_1 + Δy_1) - Z_N(y_1). The number
of such increments over a distance Z_N(y_2) - Z_N(y_1) is M = (Z_N(y_2) - Z_N(y_1))/ΔZ_N(y_1). Thus the geometrically corrected pixel position at y_2 is

$$y_2^{c} = y_1 + M\,\Delta y_1 \qquad (13)$$
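The mapping of equation (12) and the correction of equation (13) can be written down directly, as in the sketch below. The camera angle, field of view and trigger-line position are example values, and the form used for (13) is the reconstruction given above.

```python
import numpy as np

def z_norm(y, y0, beta, theta_c):
    """Normalised distance Z_N(y) of equation (12), with beta = 1/f
    obtainable from the field of view via equation (18)."""
    t = np.tan(theta_c)
    return (1.0 + beta * (y - y0) * t) / (t - beta * (y - y0))

def corrected_pixel(y1, y2, y0, beta, theta_c, dy1=1.0):
    """Geometrically corrected pixel position at y2 (equation (13)):
    count increments of size dZ_N(y1) over Z_N(y2) - Z_N(y1)."""
    dzn1 = z_norm(y1 + dy1, y0, beta, theta_c) - z_norm(y1, y0, beta, theta_c)
    m = (z_norm(y2, y0, beta, theta_c) - z_norm(y1, y0, beta, theta_c)) / dzn1
    return y1 + m * dy1

# Example values (assumed): 576-line image, 30 degree vertical field
# of view, camera tilted 60 degrees, trigger line at y1 = 288.
y0 = 288.0
beta = np.tan(np.radians(30.0) / 2) / y0      # equation (18)
print(corrected_pixel(288.0, 400.0, y0, beta, np.radians(60.0)))
```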
For an aircraft at distance Z and at altitude Y_0, a length X on it in the X direction subtends an angle in the horizontal plane of

$$\tan\theta_x = \frac{X_N}{Z_N} \qquad (14)$$
where normalised values have been used. If x_0 is the location of the centre of the image and f is the focal length of the lens, then

$$\frac{x - x_0}{f} = \tan\theta_x \qquad (15)$$
Using (12), (14) and (15), the normalised distance X_N can be obtained in terms of x and y

$$X_N = \beta\,(x - x_0)\,Z_N(y) \qquad (16)$$
As with the y coordinate, the x coordinate is corrected to a value at y_1. Since X_N should be independent of position, a length x_2 - x_0 at y_2 has a geometrically corrected length of

$$x_2^{c} - x_0 = (x_2 - x_0)\,\frac{Z_N(y_2)}{Z_N(y_1)} \qquad (17)$$
The parameter β = 1/f is chosen so that x and y are measured in terms of pixel numbers. If y_0 is the centre of the camera image and is equal to half the total number of pixels, and if θ_FOV is the vertical field of view of the camera, then
$$\beta = \frac{\tan(\theta_{FOV}/2)}{y_0} \qquad (18)$$
This relation allows β to be calculated without knowing the lens focal length and the dimensions of the sensor pixels.
The velocity of a feature is expressed in terms of the number of pixels moved between image fields (or frames). Then if the position of the feature in frame n is y_n, the velocity is given by v_n = y_n - y_{n-1}. Over N frames, the average velocity is then
$$\langle v \rangle = \frac{1}{N}\sum_{n=1}^{N} v_n = \frac{1}{N}\sum_{n=1}^{N} (y_n - y_{n-1}) = \frac{1}{N}\,(y_N - y_0) \qquad (19)$$
which depends only on the start and finish points of the data. This is sensitive to errors in the first and last values and takes no account of the positions in between. The error in the velocity due to an error δy_N in the value y_N is
$$\epsilon(\langle v \rangle) = \frac{\delta y_N}{N} \qquad (20)$$
A better method of velocity estimation uses all the position data obtained between these values. A time t is maintained which represents the current frame number. Then the current position is given by
$$y = y_0 - vt \qquad (21)$$
where y_0 is the unknown starting point and v is the unknown velocity. The n valid positions y_n measured from the feature are each measured at a time t_n. Minimising the mean square error

$$\epsilon^2 = \sum_{n=1}^{N} \big[y_n - (y_0 - v\,t_n)\big]^2 \qquad (22)$$
with respect to v and y_0 gives two equations for the unknown quantities y_0 and v. Solving for v yields
$$v = \frac{\sum_{n=1}^{N} y_n \sum_{n=1}^{N} t_n - N\sum_{n=1}^{N} y_n t_n}{N\sum_{n=1}^{N} t_n^2 - \sum_{n=1}^{N} t_n \sum_{n=1}^{N} t_n} \qquad (23)$$
This solution is more robust in the sense that it takes account of all the motions of the feature, rather than just the positions at the beginning and at the end of the observations. If the time is sequential, so that t_n = nΔt, where Δt = 1 is the time interval between image frames, then the error in the velocity due to an error δy_n in the value y_n is

$$\epsilon(\langle v \rangle) = \frac{6\,\delta y_n\,(N + 1 - 2n)}{N\,(N + 1)(N - 1)} \qquad (24)$$

which, for the same error δy_N as in (19), gives a smaller error than (20) for N > 5. In general, the error in (24) varies as 1/N², which is less sensitive to uncertainties in position than (19).
If the aircraft is not in view, then the measurement of the velocity v can be used to estimate the trigger time. If y_l is the position of a feature on the aircraft that was last seen at a time t_l, then the position at any time t is estimated from

$$y = y_l - v\,(t - t_l) \qquad (25)$$
Based on this estimate of position, the aircraft will cross the trigger point located at y_T at a time t_T estimated by

$$t_T = t_l - \frac{y_T - y_l}{v} \qquad (26)$$
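Equations (21) to (26) amount to a least-squares line fit followed by an extrapolation. The sketch below follows the document's sign convention y = y_0 - vt; the sample data are invented for the example.

```python
import numpy as np

def fit_velocity(t, y):
    """Least-squares velocity of equation (23)."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(t)
    return ((y.sum() * t.sum() - n * (y * t).sum())
            / (n * (t * t).sum() - t.sum() ** 2))

def trigger_time(t_l, y_l, y_t, v):
    """Equation (26): time at which a feature last seen at (t_l, y_l)
    and moving with speed v crosses the trigger point y_t."""
    return t_l - (y_t - y_l) / v

# Invented example: a wing feature at y = 500 - 4t with small noise;
# the fit recovers v ~ 4 and predicts the crossing of y_t = 100.
t = np.arange(20.0)
y = 500.0 - 4.0 * t + np.random.normal(0.0, 0.5, 20)
v = fit_velocity(t, y)
print(v, trigger_time(t[-1], y[-1], 100.0, v))
```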
An alternative method of processing the images obtained by the camera 3 to determine the aircraft position, which also automatically accounts for geometric corrections, is described below. The method is able to predict the time for triggering the acquisition system 10 based on observations of the position of the aircraft 28.
To describe the location of an aircraft 28 and its position, a set of coordinates is defined such that the x axis points vertically upwards, the z axis points horizontally along the runway towards the approaching aircraft, and the y axis is horizontal and perpendicular to the runway. The image 66 of the aircraft is located in the digitised image by pixel values (x_p, y_p), where x_p is defined to be the vertical pixel value and y_p the horizontal value. The lens on the camera inverts the image, so that a light ray from the aircraft strikes the sensor at position (-x_p, -y_p, 0), where the sensor is located at the coordinate origin. Figure 14 shows a ray 68 from an object, such as a point on the aircraft, passing through a lens of focal length f and striking the imaging sensor at a point (-x_p, -y_p), where x_p and y_p are the pixel values. The equation locating a point on the ray is given by
$$\mathbf{r} = z\left[\frac{x_p}{f}\,\hat{\mathbf{x}}_c + \frac{y_p}{f}\,\hat{\mathbf{y}}_c + \hat{\mathbf{z}}_c\right] \qquad (27)$$
where z is the horizontal distance along the ray, and the subscript c refers to the camera coordinates. The camera axis ẑ_c is collinear with the lens optical axis. It will be assumed that z/f ≫ 1, which is usually the case.
Assuming the camera is aligned so that ŷ_c = ŷ is aligned with the runway coordinate, but the camera is tilted from the horizontal by an angle θ, then

$$\hat{\mathbf{x}}_c = \hat{\mathbf{x}}\cos\theta - \hat{\mathbf{z}}\sin\theta, \qquad \hat{\mathbf{z}}_c = \hat{\mathbf{x}}\sin\theta + \hat{\mathbf{z}}\cos\theta \qquad (28)$$

and a point on the ray from the aircraft to its image is given by

$$\mathbf{r} = z\left[\left(\frac{x_p\cos\theta}{f} + \sin\theta\right)\hat{\mathbf{x}} + \frac{y_p}{f}\,\hat{\mathbf{y}} + \left(\cos\theta - \frac{x_p\sin\theta}{f}\right)\hat{\mathbf{z}}\right] \qquad (29)$$

Letting the aircraft trajectory be given by

$$\mathbf{r}(t) = \big(z(t)\tan\theta_{GS} + x_0\big)\,\hat{\mathbf{x}} + y_0\,\hat{\mathbf{y}} + z(t)\,\hat{\mathbf{z}} \qquad (30)$$

where z(t) is the horizontal position of the aircraft at time t, θ_GS is the glide-slope angle, and the aircraft is at altitude x_0 and has a lateral displacement y_0 at z(t_0) = 0. Here, t = t_0 is the time at which the image acquisition system 10 is triggered, i.e. when
the aircraft is overhead with respect to the cameras 7.

Comparing equations (29) and (30) allows z to be written in terms of z(t) and gives the pixel positions as
$$x_p(t) = f\,\frac{z(t)\big[\cos\theta\tan\theta_{GS} - \sin\theta\big] + x_0\cos\theta}{z(t)\big[\sin\theta\tan\theta_{GS} + \cos\theta\big] + x_0\sin\theta} \qquad (31)$$

and

$$y_p(t) = \frac{f\,y_0}{z(t)\big[\sin\theta\tan\theta_{GS} + \cos\theta\big] + x_0\sin\theta} \qquad (32)$$
Since x_p(t) is the vertical coordinate and its value controls the acquisition trigger, the following discussion will be centred on equation (31). The aircraft position is given by

$$z(t) = v\,(t_0 - t) \qquad (33)$$

where v is the speed of the aircraft along the z axis.
The aim is to determine t_0 from a series of values of x_p(t) at times t determined from the image of the aircraft. For this purpose, it is useful to rearrange (31) into the following form

$$c - t + a\,x_p - b\,t\,x_p = 0 \qquad (34)$$

where

$$a = \frac{v\,t_0\,(\tan\theta_{GS} + \cot\theta) + x_0}{f\,v\,(1 - \tan\theta_{GS}\cot\theta)} \qquad (35)$$

$$b = \frac{\tan\theta_{GS} + \cot\theta}{f\,(1 - \tan\theta_{GS}\cot\theta)} \qquad (36)$$

and
$$c = t_0 - \frac{x_T\,x_0}{f\,v\,(1 - \tan\theta_{GS}\cot\theta)} \qquad (37)$$

The pixel value corresponding to the trigger point vertically upwards is x_T = f cot θ.
The trigger time, t_0, can be expressed in terms of the parameters a, b and c:

$$t_0 = \frac{c + a\,x_T}{1 + b\,x_T} \qquad (38)$$
The parameters a, b and c are unknown, since the aircraft glide slope, speed, altitude and the time at which the trigger is to occur are unknown. However, it is possible to estimate these using equation (34) by minimising the chi-square statistic. Essentially, equation (34) is a prediction of the relationship between the measured values x_p and t, based on a simple model of the optical system of the detection camera 3 and the trajectory of the aircraft 28. The parameters a, b and c are to be chosen so as to minimise the error of the model fit to the data, i.e. to make equation (34) as close to zero as possible.
Let x_n be the location of the aircraft in the image, i.e. the pixel value, obtained at time t_n. Then the chi-square statistic is

$$\chi^2 = \sum_{n=1}^{N} \big(c - t_n + a\,x_n - b\,t_n x_n\big)^2 \qquad (39)$$
for N pairs of data points. The optimum values of the parameters are those that minimise the chi-square statistic, i.e. those that satisfy equation (34).

For convenience, the following symbols are defined
$$X = \sum_{n=1}^{N} x_n, \quad T = \sum_{n=1}^{N} t_n, \quad P = \sum_{n=1}^{N} x_n t_n, \quad Y = \sum_{n=1}^{N} x_n^2, \quad Q = \sum_{n=1}^{N} x_n^2 t_n, \quad R = \sum_{n=1}^{N} x_n t_n^2, \quad S = \sum_{n=1}^{N} x_n^2 t_n^2 \qquad (40)$$
Then the values of a, b and c that minimise equation (39) are given by

$$a = \frac{(NP - XT)(P^2 - NS) - (NR - PT)(PX - NQ)}{(NY - X^2)(P^2 - NS) + (PX - NQ)^2} \qquad (41)$$

$$b = \frac{(NP - XT)(PX - NQ) + (NR - PT)(NY - X^2)}{(NY - X^2)(P^2 - NS) + (PX - NQ)^2} \qquad (42)$$

and

$$c = \frac{T + bP - aX}{N} \qquad (43)$$

On obtaining a, b and c from equations (41) to (43), t_0 can then be obtained from equation (38).
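The minimisation reduces to the sums of equation (40) and the closed forms of equations (41) to (43), followed by equation (38). The sketch below is illustrative: the pixel positions x, the frame times t and the trigger pixel value x_t are assumed inputs.

```python
import numpy as np

def predict_trigger_frame(x, t, x_t):
    """Estimate the trigger time t0 by minimising the chi-square of
    equation (39): form the sums of equation (40), evaluate the
    parameters of equations (41)-(43), then apply equation (38)."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(t, dtype=float)
    n = len(x)
    X, T = x.sum(), t.sum()
    P, Y = (x * t).sum(), (x * x).sum()
    Q = (x * x * t).sum()
    R = (x * t * t).sum()
    S = (x * x * t * t).sum()
    denom = (n * Y - X * X) * (P * P - n * S) + (P * X - n * Q) ** 2
    a = ((n * P - X * T) * (P * P - n * S)
         - (n * R - P * T) * (P * X - n * Q)) / denom
    b = ((n * P - X * T) * (P * X - n * Q)
         + (n * R - P * T) * (n * Y - X * X)) / denom
    c = (T + b * P - a * X) / n
    return (c + a * x_t) / (1.0 + b * x_t)   # equation (38)

# Usage: x holds measured pixel positions at frame numbers t, and
# x_t = f * cot(theta) is the pixel value of the vertical trigger
# point, as in the text.
```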
Using data obtained from video images of an aircraft landing at Melbourne airport, a graph of aircraft image position as a function of image frame number is shown in Figure 15. The data was processed using equations (41) to (43) and (38) to yield the predicted value for the trigger frame number, t_0 = 66, corresponding to trigger point 70. The predicted point 70 is shown in Figure 16 as a function of frame number. The predicted value is t_0 = 66 ± 0.5 after 34 frames. In this example, the aircraft can be out of the view of the camera 3 for up to 1.4 seconds and the system 2 can still trigger the acquisition camera 7 to within 40 milliseconds of the correct time. For an aircraft travelling at 62.5 m/s, the system 2 captures the aircraft to within 2.5 metres of the required position.
The tracking system 6, 8 may also use an Area-Parameter Accelerator (APA) digital processing unit, as discussed in International Publication No. WO 93/19441, to extract additional information, such as the aspect ratio of the wing span to the fuselage length of the aircraft and the location of the centre of the aircraft.
The tracking system 1 can also be implemented using one or more pyroelectric sensors 27 with a signal processing wing detection unit 29. Each sensor 27 has two
adjacent pyroelectric sensing elements 40 and 42, as shown in Figure 17, which are electrically connected so as to cancel identical signals generated by each element. A plate 44 with a slit 46 is placed above the sensing elements 40 and 42 so as to provide the elements 40 and 42 with different fields of view 48 and 50. The fields of view 48 and 50 are significantly narrower than the field of view of a detection camera discussed previously. If aircraft move above the runway in the direction indicated by the arrow 48, the first element 40 has a front field of view 48 and the second element 42 has a rear field of view 50. As an aircraft 28 passes over the sensor 27, the first element 40 detects the thermal radiation of the aircraft before the second element 42; the aircraft 28 will then be momentarily in both fields of view 48 and 50, and then only detectable by the second element 42. An example of the difference signals generated by two sensors 27 is illustrated in Figure 18, where graph 52 is for a sensor 27 with a field of view directed at 90° to the horizontal and for a sensor 27 directed at 75° to the horizontal. Graph 54 is an expanded view of the centre of graph 52. The zero crossing points of peaks 56 in the graphs 52 and 54 correspond to the point at which the aircraft 28 passes the sensor 27. Using the known position of the sensor 27, the time at which the aircraft passes, and the speed of the aircraft 28, a time can be determined for generating an acquisition signal to trigger the high resolution acquisition cameras 7. The speed can be determined from movement of the zero crossing points over time, in a similar manner to that described previously.
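The timing step described here reduces to simple kinematics once the pass times at two known sight lines are available. The sketch below assumes two sensors whose sight lines cross the flight path at known horizontal offsets; all numbers are illustrative.

```python
def acquisition_time(t_pass_1, t_pass_2, d_12, d_trigger):
    """Estimate when to trigger the acquisition cameras 7.

    t_pass_1, t_pass_2: zero-crossing times (s) at which the aircraft
        passes the sight lines of the two sensors 27;
    d_12: horizontal distance (m) between the two sight lines;
    d_trigger: distance (m) from the second sight line to the point
        the aircraft should occupy when the cameras fire.
    """
    speed = d_12 / (t_pass_2 - t_pass_1)   # aircraft ground speed
    return t_pass_2 + d_trigger / speed

# Illustrative: sight lines 25 m apart crossed 0.4 s apart
# (62.5 m/s), trigger point 50 m beyond the second sight line.
print(acquisition_time(10.0, 10.4, 25.0, 50.0))   # -> 11.2
```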
The image acquisition system 10, as mentioned previously, acquires an image of the aircraft with sufficient resolution for the aircraft registration characters to be obtained using optical character recognition. According to one embodiment of the acquisition system 10, the system 10 includes two high resolution cameras 7, each comprising a lens and a CCD detector array. Respective images obtained by the two cameras 7 are shown in Figures 19 and 20.
The minimum pixel dimension and the focal length of the lens determine the spatial resolution in the image. If the dimension of a pixel is L_p, the focal length is f and the altitude of the aircraft is h, then the dimension of a feature W_min on the aircraft that
is mapped onto a pixel is

$$W_{min} = \frac{h\,L_p}{f} \quad \text{or} \quad f = \frac{h\,L_p}{W_{min}} \qquad (44)$$
The character recognition process used requires each character stroke to be mapped onto at least four pixels with contrast levels having at least a 10% difference from the background. The width of a character stroke in the aircraft registration is regulated by the ICAO. According to the ICAO Report, Annex 7, sections 4.2.1 and 5.3, the character height beneath the port wing must be at least 50 centimetres and the character stroke must be 1/6th of the character height. Therefore, to satisfy the character recognition criterion, the dimension of the feature on the aircraft that is mapped onto a pixel should be W_min = 2 centimetres, or less. Once the CCD detector is chosen, L_p is fixed and the focal length of the system 10 is determined by the maximum altitude of the aircraft at which the spatial resolution W_min = 2 centimetres is required.
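Combining the two ICAO figures with the four-pixel criterion gives the 2-centimetre value directly; the worked step below is implied by the text rather than stated in it:

$$W_{min} \le \frac{1}{4} \times \frac{50\ \text{cm}}{6} \approx \frac{8.3\ \text{cm}}{4} \approx 2\ \text{cm}$$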
The field of view of the system 10 at altitude h is determined by the spatial resolution W_min chosen at altitude h_max and the number of pixels N_pl along the length of the CCD,

$$W_{FOV} = N_{pl}\,W_{min}\,\frac{h}{h_{max}} \qquad (45)$$

For h = h_max and N_pl = 1552 the field of view is W_FOV = 31.04 metres.
To avoid blurring due to motion of the aircraft, the image must move a distance less than the size of a pixel during the exposure. If the aircraft velocity is v, then the time to move a distance equal to the required spatial resolution W_min is

$$t = \frac{W_{min}}{v} \qquad (46)$$
The maximum aircraft velocity that is likely to be encountered on landing or take-off is v = 160 knots = 82 m s⁻¹. With W_min = 0.02 m, the exposure time to avoid excessive blurring is t < 240 µs.
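The three design equations (44) to (46) chain together as in the sketch below. The pixel pitch and maximum altitude are invented example values, and the forms used for (44) to (46) are the reconstructions given above.

```python
def focal_length(h_max, l_p, w_min):
    """Equation (44): focal length giving resolution w_min at h_max."""
    return h_max * l_p / w_min

def field_of_view(n_pl, w_min, h, h_max):
    """Equation (45): field of view at altitude h."""
    return n_pl * w_min * h / h_max

def max_exposure(w_min, v):
    """Equation (46): exposure keeping motion blur under one
    resolution element."""
    return w_min / v

# Example: 10 micron pixels and 2 cm resolution at 300 m altitude.
f = focal_length(300.0, 10e-6, 0.02)            # 0.15 m lens
wfov = field_of_view(1552, 0.02, 300.0, 300.0)  # 31.04 m, as in text
t_exp = max_exposure(0.02, 82.0)                # ~244 microseconds
print(f, wfov, t_exp)
```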
The focal length of the lens in the system 10 can be chosen to obtain the required spatial resolution at the maximum altitude. This fixes the field of view. Alternatively, the field of view may be varied by altering the focal length according to the altitude of the aircraft. The range of focal lengths required can be calculated from equation (44).
The aircraft registration, during daylight conditions, is illuminated by sunlight or scattered light reflected from the ground. The aircraft scatters the light that is incident, some of which is captured by the lens of the imaging system. The considerable amount of light reflected from aluminium fuselages of an aircraft can affect the image obtained, and is taken into account. The light power falling
