J Intell Robot Syst (2013) 70:65–78
DOI 10.1007/s10846-012-9747-9

Tracking a Ground Moving Target with a Quadrotor Using Switching Control

Nonlinear Modeling and Control

J. E. Gomez-Balderas · G. Flores · L. R. García Carrillo · R. Lozano

Received: 30 June 2012 / Accepted: 16 July 2012 / Published online: 12 August 2012
© Springer Science+Business Media B.V. 2012
Abstract  An unmanned aerial vehicle (UAV) stabilization strategy based on computer vision and switching controllers is proposed. The main goal of this system is to perform tracking of a moving target on the ground. The architecture implemented consists of a quadrotor equipped with an embedded camera which provides real-time video to a computer vision algorithm where images are processed. A vision-based estimator is proposed, which makes use of 2-dimensional images to compute the relative 3-dimensional position and translational velocity of the UAV with respect to the target. The proposed estimator provides the required state measurements to a microcontroller for stabilizing the vehicle during flight. The control strategy consists of switching controllers, which allows making decisions when the target is temporarily lost or when it is out of the camera's field of view. Real-time experiments are presented to demonstrate the performance of the proposed target-tracking system.

Keywords  Switching controllers · Tracking · Control strategy
1 Introduction

Tracking moving targets (m.t.) using unmanned aerial vehicles (UAVs) makes it possible to perform important tasks like surveillance, reconnaissance, and intelligence missions. Hovering a UAV over a desired position requires information coming from a group of sensors; more precisely, it requires data about the UAV attitude as well as about its surrounding environment. A sensor capable of obtaining abundant information about the UAV environment is a vision system. Many results related to this topic have been presented recently. Most of them are related to the identification and classification of multiple targets, see for example [1, 9, 15, 16, 25] and [22]. A circular-pattern navigation algorithm for autonomous target tracking has been studied in [24] and [30], showing good performance. Other work concerning trajectory generation from video sensors includes particle
J. E. Gomez-Balderas (✉) · G. Flores · R. Lozano
Heudiasyc UMR 7253, Université de Technologie de Compiègne,
Centre de Recherche de Royallieu, 60200 Compiègne, France
e-mail: jgomezba@hds.utc.fr

G. Flores
e-mail: gfloresc@hds.utc.fr

R. Lozano
e-mail: rlozano@hds.utc.fr

L. R. García Carrillo
Department of Electrical and Computer Engineering,
University of California, Santa Barbara, CA 93106-9560, USA
e-mail: lrgc@ece.ucsb.edu
filters for moving cameras without stabilization [20], and the Joint Probabilistic Data Association Filter (JPDAF) for tracking multiple targets with unreliable target identification [3]. If a model of the object's motion is known, an observer can be used to estimate the object's velocity [10]. In [13], an observer for estimating the object velocity was utilized; however, a description of the object's kinematics must be known. In [18] an autoregressive discrete-time model is used to predict the location of features of a moving object. In [2], trajectory filtering and prediction techniques are utilized to track a moving object. In [26], object-centered models are utilized to estimate the translation and the center of rotation of the object.

Several interesting works have been presented concerning visual tracking of targets using UAVs. In [29] a color-based tracker is proposed to estimate the target position, while in [14] thermal images are correlated with a geographical information system (GIS) towards the same goal. In [12] a vision-based control algorithm for stabilizing a UAV equipped with two cameras is presented. The same system has been used in [11] to track a line painted on a wall using a vanishing-points technique. Some methods for designing UAV trajectories that increase the amount of information available are presented in [23].

In this paper we describe the use of a vision system designed to observe a visual target located on a ground vehicle and to track it using a UAV platform of the quadrotor type (X-4). The main challenge involved in target localization is maintaining the target inside the camera's field of view. In order to meet this requirement, we propose a control scheme that drives the X-4 to track the target so that the target localization estimation error is minimized. To successfully perform this task, we have developed a control strategy consisting of three principal stages. First, the hovering UAV tracks the target over a desired position; this stage is known as take-off (TO) mode. In case of a temporary loss of sight of the target, we use a second control scheme, named loss of moving target (LOMT), which consists of increasing the UAV altitude to obtain a better view of the scene. Once the vision system has located the target again, a third control scheme, called target localization (TL), moves the UAV until it reaches a desired position with respect to the target. These control strategies switch among themselves throughout the test. The input of the image processing algorithm is a set of images taken during real flight tests; the output of the algorithm is the 3-dimensional target position and its translational velocity. One assumption, as far as our interest is concerned, is that there are no physical obstacles which hinder the UAV from tracking the target. This paper focuses on the implementation of vision-based strategies for a UAV tracking a target carried by a ground vehicle.

This paper is organized as follows. The computer vision algorithms for position measurement and velocity estimation using optical flow are described in Section 2. In Section 3, the control design is presented. The experimental platform used to validate the theoretical results is described in Section 4. Experimental results are presented in Section 5. Finally, Section 6 presents conclusions and future work.
2 Image Processing

The quadrotor's computer needs to be able to chase the on-ground m.t. using the information given by the camera, i.e., position and relative velocity. This requires estimating the relative x–y position, altitude, and velocity of the moving target with respect to the quadrotor. An onboard camera is used for this purpose, and the scheme describing the system is shown in Fig. 1. To this end, we use several processing methods, namely edge detection, target detection, and optical flow measurement, which are discussed in the remainder of this section.

Fig. 1 Basic scheme
2.1 Edge Detection

In the present approach a red square object must be tracked using a camera. For this reason, an edge detector is used to find the edges of the target in the image plane. We use the Canny edge detection algorithm [7], which is known to have the following characteristics: good detection, good localization, and only one response to a single edge. The good detection criterion states that there should be a low probability of failing to mark real edge points, and a low probability of falsely marking non-edge points; this corresponds to maximizing the signal-to-noise ratio. According to the good localization criterion, the points marked as edge points by the operator should be as close as possible to the center of the true edge; this corresponds to maximizing the localization measure, defined as the inverse of the expected distance between the true edge point and the maximum of the operator output. The last criterion, obtaining only one response to a single edge, is implicitly captured in the first criterion, since when there are two responses to the same edge, one of them must be considered false. It is nevertheless necessary to limit the number of peaks in the response so that there is a low probability of declaring more than one edge.
Our vision-based position estimation algorithm uses a red colored square target; for this reason, we use an edge detector to find the edges of the square target in the image plane. The Canny edge detection algorithm locates the edges of the target in the image and produces thin fragments of image contours, controlled by a single smoothing parameter known as σ. The image is first smoothed with a Gaussian filter of spread σ; then, gradient magnitude and direction are computed at each pixel of the smoothed image. The gradient direction is used to thin edges by suppressing any pixel response that is not higher than the two neighboring pixels on either side of it along the direction of the gradient. This is called non-maximum suppression, a good operation to use with any edge operator when thin boundaries are wanted. The two 8-neighbors of a pixel (x, y) that are to be compared are found by rounding off the computed gradient direction to yield one neighbor on each side of the center pixel. Once the gradient magnitudes are thinned, high-magnitude contours are tracked. In the final aggregation phase, continuous contour segments are sequentially followed. Contour following is initiated only on edge pixels where the gradient magnitude meets a high threshold; however, once started, a contour may be followed through pixels whose gradient magnitude meets a lower threshold, usually about half of the higher starting threshold. Image regions can sometimes be detected when boundary segments close on themselves; in our case, the boundary of the rectangular target results in four straight line segments.
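As an illustration of the two-threshold Canny step described above, the following Python/OpenCV sketch smooths the image with a Gaussian of spread σ and then applies hysteresis thresholding; the σ value and the threshold pair are placeholders, not values taken from the paper.

```python
# Hedged sketch of the Canny step described above (OpenCV, Python).
# The sigma value and the hysteresis thresholds are illustrative only.
import cv2

def detect_target_edges(gray_image, sigma=1.4, low_thresh=50, high_thresh=100):
    """Smooth with a Gaussian of spread sigma, then run Canny with
    hysteresis thresholding (low threshold roughly half the high one)."""
    smoothed = cv2.GaussianBlur(gray_image, ksize=(0, 0), sigmaX=sigma)
    # cv2.Canny performs gradient computation, non-maximum suppression
    # and hysteresis contour following internally.
    return cv2.Canny(smoothed, low_thresh, high_thresh)
```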
2.2 Target Detection

Once we have four rays which intersect at four points, it is necessary to verify that the rays form a square. To do so, the angle at each corner formed by the intersecting edges is computed. By applying the triangulation of polygons, we can split the square into two similar triangles; given a triangle with edges A, B and C, the angle β
between the two edges B and C can be obtained by the following formula:

$$\beta = \arccos\left(\frac{|B|^2 + |C|^2 - |A|^2}{2\,|B|\,|C|}\right) \qquad (1)$$

where |A|, |B| and |C| denote the lengths of the edges A, B, and C, respectively. Applying this formula at each one of the square vertices, we obtain the four angles of the polygon and can check whether the edges are perpendicular. Figure 2 shows the result of the target detection algorithm using four vertices; the four edges are painted in blue.

Fig. 2 Target detection
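A minimal sketch of the corner-angle test of Eq. 1 is given below; the angular tolerance used to declare the quadrilateral a square is an assumption, not a value reported in the paper.

```python
# Sketch of the corner-angle test of Eq. 1 on four detected vertices.
# The 10-degree tolerance is an assumption, not a value from the paper.
import math

def corner_angle(p_prev, p, p_next):
    """Angle at vertex p between edges B = p->p_prev and C = p->p_next,
    using the law of cosines (Eq. 1)."""
    A = math.dist(p_prev, p_next)   # edge opposite the corner
    B = math.dist(p, p_prev)
    C = math.dist(p, p_next)
    return math.degrees(math.acos((B**2 + C**2 - A**2) / (2.0 * B * C)))

def is_square(vertices, tol_deg=10.0):
    """vertices: list of four (x, y) corners in order around the contour."""
    angles = [corner_angle(vertices[i - 1], vertices[i], vertices[(i + 1) % 4])
              for i in range(4)]
    return all(abs(a - 90.0) < tol_deg for a in angles)
```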
2.3 Optical Flow Measurement

We use the optical flow obtained from image sequences to estimate the translational speed of the X-4. The horizontal speed estimate is used to perform autonomous hover flights as well as to avoid lateral displacement. Many different methods for computing the optical flow are available [4], for example intensity-based differential methods, frequency-based filtering methods, or correlation-based methods. In this paper we implement the Lucas–Kanade pyramidal method, which is an intensity-based differential method [6].

Consider two discrete functions $I_1, I_2 \in \Re^{m_u \times n_v}$ representing two gray scale images at different time instants, and let $G_{p_i}$ be the gray scale value of a particular pixel $p_i = (u_i, v_i)^T$. Then, the gray values for $p_i$ which appear in two consecutive images are defined respectively as

$$G_{p_1} = I_1(u_1, v_1), \qquad G_{p_2} = I_2(u_2, v_2) \qquad (2)$$

where $u_i$ and $v_i$ are the row and column pixel coordinates, respectively. Given a specific image point $p_1 \in I_1$, the aim of the approach is to find another image point $p_2 \in I_2$ such that $G_{p_1} \approx G_{p_2}$. Moreover, the relationship between the matched pixels $p_1$ and $p_2$ is given by

$$p_2 = p_1 + r = [\, u_1 + r_u \;\; v_1 + r_v \,]^T \qquad (3)$$

where $r = [\, r_u \;\; r_v \,]^T$ defines the image displacement, or optical flow, and minimizes the following residual function

$$\varepsilon(r) = \sum_{u_p = u_{p_1}-w_u}^{u_{p_1}+w_u} \;\; \sum_{v_p = v_{p_1}-w_v}^{v_{p_1}+w_v} \bigl(I_1(p_1) - I_2(p_1 + r)\bigr)^2 \qquad (4)$$

where $w_u$ and $w_v$ are two integers that define the size of the integration window. The Lucas–Kanade optical flow algorithm has an adaptive integration window, therefore it is capable of handling large pixel motions and acts as a low-pass filter. For a complete description of the algorithm see [6].

Optical flow can be generated by two kinds of observer motion: translational motion ($F_t$) and rotational motion ($F_r$). Let us assume that the camera is moving with translational velocity $v$ and angular velocity $\omega$ while viewing an object at distance $d$ and offset $\beta$ from the direction of travel, as depicted in Fig. 1. The optical flow (OF) can be mathematically expressed as follows:

$$OF = \frac{v}{d}\sin\beta - \omega \qquad (5)$$

The maximal optical flow is obviously generated when the plane that contains the features is perpendicular to the translational motion direction ($\beta = 90^\circ$) [4]. The velocity can be estimated from Eq. 5 as follows:

$$v = (OF + \omega)\,\frac{d}{\sin\beta} \qquad (6)$$
Notice that singularities in the above equation appear when β = 0°. Nevertheless, in our case the roll and pitch angles are very close to zero, which implies that β = 90° ± 3° as a maximal interval when the quadrotor is appropriately stabilized at hover. The measurement of the angular speed ω is obtained using the gyro information on each axis. An altitude stabilization algorithm in closed loop is used to keep the distance d constant and equal to some desired value.
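The following sketch illustrates how the pyramidal Lucas–Kanade step and the velocity estimate of Eq. 6 could be combined; the window size, feature parameters and the focal-length scaling from pixels to an angular rate are illustrative assumptions.

```python
# Sketch of the pyramidal Lucas-Kanade step and the velocity estimate of Eq. 6.
# Window size, pyramid depth and feature parameters are illustrative choices.
import cv2
import numpy as np

def flow_based_velocity(prev_gray, curr_gray, focal_px, d, omega, dt,
                        beta_rad=np.pi / 2):
    """Translational speed from Eq. 6: v = (OF + omega) * d / sin(beta).
    focal_px converts the mean pixel displacement into an angular rate."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None,
                                              winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    if not good.any():
        return None
    # Mean displacement of the tracked features between the two frames.
    disp = np.linalg.norm((nxt[good] - pts[good]).reshape(-1, 2), axis=1).mean()
    optical_flow = disp / (focal_px * dt)                   # mean flow, rad/s
    return (optical_flow + omega) * d / np.sin(beta_rad)    # Eq. 6
```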
2.4 Solving the Problem
Translational velocity is obtained using the optical flow algorithm described in the previous section. To obtain the X-4 position, we use target detection with a red color segmentation algorithm to keep only the red channel of the image. After this, we apply the Canny algorithm for edge detection, obtaining the interior and exterior boundaries of this region. The exterior boundary is formed by a finite collection of n line segments $e_0 = v_0v_1,\; e_1 = v_1v_2,\; \ldots,\; e_i = v_iv_{i+1},\; \ldots,\; e_{n-1} = v_{n-1}v_0$, connected by n image points $v_0, v_1, v_2, \ldots, v_{n-1}$, where $v_i = (x_i, y_i)$. Using these points we can obtain four vertices with four edges; the next step is to verify the right angle at each corner of the quadrilateral using Eq. 1. With the area of the square found, and using the pinhole camera model with the intrinsic parameter values, we use similar-triangle properties to obtain the distance $Z_W$ (altitude) between the camera and the target. Other information is also obtained, such as the central point of the square, i.e., the point where the two diagonals of the square intersect, denoted $(x_{ci}, y_{ci})$ in the image plane. Using this information we can obtain the 3D coordinates of this point:

$$X_W = f\,\frac{x_{ci}}{Z_W} \qquad (7)$$

$$Y_W = f\,\frac{y_{ci}}{Z_W} \qquad (8)$$

where f is the focal length of the camera. Knowing the four coordinates $v_0, v_1, v_2$, and $v_3$ of the target vertices and the center image coordinates $(x_{ci}, y_{ci})$, we construct a line $l_1$ going from the vertex $v_0$ to the central point $p_{ci} = (x_{ci}, y_{ci})$, $l_1 = v_0 + \alpha_1(p_{ci})$. In case of a horizontal displacement of the target we need to know the target direction in order to indicate to the UAV a change in yaw angle. This is achieved by using a reference coordinate located at the image corner $p_0 = (0, 0)$. We construct a line from $p_0$ to $p_{ci}$, $l_2 = p_0 + \alpha_2(p_{ci})$; using $l_1$ and $l_2$ we obtain the yaw angle reference of the UAV from the dot product equation $\psi = \arccos(l_1 \cdot l_2)$.

To estimate the X-4 velocity we use a Lucas–Kanade based optical flow algorithm to obtain the translational velocity of the camera placed under the X-4 structure. It should be noticed that optical flow gives the velocity in the image plane, which in this case corresponds to the velocity of the X-4 displacements when using the floor as reference. The proposed algorithm thus obtains $(X_W, Y_W, Z_W, \psi)$, that is, the 3-dimensional position and orientation of the target with respect to the X-4 camera. Using the optical flow algorithm we acquire $(\dot{x}, \dot{y})$, the X-4 velocity estimate corresponding respectively to $(\dot{x}_{ci}, \dot{y}_{ci})$ in world coordinates.
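A sketch of this position and yaw-reference computation is given below; Eqs. 7 and 8 are applied as written above, and normalizing l1 and l2 before taking the dot product is our assumption (the paper writes the dot product directly).

```python
# Sketch of the relative-position and yaw-reference computation of Eqs. 7-8
# and psi = arccos(l1 . l2). Normalizing l1 and l2 is our assumption.
import numpy as np

def relative_pose(center_px, v0_px, focal_px, z_w):
    """center_px: target center (x_ci, y_ci) in the image, v0_px: one target
    vertex, focal_px: focal length in pixels, z_w: altitude over the target
    obtained from the known target size."""
    x_ci, y_ci = center_px
    x_w = focal_px * x_ci / z_w          # Eq. 7 as written above
    y_w = focal_px * y_ci / z_w          # Eq. 8 as written above

    # Yaw reference from the image lines l1 (v0 -> center) and
    # l2 (image corner (0, 0) -> center), taken here as unit vectors.
    l1 = np.asarray(center_px, dtype=float) - np.asarray(v0_px, dtype=float)
    l2 = np.asarray(center_px, dtype=float)
    l1 /= np.linalg.norm(l1)
    l2 /= np.linalg.norm(l2)
    psi = np.arccos(np.clip(np.dot(l1, l2), -1.0, 1.0))
    return x_w, y_w, psi
```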
Fig. 3 Flowchart of image processing
The flowchart in Fig. 3 summarizes the image processing algorithms used to obtain the position, orientation, and velocity data. Once this 6-tuple is obtained, the information is sent to the X-4 by means of a wireless modem at a baud rate of 38400. Our X-4 vehicle is equipped with a reception modem connected to the serial port of a Rabbit microprocessor embedded in the X-4. The output of these methods during real flight tests is shown in Figs. 4 and 5.

Fig. 4 X-4 real time image

Fig. 5 X-4 real time displacement
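As an illustration, the 6-tuple could be framed and written to the 38400-baud link with pyserial as sketched below; the port name and the packet layout are assumptions, since the paper does not describe its framing protocol.

```python
# Sketch of sending the (X_W, Y_W, Z_W, psi, vx, vy) tuple over the 38400-baud
# wireless modem. Port name and packet layout are assumptions.
import struct
import serial  # pyserial

def open_link(port="/dev/ttyUSB0"):
    return serial.Serial(port, baudrate=38400, timeout=0.1)

def send_state(link, x_w, y_w, z_w, psi, vx, vy):
    # '<6f' packs six little-endian floats; the 0xAA header eases resync.
    link.write(b"\xAA" + struct.pack("<6f", x_w, y_w, z_w, psi, vx, vy))
```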
3 Control

A common mathematical model for the quadrotor is implemented, aiming at developing the navigation control strategy [8]:

$$\ddot{x} = -u_1(\cos\phi\sin\theta\cos\psi + \sin\phi\sin\psi)$$
$$\ddot{y} = -u_1(\cos\phi\sin\theta\sin\psi - \sin\phi\cos\psi)$$
$$\ddot{z} = 1 - u_1(\cos\phi\cos\theta) \qquad (9)$$
$$\ddot{\theta} = u_2, \qquad \ddot{\phi} = u_3, \qquad \ddot{\psi} = u_4$$

The quadrotor model, presented in Eq. 9, can be written in state-space form by introducing the state vectors $X = (x_1, \ldots, x_6)^T \in \Re^6$ and $Z = (z_1, \ldots, z_6)^T \in \Re^6$, with states defined by

$$x_1 = x,\; x_2 = \dot{x},\; x_3 = y,\; x_4 = \dot{y},\; x_5 = z,\; x_6 = \dot{z}$$
$$z_1 = \theta,\; z_2 = \dot{\theta},\; z_3 = \phi,\; z_4 = \dot{\phi},\; z_5 = \psi,\; z_6 = \dot{\psi} \qquad (10)$$

Using the linear representation of the model in Eq. 9 and the notation from Eq. 10, one has

$$\dot{x}_1 = x_2, \quad \dot{x}_2 = -z_1 u_1, \quad \dot{x}_3 = x_4, \quad \dot{x}_4 = z_3 u_1, \quad \dot{x}_5 = x_6, \quad \dot{x}_6 = 1 - u_1$$
$$\dot{z}_1 = z_2, \quad \dot{z}_2 = u_2, \quad \dot{z}_3 = z_4, \quad \dot{z}_4 = u_3, \quad \dot{z}_5 = z_6, \quad \dot{z}_6 = u_4 \qquad (11)$$

Due to the fact that, in general, the vehicle never works in areas where $|\theta| \ge \pi/2$ and $|\phi| \ge \pi/2$,
the linear model is chosen for implementation instead of its nonlinear version. Such working domains are satisfied even in research works where the nonlinear model is used together with a feedback control [5].
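For reference, the linearized model of Eq. 11 can be integrated numerically as in the following sketch; the inputs used here are arbitrary test signals, not the control laws derived later.

```python
# Minimal sketch integrating the linearized model of Eq. 11 with forward Euler,
# useful for checking controllers in simulation. Inputs u1..u4 are arbitrary
# test signals here, not the paper's control laws.
import numpy as np

def step_linear_model(x, z, u, dt):
    """x = (x, dx, y, dy, z, dz), z = (theta, dtheta, phi, dphi, psi, dpsi),
    u = (u1, u2, u3, u4). Returns the states one Euler step later."""
    u1, u2, u3, u4 = u
    dx = np.array([x[1], -z[0] * u1, x[3], z[2] * u1, x[5], 1.0 - u1])
    dz = np.array([z[1], u2, z[3], u3, z[5], u4])
    return x + dt * dx, z + dt * dz

# Example: hover-like input (u1 = 1 cancels the normalized gravity term).
x_state, z_state = np.zeros(6), np.zeros(6)
for _ in range(100):
    x_state, z_state = step_linear_model(x_state, z_state, (1.0, 0, 0, 0), 0.01)
```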
3.1 Operating Modes of the Moving Target Mission

The navigation mission has been divided into different sub-missions or stages:

– Take-off (TO): the objective is to achieve the desired altitude zd; it is assumed that the moving target is initially inside the camera's field of view.
– Target localization (TL): in this mode, the vehicle has achieved the desired altitude. The task to be accomplished here is to align the vehicle's center of gravity (CG) with respect to the moving target.
– Loss of moving target (LOMT): the vehicle is required to change its altitude in order to find the moving target, i.e., until the moving target is back inside the camera's field of view. Once the mini-UAV has found the moving target, the vehicle should go back to the desired altitude and pursue the moving target.

The navigation control is structured in a different controller for each sub-mission, as illustrated in Fig. 6; a minimal supervisor sketch is given below.

Fig. 6 Navigation control system
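A minimal supervisor consistent with the three modes above could look as follows; the altitude tolerance is a placeholder and the visibility flag is assumed to come from the target detection stage.

```python
# Sketch of the mode supervisor implied by Fig. 6. The mode names come from
# the text; the altitude tolerance value is a placeholder.
def select_mode(target_visible, altitude, desired_altitude, alt_tol=0.05):
    """Return the active sub-mission: 'TO', 'TL' or 'LOMT'."""
    if not target_visible:
        return "LOMT"                      # climb until the target reappears
    if abs(altitude - desired_altitude) > alt_tol:
        return "TO"                        # reach the desired altitude first
    return "TL"                            # align the CG with the target
```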
3.2 Control Laws in Each Operating Mode

The control strategy proposed in all the different modes is based on the idea that the global system, presented in Eq. 11, is constituted of two subsystems, the attitude dynamics and the position dynamics, with a time-scale separation existing between them [19]. From this fact, it is possible to propose a hierarchical control scheme, where the positioning controller provides the reference attitude angles (θd, φd and ψd), which are the angles that must be tracked by the orientation controllers. For the complete system analysis, the error dynamics of the model in Eq. 11 are represented by the errors $\tilde{x}_i = x_i - x_{i_d}$ and $\tilde{z}_i = z_i - z_{i_d}$, with $i \in \{1, \ldots, 6\}$.

3.2.1 Attitude Control

For the present study, the attitude dynamics have the same controller in all operating modes. An integral sliding mode control (ISMC) is proposed and implemented on the platform, as explained next.

For the pitch dynamics case, the error equation is defined as $\tilde{z}_1 = z_1 - z_{1d}$. As shown in [28], let us select the switching function

$$s(z, t) = \dot{\tilde{z}}_1 + 2\lambda \tilde{z}_1 + \lambda^2 \int_0^t \tilde{z}_1(\tau)\, d\tau \qquad (12)$$

which depends on the pitch dynamics states. The parameter λ in Eq. 12 is the slope of the sliding line and should be greater than zero to ensure the asymptotic stability of the sliding mode. Computing the time derivative of Eq. 12 one has

$$\dot{s} = z_1 u_1 + 2\lambda z_2 + \lambda^2 \tilde{z}_1 \qquad (13)$$

Considering the sliding mode condition $\dot{s} = 0$, and using Eq. 13, one finds the equivalent control

$$z_{1_{eq}} = -2\lambda z_2 - \lambda^2 \tilde{z}_1 \qquad (14)$$

With the purpose of obtaining a control law such that the state $\tilde{z}_1$ remains on the sliding surface $s(z, t) = 0$, $\forall t > 0$, let us use the Lyapunov function candidate

$$v(s) = \tfrac{1}{2} s^2 \qquad (15)$$
A sufficient condition for the stability of the pitch sub-system can be satisfied if one can ensure that the reaching condition

$$\dot{v}(s) = \tfrac{1}{2}\frac{d}{dt} s^2 \le -\eta |s|, \qquad \eta \ge 0 \qquad (16)$$

holds. Then the system remains on the sliding surface and the states go to the origin. Thus $s\dot{s} \le -\eta|s|$, and the controller must be chosen such that

$$z_1 = z_{1_{eq}} - K\,\mathrm{sign}(s) \qquad (17)$$

where K is a positive real number. Following a similar approach, it is possible to obtain the roll and yaw angle controllers.
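A discrete-time sketch of the resulting pitch law (Eqs. 12–17) is shown below; the gains λ and K are placeholders and the integral term is accumulated numerically.

```python
# Discrete-time sketch of the ISMC pitch law of Eqs. 12-17. The gains lambda
# and K are placeholders; the error integral is accumulated numerically.
import numpy as np

class IntegralSlidingMode:
    def __init__(self, lam=2.0, K=0.5):
        self.lam, self.K = lam, K
        self.err_int = 0.0

    def control(self, z1, z2, z1_d, dt):
        e = z1 - z1_d                       # tilde z1
        self.err_int += e * dt              # integral term of Eq. 12
        # With a constant reference, d/dt(tilde z1) = z2.
        s = z2 + 2 * self.lam * e + self.lam**2 * self.err_int   # Eq. 12
        z1_eq = -2 * self.lam * z2 - self.lam**2 * e             # Eq. 14
        return z1_eq - self.K * np.sign(s)                       # Eq. 17
```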
3.3 Position Control

For each sub-mission or stage, the position control has well-defined objectives, previously explained. Motion in the x–y plane is accomplished by orienting the vehicle's thrust vector in the direction of the desired displacement. As a consequence, the angles θd and φd act as virtual controllers for the position dynamics. The control laws proposed for the x and y positions, respectively, are expressed as

$$\theta_d = \frac{k_{vx}(x_2 - x_{2d}) + k_{px}(x_1 - x_{1d})}{u_1} \qquad (18)$$

$$\phi_d = -\,\frac{k_{vy}(x_4 - x_{4d}) + k_{py}(x_3 - x_{3d})}{u_1} \qquad (19)$$

with $k_{vx}$, $k_{px}$, $k_{vy}$ and $k_{py}$ being positive real numbers.
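The virtual controllers of Eqs. 18 and 19 can be computed as in the following sketch; the gain values are placeholders.

```python
# Sketch of the virtual attitude references of Eqs. 18-19. Gains are
# placeholders; u1 is the current (nonzero) normalized thrust.
def attitude_references(x, x_d, u1, kpx=1.0, kvx=1.5, kpy=1.0, kvy=1.5):
    """x, x_d: position states (x, dx, y, dy, z, dz) and their references."""
    theta_d = (kvx * (x[1] - x_d[1]) + kpx * (x[0] - x_d[0])) / u1   # Eq. 18
    phi_d = -(kvy * (x[3] - x_d[3]) + kpy * (x[2] - x_d[2])) / u1    # Eq. 19
    return theta_d, phi_d
```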
3.3.1 Altitude Control

The z-position control is composed of two different controllers, one for the situation when the m.t. is being detected and one for the situation when it is not. When the m.t. is inside the camera's field of view, the proposed control law is a state feedback taking into account only the states involving the altitude dynamics. On the other hand, when the vehicle loses the image of the target, a switch to a different method for measuring the vehicle's ψ angle occurs and, at
the same time, the z controller changes. In both cases, the control objective is to regulate the x and y states to the origin, i.e., $x_{1d} = x_{3d} = 0$. There are two possible situations in which the m.t. is lost by the mini-UAV. The first one occurs when the camera's field of view is obstructed by some object between the m.t. and the UAV. The second one occurs when the m.t. presents an acceleration large enough to prevent tracking of the target. In this paper we focus only on the second case, considering that the camera's field of view is never obstructed, i.e., the case where it is impossible to track the m.t. due to a lack of information in the visual system. We then propose a searching process consisting of changing the altitude until the m.t. is found again.

– Control scheme when the m.t. is detected: in this case, the feedback control law proposed is given by

$$u_{1_{mt}} = k_{pz}(x_5 - x_{5d}) + k_{vz}(x_6 - x_{6d}) - 1 \qquad (20)$$

where $k_{pz}$ and $k_{vz}$ are positive real numbers. Then, using the controllers in Eqs. 18–20, the closed-loop system of the position dynamics (left-hand side of Eq. 11) is given by

$$\dot{x}_1 = x_2, \quad \dot{x}_2 = -k_{px}x_1 - k_{vx}x_2$$
$$\dot{x}_3 = x_4, \quad \dot{x}_4 = -k_{py}x_3 - k_{vy}x_4 \qquad (21)$$
$$\dot{x}_5 = x_6, \quad \dot{x}_6 = -k_{pz}x_5 - k_{vz}x_6$$

Equation 21 can be represented as $\dot{X} = A_{mt}X$, where

$$A_{mt} = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0\\ -k_{px} & -k_{vx} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & -k_{py} & -k_{vy} & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & -k_{pz} & -k_{vz} \end{pmatrix} \qquad (22)$$

– Control scheme when the m.t. is not detected: in this case, the proposed control law is given by

$$u_{1_{\overline{mt}}} = k_{pz}(x_5 - x_{5d}) + k_{vz}(x_6 - x_{6d}) - k_{Zx}x_1 - k_{Zy}x_3 - 1 \qquad (23)$$

where $k_{Zx}$ and $k_{Zy}$ are positive real numbers. Then, the closed-loop system is represented as $\dot{X} = \bar{A}_{mt}X$, where

$$\bar{A}_{mt} = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0\\ -k_{px} & -k_{vx} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & -k_{py} & -k_{vy} & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1\\ -k_{Zx} & 0 & -k_{Zy} & 0 & -k_{pz} & -k_{vz} \end{pmatrix} \qquad (24)$$

A numerical Hurwitz check of these two matrices is sketched below.
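The Hurwitz property used in the next section can be checked numerically for a chosen set of gains, as in the following sketch; all gain values are placeholders.

```python
# Numerical check that A_mt and Abar_mt (Eqs. 22 and 24) are Hurwitz for a
# chosen set of gains. All gain values below are placeholders.
import numpy as np

kpx = kvx = kpy = kvy = kpz = kvz = 1.0
kZx = kZy = 0.2

def block(kp, kv):
    return np.array([[0.0, 1.0], [-kp, -kv]])

A_mt = np.zeros((6, 6))
A_mt[0:2, 0:2] = block(kpx, kvx)
A_mt[2:4, 2:4] = block(kpy, kvy)
A_mt[4:6, 4:6] = block(kpz, kvz)

Abar_mt = A_mt.copy()
Abar_mt[5, 0] = -kZx        # altitude row also couples to x1 ...
Abar_mt[5, 2] = -kZy        # ... and to x3 (Eq. 24)

for name, A in (("A_mt", A_mt), ("Abar_mt", Abar_mt)):
    print(name, "Hurwitz:", bool(np.all(np.linalg.eigvals(A).real < 0)))
```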
3.4 Stability Analysis of the Position Sub-System

This section focuses on the system's stability across the switching boundaries, i.e., where the altitude dynamics switches between the pair of controllers in Eqs. 20 and 23.

Following an approach similar to the one presented in [21], it is possible to find a common Lyapunov function for the closed-loop system formed by applying the two controllers of the position dynamics. However, working in this way, the same pole locations would have to be chosen for both cases, when the m.t. is detected and when it is not, which in fact is not the case here.

Let us define $d_x$ and $d_y$ as the distances measured from the projection of the vehicle's center of gravity to the point where the camera loses the image of the road, see Fig. 7. Thus, when the error is larger than $d_x$ for the x dynamics or $d_y$ for the y dynamics, the control law is changed. A change of coordinates can be made using the known constants $d_x$ and $d_y$. Thus, without loss of generality, a state-dependent switched linear system can be defined, given by the closed-loop system together with the switching conditions

$$\dot{X} = \begin{cases} A_{mt}X & \text{if } |e_x| < d_x \text{ and } |e_y| < d_y\\ \bar{A}_{mt}X & \text{if } |e_x| \ge d_x \text{ or } |e_y| \ge d_y \end{cases} \qquad (25)$$

It is clear that each individual system in Eq. 25 is stable, since the matrices $A_{mt}$ and $\bar{A}_{mt}$ are Hurwitz.

Fig. 7 Camera's view
Suppose that there is a family $A_p$, $p \in \mathcal{P}$, of functions from $\Re^n$ to $\Re^n$, with $\mathcal{P} = \{1, 2, \ldots, m\}$ the finite index set. For the case of linear systems, this results in a family of systems $\dot{x} = A_p x$ with $A_p \in \Re^{n \times n}$. Let us define a piecewise constant function $\sigma : [0, \infty) \to \mathcal{P}$ with a finite number of discontinuities (switching times) on every bounded time interval. This function takes a constant value on every interval between two consecutive switching times. Then σ gives the index $\sigma(t) \in \mathcal{P}$ of the system that is active at each instant of time t.

Theorem 1 Consider vectors $t_{pq}$, symmetric matrices $S_p$ with non-negative entries such that the operating region of mode p is contained in $\{x : x^T S_p x \ge 0\}$, $\forall p \in \mathcal{P}$, and symmetric matrices $P_p$ such that:

$$A_p^T P_p + P_p A_p + \beta_p S_p < 0, \qquad \beta_p \ge 0 \qquad (26)$$

$$0 < P_p - P_q + f_{pq} t_{pq}^T + t_{pq} f_{pq}^T \quad \text{for some } t_{pq} \in \Re^n \qquad (27)$$

with the boundary between p and q of the form $\{x : f_{pq}^T x = 0\}$, $f_{pq} \in \Re^n$. Then every continuous, piecewise $C^1$ trajectory of the system $\dot{x} = A_\sigma x$ tends to zero exponentially.

Proof See the Appendix.
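Given candidate matrices, the conditions of Theorem 1 can be verified numerically as sketched below; finding suitable $P_p$, $S_p$, $\beta_p$ and $t_{pq}$ in the first place would normally be done with an LMI solver, which is not shown here.

```python
# Sketch of a numerical check of the conditions of Theorem 1 for candidate
# matrices. Only the verification step is sketched; the search for P_p, S_p,
# beta_p and t_pq would normally be done with an LMI solver.
import numpy as np

def is_negative_definite(M, tol=1e-9):
    return bool(np.all(np.linalg.eigvalsh(0.5 * (M + M.T)) < -tol))

def is_positive_definite(M, tol=1e-9):
    return bool(np.all(np.linalg.eigvalsh(0.5 * (M + M.T)) > tol))

def check_theorem1(A, P, S, beta, pairs):
    """A, P, S, beta: dicts indexed by mode p; pairs: list of
    (p, q, f_pq, t_pq) tuples describing each switching boundary."""
    ok = all(is_negative_definite(A[p].T @ P[p] + P[p] @ A[p] + beta[p] * S[p])
             for p in A)                                            # Eq. 26
    for p, q, f, t in pairs:                                        # Eq. 27
        f, t = f.reshape(-1, 1), t.reshape(-1, 1)
        ok = ok and is_positive_definite(P[p] - P[q] + f @ t.T + t @ f.T)
    return ok
```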
Fig. 8 Embedded control architecture

4 Experimental Platform

The total weight of the vehicle is about 500 g, with a flight endurance of approximately 10 min. The theoretical results obtained were incorporated into an autopilot control system using an architecture based on a 29 MHz Rabbit microcontroller with 512 KB of Flash and 512 KB of RAM. These microcontrollers are capable of handling floating point operations and multitasking processing thanks to the Dynamic C compiler [27]. We have built our own inertial measurement unit (IMU) using accelerometers, gyros, and a compass to obtain the roll, pitch, and yaw angles, as well as the angular rates. The IMU information is sent to the microcontroller, which also reads reference control inputs from an operator through a serial wireless modem. The microcontroller subsequently combines this information to compute the control inputs and sends the control corrections to the motors through an I2C serial port. The vision sensor is a Wi-Fi camera placed onboard the X-4 platform; the camera points downwards, allowing the optical flow and positioning algorithms to be performed.
Fig. 9 X-4 altitude (cm) versus time (s)

Fig. 10 X-4 position over X-axis (cm) versus time (s)

Fig. 11 X-4 position over Y-axis (cm) versus time (s)

Fig. 12 X-4 velocity over X-axis (cm/s) versus time (s)

Fig. 13 X-4 velocity over Y-axis (cm/s) versus time (s)
The algorithm processes images coming from the camera at a rate of 18 frames per second. The image size is 320 × 240 pixels. The 3D position and displacement velocity estimates are computed on a PC with an Intel Core 2 Duo processor at 2.10 GHz using the OpenCV libraries. Figure 8 shows a block diagram of the basic architecture.
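Putting the pieces together, the ground-station loop could be organized as in the following sketch; the camera index and the helper functions process_frame and send_state refer to the earlier sketches and are assumptions rather than the paper's actual code.

```python
# Sketch of the ground-station loop: grab 320x240 frames from the camera and
# push each estimate to the vehicle. process_frame and send_state are passed in
# as callables (e.g., built from the earlier sketches); camera index 0 and the
# helper names are assumptions, not the paper's actual code.
import cv2

def ground_station_loop(process_frame, send_state, camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            state = process_frame(frame)   # edge detection, target, optical flow
            if state is not None:
                send_state(*state)         # 6-tuple over the wireless modem
    finally:
        cap.release()
```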
5 Real Time Experiments

To test the proposed algorithms we performed several experiments. In this article we describe a real flight experiment consisting of 180 s of tracking a car moving in translation at constant velocity. The X-4 position and velocity estimates are obtained by means of the image processing algorithm. In the first 10 s we observe the initialization of the algorithms; during this time the target detection algorithm identifies the target in the field of view of the camera. When the target is detected we obtain the position of the camera, and the X-4 then tries to move over the target. After the initialization, the control algorithm uses the take-off (TO) navigation mode with the desired altitude zd = 90 cm. At 55 s the target moves faster than the X-4 and the camera loses the target. With this information the X-4 navigation method changes to loss of moving target (LOMT) mode, increasing its altitude to over 120 cm. At this altitude the X-4 camera localizes the target, and the target localization (TL) mode tries to move the X-4 toward the desired altitude. Figure 9 shows how the X-4 moves to the desired altitude using the embedded switching control. Figure 10 shows the X-4 displacement estimate along its X-axis over 180 s of real flight. The initialization steps can be seen during the first 10 s of flight. After that, we observe the displacement of the X-4 center of gravity and how the X-4 embedded control tries to move it to a desired position close to 60 cm on the X-axis. Figure 11 shows the X-4 displacement estimate along its Y-axis over 180 s of real flight. After the 10 s initialization step, the X-4 embedded control moves it to a desired position on the Y-axis, in our case 50 cm. Figures 12 and 13 show the X-4 velocity along the X-axis and Y-axis, respectively, over 180 s of flight. The velocity estimates were obtained using the optical flow algorithm.
6 Conclusion

In this paper we presented a UAV tracking a target placed on a ground vehicle. The strategy makes use of a vision system that estimates the position, orientation, and displacement velocity of the UAV with respect to the moving target. The navigation mission uses different control algorithms: a take-off (TO) mode for achieving a desired altitude; a target localization (TL) mode whose goal is to align the vehicle's center of gravity with respect to the moving target; and a loss of moving target (LOMT) mode in which the vehicle is required to change its altitude in order to find the moving target. The UAV tracking algorithm has been developed and tested using our experimental X-4 platform in real-time flights. The real-time experiments have shown an acceptable performance of the UAV applying the proposed navigation control scheme.

Future work will concern extending the strategies presented in this paper so that they work under different conditions, such as different target shapes. Pattern recognition methods, such as machine learning, are being tested with the main objective of providing a more robust estimation of the shape and location of the target.

Acknowledgements This work was partially supported by the Mexican National Council for Science and Technology (CONACYT), the Institute for Science & Technology of Mexico City (ICyTDF) and the French National Center for Scientific Research (CNRS).
Appendix

Before proving Theorem 1, let's use the