Simultaneous Tracking of Multiple Ground Targets from a Single Multirotor UAV

Nathaniel Miller∗
Texas A&M University, College Station, TX, 77843, USA

Jonathan Rogers†
Georgia Institute of Technology, Atlanta, GA, 30332, USA

An algorithm for autonomous multirotor tracking of an arbitrary number of ground targets is formulated and evaluated through simulation. The algorithm consists of a particle filter to predict target motion, a trajectory generator, and a model predictive controller operating over a finite time horizon. Furthermore, a target selection algorithm is included to reject targets that preclude accurate tracking of the main target set. Performance of the guidance algorithm is evaluated through simulation using real-world target data. Simulation results show that targets can be kept within the camera's field of view when using either a gimbaled or non-gimbaled camera, but performance may be substantially degraded in the non-gimbaled case when target dynamics occur on time scales similar to tracking vehicle dynamics.

Nomenclature

v          Ground speed
ψ          Course over ground
alat, alon   Lateral and longitudinal accelerations
x, y, z    Easterly and northerly distance from origin, height above ground
xcam, ycam   Easterly and northerly center of camera frame
znearest     Nearest altitude range (for gain scheduling)
φ, θ, φc, θc   Roll and pitch, commanded roll and pitch
T, Tc      Non-dimensionalized thrust, commanded non-dimensionalized thrust
τ          Time constant
cd         Coefficient of drag
X, X̄, σX   Measurement, expected value, and standard deviation of random variable
n          Number of samples currently held in accumulator
N          Maximum number of samples holdable in accumulator
t̄          Time vector
H          Horizon length
k, K       Current time index, number of time indices
x̄, ū, Ȳ    State, control, and output vectors at instant in time
Ỹ          Desired output vector at instant in time
[Ȳ], [Ỹ]   Estimated and desired output vector over time horizon
[ū]        Optimal control vector over time horizon
∗Graduate Research Assistant, Department of Aerospace Engineering, Student Member, AIAA.
†Assistant Professor, Woodruff School of Mechanical Engineering, Member, AIAA.

AIAA Atmospheric Flight Mechanics Conference, 16-20 June 2014, Atlanta, GA. AIAA 2014-2670. DOI: 10.2514/6.2014-2670.
Copyright © 2014 by Nathaniel R Miller. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission.

x̃, ỹ, z̃    Desired x, y, and z positions
[k]        Gain matrix
[kmin]     Minimal gain matrix producing ūk+1 only
rj         Distance from target j to centroid of other targets

I. Introduction

There is increasing interest in the ability to task low-cost airborne assets to autonomously track ground targets, specifically through a video feed. Such autonomous tracking capabilities may allow air vehicles to perform monitoring and surveillance activities without persistent oversight by an operator, which may be a prohibitively burdensome task in cases involving large numbers of targets and/or tracking vehicles. In scenarios where the number of tracking vehicles is limited and the number of targets is large, one air vehicle may be responsible for simultaneously tracking multiple targets. In this case, the tracking problem primarily becomes one of vehicle guidance and path planning, since the vehicle flight path must permit on-board sensors to track multiple ground targets that may be uncooperative. As ground targets move farther apart, the air vehicle must increase altitude to ensure the targets remain within sensor fields of view. Furthermore, the air vehicle must continuously search for the optimal position that minimizes the probability of targets being obscured or unexpectedly maneuvering out of the field of view. Thus, the vehicle path planning problem and the tracking solution are coupled and create a complex guidance problem that must be solved in real time. No solution currently exists for this type of problem, and therefore tactical UAVs are often restricted to tracking and engaging one target at a time.
An extensive body of literature exists on automated airborne target tracking algorithms. Video-based tracking of a single target has been well studied,1,2 but such methods do not easily generalize to the simultaneous multi-target case. Similarly, the problem of tracking and engaging multiple targets from multiple aerial platforms has been studied somewhat extensively during the past decade (see, for instance,3-7). In many ways these are simple generalizations of single-target, single-tracker algorithms, in which the number of trackers needed is the same as the number of targets. For high-altitude missions tasked with tracking targets in a confined area, such as the Predator scenario discussed in,8 the multi-target tracking problem from a single vehicle can be easily solved by flying standard surveillance patterns and determining optimal pointing solutions for on-board sensors. The problem becomes fundamentally different and far more difficult, however, when tracking is done with tactical low-altitude UAVs with on-board, low-cost sensors of limited range and resolution. In this case, the vehicle must constantly climb, descend, and reposition to ensure adequate visual range and resolution with low-cost sensors. In addition, several algorithms have also been developed to track multiple targets when keeping all targets simultaneously in view is not a priority.9,10 Such solutions fail if targets move unpredictably while not under observation.

This paper presents an algorithm for tracking an arbitrary number of ground targets from a single multirotor. For the purposes of algorithm development it is assumed that vehicle positions are known, and the control system is tasked to keep all targets within the video field of view to the maximum extent possible. Furthermore, it is assumed that the tasks of pointing the sensor and determining target motion from the sensor feed (both topics that are well studied11-14) are completed by external algorithms separate from the controller derived here. The tracking algorithm is constructed by first generating a reference trajectory given predicted target motion. Based on the reference trajectory, a receding-horizon optimal control problem is formulated and a linear model predictive control algorithm is implemented assuming vehicle dynamics similar to a multirotor UAV. A particle filter algorithm is used to predict target motion over the receding control horizon. In addition, a target rejection algorithm limits tracking vehicle altitude below a certain ceiling, ensuring that tracking resolution of the total target set is not sacrificed in order to track a single rogue target.

The paper proceeds as follows. First, the guidance algorithm is described, including the trajectory generator, model predictive controller, and target motion particle filter. Simulation results are provided for an example small-scale hexcopter aircraft, and trade studies are performed examining the effects of camera field of view, target rejection, and gimbaled vs. non-gimbaled performance. Finally, results demonstrate that the algorithm enables simultaneous multi-target tracking for a broad range of ground target types and is robust to uncooperative target motion.
`
II. Tracking Algorithm

Figure 1. Graphic representing the proposed tracking architecture. The asset assignment algorithm (not the focus of this work) assigns each airborne tracking vehicle (denoted Am) a set of targets to track (denoted Tn). The focus of this work is the control system associated with the single-agent, N-target problem.

A general multi-agent, multi-target tracking problem may be formulated by considering M tracking agents and N targets. As shown in Figure 1, a centralized architecture consists of an asset assigner which assigns to each agent a set of targets. Each agent is agnostic to both the other agents and the targets assigned to other agents. The focus of this paper is therefore the single-agent, N-target tracking problem, specifically the control algorithm on-board each agent that allows it to track multiple targets simultaneously. The process of asset assignment, whether performed in a centralized or decentralized manner, is taken for granted in the discussion that follows.

The multi-target tracking algorithm operates in four layers. First, the trajectory of each target is predicted using a simple target model and particle filter. A trajectory generator then takes the estimated paths of all targets and constructs a desired 3D reference trajectory for the multirotor to follow. Linear model predictive control (MPC) with a finite time horizon is used to track this trajectory. Finally, a target rejection algorithm determines whether any targets are driving the tracking vehicle to prohibitively high altitudes and reducing tracking performance for the majority of targets. Such rogue targets are removed from consideration through a mathematical rejection criterion. A diagram of the basic control architecture is shown in Figure 2.

A. Rotorcraft Dynamic Model

A simple linear model of a multirotor is constructed assuming the rotorcraft contains a stabilization system capable of tracking commanded Euler angles with a first-order lag. Translational acceleration is accomplished by thrust vectoring using the roll and pitch degrees of freedom. A drag term, linear in the velocity states, is added to the three translational velocities. The controls are taken to be commanded roll rate, commanded pitch rate, and commanded thrust. Two controller states, commanded roll and commanded pitch, are also included in the state vector. In this way both control and control rate can be penalized within the model predictive controller. Thrust is non-dimensionalized by vehicle mass, resulting in equations of motion independent of vehicle mass. Thrust, roll, pitch, roll rate, and pitch rate are constrained to reasonable values, discussed further in Section 3.2. Gravity is accounted for in the dynamic model by decreasing the non-dimensionalized thrust by 1. Equations (1)-(7) describe the linear equations of motion for the aircraft-camera system.

$$\bar{x} = \begin{bmatrix} \phi & \theta & T & x & y & z & \dot{x} & \dot{y} & \dot{z} & \phi_c & \theta_c \end{bmatrix}^t \qquad (1)$$

$$\bar{u} = \begin{bmatrix} \dot{\phi}_c & \dot{\theta}_c & T_c \end{bmatrix}^t \qquad (2)$$

`
`

`
Figure 2. Block diagram representing the multi-target tracking control algorithm. The targets' state drives the particle filters and reference trajectory generator; a control matrix [kmin] is selected based on altitude; and the resulting agent control sequence is passed to the autopilot, which issues vehicle-specific controls to the system, with vehicle state fed back.

$$A = \begin{bmatrix}
-\tau_\phi & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \tau_\phi & 0 \\
0 & -\tau_\theta & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \tau_\theta \\
0 & 0 & -\tau_T & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & c_{d_x} & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & c_{d_y} & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & c_{d_z} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix} \qquad (3)$$

$$B = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & \tau_T & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}^t \qquad (4)$$

$$\bar{Y} = \begin{cases}
\begin{bmatrix} x & y & z & \phi & \theta \end{bmatrix}^t & : \text{gimbaled} \\
\begin{bmatrix} x & y & z & \phi & \theta & x_{cam} & y_{cam} \end{bmatrix}^t & : \text{non-gimbaled}
\end{cases} \qquad (5)$$

$$x_{cam} = x + z\tan(-\phi) \approx x - z\phi \qquad (6)$$

$$y_{cam} = y + z\tan(-\theta) \approx y - z\theta \qquad (7)$$

The xcam and ycam terms in Equation (5) are a nonlinear product of altitude and orientation angle. To maintain linearity, gain scheduling is implemented in Equations (6) and (7). The user-defined permissible altitude range is discretized into ten intervals on a logarithmic scale and Equation (8) is used to generate the output matrix C assuming altitude z is constant on the interval. In this way, the standard linear system equations, shown in (9) and (10), can be used.
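The logarithmic altitude discretization and the resulting scheduled output matrix can be sketched as follows. This is a minimal illustration, not code from the paper; the helper names (`altitude_bins`, `select_bin`, `c_nongimbaled`) and the bin-selection convention are assumptions, and the state ordering follows Equation (1).

```python
import numpy as np

def altitude_bins(z_min, z_max, n=10):
    """Discretize the permissible altitude range into n intervals
    on a logarithmic scale, as described for the gain-scheduled C."""
    return np.logspace(np.log10(z_min), np.log10(z_max), n + 1)

def select_bin(z, edges):
    """Index of the altitude interval containing z (clipped to the range)."""
    return int(np.clip(np.searchsorted(edges, z) - 1, 0, len(edges) - 2))

def c_nongimbaled(z_nearest):
    """Output matrix of Eq. (8), non-gimbaled case, evaluated at a
    representative altitude z_nearest. State order:
    [phi, theta, T, x, y, z, xdot, ydot, zdot, phi_c, theta_c]."""
    C = np.zeros((7, 11))
    C[0, 3] = C[1, 4] = C[2, 5] = 1.0   # rows for x, y, z
    C[3, 0] = C[4, 1] = 1.0             # rows for phi, theta
    C[5, 3] = C[6, 4] = 1.0             # camera center starts at x, y
    C[5, 0] = C[6, 1] = -z_nearest      # x_cam = x - z*phi, y_cam = y - z*theta
    return C
```

At run time, the controller gain associated with `select_bin(z, edges)` would be applied, mirroring the gain-scheduling block in Figure 2.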
`
`
`

`
`
`
$$C = \begin{cases}
\begin{bmatrix}
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix} & : \text{gimbaled} \\[1ex]
\begin{bmatrix}
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-z_{nearest} & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & -z_{nearest} & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix} & : \text{non-gimbaled}
\end{cases} \qquad (8)$$

$$\dot{\bar{x}} = A\bar{x} + B\bar{u} \qquad (9)$$

$$\bar{Y} = C\bar{x} \qquad (10)$$

The continuous system equations of motion are transformed into a discrete system using a zero-order hold transform with a timestep of 0.2 seconds. These discrete equations of motion are given in (11) and (12).

$$\bar{x}_{k+1} = A_d\bar{x}_k + B_d\bar{u}_k \qquad (11)$$

$$\bar{Y}_k = C_d\bar{x}_k \qquad (12)$$
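The zero-order-hold transform producing Ad and Bd in (11) can be computed from the augmented-matrix exponential. The sketch below is illustrative rather than from the paper; it evaluates the exponential with a truncated Taylor series, which is adequate for the small 0.2 s timestep used here (a library routine such as `scipy.signal.cont2discrete` would be the usual choice).

```python
import numpy as np

def zoh_discretize(A, B, dt, terms=30):
    """Zero-order-hold discretization: expm([[A, B],[0, 0]] * dt)
    = [[Ad, Bd],[0, I]], so Ad and Bd are read off the blocks."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A * dt
    M[:n, n:] = B * dt
    # Truncated Taylor series for the matrix exponential.
    E = np.eye(n + m)
    term = np.eye(n + m)
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E[:n, :n], E[:n, n:]

# Double-integrator sanity check: Ad = [[1, dt],[0, 1]], Bd = [[dt^2/2],[dt]].
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Ad, Bd = zoh_discretize(A, B, 0.2)
```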
`
B. Ground Target Model

Targets are assumed fixed to a flat earth and are modeled as points in two-dimensional space using an East-North-Up coordinate system. It is assumed that targets hold constant longitudinal and lateral accelerations over the planning horizon, and that lateral acceleration is small enough that it does not change the ground speed of the target (only its direction). For all cases in this paper, a planning horizon of 5 sec was used. The target equations of motion are shown in Equations (13) to (15).
`
Figure 3. Ground target coordinate system.

$$\dot{x}(t) = (a_{lon}t + v_0)\sin\phi(t) \qquad (13)$$

$$\dot{y}(t) = (a_{lon}t + v_0)\cos\phi(t) \qquad (14)$$

$$\phi(t) = \begin{cases}
\phi_0 - \dfrac{a_{lat}}{a_{lon}}\ln v_0 + \dfrac{a_{lat}}{a_{lon}}\ln(v_0 + a_{lon}t) & : a_{lon} \neq 0 \\[1ex]
\phi_0 + \dfrac{a_{lat}}{v_0}t & : a_{lon} \approx 0
\end{cases} \qquad (15)$$
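A minimal sketch of propagating the target model of Equations (13)-(15): the course uses the closed-form expression, while position is integrated numerically with the trapezoidal rule. The function name and integration scheme are illustrative assumptions, not details from the paper.

```python
import numpy as np

def propagate_target(x0, y0, v0, phi0, a_lon, a_lat, horizon=5.0, dt=0.2):
    """Propagate a constant-acceleration ground target (Eqs. 13-15)."""
    steps = int(round(horizon / dt))
    t = np.linspace(0.0, horizon, steps + 1)
    if abs(a_lon) > 1e-9:
        phi = phi0 + (a_lat / a_lon) * np.log((v0 + a_lon * t) / v0)
    else:
        phi = phi0 + (a_lat / v0) * t
    v = v0 + a_lon * t            # ground speed over the horizon
    xdot = v * np.sin(phi)        # East rate
    ydot = v * np.cos(phi)        # North rate
    # Trapezoidal integration of the velocity histories.
    x = x0 + np.concatenate(([0.0], np.cumsum(0.5 * (xdot[1:] + xdot[:-1]) * dt)))
    y = y0 + np.concatenate(([0.0], np.cumsum(0.5 * (ydot[1:] + ydot[:-1]) * dt)))
    return t, x, y
```

Each particle in the filter described below would call such a routine with its own sampled (alon, alat, v0, φ0).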
`
`

`
Accumulators are used to estimate the mean and variance of each target's course over ground (COG), ground speed, and longitudinal and lateral acceleration. Distributions of these variables are assumed Gaussian. Position, velocity, and COG measurements are taken from a GPS receiver and assumed deterministic due to the GPS's relatively high precision in taking these measurements. Longitudinal and lateral accelerations are determined by first-order finite differencing applied to the ground speed and COG. Online accumulation algorithms, as shown in Equations (16)-(18), are modified to include a forgetting factor.15

$$\bar{X}_{k+1} = \begin{cases}
\dfrac{(n-1)\bar{X}_k + X_{k+1}}{n} & : n \le N \\[1ex]
\dfrac{(N-1)\bar{X}_k + X_{k+1}}{N} & : n > N
\end{cases} \qquad (16)$$

$$M_{k+1} = \begin{cases}
M_k + (X_{k+1} - \bar{X}_{k+1})^2 & : n \le N \\[1ex]
\left[1 - \dfrac{1}{N}\right]\left[M_k + (X_{k+1} - \bar{X}_{k+1})^2\right] & : n > N
\end{cases} \qquad (17)$$

$$\sigma^2_{X_{k+1}} = \begin{cases}
\dfrac{M_{k+1}}{n-1} & : n \le N \\[1ex]
\dfrac{M_{k+1}}{N-1} & : n > N
\end{cases} \qquad (18)$$

Each target has an associated particle filter to estimate future target motion. Particles are sampled assuming a Gaussian distribution of the four unknowns on the right-hand side of Equations (13)-(15), namely alon, alat, v0, and φ0. These equations are integrated forward in time, yielding particle trajectories over a finite time horizon. Particles are regenerated, not resampled, with each new GPS measurement. This is equivalent to a particle filter in which the measurement error covariance is zero.
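The accumulator recursion can be sketched as a small class. This follows the reconstruction of Equations (16)-(18) above (the class name and structure are illustrative assumptions): once more than N samples have been seen, old information is discounted so the statistics track slowly varying target behavior.

```python
import numpy as np

class ForgettingAccumulator:
    """Running mean/variance capped at N effective samples (Eqs. 16-18)."""

    def __init__(self, N):
        self.N = N
        self.n = 0        # samples seen so far
        self.mean = 0.0   # running mean X-bar
        self.M = 0.0      # accumulated squared deviation

    def update(self, x):
        self.n += 1
        if self.n == 1:
            self.mean = x
            return
        if self.n <= self.N:
            self.mean = ((self.n - 1) * self.mean + x) / self.n
            self.M = self.M + (x - self.mean) ** 2
        else:
            # Forgetting-factor branch: discount old deviations.
            self.mean = ((self.N - 1) * self.mean + x) / self.N
            self.M = (1.0 - 1.0 / self.N) * (self.M + (x - self.mean) ** 2)

    @property
    def var(self):
        d = min(self.n, self.N) - 1
        return self.M / d if d > 0 else 0.0
```

One such accumulator would be maintained per target per quantity (COG, ground speed, alon, alat), and the particle filter would sample from the resulting Gaussians.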
`
C. Reference Trajectory Generation

A time horizon is chosen and discretized as shown in Equation (19). The desired multirotor trajectory is generated on this time horizon to keep all targets in the camera's field of view at the lowest altitude possible. Figure 4 presents a graphical example of how this is done. The desired altitude over the planning horizon is a constant determined by the minimum altitude necessary to keep all the targets' particles within the camera's field of view at the end of the planning horizon (t = H). For this work, H was chosen to be 5 seconds. Optionally, a buffer zone can be designated near the edge of the camera frame to keep targets from approaching the edge of the field of view, guarding against modeling errors and target prediction uncertainty. Use of this buffer zone has the effect of increasing the multirotor's altitude and increasing the probability that targets will remain within the camera's field of view. At the same time, this altitude increase means that the targets are viewed from a longer distance. A trade study involving this buffer distance is presented in the Results section.
`
$$\bar{t} = \begin{bmatrix} 0 & \Delta t & 2\Delta t & \dots & H \end{bmatrix} \qquad (19)$$
`
The generated trajectory is stacked into a single vector as shown in Equation (20). The reason for this unusual stacking will be made clear in Section D. The zeroes in Equation (20) represent the desired roll and pitch angles. Setting them to zero forces the MPC controller to attempt to achieve the desired velocity with as close to a level attitude as possible, which is generally desirable for video tracking purposes. The second instance of the x and y positions in the non-gimbaled output equation specifies the location of xcam and ycam, the center of the camera frame.

$$\left[\widetilde{Y}\right] = \begin{cases}
\begin{bmatrix} \widetilde{x}_k & \widetilde{y}_k & z & 0 & 0 & \dots & \widetilde{x}_{k+K} & \widetilde{y}_{k+K} & z & 0 & 0 \end{bmatrix}^t & : \text{gimbaled} \\[1ex]
\begin{bmatrix} \widetilde{x}_k & \widetilde{y}_k & z & 0 & 0 & \widetilde{x}_k & \widetilde{y}_k & \dots & \widetilde{x}_{k+K} & \widetilde{y}_{k+K} & z & 0 & 0 & \widetilde{x}_{k+K} & \widetilde{y}_{k+K} \end{bmatrix}^t & : \text{non-gimbaled}
\end{cases} \qquad (20)$$
`
Due to the inherent symmetry of multirotor systems, it is assumed that yaw angle has no effect on the vehicle dynamics. For simplicity the vehicle yaw angle is assumed to be zero at all times and is excluded from both the trajectory generation and the system model.
`
`
`

`
Figure 4. Determining multirotor position. The dark blue lines are observed paths of the targets. The many black lines emanating from the blue lines are possible target paths predicted by the particle filter. Light blue circles represent the expected location of each target at a future time. A bounding rectangle is drawn which contains the expected locations of all targets at the future time. The desired multirotor location is the center of this rectangle.
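The bounding-rectangle construction in Figure 4 can be sketched numerically. This is an illustrative assumption about the geometry, not the paper's implementation: it assumes a square field of view of a given half-angle and treats the buffer zone as a fraction of the frame that must remain empty.

```python
import numpy as np

def desired_pose(particle_xy, half_fov_deg, buffer_frac=0.0):
    """Return the camera-center position and minimum altitude keeping
    every predicted particle position (M x 2 array, at t = H) inside
    a square field of view with the given half-angle."""
    lo = particle_xy.min(axis=0)
    hi = particle_xy.max(axis=0)
    center = 0.5 * (lo + hi)                 # center of bounding rectangle
    half_extent = 0.5 * (hi - lo).max()      # worst-case half-width
    usable = 1.0 - buffer_frac               # buffer shrinks the usable frame
    z = half_extent / (usable * np.tan(np.radians(half_fov_deg)))
    return center, z
```

A nonzero `buffer_frac` raises the commanded altitude, matching the buffer-zone behavior described above.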
`
D. Model Predictive Control

A linear model predictive controller is constructed16 to track the reference trajectory. The control vector is stacked as shown in (21). A quadratic cost function is constructed to penalize both the states and controls, shown in (22). The estimated output over the planning horizon [Ȳ] for control sequence [ū] is calculated from the discrete system model using the block matrices kca and kcab given in (24) and (25).
`
$$[\bar{u}] = \begin{bmatrix} \bar{u}^t_{k+1} & \bar{u}^t_{k+2} & \dots & \bar{u}^t_{k+K} \end{bmatrix}^t \qquad (21)$$

$$J = \left([\bar{Y}] - \left[\widetilde{Y}\right]\right)^t Q \left([\bar{Y}] - \left[\widetilde{Y}\right]\right) + [\bar{u}]^t R [\bar{u}] \qquad (22)$$

$$[\bar{Y}] = k_{ca}\bar{x}_k + k_{cab}[\bar{u}] \qquad (23)$$
`
`
`

`
$$k_{ca} = \begin{bmatrix} C_dA_d \\ C_dA^2_d \\ \vdots \\ C_dA^H_d \end{bmatrix} \qquad (24)$$

$$k_{cab} = \begin{bmatrix}
C_dA^0_dB_d & 0 & \dots & 0 \\
C_dA^1_dB_d & C_dA^0_dB_d & \dots & 0 \\
C_dA^2_dB_d & C_dA^1_dB_d & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
C_dA^{H-1}_dB_d & C_dA^{H-2}_dB_d & \dots & C_dA^0_dB_d
\end{bmatrix} \qquad (25)$$

Setting the derivative of (22) with respect to [ū] equal to zero and solving for the control vector yields the optimal control over the planning horizon (27). Because the MPC is re-calculated at each timestep, the only part of [ū] needed is ūk+1. Extracting just the top block-row of [k] allows a more computationally efficient calculation, as shown in (26)-(28).

$$[k] = \left(k^t_{cab}Qk_{cab} + R\right)^{-1}k^t_{cab}Q \qquad (26)$$

$$[\bar{u}] = [k]\left(\left[\widetilde{Y}\right] - k_{ca}\bar{x}_k\right) \qquad (27)$$

$$\bar{u}_{k+1} = [k_{min}]\left(\left[\widetilde{Y}\right] - k_{ca}\bar{x}_k\right) \qquad (28)$$
`
Recall from (8) that the output matrix for the non-gimbaled case depends on altitude. Thus, each discretized altitude range has a [kmin] associated with it. When running the MPC algorithm, the appropriate feedback matrix is selected based on the current multirotor altitude, using the implicit assumption that altitude does not change significantly over the planning horizon. A block diagram of the control system, including this gain scheduling, is shown in Figure 2.
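The construction of the block prediction matrices (24)-(25) and the unconstrained gain (26) can be sketched as follows. This is a minimal illustration under the assumption of an H-step horizon with no input constraints; the function name is hypothetical.

```python
import numpy as np

def mpc_gain(Ad, Bd, Cd, H, Q, R):
    """Build kca, kcab (Eqs. 24-25) and the MPC gain [k] of Eq. (26);
    kmin is the top block-row, producing only u_{k+1} (Eq. 28)."""
    p, m = Cd.shape[0], Bd.shape[1]
    # Stacked free response: rows Cd*Ad^i for i = 1..H.
    kca = np.vstack([Cd @ np.linalg.matrix_power(Ad, i) for i in range(1, H + 1)])
    # Block lower-triangular forced response.
    kcab = np.zeros((H * p, H * m))
    for i in range(H):                 # block row
        for j in range(i + 1):         # block column
            blk = Cd @ np.linalg.matrix_power(Ad, i - j) @ Bd
            kcab[i * p:(i + 1) * p, j * m:(j + 1) * m] = blk
    # Unconstrained least-squares gain of Eq. (26).
    k = np.linalg.solve(kcab.T @ Q @ kcab + R, kcab.T @ Q)
    kmin = k[:m, :]
    return kca, kcab, k, kmin
```

Applying `u = k @ (Ytilde - kca @ x)` then yields Equation (27); in the receding-horizon loop only `kmin` would be stored per altitude bin.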
`
E. Target Rejection

Targets which are prohibitively difficult to keep in the field of view, for instance due to motion completely uncorrelated with the rest of the targets, are automatically removed from the target set. The target rejection routine is initiated once the multirotor has been within 10 percent of its user-defined altitude ceiling for two seconds. Three metrics are used to determine which target is rejected: distance from the other targets, as shown in Equation (29); course-over-ground correlation with the other targets, as shown in Equation (30); and the number of times a given target has been at the edge of the camera's field of view. This last metric is determined by keeping track of the number of instances in which the target was responsible for limiting the bounding rectangle shown in Figure 4. The three metrics are weighted and summed, and the target with the greatest score is rejected.
`
$$r^2_j = \left(x_j - \frac{1}{N}\sum_{n=1}^{N} x_n\right)^2 + \left(y_j - \frac{1}{N}\sum_{n=1}^{N} y_n\right)^2 \qquad (29)$$

$$\psi^{correlation}_j = \psi_j - \frac{1}{N}\sum_{n=1}^{N}\psi_n \qquad (30)$$
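The weighted scoring over the three metrics can be sketched as below. The normalization step and the weights are illustrative assumptions (the paper does not state its weights), but the per-metric terms follow Equations (29) and (30) and the edge-count description above.

```python
import numpy as np

def rejection_scores(x, y, psi, edge_counts, w=(1.0, 1.0, 1.0)):
    """Score each target for rejection: squared distance from the
    centroid (Eq. 29), course-over-ground deviation from the mean
    course (Eq. 30), and the count of frames in which the target
    limited the bounding rectangle of Figure 4."""
    x, y, psi = map(np.asarray, (x, y, psi))
    r2 = (x - x.mean()) ** 2 + (y - y.mean()) ** 2   # Eq. (29)
    dpsi = np.abs(psi - psi.mean())                  # magnitude of Eq. (30)
    edges = np.asarray(edge_counts, dtype=float)

    def norm(v):
        # Scale each metric to [0, 1] so the weights are comparable.
        return v / v.max() if v.max() > 0 else v

    return w[0] * norm(r2) + w[1] * norm(dpsi) + w[2] * norm(edges)

# The target with the greatest score would be rejected:
# reject = int(np.argmax(rejection_scores(x, y, psi, edge_counts)))
```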
`
III. Simulation Results

A. Target Data

Data for simulation experiments was collected by placing a GPS receiver on three different ground vehicles: a pedestrian on foot, a bicyclist, and an automobile. The resulting trajectories were recorded. This data was
`
`
`

`
post-processed to determine time histories of alon, alat, v0, and φ0. The data was then interpolated onto a uniform time grid using a timestep of 0.2 seconds.

Walking data was collected on sidewalks in a small residential area. Every effort was made to ensure the GPS receiver had a clear view of the sky during the trajectory recording sessions. No further special accommodations were made. The pace under motion was about 2 m/s. Biking took place in a low-traffic district. Again, the route was intentionally planned to maintain a clear view of the sky. Due to strong winds, riding speed varied from 5 m/s when traveling against the wind to 9 m/s with the wind. Driving was performed in a large residential area. Roads were selected which had clear views of the sky and speed limits under 18 m/s. This was done to ensure the car did not drive faster than the multirotor is capable of flying. Other than selecting roads with appropriate speed limits, no effort was made to drive out of the ordinary while collecting data.

For a given simulation, a data set is selected at random. Small perturbations are applied in heading, velocity, start time, and start location. The tracking algorithm is then tasked with tracking 5 sets of randomly perturbed data. A sample data set generated from walking trajectories is shown in Figure 5.
`
Figure 5. Example of randomized target trajectories. Targets begin near the origin and move in a northeasterly direction. Open circles indicate locations at synchronized time points.
`
B. System Identification

To improve simulation fidelity, system identification was performed on an example hexcopter to obtain the linear model given in (3) and (4). The hexcopter was instrumented with an autopilot tasked with tracking human-pilot-commanded roll and pitch angles. Roll and pitch doublets were applied to determine appropriate rate limits and time constants, seen in Figure 6. Performance testing was conducted to determine maximum speed and thrust. Parameters used in the simulation are in Table 1. Linearized coefficients of drag were determined from maximum speed by Newton's Second Law applied at equilibrium at the maximum allowed roll and pitch angle (designated by the subscript max). It is assumed the only lateral and longitudinal forces present are the horizontal component of thrust and aerodynamic drag. The equation to determine coefficients of drag from maximum speed and bank angle is developed in Equations (31)-(32). An analogous equation was used to determine drag coefficient cdy from |ẏmax| and θmax. The final identified model parameters are listed in Table 1.

$$\sum \frac{F_x}{m} = \ddot{x} = c_{d_x}\dot{x} - \tan\phi \qquad (31)$$

$$c_{d_x} = \frac{\tan\phi_{max}}{|\dot{x}|_{max}} \qquad (32)$$
`
Table 1. Experimentally determined system parameters. These parameters are used in the system model presented in Equations (3) and (4).

Param.               Value
τφ, τθ               5
τT                   2
|ẋmax|, |ẏmax|       22 m/s
cdx, cdy             0.045
cdz                  0.06
T                    [-0.8, 1]
|φmax|, |θmax|       45 deg
|φ̇max|, |θ̇max|       90 deg/sec
`
Figure 6. Application of doublets and system response.
`
`
`

`
C. Example Trajectories

An example trajectory generated by tracking walking targets with a gimbaled camera is shown in Figures 7 and 9. The top plot in Figure 7 shows the tracking error between the commanded and actual locations of the multirotor, the second plot shows the multirotor roll and pitch angles, while the bottom plot shows gimbal pitch and roll angles. There are several interesting features to note in this example case. First, since the aircraft begins from rest, an initial transient is observed as the multirotor achieves a forward speed consistent with tracking the moving ground targets. The hexcopter maneuvers to keep on the desired trajectory while the gimbal keeps the camera pointed to the optimal location where all targets can be observed. Note that after the initial transient, gimbal angles remain relatively small throughout flight. Furthermore, note that the multirotor maneuvers fairly aggressively to maintain the desired ground track. For this case, the algorithm was 100% successful at keeping targets within the field of view.

A second example trajectory for the same case, except with a non-gimbaled camera, is shown in Figures 9 and 8. In this case, once past the initial transient the hexcopter maintains relatively low roll and pitch angles so that the targets are kept within the camera's field of view. This of course leads to increased tracking error, as shown in Figure 8. This highlights the inherent control tradeoff faced by the non-gimbaled multirotor: tracking error can be improved through increased control action, but pitch and roll deviations tend to perturb the camera field of view significantly so that targets cannot be captured. For this case, the algorithm was 98.4% successful at keeping targets within the field of view. Figure 10 shows altitude time histories for both the gimbaled and non-gimbaled cases.
`
D. Trade Studies

A Monte Carlo based trade study is conducted to explore sensitivity to different simulation parameters. Three metrics are used to judge the success of a tracking session: normalized flight time, percentage of targets outside the video camera frame, and mean altitude. Monte Carlo simulations use 100 cases with randomized data sets, and the mean statistics are reported. Normalized flight time is a metric based on the energy necessary to track targets. For example, for a vehicle with 10 minute endurance while hovering, a tracking session which exhausts vehicle battery life in 8 minutes would result in a normalized flight time of 80%. In calculating normalized flight time it is assumed power consumption is directly proportional to thrust. A tracking session with 0% targets outside of the camera frame keeps all targets inside the camera frame at every instant in time. One with 100% targets outside of the camera frame has every target outside the camera frame at every instant in time.
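Under the stated power-proportional-to-thrust assumption, the normalized flight time metric reduces to the ratio of hover thrust to mean session thrust. The following one-liner is an interpretation of the paper's definition, not its implementation; `hover_thrust` of 1.0 corresponds to the non-dimensionalized hover condition.

```python
import numpy as np

def normalized_flight_time(thrust_history, hover_thrust=1.0):
    """Normalized flight time (%) assuming power is proportional to
    thrust: flying at a higher mean thrust drains the battery
    proportionally faster than hovering."""
    return 100.0 * hover_thrust / float(np.mean(thrust_history))
```

For instance, a session flown at a mean thrust 25% above hover reproduces the 80% figure from the 10-minute/8-minute example above.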
`
1. Camera Frame Buffer Zone

Mean altitude is directly correlated with the selected buffer zone around the edge of the camera frame. Target rejection is not allowed in this trade study. Maximum allowed hexcopter altitude is chosen to be (a rather high) 1000 m so as not to restrict the trajectory. Results in Figure 11 show the gimbaled case performs well for all vehicles independently of the camera frame buffer zone. These results also suggest that decoupling vehicle motion from camera attitude, which is accomplished by the gimbal, is a key element in the algorithm's performance. Normalized flight time is not strongly dependent on camera frame buffer zone size, although a slight trend towards decreased flight time is exhibited as the buffer zone increases. For the two quicker moving targets (bicycles and cars), the non-gimbaled case's ability to keep targets within the camera's field of view at small camera buffer zones is poor. The mean altitude must increase by nearly a factor of two before the non-gimbaled case has success on the same order of magnitude as the gimbaled case. The non-gimbaled case does, however, yield considerably longer flight times. This is because the non-gimbaled case does not maneuver as aggressively, as aggressive maneuvering with a non-gimbaled camera points the camera far from the location directly below the multirotor which contains the target set.
`
2. Target Rejection

The walking tracking session is modified to include one target whose motion is substantially dissimilar from the motion of the other targets, as in Figure 12. Simulation is performed with target rejection both allowed and disallowed. Once a target is rejected, it is counted as outside the camera frame for the rest of the tracking session. The hexcopter's maximum allowable altitude during the target rejection simulations is 200 m. The camera's field of view is set to 70 degrees and the camera frame buffer zone to 0%.
`
`
`

`
Results from the target rejection Monte Carlo simulations, with 100 simulations per data point, are presented
in Table 2. For the gimbaled case, the target rejection algorithm significantly improves tracking performance
across all metrics by removing the rogue target from the target set. For the non-gimbaled case, the
out-of-frame percentage is already quite high (due to the absence of a buffer zone), meaning tracking of
some targets is already somewhat poor. Adding target rejection in this case does not significantly improve
tracking performance of the overall target set.
`
Table 2. Target rejection results for both gimbaled and non-gimbaled Monte Carlo cases.

                               Gimbaled          Non-Gimbaled
Rejection                      No       Yes      No       Yes
Out of Frame (%)               61       23       56       59
Normalized Flight Time (%)     82       90       97       98
Mean Altitude (m)              186      111      186      111
`
3. Camera Field of View
`
The camera's field of view is varied and its effect on controller performance is shown in
Figure 13. The simulation is conducted using walking trajectories with target rejection disallowed and a
camera frame buffer zone of zero. The maximum allowable altitude is 1000 m. Increasing the camera's field
of view improves performance as judged by all metrics. Most prominent is the ability of the non-gimbaled
case to capture targets within the camera's frame for wide fields of view. The wider field of view affords
the vehicle greater maneuverability while still keeping targets within the camera's frame. This analysis is
of course independent of the effect of field of view on the camera's ability to resolve targets. Note, however,
that as field of view increases, the vehicle's mean altitude decreases approximately exponentially, meaning
that tracking is generally performed at lower altitudes with a higher field of view. This potentially offers the
opportunity to
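One way to reason about the resolution side of this trade: under an idealized pinhole model, flying at the minimum altitude that keeps a given target spread in frame yields a ground footprint, and hence a ground sample distance (GSD), that is independent of the field of view. This is a sketch under assumed values (a 1920-pixel sensor width and a 20 m target spread), not an analysis from the paper.

```python
import math

def ground_footprint_width(altitude_m, fov_deg):
    """Ground width imaged by a nadir-pointing pinhole camera."""
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg / 2.0))

def ground_sample_distance(altitude_m, fov_deg, pixels_across=1920):
    """Metres of ground per pixel; smaller values resolve targets better.
    pixels_across is an assumed sensor width, not from the paper."""
    return ground_footprint_width(altitude_m, fov_deg) / pixels_across

# Altitude needed to keep a 20 m target spread in frame shrinks as the
# FOV widens, while the imaged footprint (and hence the GSD) is unchanged.
for fov in (40.0, 70.0, 100.0):
    z = 20.0 / math.tan(math.radians(fov / 2.0))
    print(f"FOV {fov:5.1f} deg: altitude {z:5.1f} m, "
          f"GSD {ground_sample_distance(z, fov) * 100:.2f} cm/px")
```

Under this idealized model, the lower tracking altitudes at wide fields of view do not by themselves degrade target resolution over the tracked set.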
