`Proc. of SPIE Vol. 6230, 623010, (2006) · 0277-786X/06/$15 · doi: 10.1117/12.667356
`
`
`Autonomous Target Following by Unmanned Aerial Vehicles
`
`Fahd Rafi, Saad Khan, Khurram Shafiq and Mubarak Shah
`
`Computer Vision Lab,
Department of Computer Science,
`University of Central Florida,
`Orlando FL, USA
`
`ABSTRACT
In this paper we present an algorithm for the autonomous navigation of an unmanned aerial vehicle (UAV)
following a moving target. The UAV under consideration is a fixed wing aircraft that has physical constraints on
airspeed and maneuverability. The target, however, is not constrained and can move in any general pattern. We
show a single circular-pattern navigation algorithm that works for targets moving at any speed and with any
pattern, whereas other methods switch between different navigation strategies in different scenarios. The
simulations performed take into account that the aircraft also needs to visually track the target using a mounted
camera; the camera is therefore also controlled by the algorithm according to the position and orientation of
the aircraft and the position of the target. Experiments show that the algorithm presented successfully tracks
and follows moving targets.
`
`Keywords: Unmanned Aerial Vehicle, Target Following, Autonomous Navigation
`
`1. INTRODUCTION
Autonomous operation of an aerial vehicle is a challenging task. Long distance navigation and waypoint following
is an easier task and is used in commercial aircraft. Close range maneuvering and following a moving target are
still significant research problems. Close range maneuvering requires constant realtime decision making,
optimizing many parameters while respecting the physical constraints of the aircraft. This becomes
even more difficult when following a moving target whose future position is not known. In such a scenario,
we cannot simply predict where the target will go and navigate to an intercept point.
Unmanned aerial vehicles, or UAVs, have become an important part of modern warfare. They are cost effective
and low risk to human operators. In recent years, the abilities of UAVs have been extended to be close to those of
manned fighter-bomber aircraft, but even now, most UAVs are still used for reconnaissance purposes. Moreover,
UAVs have been developed that can remain airborne for extended periods of time, sometimes even months or
years; this would be very hard, if not impossible, to accomplish with manned aircraft. Remotely controlling these
UAVs is a good way of reducing cost while maintaining the quality of human judgement.
The most important goal of research in autonomous flight and navigation is to reduce the time and attention
required of human operators. The advantage is an increased reconnaissance capability at a lower risk and a lower
cost in terms of time and money. Most research goes into reducing the human-to-UAV ratio, so that fewer human
operators are needed to fly more UAVs and human decisions can be moved to a higher, policy level of operation.
The work presented in this paper is one step towards acquiring the ability to control a multitude of vehicles
with fewer operators, and ultimately towards having a wider view and better situational awareness of a battlefield.
This paper presents a method to automate one UAV following an assigned target on the ground without any human
intervention.
Research has been done in this area before and some published works have appeared. Recent work similar
to that presented here is that of Goodrich et al.,1 who navigate a UAV to orbit a target on the ground
using the slope field of a supercritical Hopf bifurcation, in effect following a circular pattern around the target.
`
`Further author information: (Send correspondence to Fahd Rafi)
E-mail: frafi@cs.ucf.edu
`
`
When the target is moving, however, the aircraft, being faster, needs to increase the distance it covers, and
wave patterns have often been used, for example by Hedrich2 and Husby.3 The work by Husby investigates a
combination of movement patterns for different scenarios. That approach, however, requires more information
about the movement pattern of the target to be effective; in our work we do not assume any pattern of movement
for the target.
The work presented in this paper is also significant in that we use very realistic simulation tools, taking into
account aircraft stability controls in a real world scenario. Different aircraft have different aerodynamic
constraints, but the method presented here works for large and small fixed-wing aircraft alike, as no tight
constraints have been put on the aircraft's capabilities.
The little published material there is on this application shows that there is no single best strategy for a
UAV following a target. Many different strategies have been tried, with different results.
`
`2. METHOD
This section describes the problem we set out to solve, with all given constraints and the desired behavior. The
method should work with different models of UAVs, as no model specific constraints are considered. The only
assumption is that our UAV is a fixed wing aircraft. A high level control strategy for helicopter type vehicles
would be much simpler, as there are fewer movement constraints; their low level stabilization and control, however,
is much more complicated and in many cases not feasible for autonomous operation even with state of the art
equipment.4
`
`2.1. Problem Description
Our abstract level goal is to have a UAV follow a target on the ground. There are two main components to this
goal. Firstly, we need to maintain knowledge of the position of the target; this is obviously necessary in order to
follow it. Secondly, we need to maintain proximity to the target, or in other words, to follow it.
On the UAV, we have:

1. the image from a camera mounted on the aircraft;

2. a camera gimbal with two degrees of freedom of movement to control the direction of the camera;

3. an autonomous flight avionics package, ‘piccolo’, to maintain stability of the aircraft and to translate semantic navigation commands into commands for the control surfaces;

4. realtime telemetry information available from the piccolo system.
The UAV, being a fixed wing aircraft, has physical constraints on maneuverability in terms of minimum and
maximum airspeed, maximum turn rate, control response latency, and aerodynamic constraints on the orientation
of the aircraft. The orientation of an aircraft performing any given maneuver is determined by the aircraft
aerodynamics, which differ from model to model. These are common constraints for any fixed wing aircraft
that any real time control algorithm must cater for.
The target, however, is free to move at any speed and in any direction; in the simulations we consider
a realistically moving ground vehicle. One limitation that must be accepted is that the target velocity does not
exceed the aircraft velocity for extended periods of time, in which case the UAV would be unable to follow the
target. Airspeed is a design feature of an aircraft and has nothing to do with the navigation algorithm: if the
target moves faster than the UAV, it simply cannot be followed.
Our algorithm generates two controls for the system: the turn rate for the aircraft and the camera control angles
for the gimbal.
`
`2.2. Camera Setup
Figure 1 shows the kind of camera gimbal simulated. The gimbal is mounted under the aircraft airframe
and has two degrees of freedom. θ is the heading of the camera with respect to the aircraft and has the range
[0, 360), 0 being the direction of the aircraft and the clockwise direction being positive. φ is the elevation of the
camera with respect to the aircraft and has the range [0, 90], 0 being straight ahead and 90 being straight downward.
The UAV coordinate system is related to the world coordinate system by three angles, yaw, pitch and roll, and
a translation. This is shown in Figure 2.
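To make the gimbal convention concrete, the following minimal sketch (ours, not from the flight system) converts a target direction expressed in the UAV body frame into the gimbal angles θ and φ defined above. The body-frame axis convention (x forward, y right, z down) is our assumption based on Figures 1 and 2.

```python
import math

def gimbal_angles(dx, dy, dz):
    """Convert a target direction in the UAV body frame to gimbal angles.

    Assumed convention (our reading of Figures 1 and 2): x points along
    the aircraft heading, y to the right, z down.  Returns (theta, phi)
    in degrees: theta in [0, 360), measured clockwise from the nose, and
    phi in [0, 90], 0 being straight ahead and 90 straight down.
    """
    theta = math.degrees(math.atan2(dy, dx)) % 360.0   # camera heading
    horizontal = math.hypot(dx, dy)                    # horizontal distance
    phi = math.degrees(math.atan2(dz, horizontal))     # depression angle
    return theta, min(90.0, max(0.0, phi))             # clamp to gimbal range
```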
`
`
[Figure omitted: the camera gimbal mounted under the UAV airframe, with gimbal angles θ and φ and the UAV axes X_UAV, Y_UAV, Z_UAV.]

Figure 1. Camera gimbal setup
[Figure omitted: the camera, UAV and world coordinate axes, related by yaw, pitch and roll and by the gimbal azimuth θ and elevation φ.]

Figure 2. Relationship between camera, UAV and World coordinate systems
`
`2.3. Target Tracking
In order to follow the target, we must track it; let us first describe how we do so. A representation of
the system setup is shown in Figure 3. The goal of the tracking part of the system is to obtain the world coordinates
Xw of the target given its image coordinates Xi.
To achieve this, we use the sensor model described in Section 2.3.1, which gives us the geographical position of the
target; using this position, we may employ a navigation algorithm for the UAV to follow the target.
`
`2.3.1. Sensor Model
We have as input the image coordinates of the target, (xi, yi), and require the world coordinates of the target, (xw, yw, zw).
To achieve this, we first transform the pixel coordinates of the target in the input image to world coordinates with the
sensor transform given by
`
Π_sensor = T^a_z T^a_y T^a_x R^a_y R^a_p R^a_r R^g_φ R^g_θ    (1)
`
where ‘Πsensor’ is the sensor transform as in Figure 4. The superscript ‘a’ represents aircraft rotation and
translation parameters and the superscript ‘g’ represents gimbal rotation parameters. ‘R’ is rotation while ‘T’ is
translation. ‘φ’ and ‘θ’ are gimbal parameters as shown in Figure 1. The subscripts ‘r’, ‘p’ and ‘y’ are ‘roll’,
‘pitch’ and ‘yaw’ respectively, applied as shown in Figure 2.
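For illustration, the following sketch composes Πsensor from elementary homogeneous rotation and translation matrices. Which body axes the individual rotations act about, and the use of radians, are our assumptions; the paper fixes only the order of Equation 1.

```python
import numpy as np

def rot_x(a):  # rotation about the x axis (roll)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

def rot_y(a):  # rotation about the y axis (pitch, gimbal elevation)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1.0]])

def rot_z(a):  # rotation about the z axis (yaw, gimbal azimuth)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

def translate(tx, ty, tz):
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

def sensor_transform(x, y, z, yaw, pitch, roll, phi, theta):
    """Compose Equation 1: aircraft translation, aircraft attitude,
    then the two gimbal rotations.  Angles in radians."""
    return (translate(x, y, z)
            @ rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)  # aircraft attitude
            @ rot_y(phi) @ rot_z(theta))               # gimbal elevation, azimuth

# A target pixel is lifted to camera coordinates X_i = (x_i, y_i, -f, 1)
# and mapped to world coordinates as sensor_transform(...) @ X_i.
```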
`
`
[Figure omitted: the world axes (xw, yw, zw) and the image axes (xim, yim, zim).]

Figure 3. Image and World Coordinates
[Figure omitted: the target detected in image coordinates is mapped through the sensor transform Π from the camera center and ray-traced onto the terrain to give world coordinates.]

Figure 4. Sensor Transformation (blue) and Ray Tracing (green) to obtain target coordinates
`
‘Πsensor’ cannot be applied directly to the 2D image coordinates of the target. We obtain 3D camera coordinates
of the target pixel as Xi = (xi, yi, −f), where ‘f’ is the focal length of the camera. The camera center, or center of
projection, is initially at the origin and is also transformed by Equation 1.
Once we have the transformed image location of the target in world coordinates as

Πsensor Xi,    (2)

we can project this image point onto the ground using a simple ray tracing function that we shall call
TerrainProjection, and we obtain the coordinates of the detected target in world coordinates as

Xw = TerrainProjection(Πsensor Xi)    (3)
`
The function TerrainProjection projects a ray from the camera center, through the target image pixel, onto the
terrain. Ray tracing requires geometric information about the environment, which in our case is the height of
the terrain at each point. This information is commercially available as a Digital Elevation Map, or DEM, from the
United States Geological Survey. If this information is not available, a planar terrain can be assumed, as in our
simulation experiments. For details of ray tracing, refer to an introduction5 or any standard computer graphics
text.
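For the planar terrain assumed in our simulation experiments, TerrainProjection reduces to a closed-form ray-plane intersection. The sketch below shows that case; with a DEM the same ray would instead be stepped along until it crosses the terrain surface.

```python
import numpy as np

def terrain_projection(camera_center, image_point_world, ground_z=0.0):
    """Project the ray from the camera center through the transformed
    image point (both 3-vectors in world coordinates, Equation 2) onto
    a flat terrain at height ground_z, yielding X_w of Equation 3."""
    direction = image_point_world - camera_center
    if abs(direction[2]) < 1e-9:
        raise ValueError("ray is parallel to the ground plane")
    t = (ground_z - camera_center[2]) / direction[2]
    if t <= 0:
        raise ValueError("ground plane is not in front of the camera")
    return camera_center + t * direction

# Example: a camera 300 m up looking ahead and slightly down hits the
# ground 3 km ahead.
print(terrain_projection(np.array([0.0, 0.0, 300.0]),
                         np.array([10.0, 0.0, 299.0])))  # -> [3000. 0. 0.]
```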
`
`2.3.2. Target Detection and Tracking
Detection and tracking of the target in the image is not addressed as part of this work; for some detection methods,
refer to Lipton6 and Meer.7 The job of the camera is to keep track of the target regardless of aircraft
position and orientation. This requires additional control for the camera gimbal that we must calculate in real
`
time. This camera control is tied, in a closed loop, to the world model, geographically tracking the target in
the reference frame of the earth. The camera mounted on the vehicle provides input video frame by frame, and
the target is detected in each input frame. At that point, we have the telemetry information of the
aircraft as well as of the camera gimbal available to us. Using this information, the target is projected onto the
plane of the earth and its geographical location is found. This geographical position of the target and the
location and orientation of the aircraft enable us to calculate the control angles of the camera gimbal so that it
looks directly at the target. The problem with the above method is that it requires the camera to already be
looking at the target in order to calculate the angles to look at the target again. To solve this dilemma, we
employ a Kalman filter to estimate the future telemetry of the aircraft as well as of the target, and generate
camera controls so that the camera is looking at the target in advance.
Once we know the telemetry estimates for the target and the aircraft, we can use them to estimate the camera
angles (θ and φ, as described in Section 2.2) to look at the target at the next time step.
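A minimal sketch of this prediction step: one constant-velocity Kalman filter per tracked coordinate, with placeholder noise covariances of our choosing (the paper does not specify its filter parameters). The predicted target and aircraft positions are then fed to the gimbal-angle computation of Section 2.2.

```python
import numpy as np

class ConstantVelocityKF:
    """Constant-velocity Kalman filter for a single coordinate.
    State is [position, velocity]; only position is measured.
    q and r are illustrative noise levels, not tuned values."""

    def __init__(self, dt, q=1.0, r=4.0):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
        self.H = np.array([[1.0, 0.0]])             # measurement model
        self.Q = q * np.eye(2)                      # process noise
        self.R = np.array([[r]])                    # measurement noise
        self.x = np.zeros(2)
        self.P = 100.0 * np.eye(2)                  # initial uncertainty

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]                            # position one step ahead

    def update(self, z):
        y = z - self.H @ self.x                     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
```

Running one such filter per coordinate of the target and of the aircraft gives the predicted relative geometry from which θ and φ are computed one time step ahead.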
`
`2.4. Navigation
`
[Figure omitted: top view of the UAV with direction vector d, the circle of radius r around the target, the two tangents t1 and t2, and the angles τ1 and τ2 between d and the tangents.]

Figure 5. Navigation strategy
`
The navigation algorithm presented is simple and has very few parameters to adjust; in fact, just one. Around a
static target we ideally fly a perfect circle, and the only parameter to the navigation algorithm is the radius of
that circle.

τ = min(τ1, τ2)    (4)
`
In Figure 5, the UAV and target are represented in a topographical view. The direction vector of the UAV is
represented by d. The circle of radius r around the target is the goal trajectory we are trying to achieve. The
direction vector d makes angles τ1 and τ2 with the two tangents t1 and t2. The goal is to take the direction of
the nearest tangent possible. Hence, we take the minimum of τ1 and τ2 and tell the UAV to change its direction
d by τ, as shown in Equation 4; in other words, to turn.
To calculate τ1 and τ2, we first take a vector (say T) from the UAV to the target and calculate ∆τ as

∆τ = arcsin(r/|T|).    (5)
`
Then, the angle δ between d and T is

δ = arccos(d̂ · T̂)    (6)

where d̂ and T̂ are unit vectors. τ1 and τ2 are:
`
`
τ1 = δ − ∆τ    (7)

τ2 = δ + ∆τ    (8)
`
There are two cases other than the ones shown in Figure 5 to consider. First, d may lie between t1 and t2. We
again do the same and head slightly away from the target; this results in better overall tracking of the target,
because if the UAV passes directly over the target, it has to loop around again, which might take the aircraft
farther away from the target. Second, the UAV may be within the circle of radius r around the target. In
this case, we simply allow the aircraft to go straight ahead, because the goal of staying close to the target is
already being achieved; whenever the aircraft goes beyond a distance of r from the target, the algorithm
will adjust the heading of the aircraft to start following a circular pattern around the target. The only parameter
to the algorithm, the distance r, can be set to any desired value, depending on the capability of the camera to
track a target from the aircraft and on the maneuverability and speed of the aircraft; for
example, it might not be practical to keep a very high speed aircraft very close to the target.
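The full turn decision, including both special cases, fits in a few lines. The sketch below implements Equations 4-8; the sign convention (positive angles clockwise, matching Section 2.2) and the wrap-around handling are ours.

```python
import math

def turn_angle(uav_pos, uav_dir, target_pos, r):
    """Angle by which to change the UAV direction d (Equation 4).

    uav_pos, target_pos: (x, y) in a topographical view; uav_dir: the
    direction vector d.  Returns a signed angle in radians, positive
    clockwise; 0 inside the circle, where we fly straight ahead.
    """
    Tx, Ty = target_pos[0] - uav_pos[0], target_pos[1] - uav_pos[1]
    dist = math.hypot(Tx, Ty)
    if dist <= r:
        return 0.0                                  # already close enough

    d_tau = math.asin(r / dist)                     # Equation 5
    # Signed angle from d to T, wrapped into [-pi, pi).
    delta = (math.atan2(Ty, Tx) - math.atan2(uav_dir[1], uav_dir[0])
             + math.pi) % (2.0 * math.pi) - math.pi

    tau1 = abs(delta) - d_tau                       # Equation 7
    tau2 = abs(delta) + d_tau                       # Equation 8
    tau = min(abs(tau1), tau2)                      # Equation 4
    # Turn toward the nearer tangent.  When d points between the two
    # tangents (tau1 < 0) the sign flips, heading slightly away from
    # the target as described above.
    return math.copysign(tau, delta) * math.copysign(1.0, tau1)
```

Dividing this angle by the control interval and clamping to the aircraft's maximum turn rate gives the commanded turn rate.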
`
`2.5. Overall Loop
The overall closed loop algorithm is summarized below; a schematic code sketch follows the list:
`
`1. Detect target in camera input image. (Known from telemetry for the scope of this paper)
`
`2. Project target from image on to earth plane and find geographical position of target.
`
`3. Predict position of target and aircraft.
`
`4. Calculate required turn rate of the aircraft. (Equation 4)
`
`5. Calculate tracking controls for camera.
`
`6. Apply turn rate and camera controls in simulation and move to next iteration.
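The sketch below strings the earlier pieces together into one iteration of the loop. The simulator interface (grab_frame, detect_target, telemetry, heading_vector, to_body_frame and apply_controls) is hypothetical glue of our own naming, not part of the piccolo API.

```python
import numpy as np

def follow_target_step(sim, target_kfs, r, dt, max_turn_rate, focal_len):
    """One iteration of the closed loop (steps 1-6).  `sim` wraps the
    simulator and telemetry link; `target_kfs` is a pair of
    ConstantVelocityKF instances for the target's x and y."""
    # 1. Detect the target in the camera image (known from telemetry
    #    in the scope of this paper's simulations).
    xi = sim.detect_target(sim.grab_frame())

    # 2. Project the detection onto the earth plane (Equations 1-3).
    t = sim.telemetry()
    Pi = sensor_transform(t.x, t.y, t.z, t.yaw, t.pitch, t.roll,
                          t.phi, t.theta)
    cam = (Pi @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]   # camera center
    pix = (Pi @ np.array([xi[0], xi[1], -focal_len, 1.0]))[:3]
    xw = terrain_projection(cam, pix)

    # 3. Predict the target (and, analogously, aircraft) position.
    for kf, z in zip(target_kfs, xw[:2]):
        kf.update(z)
    target_pred = [kf.predict() for kf in target_kfs]

    # 4. Turn angle toward the nearer tangent (Equation 4), capped.
    tau = turn_angle((t.x, t.y), sim.heading_vector(), target_pred, r)
    rate = max(-max_turn_rate, min(max_turn_rate, tau / dt))

    # 5-6. Gimbal angles for the predicted geometry, then apply both.
    theta, phi = gimbal_angles(*sim.to_body_frame(target_pred))
    sim.apply_controls(rate, theta, phi)
```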
`
`3. RESULTS
To test our ideas, we used the simulation tools provided with piccolo,8 which use real world flight dynamics for a
realistic evaluation of our methods. The turn rate of the aircraft is capped to a maximum depending on the type
of aircraft as well as on the time it takes for the entire algorithm to complete one iteration. The cap on the turn
rate is inversely proportional to the time the entire algorithm takes to complete: if we can update the turn rate
very frequently, we can afford to ask the aircraft to turn very quickly; if, however, it takes some time for the
system to iterate through the whole process, a high turn rate may result in the aircraft going beyond the desired
heading.
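As a concrete reading of this inverse relationship, one possible cap (the safety margin is an illustrative constant of ours, not a value from the experiments) is:

```python
def turn_rate_cap(airframe_limit_dps, loop_period_s, margin_deg=0.5):
    """Cap the commanded turn rate inversely to the loop period, so one
    command cannot carry the aircraft past the desired heading before
    the next correction arrives.  Rates in degrees per second."""
    return min(airframe_limit_dps, margin_deg / loop_period_s)

# A 0.1 s loop allows up to 5 deg/s; a 1 s loop only 0.5 deg/s.
print(turn_rate_cap(30.0, 0.1), turn_rate_cap(30.0, 1.0))
```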
The experiment in Figure 6 was performed to ascertain the ability of the navigation strategy to work at different
target speeds. In Figures 6(a), 6(b) and 6(c), the target follows a straight line at varying speeds, and the aircraft
is able to follow the target quite closely in each case.
Figure 8 is a topographical plot of a target moving with changing directions and the UAV following it with
reasonable proximity. Note that we do not plan a particular path for the UAV; rather, the path is a consequence
of our control decisions at different times.
Several experiments were performed with the target moving in different directions, making turns, varying
speed and even stopping. In all cases, the aircraft is able to closely follow the target. Figure 7 shows a simulation
of the target with varying speeds and patterns of movement, with sudden changes in direction and/or speed. At
all times, the aircraft satisfactorily follows the target.
`
`
[Figure omitted: three topographical plots, panels (a), (b) and (c), with axis scales in meters.]

Figure 6. Aircraft is in red, Target is in Blue. Scale is in meters with respect to an arbitrary origin.
`
`4. CONCLUSION
The work presented here is part of an ongoing project to achieve complete autonomous control of an aerial
vehicle following a moving target on the ground. This would enable us to automate one task that currently
requires continuous human intervention. Automation of such simple tasks can free up human resources for more
complex strategic decisions, as well as reduce time and cost and increase situational awareness over a large area
of interest.
`
`REFERENCES
1. M. Quigley, M. A. Goodrich, S. Griffiths, A. Eldredge, and R. Beard, “Target acquisition, localization, and
surveillance using a fixed-wing, mini-UAV and gimbaled camera,” in International Conference on Robotics and
Automation, 2005.
`2. J. Lee, R. Huang, A. Vaughn, X. Xiao, J. K. Hedrich, M. Zennaro, and R. Sengupta, “Strategies of path-
`planning for a uav to track a ground vehicle,” in AINS Conference, 2003.
`3. C. R. Husby, “Path generation tactics for a uav following a moving target,” Master’s thesis, Air Force Institute
`of Technology, Wright-Patterson Airforce Base, OH, 2005.
`4. H. Shim, T. J. Koo, F. Hoffmann, and S. Sastry, “A comprehensive study of control design for an autonomous
`helicopter,” in 37th IEEE Conference on Decision and Control, 4, pp. 3653–3658, 1998.
`5. A. S. Glassner, An Introduction to Ray Tracing, Morgan Kaufmann Publishers, Inc., San Diego, 1989.
`6. A. J. Lipton, H. Fujiyoshi, and R. S. Patil, “Moving target classification and tracking from real-time video,”
`in 4th IEEE Workshop on Applications of Computer Vision (WACV’98), p. 8, 1998.
`7. D. Comaniciu, V. Ramesh, and P. Meer, “Kernel-based object tracking,” in IEEE Transactions on Pattern
`Analysis and Machine Intelligence, 2003.
`8. B. Vaglienti, R. Hoag, and M. Niculescu, Piccolo System User’s Guide. Cloud Cap Technologies,
`http://www.cloudcaptech.com, 2001-05.
`9. C. Olson, The FlightGear Project. FlightGear.org, 2001.
`
`
Figure 7. Trajectories of UAV (red) and Target (blue), following in a varied and complex pattern with the target
moving at different speeds. It is clear from the illustration that a single following strategy works regardless of the
movement pattern of the target.
`
[Figure omitted: two topographical plots, panels (a) and (b), with axis scales in meters.]

Figure 8. Figures 8(a) and 8(b) show the target moving at different speeds and how the UAV makes different paths to
adjust accordingly.
`