Ph.D. Thesis Defense
by
Takuma Nakamura
(Advisor: Professor Eric N. Johnson)
Multiple-Hypothesis Vision-Based Landing Autonomy
1:00 PM, Tuesday, July 24, 2018
Montgomery Knight 317
ABSTRACT:
Unmanned aerial vehicles (UAVs) still require humans in the mission loop for many tasks, and landing is one task that typically involves a human pilot. This is because of the complexity of the maneuver itself and because of flight-critical factors such as recognition of the landing zone, collision avoidance, assessment of landing sites, and the decision to abort the maneuver. Another critical aspect is the reliance of UAVs on GPS (the global positioning system). GPS is not a reliable solution for landing in some scenarios (e.g., delivering a package in a dense urban area, or a surveillance UAV returning to its home ship under signal jamming), and landing based solely on GPS severely restricts the UAV operational envelope. Vision is a promising path to fully autonomous landing because a camera is an information-rich, lightweight, and affordable sensor that operates without any external infrastructure.

Although vision is a powerful tool for autonomous landing, using it for state estimation requires careful consideration. First, vision-based landing faces an occlusion problem: a target detected from high altitude drops out of view as the vehicle descends, yet a visual target small enough to remain in view at low altitude cannot be recognized from high altitude. Second, standard filtering methods such as the extended Kalman filter (EKF) face difficulty because of the complex measurement-error behavior created by the discrete pixel space, the conversion from pixel to physical units, the nonlinear camera model, and the complexity of the detection algorithms. Moreover, the vision sensor produces a varying number of measurements with each image, and those measurements may include false positives.
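
To see why such measurements strain an EKF, consider a minimal sketch of a pinhole-camera measurement model (an illustrative example with assumed intrinsics, not the camera model used in the thesis): the projection is nonlinear in the target's relative position, and rounding to the discrete pixel grid alone makes the error non-Gaussian.

    # Hypothetical sketch: pinhole projection of a landing-target
    # position to pixel coordinates. All names and parameter values
    # are illustrative assumptions, not taken from the thesis.
    import numpy as np

    FX, FY = 600.0, 600.0   # assumed focal lengths (pixels)
    CX, CY = 320.0, 240.0   # assumed principal point (pixels)

    def h(p_cam):
        """Project a 3-D target position in the camera frame to pixels.

        The division by depth z makes the model nonlinear in the state,
        and the final rounding to the integer pixel grid makes the
        measurement error non-Gaussian -- two of the difficulties
        noted above.
        """
        x, y, z = p_cam
        u = FX * x / z + CX
        v = FY * y / z + CY
        return np.round(np.array([u, v]))  # pixel quantization

An EKF linearizes h about the current estimate and assumes Gaussian noise; with pixel rounding, false positives, and a varying number of detections per image, that assumption breaks down.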

Furthermore, realistic conditions place heavy demands on the estimation system: the landing site may be moving, tilted, or close to an obstacle, and more than one landing location may be available. In addition to assessing these conditions, the vision system must quantify the confidence of its estimates, because the decisions to initiate, continue, and abort the maneuver are made from the estimated states and that confidence. A system that handles these issues and consistently produces a navigation solution throughout the landing removes one of the limitations on autonomous UAV operation.
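
As a minimal illustration of how estimate confidence might feed such a go/abort decision (a hypothetical sketch; the abstract does not specify the thesis's decision logic, and the function and threshold below are assumptions):

    # Illustrative only: threshold the estimated position covariance to
    # decide whether to continue the landing. The rule and the 0.5 m
    # threshold are assumptions, not the thesis's method.
    import numpy as np

    def continue_landing(P, max_pos_std=0.5):
        """Return True if the 3-D position estimate is confident enough.

        P : 3x3 position covariance block of the filter covariance.
        max_pos_std : largest acceptable per-axis std deviation (m).
        """
        pos_std = np.sqrt(np.diag(P))
        return bool(np.all(pos_std < max_pos_std))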

This thesis presents a novel state estimation system for UAV landing. In this system, vision data is used both to estimate the state of the vehicle and to map the state of the landing target (position, velocity, and attitude) within the framework of simultaneous localization and mapping (SLAM). Using the SLAM framework, the system becomes resilient to the loss of GPS and to other sensor failures. A novel vision algorithm that detects a portion of the landing marker is developed, and the stochastic properties of the algorithm are studied. This algorithm extends the detectable range of the vision system for any known marker. However, it produces a highly nonlinear, non-Gaussian, and multi-modal error distribution, so a naive filter implementation would not estimate the states accurately.
A vision-aided navigation algorithm is therefore derived within extended Kalman particle filter (PF-EKF) and Rao-Blackwellized particle filter (RBPF) frameworks, in addition to a standard EKF framework. These multi-hypothesis approaches not only handle the highly nonlinear, non-Gaussian distribution of the vision measurement errors but also yield numerically stable filters. Their computational cost is lower than that of a naive particle filter implementation, and the algorithms run in real time. The system is validated through numerical simulation, image-in-the-loop simulation, and flight tests.
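
For readers unfamiliar with these filters, the sketch below shows the general shape of one Rao-Blackwellized particle filter cycle: the nonlinear part of the state is carried by sampled particles, while a conditionally tractable part receives a Kalman-style update per particle. This is a generic textbook pattern with assumed helper functions (propagate, kf_update), not the thesis implementation.

    # Generic RBPF predict/update cycle (textbook pattern; the helper
    # functions are assumed interfaces, not the thesis's code).
    import numpy as np

    def rbpf_step(particles, weights, z, propagate, kf_update):
        """One cycle over hypotheses.

        particles : list of (nonlinear_state, kf_mean, kf_cov) tuples
        z         : vision measurement (possibly multi-modal, possibly
                    a false positive)
        """
        new_particles, new_weights = [], []
        for (s, m, P), w in zip(particles, weights):
            s = propagate(s)                   # sample nonlinear dynamics
            m, P, lik = kf_update(s, m, P, z)  # per-particle Kalman update;
                                               # lik = p(z | this particle)
            new_particles.append((s, m, P))
            new_weights.append(w * lik)
        new_weights = np.array(new_weights)
        new_weights /= new_weights.sum()       # normalize weights

        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(new_weights**2) < 0.5 * len(new_particles):
            idx = np.random.choice(len(new_particles), len(new_particles),
                                   p=new_weights)
            new_particles = [new_particles[i] for i in idx]
            new_weights = np.full(len(idx), 1.0 / len(idx))
        return new_particles, new_weights

Because each particle carries its own hypothesis, a multi-modal vision likelihood simply reweights the hypotheses rather than forcing a single Gaussian fit, which is the intuition behind the multi-hypothesis approaches described above.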
COMMITTEE MEMBERS:
Professor Eric N. Johnson, School of Aerospace Engineering (Advisor)
Professor Panagiotis Tsiotras, School of Aerospace Engineering
Professor Eric Feron, School of Aerospace Engineering
Professor James Hays, School of Interactive Computing
Professor Patricio Antonio Vela, School of Electrical and Computer Engineering