Mengzhen Chen
(Advisor: Prof. Dimitri Mavris)
will propose a doctoral thesis entitled,
Robust Autonomous Navigation Framework for Exploration
in GPS-absent and Challenging Environment
On
Wednesday, May 25 at 2:00 p.m.
Collaborative Visualization Environment (CoVE)
Weber Space Science and Technology Building (SST II)
And
https://bluejeans.com/768488124/8298
Abstract
The benefits of autonomous systems have attracted industry attention over the past decade. Autonomous systems of many kinds have been applied in fields such as transportation, agriculture, and healthcare. Tasks that are too difficult or risky for humans to complete alone can now be handled efficiently by autonomous systems, greatly reducing labor costs. With the rapidly growing autonomy market, higher levels of autonomy are now on the agenda, requiring less human intervention and more robust functionality. Among the many capabilities an autonomous system can possess, the ability to understand its surrounding environment is of particular importance. Whether an Unmanned Aircraft System (UAS) delivering packages or a self-driving vehicle, an autonomous system must remain robust across different operating scenarios. This work aims to improve the robustness of autonomous systems in challenging, GPS-absent environments.
When exploring an unknown environment where external information such as a GPS signal is unavailable, mapping and localization are equally important and complementary, so the system must build a map and localize itself simultaneously. Simultaneous Localization and Mapping (SLAM) architectures were proposed to provide exactly this capability: building a map of the autonomous system's surroundings while localizing the system within it during operation. Over the past several decades, SLAM architectures have been designed for many kinds of sensors and scenarios. Among the different SLAM categories, visual SLAM, which uses cameras as sensors, stands out: it can extract rich information from images that other sensors alone cannot provide. Because the images captured by the camera serve as the inputs, the accuracy of the results depends heavily on image quality. Most SLAM architectures easily handle high-quality images or video streams, while poor-quality inputs remain challenging. This work focuses on improving SLAM performance in two scenarios: 1) when the input images suffer from blurriness caused by camera motion and dynamic objects, and 2) under low-light conditions such as nighttime driving. Furthermore, no publicly available SLAM benchmark dataset is designed for the blurry scenario, and existing methods for creating synthesized blurry image datasets for single-image deblurring require pricey ultra-high-frame-rate cameras. This work will present a method to create a SLAM benchmark dataset for the blurry scenario.
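A common way such synthesized blurry datasets are built is by averaging consecutive sharp frames over a simulated exposure window, approximating the integration a real sensor performs while the camera moves; this is why existing methods depend on ultra-high-frame-rate source footage (a virtual simulator can render those intermediate frames instead). A minimal sketch of the averaging step, with the function name chosen here for illustration:

```python
import numpy as np

def synthesize_blur(frames):
    """Approximate motion blur by averaging consecutive sharp frames.

    frames: a sequence of H x W (or H x W x C) images sampled at a high
    frame rate over one simulated exposure. Averaging them mimics the
    light integration of a real sensor during camera or object motion.
    """
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)
```

For example, averaging one all-black and one all-white 4x4 frame yields a uniform mid-gray frame, the 1-D analogue of an edge smearing across the exposure.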
To achieve the first two objectives, two existing deep neural networks will be selected and retrained to improve the quality of the input images. To preserve the real-time character of the SLAM architecture, the retrained networks must be able to restore input images at a sufficient frame rate. The blurry SLAM benchmark dataset will be generated with the help of an open-source virtual simulation environment and will be used to evaluate the proposed image-deblurring SLAM framework. The resulting framework will be able to operate under challenging conditions, such as high-speed motion and low-light environments, with a commonly used camera.
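The approach above amounts to inserting a restoration stage in front of an existing SLAM front end while respecting a per-frame time budget. The sketch below shows one plausible shape for such a pipeline; `enhance`, `track`, and the class name are placeholders (not the proposal's actual components), and the fallback-to-raw-frame policy is an assumption about how real-time operation might be preserved:

```python
import time

class EnhancedFrontend:
    """Hypothetical pre-processing stage: restore a degraded frame
    (deblurring or low-light enhancement) before it reaches a SLAM
    tracker, falling back to the raw frame if restoration exceeds
    the real-time budget."""

    def __init__(self, enhance, track, budget_s):
        self.enhance = enhance    # e.g. a retrained deblurring network
        self.track = track        # e.g. a visual-SLAM tracking step
        self.budget_s = budget_s  # per-frame time budget (seconds)
        self.poses = []           # estimated poses, one per frame

    def process(self, frame):
        start = time.perf_counter()
        restored = self.enhance(frame)
        # If restoration blew the budget, use the raw frame so the
        # tracker never stalls behind the camera's frame rate.
        if time.perf_counter() - start > self.budget_s:
            restored = frame
        self.poses.append(self.track(restored))
        return restored
```

The design choice worth noting is that restoration is strictly optional per frame: real-time tracking takes priority, and enhancement is applied only when it fits the frame budget.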
Committee