Title: Visual Dense Three-Dimensional Motion Estimation in the Wild
Date: Wednesday, December 12, 2018
Time: 10:00 am to 11:30 am (EST)
Location: GVU cafe, TSRB
Zhaoyang Lv
School of Interactive Computing, College of Computing
Georgia Institute of Technology
https://www.cc.gatech.edu/~zlv30/
Committee:
Dr. James M. Rehg (Advisor, School of Interactive Computing, Georgia Institute of Technology)
Dr. Frank Dellaert (Co-Advisor, School of Interactive Computing, Georgia Institute of Technology)
Dr. James Hays (School of Interactive Computing, Georgia Institute of Technology)
Dr. Zsolt Kira (School of Interactive Computing, Georgia Institute of Technology; Georgia Tech Research Institute)
Dr. Andreas Geiger (Autonomous Vision Group, Max Planck Institute for Intelligent Systems; University of Tuebingen)
Abstract:
One of the most fundamental abilities of the human perception system is to seamlessly sense the changing 3D world from ego-centric visual observations. Driven by modern applications in robotics, autonomous driving, and mixed reality, machine perception requires a precise, dense representation of 3D motion with low latency. In this thesis, I focus on the task of estimating absolute 3D motion in world coordinates in unconstrained environments, from ego-centric visual information only. The goal is a fast algorithm that produces a dense and accurate representation of 3D motion.
To achieve this goal, I propose to investigate the problem from four perspectives with the following contributions:
1) Present a fast and accurate continuous optimization approach that solves for 3D scene motion as a set of moving planar segments whose segmentation is fixed a priori.
2) Present a learning-based approach that recovers dense scene flow from ego-centric motion and optical flow, decomposed by a novel data-driven rigidity prediction.
3) Present a modern synthesis of the classic inverse compositional method for 3D rigid motion estimation using dense image alignment.
4) Propose a novel object-centric scene flow representation incorporating top-down recognition. Specifically, the scene flow of each instance is composed of a 3D rigid motion basis and a local deformation field. The proposed approach will progressively predict local deformations on top of the rigid instance motion to align each instance across views.
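To make the rigid/non-rigid decomposition in contribution 2 concrete, the sketch below computes the per-pixel 3D motion field induced by camera ego-motion alone, given a depth map and a relative camera pose. This is a minimal illustration, not code from the thesis: the pinhole intrinsics K, the depth map, and the pose (R, t) are assumed inputs, and any true scene flow that deviates from this ego-motion field is the non-rigid residual a rigidity prediction would isolate.

```python
import numpy as np

def rigid_scene_flow(depth, K, R, t):
    """Per-pixel 3D motion induced by camera ego-motion alone.

    Back-projects each pixel to a 3D point using its depth, applies the
    relative camera transform (R, t), and returns each point's 3D
    displacement. Pixels whose observed motion deviates from this field
    belong to independently moving (non-rigid) regions.
    All names here are illustrative assumptions, not the thesis API.
    """
    h, w = depth.shape
    # Pixel grid in homogeneous image coordinates, shape 3 x N.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project: X = depth * K^-1 [u, v, 1]^T
    pts = np.linalg.inv(K) @ pix.astype(float) * depth.reshape(1, -1)
    # Move the points by the relative camera motion.
    pts_new = R @ pts + t.reshape(3, 1)
    # Displacement of every point, reshaped to an H x W x 3 flow field.
    return (pts_new - pts).T.reshape(h, w, 3)
```

As a sanity check, a pure translation with identity rotation yields a constant flow field equal to t at every pixel, regardless of depth.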