Title: A Scalable Edge-Centric System Design for Camera Networks to aid Situation Awareness Applications
Date: Monday, July 18th, 2022
Time: 3pm - 5pm ET
Virtual Location: Meeting Link
Zhuangdi "Andy" Xu
Ph.D. Student, Computer Science
School of Computer Science
Georgia Institute of Technology
Committee:
Dr. Ramachandran, Umakishore (advisor, School of Computer Science, Georgia Institute of Technology)
Dr. Arulraj, Joy (School of Computer Science, Georgia Institute of Technology)
Dr. Tumanov, Alexey (School of Computer Science, Georgia Institute of Technology)
Dr. Rehg, James M (School of Interactive Computing, Georgia Institute of Technology)
Dr. Krishna, Tushar (School of Electrical and Computer Engineering, Georgia Institute of Technology)
------------------------
Abstract:
The ubiquity of cameras in our environment, coupled with advances in computer vision and machine learning, has enabled several novel applications combining sensing, processing, and actuation. Often referred to as situation awareness applications, they span a variety of domains including safety (e.g., surveillance), retail (e.g., drone delivery), and transportation (e.g., assisted/autonomous driving). A perfect storm of technology enablers has come together, making this a ripe time to realize a smart camera system at the edge of the network to aid situation awareness applications. There are two types of smart camera systems: live processing at ingestion time and post-mortem video analysis. Live processing offers a more timely response when the queries are known ahead of time, whereas post-mortem analysis suits exploratory analysis, where the queries (or the parameters of queries) are not known in advance. Various situation awareness applications can benefit from either type of smart camera system, or even both. Prior art consists mostly of standalone techniques for facilitating camera processing. For example, efficient live camera processing frameworks partition video analysis tasks and place them across the Edge and the Cloud. Databases for efficient query processing over archived videos employ modern techniques (e.g., filters) to accelerate video analytics.
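To make the filter idea concrete, the following is a minimal, hypothetical sketch in Python (the function names, threshold, and overall structure are illustrative assumptions, not taken from the dissertation): a cheap per-frame filter prunes frames so that an expensive detector runs only on the frames that survive.

# Hypothetical sketch of filter-based acceleration for archived video queries.
# cheap_filter and expensive_detector are placeholder callables, not real APIs.

def analyze_archived_video(frames, cheap_filter, expensive_detector, threshold=0.1):
    """Run the expensive detector only on frames the cheap filter keeps."""
    results = []
    for frame_id, frame in enumerate(frames):
        # cheap_filter returns a rough relevance score in [0, 1]
        if cheap_filter(frame) >= threshold:
            # the expensive detector (e.g., a full object detector) runs rarely
            results.append((frame_id, expensive_detector(frame)))
    return results

In practice the cheap filter might be a small, query-specific classifier, while the expensive detector is a full object-detection model, so most frames never pay the full inference cost.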
This dissertation investigates both types of smart camera systems (i.e., live processing at ingestion time and post-mortem exploratory video analysis) for various situation awareness applications. Specifically, it seeks to fill the void left by prior art.
To aid various situation awareness applications, this dissertation proposes a “Scalable-by-Design” approach to building edge-centric systems for camera networks, efficient resource orchestration for live camera processing at ingestion time, and a post-mortem video engine that exploits reuse for exploratory video analytics within such a scalable edge-centric system.
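One common way to realize reuse in exploratory video analytics is to cache expensive per-frame model outputs so that later queries with different predicates do not re-run inference. The sketch below is a hypothetical illustration of that idea (the class and parameter names are assumptions, not the dissertation's API):

# Hypothetical sketch of result reuse for exploratory analysis: expensive
# per-frame model outputs are cached so later queries reuse the work.

class ReuseCache:
    def __init__(self):
        self._outputs = {}  # (frame_id, model_name) -> cached model output

    def get_or_compute(self, frame_id, frame, model_name, model_fn):
        key = (frame_id, model_name)
        if key not in self._outputs:
            # inference runs at most once per (frame, model) pair
            self._outputs[key] = model_fn(frame)
        return self._outputs[key]

Under this scheme, a second exploratory query over the same frames and model pays only the cost of evaluating its predicate over the cached outputs rather than re-running the model.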