Title: Subspace Learning by Randomized Sketching
Committee:
Dr. Justin Romberg, ECE, Chair, Advisor
Dr. Mark Davenport, ECE
Dr. Vladimir Koltchinskii, Math
Dr. Santosh Vempala, CoC
Dr. Kiryung Lee, Ohio State
Abstract: High-dimensional data often possesses inherent low-dimensional structure that can be leveraged to design scalable machine learning and signal processing algorithms. Developing efficient computational frameworks that exploit this underlying structure is crucial. In this thesis, we consider a particular form of low dimensionality in data: subspace models. In many applications, data is known to lie close to a low-dimensional subspace; the subspace itself may or may not be known a priori. Incorporating this structure into data acquisition systems and algorithms can aid scalability.

We first consider two applications in array signal processing where subspace priors on the data are commonly used. For both, we develop algorithms whose measurement requirements scale only with the dimension of the underlying subspace. In doing so, we show that arrays demand dimensionality reduction maps that can operate on individual subsets, or blocks, of data at a time, without access to other blocks.

Inspired by such block constraints, we consider more general problems in numerical linear algebra where the data has a natural partition into blocks, as is common in applications with distributed or decentralized data. We study sketched ridge regression and sketched matrix multiplication under this constraint and give sample-optimal theoretical guarantees for block-diagonal sketching matrices.

Extending the block model to low-rank matrix sensing, we then study the problem of recovering a low-rank matrix from compressed observations of each of its columns. While each column on its own is compressed beyond the point of recovery, we leverage the columns' joint structure to recover the matrix as a whole. To do so, we establish a new framework for designing estimators of low-rank matrices that obey the constraints imposed by different observation models.

Finally, we extend this framework for designing low-rank matrix estimators to the application of blind deconvolution. We provide a novel estimator that is sample optimal and enjoys uniform recovery guarantees over the entire signal class.
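To make the block-diagonal sketching idea concrete, the following is a minimal numerical sketch, not the thesis's actual construction: each block of rows of a regression problem is compressed independently by its own Gaussian sketch, without access to other blocks, and ridge regression is then solved on the sketched system. All dimensions, the sketch size per block, and the regularization parameter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: n samples split into B blocks, d features.
n, d, B = 1200, 50, 4
block = n // B
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Block-diagonal Gaussian sketch: each block of rows is compressed
# independently (m rows per block), mimicking distributed data where
# no single node sees the other blocks. m is a hypothetical choice.
m = 100
SA = np.zeros((B * m, d))
Sb = np.zeros(B * m)
for i in range(B):
    S_i = rng.standard_normal((m, block)) / np.sqrt(m)
    SA[i * m:(i + 1) * m] = S_i @ A[i * block:(i + 1) * block]
    Sb[i * m:(i + 1) * m] = S_i @ b[i * block:(i + 1) * block]

# Ridge regression on the sketched system (lam is illustrative).
lam = 1e-2
x_hat = np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Because each sketch touches only its own block, the compression can run in parallel across nodes; the thesis's contribution concerns how large the per-block sketches must be for such block-diagonal maps to match the guarantees of dense sketching.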