PhD Dissertation Defense by Ramakrishnan Kannan


Event Details
  • Date/Time:
    • Monday February 22, 2016 - Tuesday February 23, 2016
      12:00 pm - 1:59 pm
  • Location: Klaus 1212
  • Fee(s):
    N/A
Contact
No contact information submitted.
Summaries

Summary Sentence: Scalable and Distributed Constrained Low Rank Approximation

Full Summary: No summary paragraph submitted.

COMMITTEE:

------------------

Haesun Park (Advisor, School of Computational Science and Engineering, Georgia Tech)

Rich Vuduc (School of Computational Science and Engineering, Georgia Tech)

Polo Chau (School of Computational Science and Engineering, Georgia Tech)

Charu Aggarwal (IBM Research)

Grey Malone Ballard (Sandia National Laboratories)

 

Abstract:

-----------
Low rank approximation is the problem of finding two low rank factors W and H such that rank(WH) << rank(A), where the product WH is approximately equal to the given input matrix A. The low rank factors W and H are constrained for meaningful physical interpretation, and the resulting problem is referred to as Constrained Low Rank Approximation (CLRA). Like most constrained optimization problems, CLRA is computationally more expensive than its unconstrained counterpart. A widely used CLRA is Non-negative Matrix Factorization (NMF), which enforces non-negativity constraints on each of the low rank factors W and H. In this talk, we focus on scalable and distributed CLRA algorithms, with constraints such as boundedness and non-negativity, for large real world matrices that include text, High Definition (HD) video, social networks and recommender systems.
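
As a point of reference, a minimal NMF sketch in Python/NumPy appears below. It uses the classical multiplicative update rules rather than the block coordinate descent and distributed methods developed in the dissertation; the matrix sizes, rank k, and iteration count are illustrative assumptions only.

    # Minimal NMF sketch: A (non-negative, m x n) is approximated by W (m x k) @ H (k x n)
    # using multiplicative updates. Not the dissertation's algorithms; for illustration.
    import numpy as np

    def nmf(A, k, iters=200, eps=1e-9):
        m, n = A.shape
        rng = np.random.default_rng(0)
        W = rng.random((m, k))
        H = rng.random((k, n))
        for _ in range(iters):
            # Alternate updates of H and W; entries stay non-negative by construction.
            H *= (W.T @ A) / (W.T @ W @ H + eps)
            W *= (A @ H.T) / (W @ H @ H.T + eps)
        return W, H

    A = np.random.default_rng(1).random((100, 80))
    W, H = nmf(A, k=10)
    print(np.linalg.norm(A - W @ H) / np.linalg.norm(A))  # relative approximation error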

In this talk, we begin with the Bounded Matrix Low Rank Approximation (BMA), which imposes a lower and an upper bound on every element of the low rank approximation WH. BMA is more challenging than NMF as it imposes bounds on the product WH rather than on each of the low rank factors W and H. For very large input matrices, we extend our BMA algorithm to Block BMA, which can scale to a large number of processors. In applications such as HD video, where the input matrix to be factored is extremely large, distributed computation is inevitable and network communication becomes a major performance bottleneck. Towards this end, we propose a novel distributed Communication Avoiding NMF (CANMF) algorithm that communicates only the right low rank factor to its neighboring machine. Finally, we propose a general distributed HPC-NMF framework that applies HPC techniques to the communication intensive NMF operations and is suitable for a broader class of NMF algorithms; a simplified partitioning sketch follows below.
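
For illustration only, the sketch below simulates a 1D column partitioning of A in which each "worker" updates its own block of H locally, and only small m x k and k x k partial sums need to be combined to update W. This mirrors the general communication pattern of distributed NMF but is not the CANMF or HPC-NMF algorithm itself; all names, sizes, and the partitioning scheme are hypothetical assumptions.

    # Illustrative 1D column-partitioned NMF step (single process simulation).
    # In a real distributed run, the two partial sums would be all-reduce operations.
    import numpy as np

    def partitioned_nmf_step(A_blocks, W, H_blocks, eps=1e-9):
        # Each worker updates its local block of H using only W and its block of A.
        for j, (Aj, Hj) in enumerate(zip(A_blocks, H_blocks)):
            H_blocks[j] = Hj * (W.T @ Aj) / (W.T @ W @ Hj + eps)
        # Combining the small partial sums is the only communication needed to update W.
        AHt = sum(Aj @ Hj.T for Aj, Hj in zip(A_blocks, H_blocks))  # m x k
        HHt = sum(Hj @ Hj.T for Hj in H_blocks)                     # k x k
        W *= AHt / (W @ HHt + eps)
        return W, H_blocks

    rng = np.random.default_rng(0)
    A = rng.random((200, 90))
    W = rng.random((200, 5))
    A_blocks = np.array_split(A, 3, axis=1)                 # three column blocks
    H_blocks = [rng.random((5, Aj.shape[1])) for Aj in A_blocks]
    for _ in range(100):
        W, H_blocks = partitioned_nmf_step(A_blocks, W, H_blocks)
    H = np.hstack(H_blocks)
    print(np.linalg.norm(A - W @ H) / np.linalg.norm(A))    # relative approximation error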

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Public
Categories
Other/Miscellaneous
Keywords
PhD Dissertation Defense
Status
  • Created By: Jacquelyn Strickland
  • Workflow Status: Published
  • Created On: Feb 10, 2016 - 10:50am
  • Last Updated: Oct 7, 2016 - 10:16pm