PhD Proposal by Samira Samadi


Event Details
  • Date/Time: Thursday, November 29, 2018, 9:30 am - 11:00 am
  • Location: KACB 3402
Summaries

Summary Sentence: Human Aspects of Machine Learning


Title: Human Aspects of Machine Learning

 

Samira Samadi

Ph.D. Student

School of Computer Science

College of Computing

Georgia Institute of Technology

http://www.samirasamadi.com

 

Date: Thursday, November 29th, 2018

Time: 9:30 am to 11:00 am (EST)

Location: KACB 3402

 

Committee:

 

Dr. Santosh Vempala (Advisor, School of Computer Science, Georgia Institute of Technology)

Dr. Mohit Singh (School of Computer Science, Georgia Institute of Technology)

Dr. Jamie Morgenstern (School of Computer Science, Georgia Institute of Technology)


Abstract:

 

As humans are increasingly influenced by machine learning algorithms, it is crucial to study the human aspects of these algorithms. In this proposal, I investigate several ML paradigms from the viewpoint of human usability and fairness. In the first line of work, I present the first usability study of humanly computable password strategies -- mental algorithms, proposed by Blum and Vempala, that help people compute passwords for different websites in their heads, without dependence on third-party tools or external devices. In the second line of work, I study fairness for Principal Component Analysis (PCA), one of the most commonly used dimensionality reduction techniques. We show on real-world data sets that PCA can inadvertently produce low-dimensional representations with different fidelity for two different populations (e.g., men and women). We define the notion of Fair PCA and present a polynomial-time algorithm for finding a low-dimensional representation of the data that is nearly optimal with respect to this measure. Finally, I will discuss two of my ongoing projects: (a) spectral clustering with the fairness constraint that each population has approximately equal representation in every cluster, and (b) fair interpretable classifiers for structured outcomes.
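The fidelity gap the abstract describes can be illustrated with a minimal synthetic sketch (this is an illustration only, not the proposal's algorithm or data): fit standard PCA on pooled data from two populations with different covariance structure and group sizes, then compare the average per-group reconstruction error under the shared projection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic populations with different covariance structure:
# group A (larger) varies strongly along the first two axes,
# group B (smaller) varies strongly along the last axis.
A = rng.normal(size=(300, 5)) * np.array([3.0, 3.0, 1.0, 0.2, 0.2])
B = rng.normal(size=(100, 5)) * np.array([0.2, 0.2, 1.0, 1.0, 2.0])
X = np.vstack([A, B])

# Standard PCA on the pooled, centered data: top-d right singular vectors
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
V = Vt[:2].T  # projection onto the top 2 principal directions

def avg_reconstruction_error(Xg, mean, V):
    """Mean squared reconstruction error for one group under projection V."""
    Xgc = Xg - mean
    Xhat = Xgc @ V @ V.T  # project onto span(V) and reconstruct
    return float(np.mean(np.sum((Xgc - Xhat) ** 2, axis=1)))

err_A = avg_reconstruction_error(A, mu, V)
err_B = avg_reconstruction_error(B, mu, V)
print(f"group A error: {err_A:.2f}, group B error: {err_B:.2f}")
```

Because the pooled variance is dominated by the larger group, the top principal directions align with group A's high-variance axes, so group B incurs a noticeably higher reconstruction error; this is the kind of disparity that a Fair PCA objective aims to equalize.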

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD proposal
Status
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Nov 26, 2018 - 11:20am
  • Last Updated: Nov 27, 2018 - 1:05pm