MLsploit Tackles Machine Learning Security with a Cloud-based Platform


Contact

Kristen Perez

Communications Officer

Summaries

Summary Sentence:

Georgia Tech and Intel researchers launch MLsploit, a cloud-based platform for researching adversarial machine learning.



Machine learning (ML) algorithms are pervasive in our daily lives and are the basis for everything from suggestions on streaming platforms to fraud detection services, yet recent research has found that they are highly vulnerable to attacks. These attacks come in many forms, including bypassing Android and Linux malware detection, and attacking deep learning models to cause image misclassification and faulty object detection.

To patch these vulnerabilities and increase security for safety-critical applications, researchers at Georgia Tech and Intel have teamed up to create MLsploit. It is the first user-friendly, cloud-based framework that enables researchers and developers to rapidly evaluate and compare state-of-the-art adversarial attacks and defenses for ML models. 

What Does MLsploit Do?

MLsploit’s web interface is open-source and allows researchers to quickly perform experiments on attack and defense algorithms by easily adjusting their parameters. Once tests are finished, the user may store the results in the framework to serve as a growing database for future adversarial ML research to build on.
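The experiment loop described above can be sketched in a few lines. This is an illustrative assumption about the workflow, not MLsploit's actual API: the `evaluate` function and the `epsilon` parameter stand in for a real attack run against a trained model.

```python
# Hedged sketch of the experiment workflow the web UI drives: sweep an
# attack parameter and record each run's outcome, so results accumulate
# for later comparison. All names here are illustrative assumptions.

def evaluate(attack_strength):
    # Stand-in metric: accuracy degrades as the attack gets stronger.
    # A real experiment would run the attack against a trained model.
    return max(0.0, 0.9 - attack_strength)

# Sweep the (hypothetical) attack budget and store one record per run.
results = []
for epsilon in [0.0, 0.1, 0.3]:
    results.append({"epsilon": epsilon, "accuracy": evaluate(epsilon)})
```

Storing each run as a parameter/result record is what lets later researchers compare attacks and defenses against the same baseline.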

“MLsploit is unique in that it is a collection and repository in the specific space of adversarial ML,” said School of Computational Science and Engineering Ph.D. student Nilaksh Das, a primary student investigator of the project.

MLsploit researchers built the tool as a springboard for students and researchers in adversarial ML, as well as for deep learning practitioners in industry who want to perform in-depth experimentation on a new model before rolling it out for private or public use.

“Ultimately, our goal is for MLsploit to become a collection of all the literature in the adversarial ML space,” Das said.

How Does MLsploit Work?

MLsploit was built to be modular so that users can easily integrate their own work into the framework. 

MLsploit provides the web user interface and the back-end computation engine; users can then upload their own modules or functions. Once created, these can be used in conjunction with the rest of the MLsploit framework.
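A user-contributed module might look something like the following minimal sketch. The `run(samples, options)` signature is a hypothetical interface chosen for illustration, not MLsploit's actual plugin API, and the perturbation is a toy stand-in for a published attack such as FGSM.

```python
# Hypothetical sketch of a pluggable attack module in an MLsploit-style
# framework. The run(samples, options) interface is an illustrative
# assumption, not MLsploit's actual API.

def run(samples, options):
    """Shift each input sample by a fixed budget, mimicking a simple
    evasion attack; a real module would wrap a published attack."""
    epsilon = options.get("epsilon", 0.1)
    # Clip perturbed values back into the valid [0, 1] input range.
    return [min(1.0, max(0.0, x + epsilon)) for x in samples]

# The framework's back end would invoke the module with parameters
# the user chose in the web UI:
adversarial = run([0.2, 0.5, 0.95], {"epsilon": 0.1})
```

Because every module exposes the same entry point, the framework can chain attacks, defenses, and evaluations without knowing their internals, which is what makes the modular design work.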

The tool was developed at the Intel® Science & Technology Center for Adversary-Resilient Security Analytics (ISTC-ARSA) housed at Tech. The center specializes in identifying vulnerabilities of ML algorithms and developing new security approaches to improve the resilience of ML applications. The project represents a culmination of the last three years of research at the center.

MLsploit was first presented at Black Hat Asia 2019 and will be presented again as a Project Showcase at the 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining.

An extended abstract and complete listing of co-authors for the paper can be found here.

Additional Information

Groups

College of Computing, OMS, School of Computational Science and Engineering, School of Computer Science

Categories
Student and Faculty, Student Research, Research
Related Core Research Areas
Cybersecurity, People and Technology
Keywords
MLsploit, Polo Chau, Nilaksh Das, cse-cyber, cse-ml
Status
  • Created By: Kristen Perez
  • Workflow Status: Published
  • Created On: Jul 31, 2019 - 10:10am
  • Last Updated: Jul 31, 2019 - 10:11am