IC's Dhruv Batra Named PECASE Winner, One of Three at Georgia Tech


Contact

David Mitchell

Communications Officer

david.mitchell@cc.gatech.edu

Summaries

Summary Sentence:

The PECASE is the highest honor bestowed by the United States government on outstanding scientists and engineers who are beginning their independent research careers.


Media
  • Dhruv Batra
    (image/jpeg)

School of Interactive Computing Assistant Professor Dhruv Batra was awarded the prestigious Presidential Early Career Award for Scientists and Engineers (PECASE) on Wednesday in an announcement by President Donald Trump. The PECASE is the highest honor bestowed by the United States government on outstanding scientists and engineers who are beginning their independent research careers.

Batra is one of three Georgia Tech faculty members this year to earn the award, giving the Institute a total of 18 in its history. The other two awardees in this class are Associate Professor Mark Davenport of the School of Electrical and Computer Engineering and Assistant Professor Matthew McDowell of the School of Materials Science and Engineering.

The White House Office of Science and Technology Policy, together with the Department of Defense, will provide $1 million over five years to support Batra’s research into making artificial intelligence (AI) systems more transparent, explainable, and trustworthy. The award comes as a result of Batra’s selection for a similar early-career award from the Army Research Office Young Investigator Program in 2014.

The research Batra’s lab will pursue with the funding addresses a fundamental challenge in the development of AI – the “black-box” nature of these systems, the consequent difficulty humans face in identifying why or how they fail, and the resulting obstacles to improving them. When a self-driving car from a major tech company, for example, suffered its first fatality in 2015, legal and regulatory agencies understandably questioned what went wrong. The challenge at the time was providing a sufficient answer to that question.

“Your response can’t just be, ‘Well, there was this machine learning box in there, and it just didn’t detect the car. We don’t know why,’” Batra said.

Batra’s research aims to create AI systems that can more readily explain what they do and why. Those explanations could come in natural language or visual form; the underlying fields – natural language processing and computer vision – are both central areas of focus in Batra’s lab. A machine could, for example, identify the regions in an image that support its predictions, helping a user understand what the machine can or cannot do.
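
Highlighting the image regions that support a prediction maps closely to Grad-CAM, a saliency technique co-developed by Batra and collaborators. Below is a minimal, illustrative sketch, assuming a recent PyTorch/torchvision install; the ResNet-18 model, the hooked layer, and the file name example.jpg are stand-in choices, not details of the funded project.

```python
# Minimal Grad-CAM-style sketch: highlight the image regions that support
# a classifier's prediction. Assumes torchvision >= 0.13; the model, hooked
# layer, and input file are illustrative choices.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    # Keep the feature maps and register a hook to capture their gradients.
    activations["feat"] = output
    output.register_hook(lambda grad: gradients.update(feat=grad))

# Hook the last convolutional block; earlier layers give finer, noisier maps.
model.layer4.register_forward_hook(save_activation)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
logits = model(img)
pred = logits.argmax(dim=1).item()

# Backpropagate the predicted class's score to the hooked feature maps.
model.zero_grad()
logits[0, pred].backward()

# Weight each feature map by its average gradient, keep the positive
# evidence, and upsample to image size for a per-pixel heatmap.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # (1, 1, 224, 224): higher values = stronger support
```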

It’s an important area of study for a few reasons, Batra said. He classifies AI technology into three levels of maturity:

  • Level 1 is technology in its infancy: it is not near deployment to everyday users, and its consumers are researchers. Here, the goal of transparency and explanation is to help researchers and developers understand the failure modes and current limitations and deduce how to improve the technology – “actionable insight,” as Batra called it.
     
  • Level 2 is when the technology works well enough that it can be – and has been – deployed.

    “The technology may be mature in a narrow range, and you can ship the product,” Batra said. “Like face detection or fingerprint technology. It’s built into products and being used at agencies, airports, or other places.”

    In such cases, you want explanations and interpretability that help build appropriate trust with users, who can then understand when the system works reliably and when it might not – face detection in bad lighting, for example – and use it in more appropriate settings.
     
  • Level 3 is typically a fairly narrow category in which the AI is better – sometimes significantly so – than the human. Batra pointed to chess- and Go-playing bots as examples: the best chess bots convincingly outperform the best human players and reliably hand a resounding defeat to the average one.

    “We already know bots play much better than humans,” he said. “In such cases, you don’t need to improve the machine and you already trust its skill level. You want the machine to give you explanations not so that you can improve the AI, but so that you can improve yourself.”

Batra envisions scenarios where the techniques his lab develops could assist at all three levels, but the experiments will take place between Levels 1 and 2. The lab will work in Visual Question Answering – building agents that answer natural language questions about visual content (sketched below) – and in other areas of maturity that may reach the product level in five or more years.
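
For a concrete sense of the Visual Question Answering task, here is a minimal sketch that queries an off-the-shelf VQA model through the Hugging Face transformers library; the ViLT checkpoint, the image file, and the question are illustrative assumptions, unrelated to Batra’s lab’s own systems.

```python
# Minimal Visual Question Answering sketch using a publicly available model.
# Assumes the transformers and Pillow packages; the checkpoint, image file,
# and question are illustrative.
from transformers import ViltProcessor, ViltForQuestionAnswering
from PIL import Image

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

image = Image.open("example.jpg").convert("RGB")
question = "What color is the car?"

# Encode the image-question pair and score every candidate answer.
inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits

# This model treats VQA as classification over a fixed answer vocabulary.
answer = model.config.id2label[logits.argmax(-1).item()]
print(f"Q: {question}\nA: {answer}")
```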

Batra has served as an assistant professor at Georgia Tech since Fall 2016. Visit his website for more information about his research.

Additional Information

Groups

College of Computing, GVU Center, ML@GT, OMS, School of Interactive Computing

Related Core Research Areas
People and Technology
Keywords
cc-research; ic-ai-ml
Status
  • Created By: David Mitchell
  • Workflow Status: Published
  • Created On: Jul 5, 2019 - 12:18pm
  • Last Updated: Jul 5, 2019 - 12:18pm