Atlanta, GA | Posted: April 10, 2019
Self-driving cars are supposed to make driving safer, but they may endanger the lives of certain groups. New Georgia Tech research suggests that pedestrians with darker skin may be more likely to get hit by self-driving cars than those with lighter skin.
The researchers tested machine learning (ML) object detection models to measure how accurately they detect people with different skin tones. The results showed the models were nearly 5 percent less likely to detect darker-skinned pedestrians.
This predictive imbalance persisted even when the researchers controlled for variables in the training data set, such as time of day, partial occlusion of pedestrians, and the pixel size of the person in the image.
“Companies don’t want the public to know about any issues of inaccuracy, so consumers need to learn to ask a lot of questions,” said Jamie Morgenstern, School of Computer Science (SCS) assistant professor and the study’s lead author.
One likely source of the discrepancy is the loss function, the measure of how well an algorithm models a dataset. A model learns by computing the loss between its predicted values and the actual values and adjusting itself to make that loss as small as possible, which indicates the model fits the data well. Because the loss is averaged over every example, this approach is more accurate for larger subsets of the data but gives smaller groups less influence. In effect, this 3.5-to-1 difference in representation made the results even more accurate for lighter-skinned pedestrians.
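As a rough sketch of that dynamic (illustrative Python only, not the study's code, with made-up group sizes and error values), fitting a single prediction by minimizing the average loss over an imbalanced dataset pulls the result toward the larger group:

```python
import numpy as np

# Toy illustration: when one group dominates the data, minimizing a single
# averaged loss fits that group well and the smaller group poorly.
# Group sizes and target values below are invented for the example.
rng = np.random.default_rng(0)

majority = rng.normal(loc=1.0, scale=0.1, size=3500)  # larger group
minority = rng.normal(loc=2.0, scale=0.1, size=1000)  # smaller group
targets = np.concatenate([majority, minority])

# Fit a single prediction w by gradient descent on the mean squared error.
w = 0.0
for _ in range(2000):
    grad = 2 * (w - targets).mean()  # gradient of mean((w - y)^2)
    w -= 0.01 * grad

print(f"fitted prediction: {w:.3f}")                         # ~1.22, close to the majority
print(f"majority error:    {abs(w - majority.mean()):.3f}")  # small
print(f"minority error:    {abs(w - minority.mean()):.3f}")  # large
```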
Despite the bias, Morgenstern remains optimistic. The team was able to correct for the inequity by reweighting the model to better analyze smaller groups.
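A hypothetical sketch of that kind of reweighting (the team's actual method may differ) weights each example by the inverse of its group's size, so the smaller group counts as much as the larger one in the loss:

```python
import numpy as np

# Toy illustration of reweighting: give each example a weight inversely
# proportional to its group's size, so both groups contribute equally to
# the loss. Group sizes and target values are invented for the example.
rng = np.random.default_rng(0)

majority = rng.normal(loc=1.0, scale=0.1, size=3500)
minority = rng.normal(loc=2.0, scale=0.1, size=1000)
targets = np.concatenate([majority, minority])

# Per-example weights: inverse group frequency, normalized to sum to 1.
weights = np.concatenate([
    np.full(majority.size, 1.0 / majority.size),
    np.full(minority.size, 1.0 / minority.size),
])
weights /= weights.sum()

# Fit a single prediction w by gradient descent on the weighted squared error.
w = 0.0
for _ in range(2000):
    grad = 2 * np.sum(weights * (w - targets))
    w -= 0.01 * grad

print(f"fitted prediction: {w:.3f}")                         # ~1.5, between the two groups
print(f"majority error:    {abs(w - majority.mean()):.3f}")
print(f"minority error:    {abs(w - minority.mean()):.3f}")
```

The fitted value now lands roughly midway between the two groups, trading a little accuracy on the larger group for a substantial gain on the smaller one.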
The findings, published earlier this month, have attracted media coverage and some criticism. Much of the criticism stems from the fact that Morgenstern and her fellow researchers, School of Interactive Computing Assistant Professor Judy Hoffman and machine learning Ph.D. student Benjamin Wilson, were not able to examine the ML models and training data actually used by the self-driving car industry because those are not publicly available.
[RELATED: Georgia Tech Researchers Improve Fairness in the Machine Learning Pipeline]
This is not the first study to find that ML systems have varying predictive accuracy across demographic groups; other researchers have documented similar examples in the financial sector. Yet in many of these scenarios, developers won't take responsibility, according to Morgenstern.
“Developers blame any biased outcomes of their system on biased historical trends, such as the fact that more loans were applied for and issued in whiter neighborhoods, or on biased training data,” she said, “for example, if the training labels used for creditworthiness reflect only the decisions of lenders who are now known to have had higher predictive accuracy on white applicants.”
With self-driving cars, however, a system developer would have a harder time blaming object detection system bias on historical trends or behavior of certain demographic groups. This was what appealed to Morgenstern about this research.
“There is no capacity for arguing that historical behavior of some group should affect the trade-offs made by self-driving cars,” Morgenstern said. “No one deserves to be hit by a car.”