Tuesday, 19 March 2019 11:48

Autonomous Cars Can't Recognize Pedestrians With Darker Skin Tones

Written by Jessica Miley, Interesting Engineering


A new report shows that the systems designed to help autonomous cars recognize pedestrians may have trouble recognizing people with darker skin tones.


The worrying research has been uploaded to the preprint server arXiv.


Evidence already existed that some facial recognition software struggles with darker skin tones. But in the context of autonomous cars, the same failure could have deadly consequences.


World’s Best Systems Show Bias


Researchers from Georgia Tech investigated eight AI models used in state-of-the-art object detection systems to complete their study. These systems allow autonomous vehicles to recognize road signs, pedestrians and other objects as they navigate roads.


They tested these systems using two different categories based on the Fitzpatrick scale, a scale commonly used to classify human skin color.


Darker Skin at Higher Risk


Overall, the accuracy of the system decreased by 5 percent when it was presented with groups of images of pedestrians with darker skin tones. And according to the published paper, the models showed “uniformly poorer performance” when confronted with pedestrians with the three darkest shades on the scale.


These results held regardless of whether the photos were taken during the day or at night. In summary, the report suggests that people with darker skin tones will be less safe near roads dominated by autonomous vehicles than those with lighter skin.


Bias-Elimination Starts With Diversity in Research


The report thankfully gives a brief outline of how to remedy this troubling reality. It starts with simply increasing the number of images of dark-skinned pedestrians in the data sets used to train the systems.


Engineers responsible for the development of these systems need to place more emphasis on training the systems with higher accuracy for this group.


The authors say they hope the report provides compelling enough evidence to address this critical issue before these recognition systems are deployed in the world. It is another reminder of the general lack of diversity in the AI field.


Unfortunately, this isn't the first report of potentially deadly racism in AI-powered systems. In May of 2018, ProPublica reported that software used to help judges assess the risk that a defendant would commit another crime was biased against black people.


Racial Profiling Is Lethal


The system is used by judges in criminal sentencing; it assigns a score indicating how likely the person is to reoffend. A high score suggests they will reoffend; a low score suggests it is less likely.


The investigative journalists examined the risk scores assigned to more than 7,000 people in Broward County, Florida, in 2013 and 2014, and then checked whether the same people were charged with any new crimes over the next two years.


The algorithm proved not only unreliable (only 20 percent of the people predicted to commit violent crimes actually did so) but also racially biased.


Black defendants were wrongly flagged as future criminals at almost twice the rate of white defendants, while white defendants were mislabeled as low risk more often than black defendants.


The AI development community must come together and take a public stand against this sort of massively damaging bias.


We thank Interesting Engineering for reprint permission.