Tuesday, 19 March 2019 18:48

Autonomous Cars Can't Recognize Pedestrians With Darker Skin Tones

Written by Jessica Miley, Interesting Engineering
Image: Wikimedia Commons


A new report shows that the systems designed to help autonomous cars detect pedestrians may have trouble recognizing people with darker skin tones.

The worrying research has been uploaded to the preprint server arXiv.


Evidence already existed that some facial recognition software struggles with darker skin tones. But for autonomous cars, the consequences of such a failure could be deadly.

World’s Best Object Detectors Show Bias

To complete their study, researchers from Georgia Tech investigated eight AI models used in state-of-the-art object detection systems. These systems allow autonomous vehicles to recognize road signs, pedestrians, and other objects as they navigate roads.

They tested these systems on two categories of pedestrian images, split using the Fitzpatrick scale, a scale commonly used to classify human skin color.

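For readers unfamiliar with the scale, the grouping can be illustrated with a minimal Python sketch, assuming the common split of the six Fitzpatrick types into a lighter group (types I-III) and a darker group (types IV-VI); the function name and the "LS"/"DS" labels are illustrative, not taken from the paper.

def skin_tone_group(fitzpatrick_type: int) -> str:
    """Map a Fitzpatrick type (1-6) to one of two coarse categories."""
    if fitzpatrick_type in (1, 2, 3):
        return "LS"  # lighter-skin group, Fitzpatrick types I-III
    if fitzpatrick_type in (4, 5, 6):
        return "DS"  # darker-skin group, Fitzpatrick types IV-VI
    raise ValueError(f"Fitzpatrick types run from 1 to 6, got {fitzpatrick_type}")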

Darker Skin at Higher Risk


Overall, the accuracy of the systems decreased by 5 percent when they were presented with groups of images of pedestrians with darker skin tones. And according to the published paper, the models showed “uniformly poorer performance” when confronted with pedestrians with the three darkest shades on the scale.

These results already account for whether the photos were taken during the day or at night. In summary, the report suggests that people with darker skin tones will be less safe near roads dominated by autonomous vehicles than those with lighter skin.

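A gap like this surfaces when detection accuracy is computed separately for each skin-tone group instead of over the whole test set. Here is a minimal sketch of that per-group comparison, using the illustrative "LS"/"DS" labels from above and hypothetical numbers rather than the paper's actual data.

def per_group_accuracy(results):
    """Compute detection accuracy separately for each skin-tone group.

    results: iterable of (group, detected) pairs, where group is a tag
    such as "LS" or "DS" and detected is True when the pedestrian was
    correctly found by the detector.
    """
    hits, totals = {}, {}
    for group, detected in results:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(detected)
    return {group: hits[group] / totals[group] for group in totals}

# Hypothetical illustration: a detector that finds 90 of 100 lighter-skin
# pedestrians but only 85 of 100 darker-skin pedestrians would return
# {"LS": 0.90, "DS": 0.85}, the kind of gap the study reports.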

Bias Elimination Starts With Diversity in Research

The report thankfully gives a brief outline of how to remedy this troubling reality. The fix starts with simply increasing the number of images of dark-skinned pedestrians in the data sets used to train the systems.

Engineers responsible for developing these systems also need to place more emphasis on this group during training, so that the models achieve higher accuracy for it.

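One standard way to place more emphasis on an underrepresented group during training is to weight its examples more heavily in the loss function. The sketch below shows that idea as per-sample loss weighting in PyTorch; the weight values and the function itself are illustrative assumptions, not code from the paper or from any vehicle maker.

import torch
import torch.nn.functional as F

# Hypothetical per-group weights: upweight the underrepresented
# darker-skin ("DS") examples so their errors cost more during training.
GROUP_WEIGHTS = {"LS": 1.0, "DS": 2.0}  # illustrative values only

def weighted_classification_loss(logits, labels, groups):
    """Cross-entropy with per-sample weights chosen by skin-tone group.

    logits: (N, num_classes) scores from the detector's classifier head
    labels: (N,) ground-truth class indices
    groups: length-N list of group tags ("LS" or "DS"), one per example
    """
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    weights = torch.tensor([GROUP_WEIGHTS[g] for g in groups],
                           dtype=per_sample.dtype, device=per_sample.device)
    return (weights * per_sample).mean()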

The authors say they hope the report offers compelling enough evidence for this critical issue to be addressed before such recognition systems are deployed in the world. The study is also another reminder of the general lack of diversity in the AI field.

Unfortunately, this isn't the first report of potentially deadly racism in AI-powered systems. In May 2016, ProPublica reported that software used to assist judges in assessing the risk that a defendant would commit another crime was biased against black people.

