Thursday, 07 February 2019 22:33

New Training Model Helps Autonomous Cars See AI’s Blind Spots

Written by John Loeffler, Interesting Engineering

Modeling at the Edge

In two papers, presented at last year’s Autonomous Agents and Multiagent Systems conference and the upcoming Association for the Advancement of Artificial Intelligence conference, researchers describe a new model for training autonomous systems, such as self-driving cars, that uses human input to identify and fix “blind spots” in AI systems.

The researchers run the AI through simulated training exercises, just as traditional systems do, but here a human observes the machine’s actions and identifies when it is about to make, or has made, a mistake.
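
As a rough sketch of that oversight loop, the Python snippet below rolls a trained policy through a simulator while an observer labels each pending action. Everything here, from the simulator interface to the observer callback, is a hypothetical stand-in for illustration, not the researchers' actual code.

    def collect_feedback(env, policy, observer, episodes=100):
        """Roll out a trained policy in simulation while a human observer
        flags actions that are, or are about to become, mistakes.

        env      -- simulator with reset() -> state, step(action) -> (state, done)
        policy   -- trained policy mapping state -> action
        observer -- callable (state, action) -> bool, True when flagged
        """
        feedback = []  # (state, action, is_mistake) tuples
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                action = policy(state)
                # The human reviews the pending action before it executes.
                is_mistake = observer(state, action)
                feedback.append((state, action, is_mistake))
                state, done = env.step(action)
        return feedback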

The researchers then combine the machine’s training data with the human observer’s feedback and run the result through a machine-learning system. That system produces a model researchers can use to identify situations where the AI is missing critical information about how it should behave, especially in edge cases.
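
A toy version of that second step, assuming the feedback tuples from the sketch above and state features that stack into a numeric array, might fit an off-the-shelf classifier that maps a state to a blind-spot probability. This is a simplified illustration, not the aggregation method from the papers.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_blind_spot_model(feedback):
        """feedback: iterable of (state_features, action, is_mistake)."""
        # Stack the state features and human labels into training arrays.
        X = np.array([state for state, _, _ in feedback], dtype=float)
        y = np.array([int(flag) for _, _, flag in feedback])
        model = LogisticRegression(max_iter=1000)
        model.fit(X, y)
        # model.predict_proba(states)[:, 1] then estimates blind-spot risk.
        return model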

“The model helps autonomous systems better know what they don’t know,” according to Ramya Ramakrishnan, a graduate student in the Computer Science and Artificial Intelligence Laboratory at MIT and the lead author of the study.

“Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors,” Ramakrishnan said.

The problem arises when a situation occurs, such as the distorted stop sign, in which the majority of cases the AI has been trained on do not reflect the real-world condition it should have been trained to recognize. The system has learned that stop signs have a certain shape, color and so on. It may even have built a list of shapes that could be stop signs and would know to stop for those, but if it cannot identify a stop sign properly, the situation could end in disaster.

“Because unacceptable actions are far rarer than acceptable actions, the system will eventually learn to predict all situations as safe, which can be extremely dangerous,” said Ramakrishnan.
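
That imbalance is a familiar hazard in supervised learning. One standard countermeasure, shown here purely as an illustration rather than as the papers' technique, is to reweight the rare class so that flagged mistakes are not drowned out, and to judge the model by its recall on that rare class rather than by raw accuracy:

    from sklearn.linear_model import LogisticRegression

    # With one flagged mistake per hundred safe examples, an unweighted
    # classifier can score 99% accuracy by predicting "safe" everywhere.
    # class_weight="balanced" penalizes that shortcut by upweighting
    # errors on the rare "mistake" class.
    model = LogisticRegression(class_weight="balanced", max_iter=1000)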

Meeting the Highest Standards for Safety

By showing researchers where an AI’s data is incomplete, the model allows autonomous systems to be made safer in the edge cases where high-profile accidents occur. If it succeeds, we may reach the point where public trust in autonomous systems starts to grow and the rollout of autonomous vehicles begins in earnest, making us all safer as a result.

We thank Interesting Engineering for reprint permission.

