Getting robots to do what you want – without unintended consequences

Researchers at Stanford University are seeking ways of telling autonomous systems what to do that reduce the chance of unintended consequences when the instructions are obeyed to the letter.

“In the future, I fully expect there to be more autonomous systems in the world and they are going to need some concept of what is good and what is bad,” said Stanford computer scientist Andy Palan. “It’s crucial, if we want to deploy these autonomous systems in the future, that we get that right.”

The team has combined two ways of setting goals for robots: demonstrations, in which humans show the robot what to do (an approach also called inverse reinforcement learning, or IRL), and interactive preference surveys, in which people answer questions about how they want the robot to behave. The combined process performed better than either approach alone in both simulations and real-world experiments, according to the university.

“Demonstrations are informative but they can be noisy,” said project leader Dorsa Sadigh. “On the other hand, preferences provide, at most, one bit of information, but are way more accurate. Our goal is to get the best of both worlds, and combine data coming from both of these sources more intelligently to better learn about humans’ preferred reward function.”
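The combination Sadigh describes can be made concrete. The sketch below is not the team's actual method or code; it is a minimal illustration under assumed modeling choices: each trajectory is summarized by a hand-chosen feature vector, the reward is linear in those features, demonstrations are treated as noisy Boltzmann-rational choices, and each preference answer is modeled with a Bradley-Terry likelihood. All variable names and parameters are illustrative.

```python
# Minimal sketch (illustrative, not the Stanford team's implementation):
# jointly learn a linear reward w . phi(trajectory) from noisy
# demonstrations and one-bit pairwise preferences.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 50 candidate trajectories, each summarized by a
# 3-dimensional feature vector; the "true" human reward is linear.
true_w = np.array([1.0, -0.5, 0.2])
candidates = rng.normal(size=(50, 3))

# Demonstrations: the human tends to pick high-reward trajectories,
# but noisily (Boltzmann-rational choice over the candidate set).
def demo_log_likelihood(w, demos, beta=2.0):
    logits = beta * candidates @ w
    log_z = np.log(np.exp(logits).sum())
    return sum(logits[i] - log_z for i in demos)

# Preferences: each query yields one bit ("a over b"), modeled with
# the Bradley-Terry choice model.
def pref_log_likelihood(w, prefs):
    ll = 0.0
    for a, b in prefs:                       # a was preferred over b
        diff = candidates[a] @ w - candidates[b] @ w
        ll += -np.log1p(np.exp(-diff))       # log sigmoid(diff)
    return ll

# Simulate noisy human data from the "true" reward.
demos = [int(np.argmax(candidates @ true_w + rng.normal(scale=0.5, size=50)))
         for _ in range(5)]
pairs = [tuple(rng.choice(50, size=2, replace=False)) for _ in range(30)]
prefs = [(a, b) if candidates[a] @ true_w > candidates[b] @ true_w else (b, a)
         for a, b in pairs]

# MAP estimate of w by gradient ascent on the combined objective;
# numerical gradients keep the sketch short.
def objective(w):
    return (demo_log_likelihood(w, demos)
            + pref_log_likelihood(w, prefs)
            - 0.5 * w @ w)                   # Gaussian prior on w

w, eps, lr = np.zeros(3), 1e-5, 0.05
for _ in range(500):
    grad = np.array([(objective(w + eps * e) - objective(w - eps * e)) / (2 * eps)
                     for e in np.eye(3)])
    w += lr * grad

print("recovered direction:", w / np.linalg.norm(w))
print("true direction:     ", true_w / np.linalg.norm(true_w))
```

The division of labor mirrors the quote above: the demonstrations quickly pull the estimate toward roughly the right region despite their noise, while each one-bit preference answer sharpens the direction of the reward weights where the demonstrations are ambiguous.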
