If smart machines and AI agents are meant to support human goals, how might we help them better understand human needs and behaviors? UC Berkeley professor and AI scientist Anca Dragan suggests we need a more human-centered approach in our algorithms:

Though we’re building these agents to help and support humans, we haven’t been very good at telling these agents how humans actually factor in. We make them treat people like any other part of the world. For instance, autonomous cars treat pedestrians, human-driven vehicles, rolling balls, and plastic bags blowing down the street as moving obstacles to be avoided. But people are not just a regular part of the world. They are people! And as people (unlike balls or plastic bags), they act according to decisions that they make. AI agents need to explicitly understand and account for these decisions in order for them to actually do well. […]

How do we tell a robot what it should strive to achieve? As researchers, we assume we’ll just be able to write a suitable reward function for a given problem. This leads to unexpected side effects, though, as the agent gets better at optimizing for the reward function, especially if the reward function doesn’t fully account for the needs of the people the robot is helping. What we really want is for these agents to optimize for whatever is best for people. To do this, we can’t have a single AI researcher designate a reward function ahead of time and take that for granted. Instead, the agent needs to work interactively with people to figure out what the right reward function is. […]

Supporting people is not an after-fix for AI, it’s the goal. To do this well, I believe this next generation should be more diverse than the current one. I actually wonder to what extent it was the lack of diversity in mindsets and backgrounds that got us on a non-human-centered track for AI in the first place.
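Dragan's contrast between a plastic bag and a pedestrian maps onto a concrete modeling choice: extrapolate the person's motion like any other obstacle, or treat the person as an agent choosing among goals and predict the decision. The sketch below illustrates that distinction only in outline; the softmax ("noisily rational") goal-inference model, the candidate goals, and all numbers are invented for illustration and are not taken from any particular planning system.

```python
import numpy as np

def predict_as_obstacle(position, velocity, dt=1.0):
    """Treat the pedestrian like a rolling ball: extrapolate current motion."""
    return position + velocity * dt

def predict_as_decision_maker(position, velocity, goals, rationality=2.0, dt=1.0):
    """Treat the pedestrian as an agent choosing among candidate goals.
    A softmax ("noisily rational") model scores each goal by how well the
    observed heading points toward it, then predicts the expected
    goal-directed step rather than a purely ballistic one."""
    speed = np.linalg.norm(velocity)
    heading = velocity / speed
    directions = [(g - position) / np.linalg.norm(g - position) for g in goals]
    scores = np.array([heading @ d for d in directions])  # alignment with each goal
    probs = np.exp(rationality * scores)
    probs /= probs.sum()
    return position + sum(p * d * speed * dt for p, d in zip(probs, directions))

position = np.array([0.0, 0.0])
velocity = np.array([0.0, 1.4])      # currently walking along the sidewalk
goals = [np.array([0.0, 10.0]),      # keep walking straight
         np.array([5.0, 1.0])]       # or turn toward the crosswalk
print(predict_as_obstacle(position, velocity))
print(predict_as_decision_maker(position, velocity, goals))
```

The first predictor never entertains the possibility that the person might step into the crosswalk; the second assigns it probability because it models the pedestrian as making a choice, which is the distinction Dragan is drawing.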
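Her point about reward functions also admits a concrete reading: rather than an AI researcher fixing a reward function ahead of time, the agent can infer one from human feedback. Below is a minimal sketch of that idea using pairwise preference comparisons fit with a Bradley-Terry model; the trajectory features, the simulated human, and the hyperparameters are all invented for illustration and should not be read as Dragan's own method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 3                       # e.g., progress, clearance to pedestrians, jerk
true_w = np.array([1.0, 3.0, -0.5])  # the person's actual (hidden) preferences

def human_prefers_first(f_a, f_b):
    """Simulated human: shown two candidate trajectories (summarized by their
    feature vectors), they noisily prefer the one with higher true reward."""
    diff = (f_a - f_b) @ true_w
    return rng.random() < 1.0 / (1.0 + np.exp(-diff))

# Collect pairwise comparisons; store each as (preferred - rejected) features.
diffs = []
for _ in range(500):
    f_a, f_b = rng.normal(size=(2, n_features))
    diffs.append(f_a - f_b if human_prefers_first(f_a, f_b) else f_b - f_a)
D = np.array(diffs)

# Fit the reward weights by gradient ascent on the Bradley-Terry log-likelihood:
# P(preferred over rejected) = sigmoid(w · (f_preferred - f_rejected)).
w = np.zeros(n_features)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(D @ w)))   # model's probability for each observed choice
    w += 0.5 * D.T @ (1.0 - p) / len(D)

print("hidden weights: ", true_w)
print("learned weights:", np.round(w, 2))  # should roughly recover the hidden ones
```

The learned weights come from the person's choices rather than from a designer's guess, which is the shift Dragan argues for: the reward function becomes something the agent works out interactively with the people it is helping.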
