How Human-Like Should Social Robots Be?

Social robots today are a $2 billion industry that is expected to grow to $11 billion by 2026. While some view social robots as simply human-like versions of voice assistants such as Siri and Alexa, the emerging technology is being used in a variety of areas, including public health, elderly care, family communication and education. Social robots have been used extensively as companions for children with autism, helping them learn to read the social cues that the robots themselves increasingly present.

The primary function of social robots is social interaction with human beings. How well they communicate, and how they are perceived by humans, is a central area of research for Kun Xu, assistant professor in emerging media at the University of Florida College of Journalism and Communications. Research has yet to determine just how human-like social robots should be. “Some form of human likeness will help the communication process,” according to Xu. “But you don’t want the social robots to be too human-like, otherwise human beings will feel a little insecure facing a very human-like robot.”

Xu’s interest in human-machine communication began at an early age. In the 1980s, people mostly played games against a pre-programmed computer agent or computer program. “That’s the starting point of me feeling that somehow computers can be a companion, or can be a thing that can be perceived as a human being,” he said.

With rapidly growing voice assistant technologies, humans have become accustomed to interacting with technology through social cues. With social robots, the emergence of artificial intelligence has made those cues more elaborate, including gestures, eye gaze, visual and voice recognition, and facial expressions. But a question remains: How social do these machines need to be when interacting with human beings? Can we find the combination of social cues that best evokes our trust, and how do we design those cues into these technologies?

Kun Xu demonstrating social robot functions to doctoral student Mo Chen.

“There’s this notion called the uncanny valley effect, which suggests that as robots become really human-like, we as human beings will be afraid of interacting with these machines because we feel that our identity is threatened and blurred by designing so many social cues into the machines. Depending on different contexts and tasks, there should be an ideal point, a combination of social cues that people like but that doesn’t make them afraid of the machines,” Xu explained.

“We did a meta-analysis to compare all these different single social cues, and we found that movement probably has the largest effect compared with other social cues like appearance, voice, sound and eye gaze,” Xu said. “People often think voice, facial expressions and appearance are most important, but our research shows that voice may not be as important as expected or imagined. That is probably because there are already so many technologies on the market that use voice-based communication cues; people are used to them and no longer feel there is something special. But movement is rare. If you imagine you have an intelligent machine in your home that can move, that may be something you find a little bit creepy, or feel a little bit scared of.”


“The reason that people interact with social robots or other machines as if they were social is because our brain is old. When our old brain encounters new media, we still use the old way to interact with these new machines. It’s not that the media is new; it’s that our way of interacting is still based on interpersonal communication and evolution-based reactions. We interact with a lot of things in a human way, rather than in a more evolved or advanced way of interacting with machines.”

As Xu continues to study how robots learn and respond to social cues, and how people respond to these machines, he would also like to explore ways that humans can learn from robots. “Scholars try to see how robots can actually imitate humans or model humans. Robots have been designed to move like humans, smile like humans, talk like humans. I would like to explore a way that humans can actually model robots. For example, I was thinking about a scenario where robots can be a role model in doing recycling tasks. When humans watch robots do this recycling behavior, will they model the robots? Will people learn from those social robots? If that’s the case, then that will make a lot of things easier.”

Posted: February 7, 2022
Category: Profiles