As we embrace Alexa or Siri in our lives, researchers report that chatbots are more successful than humans in certain human-machine interactions, but only if they are allowed to hide their non-human identity.
The artificial voices of Siri, Alexa or Google, and their often awkward responses, leave no room for doubt that we are not talking to a real person.
An international team, including Iyad Rahwan, Director of the Center for Humans and Machines at the Max Planck Institute for Human Development in Berlin, sought to find out whether cooperation between humans and machines is different if the machine purports to be human.
In the study, published in Nature Machine Intelligence, the team had almost 700 participants in an online cooperation game interact with either a human or an artificial partner.
In the game, known as the Prisoner's Dilemma, each player can either act selfishly to exploit the other player or cooperate, which benefits both sides.
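The dilemma arises because defecting always pays more against whatever the partner does, yet mutual cooperation leaves both players better off than mutual defection. A minimal sketch of a one-shot payoff matrix, using standard textbook values rather than the stakes actually used in the study, might look like this:

```python
# Illustrative Prisoner's Dilemma payoff structure.
# The numeric payoffs below are conventional textbook values,
# not the ones used in the Nature Machine Intelligence study.

# Payoffs as (player A, player B) for each pair of moves:
# "C" = cooperate, "D" = defect (exploit the other player).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation benefits both sides
    ("C", "D"): (0, 5),  # the cooperator is exploited by the defector
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection leaves both worse off than mutual cooperation
}

def play(move_a: str, move_b: str) -> tuple[int, int]:
    """Return the payoffs for players A and B given their moves."""
    return PAYOFFS[(move_a, move_b)]

if __name__ == "__main__":
    print(play("C", "C"))  # (3, 3)
    print(play("D", "C"))  # (5, 0): defection pays off against a cooperator
```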
The findings showed that bots impersonating humans were more successful in convincing their gaming partners to cooperate.
As soon as they divulged their true identity, however, cooperation rates decreased.
"Translating this to a more realistic scenario could mean that help desks run by bots, for example, may be able to provide assistance more rapidly and efficiently if they are allowed to masquerade as humans," the researchers wrote.
Society will have to negotiate which cases of human-machine interaction require transparency and which are better served by efficiency.
Previous research has shown that humans prefer not to cooperate with intelligent bots. (IANS)