AI in Lie Detection: Social Harmony at Risk?

Humans are bad at recognizing lies. As studies consistently demonstrate, their judgments are barely better than chance. This inability could be one of the reasons why most people refrain from accusing others of dishonesty. The prospect of finding out that the person accused of lying was actually telling the truth would be deeply embarrassing, and the resulting anger could be substantial.

From this angle, a technology that can identify lies much better than humans sounds highly promising - especially in a time when fake news, dubious statements by politicians and manipulated videos are becoming more widespread. Artificial intelligence (AI) could make this possible.

Researchers from Würzburg, Duisburg, Berlin, and Toulouse have explored how effectively AI can detect lies and the resulting impact on human behavior. The team has now published its results in the journal iScience. The lead author is Alicia von Schenk, Junior Professor of Applied Microeconomics, especially Human-Machine Interaction, at Julius-Maximilians-Universität Würzburg (JMU); she shares first authorship with Victor Klockmann, Junior Professor of Microeconomics, especially the Economics of Digitization, at JMU.

The key findings of this study can be summarized as follows:

  • Artificial intelligence outperforms human accuracy in text-based lie detection.

  • Without the support of AI, people hesitate to accuse others of lying.

  • With AI support, people are much more likely to express their suspicion that they have encountered a lie.

  • Only around a third of study participants take advantage of the opportunity to ask an AI for its assessment. However, when they do, most follow the algorithm’s advice.

"These results suggest that AI for detecting lies could significantly disrupt social harmony," says Alicia von Schenk. After all, if people more frequently express the suspicion that their counterpart may have lied, this fosters general mistrust and increases polarization between groups that already struggle to trust one another.

On the other hand, the use of AI could also have positive effects. "In this way, it may be possible to prevent dishonesty and explicitly encourage honesty in communication," adds Victor Klockmann.

Politicians Urged to Act

While individuals are currently still reluctant to use technical support to detect lies, organizations and institutions may be quicker to embrace it - for example, when companies communicate with suppliers or customers, when HR staff conduct job interviews, or when insurance companies verify claims.

For this reason, the authors of the study call for "a comprehensive legal framework to regulate the impact of AI-based lie detection algorithms". The protection of privacy and the responsible use of AI, particularly in education and healthcare, are key aspects of this. The researchers emphasize that they do not intend to fundamentally reject the use of this technology. However, they caution that "taking a proactive approach to shaping the political landscape in this area will be crucial to harness the potential benefits of these technologies while mitigating their risks."

The Study

In preparation for the study, the research team asked almost 1,000 people to write down their plans for the upcoming weekend. In addition to a true statement, they were also asked to write a fictitious statement about their plans. They were given monetary incentives to formulate the fictitious statement as convincingly as they could. After a quality check, the team ended up with a data set containing 1,536 statements from 768 authors.

Based on this data set, an algorithm for lie detection was then developed and trained, leveraging Google's open-source language model BERT. After training, the algorithm correctly identified almost 81 percent of all lies.
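The setup described above is a standard binary text-classification task: statements labeled "truth" or "lie" are fed to a model that learns to predict the label for unseen text. The study fine-tuned BERT for this; the sketch below is only an illustrative stand-in using a much simpler bag-of-words Naive Bayes classifier built from the Python standard library, on invented example statements (none of the data here comes from the study).

```python
import math
from collections import Counter

class NaiveBayesLieClassifier:
    """Toy bag-of-words Naive Bayes classifier.

    Illustrative stand-in only: the study fine-tuned Google's BERT
    model, which also exploits word order and context. This sketch
    shows just the binary truth-vs-lie classification setup.
    """

    def __init__(self):
        self.word_counts = {"truth": Counter(), "lie": Counter()}
        self.label_counts = Counter()
        self.vocab = set()

    def fit(self, statements, labels):
        # Count word occurrences per label to estimate likelihoods.
        for text, label in zip(statements, labels):
            self.label_counts[label] += 1
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        # Pick the label maximizing log prior + log likelihoods,
        # with add-one smoothing for unseen words.
        total = sum(self.label_counts.values())
        best_label, best_score = None, -math.inf
        for label in self.label_counts:
            score = math.log(self.label_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in text.lower().split():
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Tiny made-up examples (not from the study's data set).
train_texts = [
    "i will visit my parents and go hiking",
    "i plan to stay home and read a book",
    "i am flying to paris for a private concert",
    "i will win the lottery and buy a yacht",
]
train_labels = ["truth", "truth", "lie", "lie"]

clf = NaiveBayesLieClassifier()
clf.fit(train_texts, train_labels)
print(clf.predict("i will visit my parents"))  # prints "truth"
```

In the actual study, the same fit/predict pattern applies, but with 1,536 labeled statements and a fine-tuned transformer instead of word counts, which is what pushes accuracy to roughly 81 percent.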

Who Trusts the AI?

For the main study, 510 statements were then randomly selected, and 2,040 additional participants were recruited. Divided into four groups, these subjects were asked to assess whether the statements presented to them were true or false.

While group 1 had to evaluate the statements without AI support, group 2 was always shown the algorithm's assessment before making their own judgment. Groups 3 and 4 were able to actively request the AI's judgment but had to pay a small amount of money for this. At the same time, the members of these groups were informed that they might not receive an AI judgment despite requesting it. Ultimately, group 3 did not see an AI recommendation, while members of group 4 always received the AI's judgment whenever they requested it.

"With this experimental design, we were able to investigate how many study participants actually want to receive advice from the AI and whether those who request advice are fundamentally different in their behavior from those who do not," explains Alicia von Schenk.

Some Results

Humans struggle to differentiate between truthful statements and lies, as the findings of group 1 demonstrate. Without AI support, its members achieved an accuracy rate of 46.5 percent - roughly equivalent to chance. In contrast, group 2, with AI support, achieved an accuracy rate of 60.1 percent in identifying lies.

People are reluctant to accuse others of lying. In group 1, fewer than 20 percent of its members opted to do so. In group 2, where members automatically received the AI's assessment, this number increased to 30 percent. The rate was even higher for members of group 4, who requested and received a prediction: their accusation rate rose significantly to around 58 percent.

Only about a third of people request a prediction from the lie detection algorithm. Among those who do, however, there is a strong tendency to follow the algorithm's advice, with approximately 88 percent compliance. In contrast, among those who receive the AI assessment automatically, only 57 percent follow its recommendation. This discrepancy becomes more pronounced in cases where the AI labels a statement a "lie": 85 percent of those who requested the AI assessment agreed with this judgment, while among those who received the assessment automatically, only 40 percent followed this advice.

AlphaGalileo/SP

NewsGram
www.newsgram.com