Election Coverage Hard

In the run-up to Pakistan's general elections this month, the country's 128 million voters were exposed to disinformation, including through artificial intelligence and deepfake videos.
Aside from a much-talked-about AI video of former Prime Minister Imran Khan claiming victory — a case in which the jailed leader had given his consent — there were cases of fake videos being used to spread disinformation.
In several instances, deepfake videos were shared online, including three in which PTI-backed candidates appeared to announce boycotts of the vote.
In one of these fake videos, Khan said his party would boycott the vote. False videos also circulated of former Punjab Law Minister Basharat Raja and of PTI-backed candidate Rehana Dar, who contested the election from Sialkot.
Nearly all had to respond by putting out statements denying — and denouncing — the false claims.
Threat to integrity of process
Sadaf Khan, co-founder of the Pakistan policy research group Media Matters for Democracy, said AI and deepfakes are particularly dangerous in elections.
When deepfakes, especially of a leader like Khan, are released, "there is a much higher chance of the public being duped," she said. This creates an environment in which people "are much more likely — where hordes of people, essentially — are much more likely to make voting decisions based on information that is not true," Sadaf Khan said.
When that happens, it "brings the integrity of the electoral process and the quality of democracy into question," she said.
In Pakistan's case, the government's decision to cut internet access made it harder for media to debunk the false narratives, one journalist told VOA.
Pakistan's Interior Ministry defended the shutdown, saying that incidents of terrorism made the move necessary. But journalists and others said the shutdown hindered access to critical information, including polling station locations.
And journalists said they were unable to promptly report any suspected irregularities.
The Election Commission of Pakistan did not respond to VOA's request for comment. However, the commission instructed the Pakistan Electronic Media Regulatory Authority to ensure adherence to the code of conduct for national media.
The Free and Fair Election Network, or FAFEN, said AI and deepfakes can affect voters.
The Pakistan-based civil society group works to strengthen democracy by observing and monitoring election processes.
"The impact of social media itself is an evolving phenomenon that the world is struggling to understand and respond to," FAFEN spokesperson Salahudin Safdar told VOA. "The AI and deepfake are further complicating this phenomenon.
"It is a challenge for election management bodies to tackle these issues as well as for election observer groups to systematically observe them," Safdar said.
Islamabad-based freelance journalist Asad Toor covered the February 8 elections. He said he believed AI-generated videos played a role in PTI's strong showing, influencing voter turnout in favor of Khan's candidates.
"Earlier we were skeptical about impact of these videos, but a big turnout of PTI voters on appeal of their jailed leader Imran Khan's call through these AI videos turned the tide," Toor said.
Positive possibilities
Enrico Pontelli, dean of the College of Arts and Letters at New Mexico State University, said that in some instances AI can have a positive effect.
"[AI] can provide a voice to those who are not in a position to speak, by breaking barriers and bringing such messages to the masses," Pontelli told VOA via email. "AI has also the potential of detecting artificial content, facilitating the detection of AI-generated content and contributing to fact-checking news stories."
On the negative side, however, the technology is "capable of generating on demand, fake content and adapt[ing] its presentation to what each individual 'expects' to see," Pontelli said.
Sadaf Khan, of Media Matters for Democracy, said she didn't think Pakistan's legal framework was able to deal properly with these issues.
"There is no specific criminalization of AI-related content, but if an abuse of AI is being done to defame somebody, then you have multiple legal structures that can be used to initiate defamation proceedings," she said.
For Pontelli, the solution could come from AI.
"The field of AI has started developing effective solutions for AI detection," Pontelli said. "This would provide one avenue to respond to the risk of AI-generated fake content." VOA/SP