
Facebook to Tackle the Problem of Manipulated Media on its Platform

NewsGram Desk

Alarmed at the growing spread of forged or "deepfake" videos, Facebook has announced tough policies against manipulated media on its platform.

The company said that going forward, it will remove misleading manipulated media if it has been edited or synthesized beyond adjustments for clarity or quality "in ways that aren't apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say".

"If it is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words," Monika Bickert, Vice President, Global Policy Management, said in a statement on Monday.

"Deepfakes" are video forgeries that make people appear to be saying things they never did, like the popular forged videos of Zuckerberg and that of US House Speaker Nancy Pelosi that went viral last year.

Facebook said it is driving conversations with more than 50 global experts with technical, policy, media, legal, civic and academic backgrounds to inform its policy development and improve the science of detecting manipulated media.


"Consistent with our existing policies, audio, photos or videos, whether a deepfake or not, will be removed from Facebook if they violate any of our other Community Standards including those governing nudity, graphic violence, voter suppression and hate speech," said Bickert.

Videos that don't meet these standards for removal are still eligible for review by one of Facebook's independent third-party fact-checkers, which include over 50 partners worldwide fact-checking in over 40 languages.

"If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it's being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it's false," said Facebook.

The social media platform said it has partnered with Reuters to help newsrooms worldwide identify deepfakes and manipulated media through a free online training course.

"News organizations increasingly rely on third parties for large volumes of images and video, and identifying manipulated visuals is a significant challenge. This programme aims to support newsrooms trying to do this work," said Facebook. (IANS)
