Facebook Put 50 Million Warning Labels on Misleading COVID-19 Posts in April

NewsGram Desk

Amid mounting pressure to curb fake news related to COVID-19, Facebook has said that it put warning labels on about 50 million pieces of pandemic-related content during the month of April.

The content was flagged based on around 7,500 articles by Facebook's independent fact-checking partners, the social networking giant said on Tuesday.

Facebook said it has also removed more than 2.5 million pieces of content for attempting to sell masks, hand sanitizers, surface disinfecting wipes and COVID-19 test kits since March 1.

The social networking giant said artificial intelligence (AI) is a crucial tool for preventing the spread of misinformation, because it allows the company to leverage and scale the work of the independent fact-checkers who review content on its services.

Facebook works with over 60 fact-checking organizations around the world that review content in more than 50 languages.

"Since the pandemic began, we've used our current AI systems and deployed new ones to take COVID-19-related material our fact-checking partners have flagged as misinformation and then detect copies when someone tries to share them," Facebook said in a blog post.

"But these are difficult challenges, and our tools are far from perfect," it added.

In its fifth Community Standards Enforcement Report (covering October 2019 through March 2020), released on Tuesday, Facebook detailed how its automated tools and technologies are curbing hate speech, adult nudity and sexual activity, violent and graphic content, and bullying and harassment on its main platform as well as on Instagram.

Action taken on content related to adult nudity and sexual activity on Facebook increased from 30.3 million pieces of content in Q3 2019 to 38.9 million in Q4 2019, primarily driven by a few pieces of violating content that were shared widely in October and November.

In Q1 2020, the figure rose further to 39.5 million. Facebook said this was due to improvements in its technology for detecting and removing content that is identical or near-identical to existing violations in its database.

Action taken on bullying and harassment related content on Facebook decreased from 2.8 million pieces of content in Q4 2019 to 2.3 million in Q1 2020, a drop Facebook attributed to its reduced and remote content-review workforce in late March as a result of COVID-19. (IANS)
