Facebook Makes New Policy to Take Violent Content Down


Accused of helping to spur violence in countries like Myanmar, Sri Lanka and India, Facebook has said it will begin removing misinformation that leads to violence and physical harm.

Currently, Facebook bans content that directly calls for violence but the new policy will cover fake news that has the potential to stir up physical harm, CNET reported late on Wednesday.

"There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down," Facebook said in a statement.

"We will begin implementing the policy during the coming months," it added.

Facebook-owned WhatsApp is facing flak in India for allowing the circulation of large numbers of irresponsible messages filled with rumours and provocation, which have led to growing instances of the lynching of innocent people.

In June, Facebook removed content that alleged Muslims in Sri Lanka were poisoning food given and sold to Buddhists.

A coalition of activists from eight countries, including India and Myanmar, in May called on Facebook to put in place a transparent and consistent approach to moderation.

In a statement, the coalition demanded civil rights and political bias audits into Facebook's role in abetting human rights abuses, spreading misinformation and manipulating democratic processes in their respective countries.

Besides India and Myanmar, the other countries that the activists represented were Bangladesh, Sri Lanka, Vietnam, the Philippines, Syria and Ethiopia.


The demands raised by the group carried weight as Facebook had come under fire for its failure to stop the deluge of hate-filled posts against the disenfranchised Rohingya Muslim minority in Myanmar.

Sri Lanka temporarily shut down Facebook earlier in 2018 after hate speech spread on the company's apps led to mob violence.

According to The Verge, Facebook will review posts that are inaccurate or misleading, and are created or shared with the intent of causing violence or physical harm.

The posts will be reviewed in partnership with local organisations in each country, including threat intelligence agencies.

"Partners are asked to verify that the posts in question are false and could contribute to imminent violence or harm," Facebook said. (IANS)

NewsGram