Twitter Expands its Rules Around Hate Speech

NewsGram Desk

Twitter has expanded its rules around hate speech to include language that dehumanises people on the basis of their age, disability or disease.

Last year, the micro-blogging platform updated its 'Hateful Conduct' policy to address dehumanising speech, starting with one protected category: religious groups.

"Our primary focus is on addressing the risks of offline harm, and research shows that dehumanising language increases that risk. As a result, we expanded our rules against hateful conduct to include language that dehumanises others on the basis of religion. "Today, we are further expanding this rule to include language that dehumanizes on the basis of age, disability or disease," Twitter said in a statement on Thursday.

The addition of disease is significant as the novel coronavirus spreads across the globe and people share all kinds of content, including jokes, videos, memes and GIFs aimed at certain communities, which can hurt their sentiments.

"Tweets that break this rule pertaining to age, disease and/or disability, sent before today will need to be deleted, but will not directly result in any account suspensions because they were Tweeted before the rule was in place," said the company.

In 2018, Twitter asked for feedback to hear directly from different communities and cultures. Within two weeks, it received more than 8,000 responses from people in more than 30 countries. Across languages, people believed the proposed change could be improved by providing more details, examples of violations, and explanations for when and how context is considered.

Respondents said that "identifiable groups" was too broad, and they should be allowed to engage with political groups, hate groups, and other non-marginalised groups with this type of language. Many people wanted to "call out hate groups in any way, any time, without fear".

In other instances, people wanted to be able to refer to fans, friends and followers in endearing terms, such as "kittens" and "monsters".

"We are continuing to learn as we expand to additional categories," said Twitter, adding that it has developed a global working group of outside experts to help how it should address dehumanising speech around more complex categories like race, ethnicity and national origin. (IANS)