Facebook labeled 180 million posts for election misinformation

Joanna Estrada
November 21, 2020

Guy Rosen, Facebook's Vice President of Integrity, shared the figures with reporters in a telephone call on Thursday, 19 November. Out of every 10,000 views of content on Facebook, 10 to 11 included hate speech. The number of posts deleted for promoting online hate groups rose from 1.6m to 4m, while the number of violent or graphic posts actioned fell from 34.8m to 19.2m. Some content, such as explicit child imagery, legally has to be reviewed in a secure environment; in those cases, Facebook either attempted to use its artificial intelligence to weed out the material or asked some moderators to work from offices in places such as Austin, Texas.

Rosen also said that, between March and October, the company removed 12 million pieces of content for spreading harmful misinformation about the coronavirus.

An open letter from the company's content moderators, meanwhile, gives a fascinating behind-the-scenes glimpse into what is happening at Facebook - and all is not well.

The release comes with Facebook under rising pressure from governments and activists to crack down on hateful and abusive content while keeping its platform open to divergent viewpoints. The latest figures mark a sharp improvement: when the company first began reporting on hate speech, only 23.6% of the material it removed was found before a user reported it.

In their open letter, the moderators demand hazard pay, more flexible working arrangements and access to mental health services from Facebook and from the companies that employ them as subcontractors on Facebook's behalf. "Facebook's algorithms are years away from achieving the necessary level of sophistication to moderate content automatically", they write. In the course of that work, moderators can be confronted with the worst of the internet: imagery of child abuse, sexual violence, graphic violence, animal abuse and suicide.

The AI, they say, was not up to the job.

Rosen said Facebook chose this prevalence metric as a gauge of the health of the platform because "a small amount of content can go viral and get a lot of distribution".

"The lesson is clear. They may never get there".

"Controversy" is Facebook's middle name, and even when the company tries to do something good, someone somewhere rises and raises questions about its actions.

The company said it has invested billions of dollars in people and technology to enforce these rules, and that it now has more than 35,000 people working on safety and security.

However, some believe that Facebook is still some way off having an effective system.

"You have previously said content moderation can not be performed remotely for security reasons", they wrote. But Facebook offers users the ability to choose from over a 100 languages, and has more than 70 percent of its users based in Asia Pacific and what it calls the "rest of world".

Over the course of Q3, Facebook said it took action on 22.1m pieces of hate speech content, 95pc of which was spotted by the company before it was reported by a user.

In India, for example, Facebook has reportedly allowed Islamophobic speech to remain on its platform to avoid upsetting the ruling nationalist party, a move critics have described in stark terms: "at best, Facebook is complicit through inaction, and at worst it shows outright deference to violent ethno-nationalist forces in the region". Besides the "psychologically toxic environment", content moderators faced rising targets during the pandemic with little or no additional support.
