Facebook admits 4% of accounts were fake

Joanna Estrada
May 15, 2018

Facebook said in a written report that of every 10,000 pieces of content viewed in the first quarter, an estimated 22 to 27 pieces contained graphic violence, up from an estimated 16 to 19 in the final quarter of last year. "For years, we've had Community Standards that explain what stays up and what comes down," the company said.

The figure represents between 0.22 and 0.27 percent of the total content viewed by Facebook's more than two billion users from January through March. Rosen added that the reviewers will speak 50 languages in order to be able to understand as much context as possible about content since, in many cases, context is everything in determining if something is, say, a racial epithet aimed at someone, or a self-referential comment. "It's why we're investing heavily in more people and better technology to make Facebook safer for everyone".

Improved technology also helped Facebook take action against 1.9 million posts containing terrorist propaganda, a 73 percent increase. It said the rise was due to improvements in detection.

Facebook also increased the amount of content taken down using new AI-based tools, which find and moderate posts without needing individual users to flag them as suspicious.

Facebook only recently developed the metrics as a way to measure its progress, and would probably change them over time, said Guy Rosen, its vice president of product management.

Adult nudity and sexual activity: Facebook says 0.07% to 0.09% of views contained such content in Q1, up from 0.06% to 0.08% in Q4.

"We took down 21 million pieces of adult nudity or porn in Q1 2018, 96 percent of which was found and flagged by our technology before it was reported", said Rosen. During the first quarter, Facebook found and flagged just 38% of such content before it was reported, by far the lowest of the six content types.

"We're often asked how we decide what's allowed on Facebook - and how much bad stuff is out there," the company wrote.

However, it said that most of the 583 million fake accounts were disabled "within minutes of registration" and that it blocks "millions of fake accounts" from registering every day.

Facebook removed or placed warnings on almost 30 million posts containing sexual or violent images, terrorist propaganda or hate speech in the first three months of 2018, the social media giant said Tuesday, May 15.

Facebook shares slid as much as 2% Tuesday morning after it announced it had disabled 583 million fake accounts over the last three months.

The company said in the first quarter it took action on 837 million pieces of content for spam, 21 million pieces of content for adult nudity or sexual activity and 1.9 million for promoting terrorism.
