Facebook starts using AI to become a hostile place for extremists

Elias Hubbard
June 26, 2017

Terrorism has been a growing threat for the past few decades, and as the internet reaches the far corners of the world, it gives people a way to connect across boundaries but also gives perpetrators a way to propagate their philosophy, meet like-minded people and recruit impressionable minds. There's also the question of whether Facebook should target white nationalist and neo-Nazi movements as well as groups like ISIS and Al-Qaeda.

Bickert and Fishman emphasized that the company's position on the issue is simple: terrorism has no place on Facebook. This includes videos or images of horrendous beheadings, violent content and other disturbing material.

Over the past year Facebook has increased its team of counterterrorism experts and now has more than 150 people primarily dedicated to that role.

Meanwhile, tech firms have faced lawsuits from victims of terrorist attacks, including from family members of those who died in the 2015 San Bernardino terror attacks.

The announcement of the latest moves comes amid pressure from politicians on Facebook and other messaging platforms to take more responsibility for curbing the spread of such content, with the post underlining the company's efforts to work with Microsoft, Twitter and YouTube to do so.

Bickert said the company is taking strong new measures to sniff out fake accounts created by recidivist offenders. It is also training its systems to analyze removed propaganda in order to seek out similar posts and remove them. But any goodwill earned by that post seems to have lasted less than a day: a report revealed on Friday that a "bug" affecting more than 1,000 Facebook content moderators had inadvertently exposed some of their identities to suspected terrorists.
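Facebook has not published how its propaganda-matching works, but the idea of flagging posts that closely resemble previously removed material can be sketched with a simple text-similarity check. The snippet below is a minimal illustration using TF-IDF and cosine similarity from scikit-learn; the example corpus, threshold and function name are assumptions, not Facebook's actual system.

```python
# Minimal sketch: flag posts that closely resemble known, removed
# propaganda. Corpus, threshold and names are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_propaganda = [
    "join the fight and travel to the caliphate",
    "support the martyrs and spread the message",
]  # placeholder examples of previously removed posts

SIMILARITY_THRESHOLD = 0.8  # assumed cutoff; a real system would tune this

def looks_like_known_propaganda(post: str) -> bool:
    """Compare a new post against every known-bad post."""
    vectors = TfidfVectorizer().fit_transform(known_propaganda + [post])
    scores = cosine_similarity(vectors[-1], vectors[:-1])
    return scores.max() >= SIMILARITY_THRESHOLD
```

A production system would use far richer signals than raw text overlap, but the same pattern applies: score new content against a library of known-bad content and route high scorers for removal or human review.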

"Certainly we've seen questions about what should social media companies be doing", Monika Bickert, Facebook's director of global policy management, told CNN Tech.

Image-matching technology allows Facebook to recognize previously removed videos and images, so the same content can be blocked from ever being uploaded again.
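In practice, re-upload blocking of this kind relies on fingerprinting media and checking new uploads against a database of fingerprints from removed content. The sketch below uses an exact SHA-256 digest purely to stay self-contained; real systems use perceptual hashes that survive re-encoding and cropping, and the function names here are hypothetical.

```python
# Minimal sketch of re-upload blocking via content fingerprints.
# SHA-256 only matches byte-identical files; a deployed system would
# use a perceptual hash robust to re-encoding.
import hashlib

removed_fingerprints: set[str] = set()  # fingerprints of removed media

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register_removed(data: bytes) -> None:
    """Record removed media so future uploads of it can be blocked."""
    removed_fingerprints.add(fingerprint(data))

def allow_upload(data: bytes) -> bool:
    """Reject any upload whose fingerprint matches removed content."""
    return fingerprint(data) not in removed_fingerprints
```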

After facing criticism from European Union leaders following a string of terrorist attacks in the UK, Facebook on Thursday outlined the ways it's stepping up its efforts to curb extremist content on its social network, including its use of artificial intelligence.

To strengthen its defenses, the company says it relies on human efforts as well. Facebook is using the technology to remove content deemed to have terrorist implications by getting its systems to identify a variety of red flags. The content review team currently numbers 4,500 people, and Facebook says it will add 3,000 more staff over the next year.

To remove groups that support or advocate terrorism, Facebook uses algorithms that identify such groups and communities; a rough sketch of one possible approach follows below. UK Prime Minister Theresa May has opposed freely available end-to-end encryption in the past; such encryption makes communication essentially inaccessible to third parties and is a key feature of the Facebook-owned WhatsApp.
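Facebook has not disclosed the group-detection algorithms themselves, but one common approach is to treat accounts as a graph and flag those closely connected to known terrorist accounts. The breadth-first search below is a speculative sketch of that idea; the graph structure, hop limit and function name are all assumptions for illustration.

```python
# Speculative sketch: flag accounts within a few hops of known
# terrorist accounts in a friendship/membership graph.
from collections import deque

def related_accounts(graph: dict[str, set[str]],
                     known_bad: set[str],
                     max_hops: int = 2) -> set[str]:
    """Return accounts within max_hops of any known-bad account."""
    flagged = set(known_bad)
    frontier = deque((acct, 0) for acct in known_bad)
    while frontier:
        acct, hops = frontier.popleft()
        if hops == max_hops:
            continue  # do not expand beyond the hop limit
        for neighbor in graph.get(acct, set()):
            if neighbor not in flagged:
                flagged.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return flagged - known_bad  # newly surfaced accounts only
```

Accounts surfaced this way would presumably feed into human review rather than being removed automatically.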

Moreover, Facebook counts 2 billion users posting in more than 80 languages.
