Technology
Facebook removed 26 mn pieces of terror-related content in 2 years
San Francisco, Sep 18
Facebook removed more than 26 million pieces of content related to global terrorist groups like Islamic State (IS) and Al Qaeda in the last two years, 99 per cent of which were proactively identified and removed before anyone reported it to the company.
"We have identified a wide range of groups as terrorist organizations based on their behaviour, not their ideologies, and we do not allow them to have a presence on our services," Facebook said in a statement on Tuesday.
The social networking platform said it has banned more than 200 white supremacist organisations from its platform.
"We use a combination of AI and human expertise to remove content praising or supporting these organizations," said the company.
Facebook said some of these changes predate the tragic terrorist attack in Christchurch, New Zealand, but that attack, and the global response to it in the form of the Christchurch Call to Action, have strongly influenced the recent updates to its policies and their enforcement.
"The attack demonstrated the misuse of technology to spread radical expressions of hate, and highlighted where we needed to improve detection and enforcement against violent extremist content," the company noted.
Facebook will further detail how it is enforcing its policies against terrorist organisations in the fourth edition of its "Community Standards Enforcement Report" in November.
Facebook has also co-developed a nine-point industry plan in partnership with Microsoft, Twitter, Google and Amazon, which outlines the steps it is taking to address the abuse of technology to spread terrorist content.
"We'll need to continue to iterate on our tactics because we know bad actors will continue to change theirs, but we think these are important steps in improving our detection abilities," said the company.
The video of the attack in Christchurch did not trigger Facebook's automatic detection systems because the company "did not have enough content depicting first-person footage of violent events" to effectively train its machine learning technology.
"That's why we're working with government and law enforcement officials in the US and UK to obtain camera footage from their firearms training programs - providing a valuable source of data to train our systems."