A senior Twitter executive has said that there was “no doubt” that certain content on social media contributes to radicalisation.
Vijaya Gadde, who oversees the company’s legal, policy, and trust and safety teams, spoke about the microblogging platform’s measures to combat radicalisation and revealed that 1.6 million terrorism-related accounts have been taken down.
“I think there is content on Twitter and every (social media) platform that contributes to radicalisation, no doubt, but I think we also have a lot of mechanisms and policies in place that we enforce very effectively that combat this,” she said.
Gadde also revealed that 90 per cent of terror-related content was detected using Twitter’s own technologies.
Elaborating on the platform’s violent extremist group policy, Gadde said over 110 such groups have been banned.
“90 plus percent of those are white supremacist or white nationalist groups, including the American Nazi Party, the Proud Boys, the KKK,” she explained at the Code Conference. “If you have any affiliation, if you claim any affiliation to those parties, you are not allowed on Twitter, period.
“You can’t have any accounts – I want to be very clear, that is our policy.”
Twitter has a troubled history with white supremacists.
In 2017, Twitter removed verifications from far-right figures after facing a backlash for offering blue check marks to an American white supremacist.
However, last month, the social network said it was examining if white supremacists should be banned outright from the platform or if it would be better to keep them on to de-radicalise them through “counter-speech and conversation.”
“Hiding things doesn’t automatically make them disappear,” Gadde said.
Meanwhile, Twitter’s product lead Kayvon Beykpour said that much of what people consider abusive on the service does not actually violate its policies, noting that what one person finds abusive may differ from what another does.
“One of the things we’ve really had to step up from a product and technology standpoint is proactively de-amplifying content that we don’t think should be amplified,” he explained.