There is a need for more civil conversations online. (Photo by PHILIP PACHECO/AFP via Getty Images)
SUNDER KATWALA, Director, British Future
THE potential of social media to spread toxicity has been back in the headlines this year, so how can we get the social media culture that we want?
On Friday (27), Positive Twitter Day offers one small opportunity for all of us on social platforms to do something about that – deepening the public conversation about what needs to change in order to make these online platforms a more civil place.
Positive Twitter Day has become a regular annual fixture on the last Friday in August. The simple idea is to offer a nudge to social media users to think before they tweet, as a way to promote more civil conversations online.
It is not about everybody having to agree about everything, but it can be a day to work on how we could disagree better – and perhaps to try to have a conversation, rather than a shouting match, with somebody that you don’t agree with.
I conceived of this initiative in August 2012, shortly after the London Olympics, in response to the public appetite to maintain that positive spirit, along with concern at how often the incivility of social media discourse is a barrier to doing so.
Positive Twitter Day can be a day to forge unusual alliances. The blogger Guido Fawkes was an early adopter of the Positive Twitter Day message in 2012, and has been a consistent advocate in the years since. That may help the message reach those for whom promoting a more civil online culture – at least for one day – takes more of an effort, not just those of us who seek to do so all year round.
Users should take responsibility for their contribution to the online climate, but social media platforms must do more to play their part too.
“We condemn racism in all its forms – our aim is to lead the industry in stopping such abhorrent views from being seen on our platform,” said Twitter this month, setting out what it had tried to do to stop racist abuse against England’s footballers after Euro 2020.
But a great deal needs to change for that aspiration to become a credible claim.
Twitter’s rules allow an astonishing level of racism. For example, the company confirmed to me that “black goals don’t count – no blacks in the England team” does not break its rules. Twitter’s anti-racism statements to the media are therefore clearly contradicted by its own platform rules.
Twitter did introduce new rules against “dehumanising” a faith or ethnic group. It was acting in response to New Zealand’s Christchurch mosque massacre (in March 2019), demonstrating the tragic offline consequences of online hate.
Yet Twitter’s current interpretation mainly prohibits racist metaphors – calling minorities rats or viruses – rather than extreme overt racism itself. “We must deport all blacks, Asians and Jews so that white children have a future in our country” is another example that Twitter confirmed is within its rules.
Twitter has not attempted a public defence of why it permits racism of this kind on its platform. Ministers and MPs, the FA and the Premier League, the media and NGOs must keep asking this. MPs should organise an on-the-record public committee hearing – ahead of the online harms bill this autumn – so that social media executives either defend the current rules or set out what they will change.
But stronger rules against racism would not make any difference without the capacity to uphold them. This will have to involve human beings, as well as artificial intelligence (AI). A major reason why racist users are making a mockery of Twitter so easily is that AI cannot make the intuitive leaps that real people easily can.
This is exemplified by the farcical scale of Twitter’s failure to deal with banned users. Repeat offenders – the “racist respawners” – are a major cause of the most toxic content on Twitter. Those banned around the final of Euro 2020 included 60 hardcore repeat offenders, many with dozens of previous red cards.
Yet 30 of this hardcore group had new accounts shortly after they were banned, some even using the same persona that had been banned. Twitter’s statement said it knew the identities of 99 per cent of the banned users – so why are so many back on its platform so easily?
Several simple steps would make a big difference. Adding a “previously banned user” flag to the user reporting options and cracking down on troll networks, which openly assist banned accounts to rebuild their networks, would be a start. More capacity could be unlocked by working constructively with the networks of volunteers who currently have better real-time information on these “racist respawners” than Twitter does itself.
With legislation and new regulation under consideration, British Future’s research shows strong public support for action against hatred online – from 72 per cent of both ethnic minority and white British respondents, while only seven per cent disagree.
Positive Twitter Day is a chance for users to make a contribution to better social norms online. But this must be the year when Twitter, Facebook and other major platforms step up and play their part too.