England lost the Euro 2020 final after a tense penalty shoot-out that ultimately handed Italy the win. Three players were targeted with online abuse simply because they were black. Marcus Rashford, Jadon Sancho and Bukayo Saka are no strangers to online abuse, especially Rashford, who has been singled out over his child food poverty campaigning. The abuse seeped into the real world, too, with a Rashford mural in Manchester being defaced.
As is so often the case with events that spark vocal racism, race was entirely irrelevant – abuse speaks volumes about the people who indulge in it, and next to nothing about their targets.
But while a minority revealing their hatefulness and idiocy was depressingly predictable, and the ensuing publicity gave the online abusers' "views" airtime, it highlighted what for many is an important issue: the need for social media companies to do more to stem the tide of online racism.
The Football Association said tech giants needed to "step up and take accountability", prime minister Boris Johnson called them to Downing Street to explain how the abuse had happened, and his government has pledged to take action via its Online Safety Bill (although Johnson might want to look at his own track record on jingoistic remarks).

And there is a broader philosophical point, as ISBA director general Phil Smith outlines: "We believe that individuals must be accountable for their actions on social media and of course advertisers are keen to identify their audiences with confidence, but any ID system would need to consider potential impacts on civil liberties and freedom of speech."
Both sides of the argument have merit. So, Campaign asked several industry figureheads: should social media users be required to prove their ID to stamp out online abuse?
Instagram, Facebook and Twitter have also promised to remove racist abuse from their platforms – Twitter said it had used machine learning to remove more than 1,000 offending tweets.
But for the likes of Tony Burnett, chief executive of Kick It Out, nowhere near enough is being done. "When it comes to online abuse, we need better regulation and enforcement and we need social media companies to be part of the solution," he said.
"The social media companies are in the best position to have an impact – they have the financial resources, the technology and the people."
For most anti-racism campaigners, the reactive, self-regulatory moves by social giants side-step an important issue: online racists are free to abuse with impunity, because for the most part they are anonymous, hiding behind an avatar and pseudonym. They don't face the consequences of their actions because they can't be identified.
Conversely, requiring people to post under their own, verified names means that those who indulge in hatred can be held to account: at the least pilloried and shamed, and at worst prosecuted or banned from football grounds.
But there is a counter-argument: online anonymity is necessary in parts of the world where social media users live under oppressive regimes that might imprison them (or worse) should they be identifiable when airing views on issues such as progressive politics, race and sexuality.
Director of agencies UK&I, Facebook
No-one should have to experience online abuse, and we don’t want it on our platforms. There are, however, real inclusivity and privacy challenges with requiring ID verification, and we need to put the problem in context. The most recent research from the Electoral Commission showed that 11 million people in the UK do not have a driving licence or passport, and that members of this group were more likely to be from disadvantaged backgrounds. That proportion would be far higher in poorer countries.
In contrast, the current prevalence of hate speech on Facebook is 0.05%-0.06%. Striking the right balance between a free and inclusive internet and protecting our global community is something we take incredibly seriously. While this is work that will never be finished, we do believe the steps we’ve recently announced on Instagram and our ongoing work with law enforcement will hold people more accountable for their actions on our platforms.
Chief executive, Engine Creative
Yes, is the simple answer. This is no longer an argument about privacy, as our social media platforms already collect a significant amount of personal data on their users. What we’re talking about is protecting people from activities that are illegal both under UK and many international laws. Racist or sexist abuse, discrimination and harassment are illegal. Inciting violence is illegal. Social media platforms should be doing everything they can to protect their users from these online harms and supporting law enforcement with the prosecution of perpetrators of illegal and harmful activities – ID verification would certainly help with the latter.
When it comes to the former, ID verification would be a step towards influencing behaviour and ensuring that users abide by the rules and T&Cs of the platforms, with requisite penalties for not doing so – lifetime bans would actually be effective. Additionally, ID verification would ensure that age verification was more effectively managed and provide greater protections for children and younger users, while also helping to mitigate against online fraud.

However, the complication occurs where some marginalised and vulnerable groups that depend on social media don’t have access to ID, such as homeless people, illegal migrants, refugees and trans people who cannot access gender-confirming documentation. The question then is around the solutions we can put in place for these groups, while still ensuring ID verification is also a requirement to protect all users. I would hope the social media platforms also address this as part of any ID verification solution.
Global corporate strategy director, Wunderman Thompson
The privilege of internet anonymity has been so abused that social media can all too often be a delivery system for hatred, malice and bullying, so I can understand the motivation behind requiring users to provide identification. But since I personally don’t trust the social media giants to use that information for genuine good, providing ID should not be a pre-requisite of social media use.
Restricting internet anonymity risks eroding the public's privacy further. Yes, one’s comments would be overtly attached to one’s identity, but so too would be one's likes, dislikes, clicks, and so on. This creates an opportunity for platforms to monetise behaviours under the guise of stamping out abuse. The onus to moderate language should fall to the social media companies, which have a responsibility to invest in ending online abuse and finally recognise the power they have in society.
Chief creative officer, Wonderhood Studios
We made an ad before Christmas for Branston. There was a scene where the lead, a white lady, kissed a mixed-race man. Within hours, the video had so many racist comments we had to scramble to switch them off. They were shockingly violent, even revealing the identity of a person on our team and racially abusing them. It left us all reeling. From memory, many of the people commenting were protected by their anonymity.

I strongly believe users should be required to prove their ID. While we have a right to freedom of speech, we don’t have a right to anonymity in that freedom of speech. Words can be weapons, and people should be held accountable for them.
I understand verifying billions of accounts isn’t easy, and social media companies will lose users not wanting to give away more data. But this has to be the consequence of creating something so unbelievably powerful. After all, it’s in everyone’s interest to make social media a safer space to communicate with audiences.
Chief executive, Digitas UK
No. While racism is abhorrent, as are many other injustices found on social media – sexism, ableism and so on – where do those who genuinely require anonymity go? Survivors of domestic abuse, whistle-blowers and many others would be shut out of social media. I think this would further alienate the parts of our society that need social media the most. The platforms have to do more: they have created a product and it’s their responsibility to police it. Racism is a societal issue and needs a multifaceted approach, but giving racists a forum to roam free legitimises it (as does our current government, in my opinion) and that has to stop.
Media and partnerships director, 20ten
In light of the racism and abuse faced by the England team following the UEFA Euro 2020 final, the likes of Facebook, Instagram, Twitter and TikTok must do more to tackle this endemic problem. For starters, requiring users to verify their identity with social media companies would solve 90% of the problem across platforms.
Knowing that their accounts are monitored by social media teams would make online abusers think twice before they act or post inappropriate comments. Offences could then be reported directly to the authorities, with severe legal consequences for the perpetrators.
Perpetrators need to be held accountable for their actions once and for all. It's time for social media companies to wake up and realise this has to stop now. The current state of monitoring, regulation and punishment is simply not enough.
Host, Eat Sleep Work Repeat; former EMEA vice-president, Twitter
Mandatory ID on social media accounts is a red herring when it comes to combating abuse – largely because access to government ID is disproportionately harder for disadvantaged groups. Poorer people are less likely to have passports or driving licences, and the consequence of making access contingent on those things might be hard for affluent readers to empathise with. You'd be denying people access to the utilities of modern life. Imagine at school, the "haves" have Instagram or TikTok, the "have nots" don't.
The truth is that stopping abuse is a little like stopping someone shouting during a minute's silence – idiots will always be heard if they want to be. However, we can mandate that social media firms do a lot more. First, social media firms should have to announce how many native English speakers are based in the UK resolving abuse issues (the native-speaker requirement matters because, in my experience, abuse is often difficult to understand when it refers to local TV shows or slang). The firms should have to say how this compares with their staffing numbers for other functions such as sales or engineering. Second, there should be a public ombudsman, paid for by the firms, who can deal with individual cases. Getting a resolution to social media abuse shouldn't depend on someone being famous: abuse affects the disadvantaged too.
Creative partner, Kookoovaya
ID stamps? Maybe. But let’s ask ourselves whether there's a wider societal failure at play. Perhaps online abuse is a reaction to the hair-trigger outrage of cancel culture? Social platforms push people to extremes of opinion and to deliberate attempts to trigger those they see as the enemy.
Social media IDs won't stop racism or misogyny nor will they improve the quality of debate around transgender rights. Our energies might be better spent tackling the root causes of this horror show. In the meantime there's a very simple solution – delete your accounts and read things that matter instead.
Student, Brixton Finishing School
It is too easy for people to spout abuse online: it takes five minutes to create a social media account and troll people anonymously. Social media platforms need to take more responsibility and stop hiding behind the guise of freedom of speech. Currently, trolls have the shield of anonymity to hide behind when writing hateful things, and it is obvious that this level of anonymity needs to be taken away. Real action can be taken if the identities behind accounts can be established, so that people can be held responsible for their actions in the real world. Social media platforms should make identity verification a requirement for creating an account; that way, individuals would be more cautious about what they post online, knowing their reputation is on the line.