It’s Monday morning, three days after the horrific mosque attacks in New Zealand that left 50 dead, and within seconds of searching Twitter I have found a link to the chilling livestream posted by alleged perpetrator Brenton Tarrant, footage that circulated the globe within minutes.
I find this link despite strong claims from Twitter, Facebook and YouTube that they are working tirelessly to take down any footage of the atrocity being shared across their platforms. Facebook removed 1.5 million copies in 24 hours, it said, and reportedly took down the original livestream, hosted on Facebook Live, shortly after it was posted.
In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload...— Facebook Newsroom (@fbnewsroom) March 17, 2019
The media has been full of raging debate about the role of social media companies in stopping the spread of extremist content. Politicians and the general public are asking them to take more responsibility, but the sad reality of the situation was laid out by Australian prime minister Scott Morrison.
Australian PM Scott Morrison: Social media companies cooperated with authorities over the last 48 hours, so I'm making no comment against their willingness to cooperate. But I sadly have to say that the capacity to actually assist fully is very limited on the technology side.— Richard James (@richjamesuk) March 17, 2019
No one disputes the complexity of tackling such videos once they’re on the web. Most AI monitoring technology struggles with doctored footage, or with the simple workaround of using a mobile phone to record the footage playing in a web browser. Yet it is jarring to hear these social media giants insist how hard such vile content is to police, while at the same time learning that Google is capable of taking down more than 6 million ‘bad’ digital adverts every day.
Of course there are big differences between advertising and user-generated content. But it gives one pause that the former, the keystone of Google and Facebook’s multi-billion-dollar revenue model, seems to be making serious headway against bad actors. Is the same level of attention being given to the latter? It is telling, for example, how quickly YouTube manages to take down user-generated sports videos, such as an English Premier League or NBA match, that breach media rights agreements.
The fact is that videos such as Tarrant’s are downloaded and reposted on fringe forums outside Facebook or Google’s jurisdiction, making them virtually impossible to erase. It must also be noted that the social media platforms have been increasingly successful in policing Islamist extremist content, in conjunction with relevant security authorities. Perhaps most overlooked, in the rush to point fingers at social media platforms, is examining who is sharing these videos, and why. That is a much darker and bigger societal question, of which social media is only one part.
But at the heart of the issue is that these platforms are as successful as they are precisely because they are engineered for mass reach, the better for advertisers to get their products in front of billions of consumers in just a few clicks. The entire ecosystem is designed so people can use simple tools to upload whatever they choose, to generate engagement, to be distributed widely, and ultimately advertised against. To some extent, Google, Facebook and Twitter would need to break the digital model they have built to truly stop such acts from being shared so easily.
"...discussions that have to be had about how these facilities and capabilities as they exist on social media, can continue to be offered, where there can't be the assurances given at a technology level, that once these images get out there, it is very difficult to prevent them."— Richard James (@richjamesuk) March 17, 2019
These companies have spent many years, and millions of dollars, lobbying governments and insisting that they can handle oversight of this critical issue; they just need more time. But how much more time can they claim after Friday’s tragic events in Christchurch, and the various examples before it? When does the much-feared ‘R’ word, regulation, finally become the only viable option?
For years the mantra embedded in Silicon Valley has been ‘move fast and break things’. There’s a nagging, and rather hubristic, irony in then telling the rest of the world to slow down before doing anything as drastic as regulation. It is time for the platforms to move much faster.
Faaez Samadi is Southeast Asia editor of Campaign Asia-Pacific. As always, we welcome your reactions. Contact us on Twitter @CampaignAsia, on Facebook, via our feedback form or directly. You can follow Faaez on Twitter @faaezsamadi