Unless you spent 2017 hiding under a rock, you will be aware of an emerging media and political narrative centered on Facebook, Twitter and Google as enablers of bad things.
Those bad things include:
- The use of their platforms for the publication and dissemination of offensive and / or illegal content;
- A lack of controls that allowed foreign or other hostile actors to use their advertising systems to deliver messages that may have affected the 2016 US presidential election and other polls in the UK and France;
- The lack of foresight in the allowable use of targeting tools which created opportunities for abuse (in theory at least) by assorted bad actors;
- Revelations of fake accounts, and of manipulation by their operators to spread the stories now known as "alternative facts".
The platforms are not exactly short of highly motivated negative commentators:
- The platforms are rich and (broadly) unencumbered by the cost of producing content. This makes them an easy target for content companies.
- A suggestion that the AI or machine learning deployed by the platforms works fabulously for minting money but less fabulously for solving these problems. This makes them an easy target for those less awash in money.
- The trans-border status of the companies concerned, and the complexity of creating any satisfactory governance or regulation, which makes them an easy target for anyone who believes they have the power and jurisdiction to regulate.
For argument’s sake, let’s say that the concerns are legitimate. If many governments and media outlets are to be believed (despite their own economy with, or selective use of, the truth), the platforms do represent a moral hazard. Believing that to be true, some argue that advertisers may consider themselves culpable, as the money they spend enables the hazard. After all, there’s no real durable economic model in the bad stuff; the good guys keep the platforms going.
So what do advertisers do? One thing they can all agree on is to keep far far away from juxtaposition with offensive content. Terrorists, trolls and pornographers are all to be categorised and filtered out of sight and mind.
So far, however, only YouTube has been labelled sufficiently high risk to provoke high-profile, if temporary, withdrawals from the platform as a whole. Even then, the decisions were made largely through fear of atrocious juxtaposition rather than any broader fear of association with funding moral hazard.
The unexpected and highly unfortunate consequence of the "brand safety" debate is that platforms and advertisers alike have reduced (by algorithm or policy) investments in much digitally delivered news media which helps no one other than troll-laden deceivers.
Should we be surprised? Not really. The sellers and buyers of advertising have proven themselves remarkably adept at avoiding contagion on the basis of principle. The platforms vehemently oppose the suppression of content; egregiousness is in the eye of the beholder.
Equally, no advertiser wants to be seen next to Islamophobic rants in the UK Daily Mail, yet few will reject the Mail outright. Many in the US were queasy about MTV’s Jersey Shore but not about MTV. More recently, after the Ailes and O’Reilly allegations, some advertisers fled Fox News for a time but none (or at least none reported) left Fox any more than they fled News Corporation after the News of the World phone-hacking scandal. It goes further: brands did not reject fashion publishers in the era of "heroin chic" and more recently remained untouched by the issue of retouching and body image.
The view through the other end of the telescope is every bit as equivocal. Every media outlet lambasted Volkswagen and Wells Fargo over their scandals (apologies for just picking two where the malfeasance is clear) but none refused to carry their advertising – after all, one corporation’s mea culpa is another’s profit.
This leads to three observations. First, the old adages are true: a principle is often only a principle when it costs you money, and taking the moral high ground once can leave you exposed to accusations of hypocrisy long after the current hazard has passed. Second, there is peril at the intersection of public proclamations of purpose and the pursuit of profit, for platforms, publishers and advertisers alike. Finally, it may be wise to err on the side of caution before assuming that either your corporate mission or its profitability buys a pulpit to preach from.
In defense of the platforms: the same tools that enabled the good guys to do good things, like sell more or be read more, also let the bad guys do bad things. Technology has always enabled altered behaviour, and responsibility for the remedy has always been shared. The question all the commercial actors need to ask is, "are we (not you) doing enough?" The question regulators need to ask is, "are the platforms responsible for these issues, or do they merely mirror issues that bedevil us all?"
Rob Norman is a senior advisor to Group M and was previously the media agency group's chief digital officer.