Lindsey Clay
Feb 4, 2024

Is there an acceptable human cost of doing business?

We might not be able to fix the internet but we can do more to help online advertising – can’t we?

Mark Zuckerberg, chief executive of Meta, testifies before the Senate Judiciary Committee (©GettyImages)

"Blood on your hands". That’s the chilling accusation made this week (31 January) against Mark Zuckerberg and other social media bosses at a hearing of the US Senate Judiciary Committee.

They were examining the inadequate online protection of children – from enabling sexual predators to promoting unrealistic beauty standards.

That it has come to this, that such an accusation can even be made, supported by evidence, is astonishing.

The hearing followed another woeful incident online. Like most of you, I hope, I was horrified by the Taylor Swift nude deepfake scandal. The fact it can happen, the fact it can spread, and the fact it continued spreading even after it was discovered and denounced.

And, in this election year, we have every reason to fear a tidal wave of misleading deepfakes online attempting to warp political debate and outcomes. It’s ugly, it’s damaging, it’s dangerous. Some of it will hit society’s shores in advertising.

While I get that the zillion hours of user-generated content uploaded for free to open platforms are very hard to pre-vet, advertising is different, more straightforward. We might not be able to fix the internet, but we could certainly do more to help online advertising – can’t we?

If human specialists were used to pre-clear all ads before they appeared, as they are in other media, then the scam, fake, illegal, harmful, or misleading ads that continue to see the light of day online would begin to evaporate.

People and businesses pay for advertising space. So why not charge more to cover the cost of rigorous clearance, accept lower profits, or abandon the advertising-funded business model altogether?

The automated ad reviewing systems using AI and machine learning that tech giants employ are impressive and clever beyond my comprehension. They catch a lot of the bad. But, as is frequently shown, they don’t catch all of it and there’s no suggestion they ever will.

So we have a choice. Advocate for a proper clearance system, like Clearcast – essentially an upstream Advertising Standards Authority – or accept that platforms that choose automation are, in effect, allowed to show some illegal, scam, or misleading ads. Just live with it as acceptable collateral damage.

I appreciate that proper ad clearance would affect the business models and profits of companies that currently choose automation.

But, as the tech giants make significant profits, it wouldn’t bankrupt them to be more responsible. A cost to them; a boon to society and their reputations (and advertising’s reputation generally; we’re an industry suffering from an embarrassing deficit of trust).

And, to be blunt, cost shouldn’t be an issue anyway. Principles should cost something. If cost is an issue, then it suggests a (knowingly) flawed business model. No company has an innate right to make money while knowingly and repeatedly causing social damage.

I know the argument against: they’ll say they do clear their ads. They invest considerably in AI and machine learning technologies to automate the review process. Human reviewers are also employed – lots of them – to handle complex cases. And they remove ads when they become aware they fall short of their standards.

Plus they’ll say there are just too many ads to manually process and it’s all happening in real time, allowing advertisers to tweak campaigns/creative. Too much is happening too quickly. Automation is the only answer.

If one of our industry’s goals is to eradicate harmful or illegal advertising then system changes have to happen upstream before any ads are seen. Removal can, by definition, only happen after some damage has been done.

How much collateral damage is acceptable in a business model? When do you accept a business model needs fixing? Where do you draw the line on what is or isn’t your responsibility as a business?

Automation benefits much in life – the precision of robotic surgery in delicate procedures, for example. But where interpretation and nuance are involved, where there is potential criminality and social harm – and where money is changing hands – step forward the trained humans.

You can have a thorough ad clearance process or a convenient but flawed one; you can’t really have both.


Lindsey Clay is the chief executive of Thinkbox

Source: Campaign UK
