Rafe Blandford
Apr 16, 2019

The age of synthetic content raises the trust bar yet again

David Beckham's 'Malaria No More' campaign has raised fresh concerns around synthetic content. How can brands navigate this landscape?

The new Malaria No More campaign, featuring David Beckham speaking in nine languages about the need to stamp out the disease, was a strong example of technology’s power to do good in the world.

Yet there’s deep unease in some quarters about the automation that enabled the charity to synthesise Beckham’s voice. He isn’t, after all, fluent in all of the languages featured in the film.

This is the latest high-profile example of so-called "deepfake" technology (in this case provided by UK company Synthesia), enabling a brand or organisation to replicate reality. Politicians on both sides of the Atlantic have warned about the potential damage that could ensue if it’s deployed as a propaganda weapon. And yet, despite all the ethical and legal issues that will undoubtedly arise, the creative potential is strong for advertisers and their agencies.

There’s no doubt in my mind that 2019 is the year of synthetic content, but what are the responsibilities for brands when it comes to the use of this emerging technology – are they in danger of perpetuating the cynicism that surrounds fake news?

First, a bit of context. Synthetic content is obviously not new. After all, we’ve talked about "Photoshopping" images for decades and it has been used to satisfying effect in Hollywood films – from Brad Pitt’s ageing-process-in-reverse in The Curious Case of Benjamin Button to Carrie Fisher’s youthful appearance in Rogue One: A Star Wars Story. But its adoption has accelerated, allowing developers to burst through "the uncanny valley" and create convincing synthetic images, video and audio for far less money, and with much reduced effort, than before.

That’s because we’re in a technology race that has seen the emergence of generative adversarial networks: machine-learning systems in which two neural networks compete, one acting as the creator and the other as the discriminator that tries to tell its output apart from real examples. Each round of competition sharpens the result until it is detailed and convincing. The networks are loosely inspired by the human brain, but they work at a speed and scale no human could match.
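To make that adversarial loop concrete, here is a minimal sketch in PyTorch. It is my own illustration rather than the architecture behind any of the tools mentioned here: the generator learns to mimic a toy one-dimensional distribution, whereas production image and audio GANs wrap far larger networks around the same two-player training loop.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator learns to mimic samples from a 1-D Gaussian.
# A deliberately minimal sketch; real image/audio GANs use much larger
# convolutional networks, but the adversarial loop is identical.

latent_dim = 8

generator = nn.Sequential(           # the "creator"
    nn.Linear(latent_dim, 16), nn.ReLU(),
    nn.Linear(16, 1),
)
discriminator = nn.Sequential(       # the "discriminator"
    nn.Linear(1, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),  # probability the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data: N(4, 1.5)
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # 1. Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```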

The resulting improvements in the quality and delivery times of simulated audio, video, sound and text are remarkable. In audio, for instance, advances from the likes of Lyrebird and Baidu Deep Voice mean creators can build highly realistic and extensive simulations of a human’s voice from just two to three minutes of captured audio (the previous standard was 30 minutes or so).
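From a creator’s point of view, the workflow amounts to a few lines. The sketch below is purely hypothetical: the voicekit package, the VoiceModel class and its methods are invented names for illustration and do not correspond to Lyrebird’s, Baidu’s or any real product’s API.

```python
# Hypothetical voice-cloning workflow, for illustration only.
# `voicekit`, `VoiceModel` and these method names are invented;
# they do not correspond to any real library or product.
from voicekit import VoiceModel

# Build a speaker model from a short reference recording,
# the "two to three minutes of captured audio" described above.
model = VoiceModel.from_reference("talent_sample_3min.wav")

# Synthesise a new line in the captured voice, in another language,
# as the Malaria No More film did across nine languages.
audio = model.synthesise(
    text="Juntos podemos acabar con la malaria.",  # "Together we can end malaria."
    language="es",
)
audio.save("campaign_line_es.wav")
```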

For advertisers, the applications are extensive. If you’ve got Dame Judi Dench voicing your commercial, you don’t need her in a studio for hours; you can capture a sample in a few minutes and then create and develop the script as you wish.

The scope will widen along with the growth of computational creative, enabling brands to issue personalised, machine-created messages from celebrities or brand mascots to individuals, and to create sonic branding and voiceovers much more easily and consistently.

Issue of trust

However, maintaining trust will be vital. The now-general assumption that a great deal of the news and content around us is fake presents a real challenge for marketers. And when it is difficult for people to tell whether something is real, trusted platforms only grow in importance: advertisers using synthetic content need to own, or have access to, secure spaces in which to connect and talk with their audiences.

And the rise of this content will only amplify the value of a strong brand. Brands that are trusted, on channels that are trusted, will be the most powerful in a future where the uncanny valley has truly closed and humans can no longer spot the difference between "real" and "synthetic" content.

That’s when brands must embrace one of their most important responsibilities: acknowledging the difference between entertainment and manipulation when using synthetic content. No-one will mind if an advertiser recreates reality to entertain an audience or give them an experience to enjoy. But manoeuvring people into parting with money or signing up for a service is a different matter. Then you have to ask: will people be comfortable interacting with a synthetic creation?

The answer’s already out there. It’s about being honest and upfront. We’ve seen it with Google’s Duplex assistant, which can make calls on a user’s behalf to book appointments or order products. Its initial launch provoked an ethical storm before Google announced that it would identify itself as artificial intelligence when contacting a person on the phone.

Beyond the obvious legal and anti-fraud concerns that companies are likely to face, it is the ethical issues that marketers will wrestle with most, and the challenge will only grow as we enter an arms race set to intensify over the next decade. But the brands that have built the highest levels of trust will have the greatest licence to push the boundaries of synthetic content.


Rafe Blandford is chief product officer at Digitas UK

Source: Campaign UK
