Campaign Asia-Pacific teamed up with Forrester to understand how marketers and agency professionals are applying AI in their work now and plan to in the future. The final results feature 158 senior-level respondents across Asia-Pacific, mainly from marketing and advertising agencies and brands. The study focused on larger enterprises with over 100 employees, including multinational agencies, not-for-profit companies and central government or public sector entities.
In a three-part series, Campaign unpacks the survey's key findings and speaks to experts about the challenges of investing in, adopting and using Gen AI.
Read part two: How brands and agencies are actually using generative AI
Part Three: The challenges of adopting AI
While Gen AI was undoubtedly the biggest buzzword of last year, beyond the hype the day-to-day reality of using tools like ChatGPT, Midjourney, DALL-E and Stable Diffusion has not been without its fair share of hiccups—namely lawsuits and copyright concerns—which have taken some of the shine off this new tech marvel in recent months.
In the first part of a recent Forrester and Campaign Asia survey of more than 150 brand, advertising and agency respondents across Asia-Pacific, we found that agencies and brands alike are struggling to justify investments in AI without a clear strategic imperative in place. Part two revealed why creative optimisation may still be the primary use case for Gen AI in 2024. Part three of the survey tackles the question: What's still holding brands back from utilising AI efficiently, and what challenges do they need to overcome to harness this tool?
According to the survey findings, among brand marketers and agencies combined, the top three barriers to adoption are employee unreadiness (70%), governance and risk (69%) and the lack of a clear generative AI strategy (63%). Viewed separately, governance and risk emerges as the top barrier for brands (82%), while agencies cite employee unreadiness (71%) as theirs.
"I think the main challenge we face is education through trial and error," says Olivier Laude, chief technology officer at ThinkHQ. "Generative AI is not a silver bullet for any content creation. Being aware of its strengths and weaknesses is the first step in understanding what it can do well and what we should not use it for. We have spent a fair bit of time trialling new generative AI tools, with more or less success."
Using GenAI: Still a risky business?
It comes as little surprise that concerns around copyright or liability exposure, alongside privacy and data protection, are top of mind for brands. In fact, 82% of brands and 76% of marketing agencies who responded to the survey cited copyright and liability exposure as their chief concern.
It stands to reason: in the past year alone, a number of high-profile lawsuits have been filed for copyright infringement, chief among them The New York Times' suit against Microsoft and OpenAI (the company that makes ChatGPT) for violating copyright by using Times articles to train their models without paying for them.
The risks associated with using GenAI are still seen as plentiful, and it has been reported that as many as one in four companies have either banned or restricted the use of GenAI owing to concerns over copyright and privacy.
"Our short-term concerns would be around plagiarism, copyright infringement, misinformation, and harmful content," says Vinne Schifferstein Vidal, managing director of Made This. "The recent Taylor Swift viral photos are a good example."
Photos of singer Taylor Swift featuring explicit content began to circulate on X (formerly known as Twitter) earlier this month. Swift supporters flocked to the platform shortly after, denouncing the images as AI-generated fakes and demanding X take down the posts and ban the users who were spreading them.
Examples like the Swift incident highlight the potential dangers of GenAI tools in the hands of bad actors, and how easy it is to create and spread harmful and misleading information.
While there are some subtle differences in the concerns that brands and marketing agencies have around the use of Gen AI, overall there are more similarities than differences between the two.
"Our clients' concerns and challenges are often, if not always, our concerns and challenges too," says Laude. "First and foremost, our clients want to ensure they maintain their compliance from data privacy, data sovereignty and IP ownership. Once they tick these boxes, they are very curious about the technology and are open to using it as long as we demonstrate the value of it."
In one recent example, Laude and his team at ThinkHQ developed a chatbot for the Victorian government in Australia that helps citizens of diverse cultural backgrounds navigate day-to-day information in Victoria.
"It didn't take long for the government to see the value of providing a tool that truly assists Victorians access their content in their own language," says Laude. "We had to clear the security and data privacy endorsement but once approved, the technology was rapidly implemented, and now it truly helps Victorians with low proficiency in English, access the information they are seeking in their language through a WhatsApp chatbot available 24/7."
Marketing agencies already implementing AI policy
To ensure they are treading carefully as they continue to experiment with generative AI tools, a number of marketing agencies are implementing their own AI guiding principles.
Data from our survey shows that 31% of marketing agencies have implemented policies specific to generative AI, compared to just 18% of brands. For now, evaluating what is needed is the primary focus for brands (36%), compared with marketing agencies (24%).
"Through our hands-on exploration of Gen AI, some guiding principles have emerged," says Tim Baggott, executive creative director, Amplify. "Many agencies are rushing to integrate Gen AI tools into their workflows to improve efficiency of creative asset production, but time will tell if more, quicker, cheaper actually equals greater effectiveness and better value."
Among Amplify's guiding principles is first to recognise that Gen AI is not the idea: while it's a great creative companion, an idea shouldn't be reliant on the tool.
Baggott adds that at Amplify, none of their public-facing work will use open source datasets where IP ownership comes into question. They're also determined to ensure diversity and inclusion are considered. "We’re actively working against the inherent biases that exist with Gen AI with age, race and gender," says Baggott.
Similarly, Adam Krass, chief digital and technology officer at UM Australia, says the agency is currently working through critical considerations when it comes to using Gen AI, ethics among them.
"There is a need to tackle ethical concerns and bias in generated data. This involves ensuring the training data is unbiased and taking steps to prevent any potential misinformation to clients and consumers."
Meanwhile, Krass adds that UM Australia is currently in the process of iterating and updating its data privacy policies to encompass more of the valuable use cases that Gen AI enables.
"An important consideration is also the training and support to ensure our teams know how to use and validate the outputs which will equip them with the tools to take full advantage of the opportunity."
Measurability is still a challenge
Measurability is also proving a challenge for both brands and marketing agencies when it comes to the use of Gen AI. Our survey found that 26% of brands and 20% of marketing agencies found it difficult to measure the success of their generative AI use.
"While you can gather a lot of qualitative data regarding how employees use AI and where they find benefits, hard numbers are hard to come by," says Sebastian Dodds-Painter, AI implementation manager, Icon Agency. "Measuring ROI can become a guessing game as you weigh the costs of subscriptions and training against the increases in efficiency and higher quality of work that AI can bring to the table."
But aside from measurability, there is a general excitement and curiosity about the potential of GenAI that is shared by agencies and brands alike, along with a hope that this new technology can be collectively adopted in a way that serves to enhance, rather than replace, human creativity and is utilised for the betterment of people and society.
"There's one giant concern that we faced," says Laude, "And that's how do we, as an agency, ensure we utilise the technology solely for the good it brings to our society and our people."
"Some risks we had to understand and openly share with our teams were about the lack of cultural representation or human touch in generative AI tools, the intellectual property on the generated work, the data privacy and data sovereignty in the process, lack of accuracy and misinformation that come from AI tools on many other topics," adds Laude. "But, at the same time, we want to ensure that we don't blindly label AI as the evil, and fall behind in understanding and utilising a transformative technology that can ultimately bring a lot of good."