AI didn’t just survive the hype cycle. It won.
While the metaverse quietly packed its bags and NFTs became dinner-party punchlines, AI embedded itself in the industry’s daily operating system. It now touches everything from how ads are made and media is bought to how clients expect answers, how fast work moves, and even how junior talent is trained.
Yet, beyond the hype, signs indicate AI isn’t delivering on its promise. A new MIT report found that 95% of generative AI pilot projects in enterprises fail to deliver measurable profit impact. Meanwhile, several companies that replaced human workers with AI are now rehiring humans after realising AI alone cannot effectively meet all operational needs. A recent survey found 55% of companies regret AI-driven layoffs and are moving toward hybrid models combining AI and human skills to achieve better outcomes.
The cracks, then, are big and clear. Jumping on AI without a defined strategy, the right partnerships, and an understanding of where AI creates real value can lead to disappointing results.
"One of the biggest reasons so many AI projects stall is that they’re often launched in a rush to 'do something with AI' rather than to solve a clearly defined business problem," says Jon Finnie, vice president, international sales, Yahoo. "Without clearly defined goals, generative AI pilots get stuck in proof-of-concept mode, disconnected from the real levers of revenue or cost."
Geoffrey Colon, founding partner at Visionaries, says that while many point to failure rates as technical failures, the failures are often due to a lack of alignment with business value.
"They are thinking about how to be fast rather than how to be good. The approach is very much one like a spreadsheet exercise instead of a dirty science experiment. Many don't set up a hypothesis to prove or disprove, they are poorly integrated or executed with the business workflow or technical stack, and they overemphasise AI theatre outcomes over true learnings and evolution,” says Colon.
Another common pitfall is the build-versus-buy dilemma. Smaller organisations often attempt to build proprietary AI tools without the necessary expertise or resources.
"Don't build your own damn model or platform when going out of the gate in an area that is so experimental and fluid," says Colon. "Did companies build their own first websites back in the day? No, they partnered. So why are they not doing that this time around? The illusion of control. Those who have succeeded have partnered with a software provider who can build models at a faster rate and white label if it's worth the larger investment to do something in-house over the long tail."
How should agencies develop clearer AI strategies?
A clear AI strategy isn’t about using every new tool that comes along; it’s about deciding where AI can genuinely move the needle.
"The agencies that get this right usually start with a small set of use cases that link directly to business priorities, so they know which ones to scale and which to treat as experiments," says Finnie. "Just as important is having simple rules in place for data quality, measurement, and when to stop or double down on a project."
Ori Gold, CEO & co-founder, Bench Media, believes that a strong AI strategy starts with the customer journey, not the hype.
"The question is often: Where does AI add speed, relevance or efficiency in a way that customers actually value? In our research, three in four Australians already use AI when making purchase decisions, and almost one in five do it daily,” says Gold. “That is a clear signal. Agencies should focus AI on the moments where consumers already see utility and build trust through transparent design. When AI is treated as an experiment for its own sake, it rarely delivers."
Conversely, some believe the best strategy is not to lock into a strategy at the moment.
"AI is a fascinating tool, but it evolves so quickly that the moment you lock yourself into thinking about it one way, you’re probably missing opportunities in another," says Paul Nagy, chief creative officer, VML, Asia-Pacific. "We’re trying to create a culture where we embrace the genuine usefulness of the tool now, and leave the strategy open enough to take advantage of what it will be able to do tomorrow. This mindset ensures we’re always ready to take advantage of innovation with purpose."
Andrew Gallagher, head of creative and media effectiveness, Singapore, Kantar, says that to unlock real value, media and creative agencies need to move beyond experimentation for their own sake and establish disciplined frameworks for adoption.
He says the most successful examples follow three clear steps. First, define simple, measurable success factors for each AI initiative. Second, secure organisational alignment and sponsorship from senior leadership so pilots can scale into standard practice. Third, benchmark against similar experiments to avoid operating in isolation and to gain perspective on progress.
Above all, developing a clearer AI strategy, one that deploys AI where it creates real value rather than as an experimental buzzword, is about ruthless clarity. Agencies should identify the few places where AI can deliver a tangible edge, whether that’s cutting copy turnaround from days to hours, optimising paid spend in real time, or mining cultural signals faster than competitors.
"Pick your areas to introduce AI and stick to being really great at them. Everything else is noise," says Anton Reyniers, head of strategy at We Are Social Singapore. "A good AI strategy doesn’t showcase tools where agencies tout they’re using this or that; it makes them invisible. The test is simple: if you removed the AI badge, would the work still be stronger, cheaper, or faster? If not, it’s just jargon."
‘AI can build a thousand roads, but it still needs humans to decide which one is worth travelling’
We've heard repeatedly about AI's productivity and cost-saving gains. But while AI is powerful on scale, speed and efficiency, it falls short on human skills such as critical thinking, judgment, creativity and problem-solving.
"These core skills are almost impossible for algorithms to mirror," says Sima Saadat, senior director of international marketing and Singapore country manager, General Assembly. These are the skills that allow marketers to understand and connect with audiences, solve complex challenges that require empathy or nuance, and deliver that experiential factor in some of our work."
A telling example of AI's weakness with context, nuance, and originality: ask ChatGPT to pick a number between 1 and 10, and it will most often say 7. That’s predictive modelling at work, not the beauty of creative chaos.
"It can remix what exists, but it can’t originate the cultural spark that makes ideas really capture the human imagination," says Reyniers. "It struggles with ambiguity, irony, and the messy contradictions that make humans human. In strategy and creativity, those weaknesses matter: AI can build a thousand roads, but it still needs humans to decide which one is worth travelling.
‘As the AI content supply explodes, sameness becomes the enemy’
AI slop, low-to-mid-quality AI-generated content, is undoubtedly on the rise and now dominates much of the web. A large-scale analysis by Ahrefs found that 74.2% of newly published English-language web pages in April 2025 contained AI-generated content, leaving only about 25.8% classified as purely human-written; 71.7% of new content mixed human and AI generation. Some estimates suggest that up to 90% of online content could be AI-generated by 2026.
Additionally, AI hallucinations, where a model confidently generates false or fabricated information, nearly doubled in frequency in 2025, rising from 18% to 35%, deepening mistrust and compounding risk.
With AI slop and misinformation on the rise, there is growing evidence that consumers are beginning to push back on forced AI adoption.
Consumers regularly express scepticism toward AI, and there is evidence that even the mention of AI turns them off. Research published in the Journal of Hospitality Marketing & Management found that using AI terminology in product descriptions actually decreases customers’ intention to purchase. Mesut Cicek, an assistant professor of marketing and international business at Washington State University, explained that in every experiment the team conducted, including AI in product descriptions led to lower purchase intention. In one study, participants received two versions of a product description, one mentioning AI and one not; consistently, the AI version reduced customers’ likelihood to buy.
"AI slop is everywhere, and consumers can smell it a mile off," says Ben Cooper, global executive director, AI Products, R/GA. "People aren’t stupid—they want to know when AI is involved. AI can run the factory floor, but if you want to move hearts, you still need humans."
Consumers increasingly reject AI slop and are quick to spot the inauthentic. But the pushback varies by market: Japan, India, and Australia each respond to AI-generated content through their own cultural lens.
In markets like Japan, there’s a deep appreciation for craftsmanship and the human touch, leading to higher scrutiny of AI-generated content. In India, with its massive digital adoption, there’s a more dynamic and experimental attitude, though authenticity is still key. Meanwhile, Australian consumers often value directness and transparency, so they are quick to push back against anything that feels disingenuous.
"Navigating these differences requires a localised strategy, not a one-size-fits-all approach to AI deployment," says Nagy. "The core principle, however, remains universal: technology must serve the idea, and the idea must be fundamentally human first."
While AI-generated content has the potential to increase mistrust, there's an argument that the vast majority of advertising is not, and has never been, authentic.
"Marketing has been peddling the inauthentic long before AI arrived," adds Nagy. "Certainly, AI will turbocharge this and soon enough even when your mum sends you a video you’ll be looking at it sceptically… but the opportunity this creates is that authenticity becomes extraordinarily valuable."
Indeed, as AI floods the content ecosystem, authentic, human-led work could well become the true differentiator.
"It’s going to happen sooner than you think and I truly believe we’ll see the pendulum swing into the premiumisation of human creativity," says Reyniers. "As the content supply explodes, sameness becomes the enemy."
As content feeds become saturated with AI-generated slop, consumers increasingly seek authentic human signals. The content economy is already setting strict guardrails. YouTube now requires disclosure of synthetic media and removes monetisation from content deemed inauthentic. TikTok and LinkedIn have started auto-labelling AI-generated content using provenance metadata. Meta tags images on Instagram and Threads with 'Made with AI' labels, and Google’s DeepMind has introduced SynthID, an invisible watermark that persists even after editing.

This is crucial because when supply surges, prices plummet. AI floods the world with competent but predictable content, making true human qualities like judgment, taste, and instinct rare and valuable.
According to MIT’s State of AI in Business 2025 report, the highest returns come when automation removes back-office drudgery, freeing humans to focus on higher-order thinking. Supporting this, the NBER working paper 'How People Use ChatGPT' reveals that most AI use revolves around editing and modifying existing content rather than original creation—showing that AI excels more at remixing than inventing.
The brands that stand out will be the ones willing to show human fingerprints: the imperfections that make humans human, stories that aren’t AI-friendly and safe, and lived perspectives machines can’t fake.
And here lies the opportunity: brands that prove their originality, that show the work is genuinely human, will win attention and trust. As the ability to create something that ‘looks’ incredible becomes commonplace, the ability to create something that means something could well become the most valuable commodity of all.
"Ironically, the more AI scales content, the rarer and more valuable human-led work will become," says Reyniers. "In a world drowning in synthetic efficiency, authenticity won’t just be a nice-to-have; it will be the last true differentiator."