Everyone agrees on one thing about AI: it's going to disrupt everything.
Depending on who you talk to, that disruption is either salvation (more efficiency, more output, more growth) or catastrophe (lost craft, lost roles, lost control). We've spent years arguing about whether AI is “good” or “bad” for industries, creativity, or the economy.
What we haven't talked about enough is what AI is doing to our minds.
As models get better, we're seeing a quiet shift: people are delegating not just tasks, but cognition. Not just labour, but the parts of thinking that should still be deeply, stubbornly human: how we process information, form judgments, and decide what's worth paying attention to.
AI is brilliant at automating work that doesn't deserve your full brainpower. But scroll LinkedIn, look at your inbox, sit through a few average presentations, and you'll see the side-effect: a tide of technically competent, cognitively empty content.
This isn't an anti-AI argument. At Publicis Groupe, we were using generative models long before the current hype cycle, when most of the industry wasn't paying attention. So we know first-hand that early AI was dirty, complex, and wonderfully weird – a clear new frontier.
But as we begin to use it across the full spectrum of what we do – to automate production, to support analysis, and as a high-level creative companion – the cognition debate is overdue.
Delegating the 'middle' of thinking
As marketers, we like to reassure ourselves that we're only using AI for grunt work: the first draft, the longer list, the quick translation.
Too often, we're handing over the middle of the cognitive process: the uncomfortable bit between receiving information and presenting a polished answer. It's the moment where you have to decide what the real problem is, which options are worth exploring, what language expresses the idea honestly, and how to put the argument together so it actually makes sense. That messy middle is where originality, taste, and strategic clarity live. It's where you build your ability to think.
When we let models do that for us, by default, two things happen:
Output converges. Everything starts to sound the same, from case studies to thought pieces to pitch decks. The tone is polished, the thinking is generic.
Cognitive muscles weaken. If you never wrestle with a messy thought, never rewrite your own clumsy paragraph, never sit in the discomfort of not knowing yet, you start to lose the habit – and eventually the ability – to do so.
Not all creativity should be automated
That's easier said than done. In my work, I've found it useful to frame creativity in layers. Some layers are perfect for automation: adapting thousands of assets, generating format variations, and repeating patterns that are meant to be consistent. If the job is “make more of what already works,” AI is a gift.
There is also a collaborative layer where humans and machines explore together, move faster and cover more ground.
At the top, though, sits a layer that should remain stubbornly, gloriously human. Not everyone can work at this level because it is not linear. It’s rarely taught at school. You need to know how to project random life experiences, underground references, strange connections and not directly related topics into that process. These off-brief thoughts in late-night conversations lead somewhere no current model would suggest on its own.
I am not the first to say this, but it needs to be understood and reiterated: the process behind high-level creativity and originality is a total mess, difficult even to reduce to a pattern. I often play with the deep-thinking modes of the best AI models out there, giving instructions and prompts that ask the model to be genuinely judgmental and strict about itself and its answers. The idea is to close that gap – models are getting better, but they are still nowhere near anything a top-notch creative would consider original. We will have to wait for AGI, or maybe even ASI, for that gap to be filled.
For now, the danger isn't using AI in the creative process. The danger is that we start using it indiscriminately, at the wrong layer, in the exact moments where what we actually need is time, friction, and human eccentricity.
To understand why I make this argument, you have to understand my background. I graduated as a software engineer a long time ago, slowly making my way towards a creative career, constantly balancing systematic thinking with weird, random, non-linear creative approaches. I genuinely enjoy automating things, cleaning the pipes, and making boring problems disappear. But I also care about art, aesthetics, and strange, sideways thinking. I love both levels of creativity.
The first models I played with were called GPT-2 and StyleGAN. They were weird, slow and imprecise, but I could see that disruption was clearly coming for the lower levels of creativity through automation. Yet to create insane, never-seen-before outputs on top of that automation at the time, I also had to manually curate thousands of abstract outputs and be unapologetically human when deciding what was worth keeping and iterating on. This still applies today: my advice for creatives is to love both, master both – and keep working your cognitive muscle before you lose it.
AI as sparring partner, not a shortcut
So, how should we use AI if we care about protecting cognition?
Not like this: “Give me 20 campaign ideas for my client meeting this afternoon.”
That’s pure outsourcing.
But if you’re aiming for something different, some level of unpredictability, try prompting your AI model more like this (be patient, and use Deep Thinking mode for more thought-through results): “I’ll paste my core idea in one paragraph. Mutate it into three extreme, risk-taking versions, three brutally simple, almost primitive versions, and three oblique, sideways interpretations. Do not solve with production value or scale; attack the concept itself. Prioritise tension, surprise, and distinctiveness, and make my original look cautious by comparison.”
Prompts, whichever level of creativity you’re playing in, should be more precise, more provocative, and much more specific to the intended output. Take the time to build your own custom models (Custom GPTs, Claude Skills, Gemini Gems, etc.) with clear instructions on how each model should behave: not agreeing with everything you write, the level of quality you’re expecting, and strong references for what constitutes good work, good writing, and good output.
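The sparring-partner setup above can be sketched as a small, reusable prompt package. This is an illustrative assumption, not any vendor’s official API: the system instruction, the mutation template, and the `build_sparring_prompt` helper are hypothetical, meant to be adapted into whatever custom-model tooling you actually use.

```python
# Minimal sketch of a "sparring partner, not shortcut" prompt package.
# All names and instruction text here are illustrative, not a vendor API.

SYSTEM_INSTRUCTION = (
    "You are a critical creative sparring partner. Do not agree by default: "
    "challenge weak ideas, flag cliches, and hold every answer to the "
    "standard of genuinely distinctive creative work."
)

MUTATION_TEMPLATE = (
    "I'll paste my core idea in one paragraph.\n"
    "Core idea: {idea}\n\n"
    "Mutate it into three extreme, risk-taking versions, three brutally "
    "simple, almost primitive versions, and three oblique, sideways "
    "interpretations. Do not solve with production value or scale; attack "
    "the concept itself. Prioritise tension, surprise, and distinctiveness, "
    "and make my original look cautious by comparison."
)

def build_sparring_prompt(core_idea: str) -> dict:
    """Package the system instruction and mutation prompt for a chat model."""
    return {
        "system": SYSTEM_INSTRUCTION,
        "user": MUTATION_TEMPLATE.format(idea=core_idea.strip()),
    }

prompt = build_sparring_prompt("A loyalty app that rewards people for walking.")
print(prompt["user"].splitlines()[1])  # the echoed core idea
```

The point of the helper is that the critical behaviour lives in the reusable system instruction, so you are not re-typing your quality bar into every conversation.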
Non-AI time and the cognitive gym
If AI is available in every tool, all the time, the path of least resistance is to never think alone again. To protect our cognitive skills, we will have to build in “non-AI” or “low-AI” moments, exactly the way we schedule workouts or take the stairs instead of the lift.
That might look like:
Model-off first hour. For any meaningful brief, the first 30–60 minutes are AI-free. You read, scribble, argue, stare out the window.
Human frame before machine. Someone writes a one-page view: what's really going on, what we believe, the sharpest questions. Only then does AI enter.
Non-AI sessions. Short creative workshops where the rule is no models. Just humans observing, analysing, deciding.
Think of it as a gym for the mind. AI can absolutely augment you between sets. But if you never lift anything yourself, don't expect to stay strong.
The real competitive edge
AI will keep improving. More workflows will be automated. More creative and strategic tasks will be augmented.
The real differentiator won't be who has access to which tools. That gap is closing fast.
The edge will be cognitive: who still knows how to think clearly, judge wisely, and bring genuinely human insight, emotion, and taste into the work.
For individuals, that means actively working your cognitive muscle instead of letting the model atrophy it. For organisations, it means designing workflows, rituals, and expectations that protect human judgment, not just productivity.
Protect your cognition, and AI will multiply your impact. Neglect it, and the problem won't be that AI makes you redundant. It will be that you stopped thinking, diluting your ideas so much that they got lost in a sea of average AI-generated outputs.
Disclaimer: AI was sparingly used to fix some of the grammar and tone of this article, as the author is originally from Marseille, France.
Laurent Thevenet is head of Creative Technology, Publicis Groupe APAC. He is part of Publicis Groupe APAC’s AI Council and Creative Transformation ExCo.
