When Anthropic denied the US Department of Defense (DoD) complete access to its technologies, another player, OpenAI, predictably took its place. What wasn’t predictable was the subsequent backlash: ChatGPT uninstallations surged, data-migration guides mushroomed, and users felt betrayed and personally implicated.
The easy answer is that AI usage has become political. But technology always has been: Umberto Eco’s seminal essay cast MS-DOS as Protestant and the Macintosh as Catholic. From the beginning, we have assigned identities to our technologies, and AI is no exception. But this moment is different. Your choice of AI now speaks to your identity.
AI as a character certificate, not a capability
Phase one of AI differentiation was about capability. Which LLM is better for coding? How about writing? What about generating images (and video, too, please)? Beyond these features, an AI “brand” defaulted to the country of origin (US vs. China), the CEO's backstory, and tech backer (such as Microsoft for OpenAI). These became external proxies of trust and quality, substituting for a clear brand identity.
We are now in phase two. Brand personas and values are solidifying. Anthropic, in the absence of major tech backing, has picked safety over speed: its CEO, Dario Amodei, has refused to place ads in the model (for now) and turned down the DoD’s lucrative deal. Meanwhile, OpenAI, with the recognised face of Sam Altman, has swooped in to make, in the company’s own words, an “opportunistic” deal with the DoD; democratisation is its core brand value. The polarisation is now explicit, as you cannot simultaneously stand for safety and advancement. These companies have chosen their camps, and the public has quickly internalised those positions as identity markers.
You would have assumed both companies would double down on this differentiation. Surprisingly, the opposite is happening. OpenAI is reportedly renegotiating the terms of its deal with the DoD after media outlets reported surging ChatGPT uninstalls (a claim several US government officials have contradicted). Anthropic’s Dario Amodei has also apologised for his initial outburst. How far this softening will go remains to be seen, but public comments on Amodei’s recent interviews are telling: “Don’t apologise for doing the right thing” and “please don’t backtrack”.
This is a brand-image crisis unlike most. Whichever way these companies move, any action seems to offend large swaths of people. The ‘tribal’ identities people have projected onto these AI platforms are expected to be played back to them, and this new phase of AI differentiation proves those tribes can be fairly vindictive. This consumer-driven, tribal discomfort has since moved into political scrutiny, with US lawmakers questioning Sam Altman directly on the specifics of the deal.
AI is moving from platform to ideological brand
Even a low-intensity AI user, one who rewrites emails on a free tier and then drops off for a few days, has an emotional investment in AI. Somewhere between spreadsheet analyses and travel planning, we have unknowingly poured much of ourselves into conversations with these tools, more so and faster than with any other technology to date.
It is this conversation that has enabled AI to become ‘personality technology’: not positive affirmations, but an unapologetic mirror that reflects who you are back at you, with all your beliefs, fears, and blind spots. Even when you aren’t using it, it still represents you. So when a company whose AI you use is contracted to potentially enable mass-surveillance tools and weapons systems with no human oversight, you feel betrayed, even if you live halfway around the world.
This is the Signal vs. WhatsApp debate amplified. A vote for Signal isn’t just a choice of encryption; it’s a statement about who you communicate with. While AI lets us vibe-code a website, we now look at the vibe of the LLM itself: a move from “does this AI get me?” to “is this AI a representation of me?”
How to map brand AI going forward
The implications of this change are becoming clearer by the day. Your choice of LLM is a signal of your values. ChatGPT, Claude, or any other LLM doesn’t just know you well; it shows your worldview to the world. Technical features, while impressive with every major update, are the backend.
For marketers, this presents a new frontier for building and protecting brand equity. After decades of brand-emotion studies for cars, beer, and much more, we must now consider AI as both a representation of and a participant in identity-building. It’s not enough for a brand to show up on LLMs. AI has its own role in identity development, one that could prove dissonant with a person’s self-image. In the quest for relevance and new users through AI, a brand could end up eroding years of carefully built equity. Marketers would be better served by evaluating AI as a collaboration: what do I get, and what do I give back?
As these political and sociological schisms deepen across the globe, the real question to ask is: how many values and ideological positions can an AI company meaningfully sustain? At what point does the brand fragment? If Anthropic and OpenAI bow to commercial pressures and repeatedly waver on their values, does this leave room for new players to enter the fray with firmer positions? Can AI really serve everyone?
These aren’t philosophical questions for a distant future; they are part of how AI will be encoded in the human subconscious. Historically, wearing, eating, drinking, or using a brand spoke for us when we didn’t. With LLMs, a brand can also talk back for us, even when we don’t.

Sanat Sinha is the strategy director at Landor India.
Source: Campaign Asia-Pacific