When AI "borrows" public figures' faces: Creative experiment or manipulation?

The use of a person’s likeness in promotional materials without their consent is not a gray area. It is not creativity; it is a practice that crosses ethical and legal boundaries.


The use of Generative AI (GenAI) surged by a staggering 890% throughout 2024, marking an unprecedented acceleration in adoption within the corporate sector. This surge is reflected in the increase in global traffic for GenAI applications, as detailed in the State of Generative AI 2025 from Palo Alto Networks. Driven by the need for efficiency, companies initially used GenAI only for writing assistants, coding, customer support, and information retrieval.

However, technology rarely settles for the status quo. The use of GenAI to replicate the faces, voices, and even personas of public figures is now appearing in various forms of brand communication—mostly without the knowledge, let alone the consent, of the people whose likenesses are being used. When identities can be replicated with enough accuracy to deceive an audience, the question is no longer about the capabilities of the technology, but rather about who bears the consequences.

This phenomenon is not merely an evolution of tools. It is a shift in how brands build trust—and, in some cases, how they undermine it.

The blurring of the line between reality and fiction


In creative practice, GenAI offers speed and limitless exploration. "Generative AI enables the exploration of ideas, styles, and content variations in a much shorter amount of time," said Max Roza Natadjaya, Chief Creative Officer at SALVO. The adoption of GenAI is growing rapidly due to its ability to assist throughout the entire creative process, from pre-production, mood board and storyboard development, to the adaptation stage. "Moreover, there’s an element of novelty that makes the content feel fresher and gives it a greater chance to stand out amidst the information overload," added Max.

GenAI certainly opens up a vast space for creative exploration, ranging from creating imaginative fictional characters to building new visual worlds that are clearly labeled as artificial constructs. At this point, the technology remains within acceptable bounds. The problems begin when the focus of that exploration shifts to other people’s identities.

The main issue arises when the audience can no longer distinguish whether a public figure’s presence in a campaign is genuinely real or the result of AI manipulation. At that point, creativity has surpassed its own boundaries and entered an entirely different realm. According to Asyana Eka Putri, a Certified AI Ethicist by UNICRI and Legal Trainee at Gurcan Partners International Law and Consultancy Office, "When a public figure’s face, voice, or persona is replicated—especially for commercial purposes—this constitutes a violation of ethics and the law. Because someone is being harmed in terms of their persona, reputation, and even economic value, without their knowledge."

Misuse and misrepresentation


In this era of widespread AI adoption, data from Sensity AI cited by Deputy Minister of Communication and Digital Affairs Nezar Patria indicates that deepfake content in Indonesia has surged by 550% over the past five years. This figure did not emerge in a vacuum. Most such content is used for a clear purpose: to exploit a person’s reputation for the benefit of another party.

In today’s fast-paced digital ecosystem, the credibility of public figures is a powerful asset—and, as such, highly susceptible to abuse. According to Max, physical attributes such as facial resemblance or voice are forms of identity, not free assets that brands can simply borrow. “If the audience might mistake it for an endorsement or collaboration, but no permission was ever granted, such practices are clearly unacceptable.”

This misuse often targets audiences with limited AI literacy—those who do not yet fully understand how this technology works or how to distinguish reality from fabrication. It is not technical vulnerabilities that are being exploited here, but human trust. The impact doesn’t stop at content but extends to decision-making processes. “Distorted perceptions can easily influence choices—whether as viewers or consumers—while simultaneously eroding the integrity that has long been upheld in the advertising industry,” said Asyana.

Under various legal and ethical standards of communication, such practices clearly fall into a distinct category: they are no longer a matter of creativity, but rather misrepresentation. In many cases, according to Asyana, these practices can even be classified as misleading advertising, or, more broadly, as a form of deliberate manipulation of public perception.

"Consent" as a foundation, not an option


One principle is becoming increasingly non-negotiable: consent. "Without consent, the use of a person’s identity violates their privacy and economic rights," explained Asyana. She referred to various global standards, ranging from the GDPR, which classifies facial images as sensitive biometric data, to the EU AI Act and UNESCO’s AI Ethics guidelines, which emphasize transparency in AI usage. “When the identity of a public figure is replicated for commercial purposes, this issue is no longer merely about ethics but also falls under data protection,” she continued. Last year, Denmark classified faces as intellectual property with the aim of minimizing the misuse of a person’s likeness or its use without the owner’s permission.

In Indonesia, there are currently no regulations specific to AI and the use of an individual’s likeness. However, Copyright Law No. 28 of 2014, which covers economic rights to portraits, already serves as an initial reference that should be sufficient to refute the claim that “there are no rules.”

At the industry level, the 2020 edition of the Indonesian Advertising Code of Ethics (EPI) is even more explicit. Article 1.11 on the Protection of Personal Rights prohibits advertisements or promotional materials from displaying or featuring a person without their consent, with the sole exception of mass displays where individual identification is not possible. AI-generated faces, however, are designed precisely to be identifiable, leaving no room for that exception to be invoked. Article 3.13.1, which requires permission for the use of synthetic representations or animations of real people, and Article 1.17, which stipulates that recommendations or endorsements are only valid if given directly by the individual concerned, together form a framework that fundamentally rejects this practice. The fact that this framework has not yet been consistently enforced does not mean it does not exist.

From a practitioner’s perspective, Max emphasizes that consent must be an integral part of the process, not an afterthought tacked on at the end. “Consent and rights must be clear from the start. Without that, it can be misleading and damage the creative ecosystem.” As an alternative, he offers options that are just as creatively powerful: creating unique fictional characters or composite personas, using real talent with clear and official contracts, or employing visual metaphors that remain catchy and have viral potential without having to use someone else’s identity. These approaches are legally sound and prove that creativity doesn’t require shortcuts.

Collective responsibility


Within the creative ecosystem, responsibility cannot rest with a single party, even though in practice few are willing to claim it. Asyana breaks it down into four layers: creators, brands, platforms, and regulators. "The ethical use of AI is not the responsibility of a single party, but rather a collective commitment of the entire ecosystem."

On the upstream side, creators—whether agencies or developers—play a crucial role in designing AI systems, managing data, and ensuring that all generated content remains ethical and accountable.

At the brand level, those who benefit from the campaign must also bear the consequences. When commercial value is generated from content that is potentially misleading or unauthorized, the legal and reputational implications cannot be entirely shifted onto the agency. Advertising platforms also play a strategic role: they are not merely distribution channels, but also gatekeepers who should develop systems to detect, flag, and even remove misleading deepfake content. Meanwhile, regulators are tasked with ensuring there are clear boundaries and real consequences when those boundaries are crossed.

Max translates this responsibility into a concrete workflow at his agency. "GenAI is used at the appropriate stages—namely, ideation, style exploration, mockups, animatics/stilomatics, and simple key visuals—not to 'trick' public figures." The process begins with defining the AI’s role as soon as the brief is issued: whether as a tool, a co-creator, or merely a technical executor. From there, "red lines" are established: no facial replication, no content that could potentially mislead the audience. All processes are documented as a trail of accountability. Approval is conducted in layers, both internally and by the client. And where relevant, disclosures are prepared so the audience knows which elements were created with AI assistance.

A system like this isn’t just a formality. It’s how the industry demonstrates that creativity and integrity don’t have to be at odds with one another.

Public trust is at stake


Asyana observes that if this situation is left unregulated and without a clear system, public trust in digital content will continue to erode. This erosion of trust harms not only the public figures whose identities are misused, but the entire industry ecosystem that depends on them. Referring to the concept of the “liar’s dividend” coined by Bobby Chesney and Danielle Citron, the mere existence of deepfakes and digital manipulation creates an opening for certain parties to exploit public confusion. As a result, audiences begin to doubt even authentic content. In Indonesia, this pattern has repeated itself: manipulative content involving public figures triggers widespread reactions before it is eventually clarified, but the damage to public perception has already occurred and cannot be fully restored.

The misuse of AI has the potential to exacerbate disinformation, deepen polarization, and ultimately erode the trust that fuels the creative industry itself. "Robust systems must be established, and we need policies that prioritize the protection of the public’s right to identity while fostering a more responsible, transparent, and ethical digital ecosystem," Asyana emphasized.

Like Asyana, Max sees this as a challenge that must be addressed with more responsible standards. He proposes four standards that the industry must meet:

1. Consent and rights must be clear from the outset. Any use of a person’s face, voice, or identity must be based on valid consent, not on assumptions. 

2. Transparency in the use of synthetic media. Audiences have the right to know when content has been created or modified using AI. Honesty is part of credibility, not a barrier to creativity.

3. Content audit trail or provenance. There needs to be a trail that records the origin and creation process of the content. With this system, every decision and change can be traced, enhancing accountability and security.

4. A risk management system that still involves human accountability. AI can assist, but the final decision must still be made by a responsible human, so that ethics and context are not lost in the midst of automation.
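The third standard, a content audit trail, can be made concrete with a hash chain: each step of the creative process is recorded with a digest of the content and a digest of the previous record, so any later tampering breaks the chain. The sketch below is illustrative only, assuming a simple in-memory trail; the names (`ProvenanceRecord`, `append_record`, and so on) are hypothetical, not part of any standard or tool named in this article.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

def content_hash(data: bytes) -> str:
    """SHA-256 digest of raw content bytes."""
    return hashlib.sha256(data).hexdigest()

@dataclass
class ProvenanceRecord:
    step: str            # e.g. "ideation", "mockup", "final key visual"
    tool: str            # e.g. a GenAI model name, or "human retouch"
    ai_assisted: bool    # supports disclosure to the audience
    content_digest: str  # hash of the asset at this step
    prev_digest: str     # hash of the previous record, forming a chain

def record_digest(rec: ProvenanceRecord) -> str:
    # Canonical serialization so the digest is deterministic.
    return content_hash(json.dumps(asdict(rec), sort_keys=True).encode())

def append_record(trail: list, step: str, tool: str,
                  ai_assisted: bool, data: bytes) -> list:
    prev = record_digest(trail[-1]) if trail else ""
    trail.append(ProvenanceRecord(step, tool, ai_assisted,
                                  content_hash(data), prev))
    return trail

def verify_trail(trail: list) -> bool:
    # Each record must reference the digest of the record before it.
    for i, rec in enumerate(trail):
        expected = record_digest(trail[i - 1]) if i else ""
        if rec.prev_digest != expected:
            return False
    return True
```

Because every record embeds the digest of the one before it, editing any earlier entry invalidates the next link, which is what makes "every decision and change traceable" in practice; production systems such as C2PA-based provenance add signatures on top of the same idea.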

What about the parody argument?


There is one argument that is often cited as justification: that the use of a public figure’s likeness or persona can still be justified as long as it is presented as parody or satire. This argument is neither entirely correct nor entirely wrong, and that is where the complication lies.

Indonesian law does not explicitly recognize parody as a protected category. However, several provisions in Law No. 28 of 2014 on Copyright provide room for interpretation that can be utilized. Article 43(d) permits the creation and dissemination of content via digital media provided it is non-commercial and does not harm the subject. Article 44(1) allows for the use and substantial modification of works in the context of criticism or commentary, provided that the legitimate interests of the depicted party are not harmed. This loophole is narrow but real, and the creative industry has long operated within it, long before AI existed.

The more interesting question isn’t whether parody is permissible, but where the line is drawn between reference and representation. Actors who resemble famous figures, voices that sound familiar, and easily recognizable speech patterns—none of this is a new phenomenon.

As long as there is no explicit claim that the person depicted is the individual in question, and as long as there is sufficient room to argue that it constitutes impersonation or parody, a legal argument can still be made. That line isn’t drawn at the point where “the audience recognizes who is being depicted,” but much closer to “this is explicitly claimed to be that person.” Where exactly that line stands is a question for the courts, not a settled conclusion. And as long as there are lawyers skilled and knowledgeable enough to argue it, that line will continue to shift.

Article 12 of the Copyright Law governs the use of portraits or photographic works for commercial purposes; however, AI-generated content, whether in the form of animations or synthetic images, does not meet this definition because it is not an image produced by a camera.

The legal landscape surrounding the similarity of AI-generated works is not a gray area, but rather one that has yet to be mapped out. There are no explicit prohibitions, no explicit protections, and no Indonesian court precedents strong enough to serve as a guide. What exists is a space that will continue to be tested by the creative industry, just as this industry has always tested the boundaries of the law with every new technology that emerges.

The difference this time is that the technology is far more precise, far more accessible, and far harder to dispute. For an industry that builds its clients’ reputations on audience trust, operating in uncharted territory isn’t a matter of courage, but a risk assessment.

GenAI will continue to transform the way brands create content. The question is no longer whether this technology is advanced enough to replicate a human—the answer to that is already clear. The question is whether the industry is honest enough to acknowledge that capability does not justify everything, and bold enough to set standards before regulations force it to do so.

Source: Campaign Indonesia