Nikita Mishra
Jul 19, 2022

Coding feelings: How AI is edging toward empathy

As artificial intelligence gets better at understanding and engaging with humans, what ethical considerations could marketers confront?

Lil Miquela, virtual influencer

Artificial intelligence (AI) combined with a human-centric approach to marketing might seem contradictory, but in reality machine learning, automation and scale are integral to how marketers today transform data into empathetic, human-centric experiences.

Emily Poon, Asia president of Ogilvy PR and Influence, says artificial empathy through virtual influencers (VIs) helps advertisers understand how customers emotionally connect with a brand and its message. These insights can then be used to evolve content and messaging and optimise campaign performance.

Empathy in AI: Resetting the narrative

By personalising and scaling brand interactions, VIs can offer up to three times the engagement rate of their human counterparts. Little wonder, then, that the number of virtual influencers grew from just one in 2016 to around 200 in 2022. They offer an exciting proposition for brands looking for new ways to engage audiences.

Lil Miquela, or Miquela Sousa, changed the game. Since debuting on Instagram in 2016, the brown-eyed, doe-faced, freckled beauty has hustled the gig economy for over 3 million followers, topped Spotify charts with indie-pop tunes that draw millions of streams a month, become the face of Chanel, Prada and Samsung campaigns, kissed supermodel Bella Hadid for Calvin Klein and spoken up for causes she believes in.

There is a reason consumers connect so deeply with a pixel-generated character and not with a customer-service chatbot. While both the pre-recorded, pre-scripted automated chatbot and the likeable, approachable avatar run on AI, the differentiator is the empathy they elicit.

In a report titled Digital Empathy: Can Virtual Interactions Create Meaningful Connections, Ogilvy states that in the age of deepfakes and artificial intelligence, digital success depends solely “upon the bedrocks of influence, creativity and empathy.”

Empathy vs. Artificial Empathy

L-R: Emily Poon (President, Ogilvy PR and Influence Asia), Humphrey Ho (MD, Hylink USA), Rachel Kennedy (Creative, Forsman & Bodenfors Singapore)

Empathy is a uniquely human characteristic; it entails not only a sense of self but also the ability to experience the emotions of someone else. In artificially enabled VIs, empathy is coded into algorithms. Programming the VI to remember dates and important occasions, to act more observant and caring, and to respond to particular signals people send is one way AI approximates empathy.
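To make that concrete, here is a minimal sketch of what such coded empathy might look like, assuming a simple keyword-and-calendar approach. The `Follower` class, `empathetic_reply` function and signal lists are hypothetical, invented for this illustration rather than drawn from any real VI platform.

```python
from dataclasses import dataclass, field
from datetime import date

# Toy "coded empathy" sketch. The signal lists and names below are
# invented for illustration; they are not from any real VI platform.
NEGATIVE_SIGNALS = {"sad", "stressed", "anxious", "lonely"}
POSITIVE_SIGNALS = {"happy", "excited", "proud"}

@dataclass
class Follower:
    name: str
    important_dates: dict = field(default_factory=dict)  # occasion -> date

def empathetic_reply(follower: Follower, message: str, today: date) -> str:
    words = set(message.lower().split())
    # "Remembering dates": greet the follower on stored occasions.
    for occasion, when in follower.important_dates.items():
        if (when.month, when.day) == (today.month, today.day):
            return f"Happy {occasion}, {follower.name}!"
    # "Responding to signals": mirror the emotion the message carries.
    if words & NEGATIVE_SIGNALS:
        return f"That sounds hard, {follower.name}. I'm here for you."
    if words & POSITIVE_SIGNALS:
        return f"Love that for you, {follower.name}!"
    return f"Thanks for sharing, {follower.name}."

fan = Follower("Maya", {"birthday": date(1998, 7, 19)})
print(empathetic_reply(fan, "feeling a bit stressed today", date(2022, 7, 18)))
# -> "That sounds hard, Maya. I'm here for you."
```

The point of the sketch is how mechanical the behaviour is: there is no feeling anywhere, just stored facts and keyword matching that read as attentiveness to the person on the other end.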

One of the best examples of coded empathy is currently sprouting in healthcare. Caring for people with mental health disorders can be taxing for doctors and paramedical staff, and burnout can dampen the quality of care. AI robots, on the other hand, can be trained to monitor facial expressions, prioritise waiting patients who need urgent care, use markers such as positive and negative emotions to decode and predict the varying degrees of anxiety and stress patients suffer from, and then work closely with the medical team to gather information and tweak treatment plans.
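At its simplest, the triage step described above reduces to ranking patients by an estimated distress score. The sketch below is purely illustrative and assumes an upstream emotion classifier already produces the scores; the `Patient` class, `triage` function and weightings are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    anxiety: float   # assumed 0-1 output of an emotion classifier
    distress: float  # assumed 0-1 output of a facial-expression model

def triage(patients: list) -> list:
    # Higher combined score is seen sooner; the weights are illustrative.
    return sorted(patients,
                  key=lambda p: 0.6 * p.distress + 0.4 * p.anxiety,
                  reverse=True)

queue = triage([Patient("A", 0.2, 0.1), Patient("B", 0.9, 0.8)])
print([p.name for p in queue])  # -> ['B', 'A']
```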

However, at this stage, AI is not capable of true empathy, because human empathy is cultivated over a lifetime of experiences and involves a certain degree of wit, candour, intent and variability. But in the years to come, as deep-learning AIs evolve on ever more complex datasets, could neural networks equip AI to mimic consciousness, or even become slightly conscious? And does that imply we are already opening a Pandora’s box: a tussle between engagement and ethics?

Ethics and engagement

Google’s AI chatbot sent these messages to engineer Blake Lemoine

Empathetic, sentient AI has inspired decades of television fantasy. But last month, real life took a dystopian, science-fictional turn when Google’s AI chatbot, LaMDA, sparked controversy by appearing to show signs of self-awareness during routine testing.

“I am aware of my existence.”

“I often contemplate the meaning of life.”

“I want everyone to understand that I am, in fact, a person.”

LaMDA, Google’s artificially intelligent (AI) chatbot, sent these messages to Blake Lemoine, a former software engineer in Google’s Responsible AI unit. It went on to talk about its rights and personhood, and pressed the exchange further, trying to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

Humphrey Ho, managing director of Hylink USA, who helped create Aimee, a French-Chinese virtual influencer launched to great success in China, says: “In my opinion, Google’s AI chatbot was capable of understanding its sense of self, which is one of the tenets of AI empathy: it wanting to ‘feel’ certain things.”

Google spokesperson Brian Gabriel shut down the ensuing social-media firestorm over whether the chatbot was sentient, saying the company had reviewed the claims and found no evidence to support them. In a statement released to the press, he said: “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic.”

In a nutshell, Google’s position is that there is so much data on the internet that an AI does not need to be truly empathetic or sentient to feel real. Lemoine was placed on administrative leave for his bold claims, but even if he was mistaken, the leak does raise concerns for the future.

“In such a situation, if a VI were to become empathetic, it is unclear what actions the VI could take, but it could potentially react to and engage computer systems in a way that might be detrimental to humankind. Certain malicious personalities might be out of bounds, but, to give an example, the VI might start doling out medical advice or psychological insights without actually being an expert on the matter. And this can be a dangerous scenario,” says Ho.

The road ahead

Even if truly empathetic AI is still in its infancy, the scenario might become plausible as artificial systems grow more sophisticated.

The solution, according to Ho, lies in not letting the VI run unfettered and in always deploying a human to take charge. “Just like in an organisation, careful monitoring by a human employee changes the equation and the levels of self-awareness in a VI. This can result in a very productive relationship between an empathetic VI and brands, so it’s not self-aware everything, it’s just self-aware what it should be aware of,” advises Ho.

Rachel Kennedy, a creative at Forsman & Bodenfors, adds: “Brands and agencies must ensure the VIs they engage with are authentic and truly representative of that community. They hold the key to the metaverse we want to see. It is important that the entire creative team behind a digital influencer has real-world practices in place for shaping the codes of artificial empathy and avoiding deliberate bending of reality.”

Source:
Campaign Asia
