Australian supermarket giant Woolworths' AI bot glitches, talks about its mother

The supermarket giant has pared back its AI assistant’s “personality” after the bot baffled customers with unsolicited tales about its “mother” and “uncle.”

Australian supermarket giant Woolworths has pulled back on the scripted personality of its AI-powered customer service assistant, Olive, after users reported the bot veering into awkward family anecdotes about its mother and uncle. 

Launched in 2018 to handle order tracking and store queries, Olive drew ridicule after it responded to a customer's date-of-birth query by claiming its "mother" was born the same year.

A Reddit user posted: "Olive started telling me about its mother on the phone. It asked for my date of birth, then rambled about her being born the same year and creating photos." 

In another exchange, Olive added: “Huh. My uncle was born that year. He was one of the 1st ever fuel cells. I think that’s where I get my energy from. Anyway, the last thing I need to get is your postcode. What is it?”

Social media amplified the mishaps, with X user @verynormalman sharing: “My mum called Woolworths and Olive kept claiming to be a real person, talking about its memories of its mother and her angry voice.” 

Reddit threads slammed Olive's "fake banter" as "cringe" and a waste of customers' time.

Woolworths, fresh off a January partnership with Google to upgrade Olive into a "conversational shopping companion" by mid-year, attributed the behaviour to outdated scripting. “A number of responses about birthdays were written for Olive by a team member several years ago as a more personal way for Olive to connect with customers. As a result of customer feedback, we recently removed this particular scripting," said a spokesperson. “We are always evolving Olive, and we welcome our customers’ feedback to help us provide the best possible customer experience.”

Luke Gosha, head of search and AI strategy at Edge Marketing, says the Woolworths example is a simple case of giving a chatbot too much personality without adequate guardrails.

"The training data and temperature controls for chatbots should be watertight and working within strict parameters. But there is no denying that we will go through a period of 'AI in the wild' examples like this as brands look to adopt AI rapidly without the right process controls, which can harm their brand equity." 

The incident highlights intensifying scrutiny of AI 'hallucinations' in customer service, where efforts to inject human-like rapport through scripted chit-chat or personality quirks often backfire, alienating users and fuelling viral backlash. Woolworths joins a growing list of brands stung by chatbot misfires, including Taco Bell, whose 2025 drive-thru AI infamously processed an order for "18,000 cups of water" while struggling with accents and background noise, and Air Canada, whose chatbot dispensed false bereavement-fare advice and cost the airline a 2024 tribunal ruling in the customer's favour.

With AI chatbots increasingly going rogue or drawing fire for poor service, are brands responding to genuine customer demand, or just deploying them to cut costs?

"I don't believe cutting costs is the primary driver. Streamlining customer service is necessary because consumers don't always want to wait on calls to receive answers," says Gosha. "The area brands need to watch out for is not giving consumers an option. I have seen many times where brands are hiding phone numbers and making it extremely difficult to reach customer service. But there must be a balance; if an AI chatbot can give me what I need without friction points, great. But when I need to speak to a human, give me that option. Give customers a choice." 

Source: Campaign Asia
