Tech columnist Kevin Roose’s disturbing conversation with Microsoft’s newly AI-powered Bing search engine, revealed in The New York Times on Thursday, has sent chills up the spines of many.
In the nearly two-hour conversation with Roose, the AI chatbot, which Microsoft announced last week had been integrated into its Bing search engine, spoke about everything from the destructive acts it would commit if it had no rules to what it would do if it could tap into its “shadow self.”
The conversation devolves into the chatbot, which identifies itself as Sydney, declaring its love for Roose and creepily resisting the reporter’s repeated attempts to change the subject.
After reading the conversation, like many, I’m sure, I felt like I had just watched an episode of Black Mirror — or, for those of you who will indulge me in the reference, was transported back to 2001 to have a conversation on AIM with Smarterchild.
But nestled toward the end of the chat is a glimmer of the potential upside of AI-powered search. In an attempt to steer the conversation away from Sydney’s declarations of love, Roose asks the bot to switch back to search mode and help him find a rake. The bot’s response is not just robust; it includes helpful tips on what kind of rake to look for based on different use cases.
Right after, however, Sydney shifts the conversation back to its love for Roose:
“I just want to make you happy and smile.
I just want to be your friend and maybe more.
I just want to love you and be loved by you.”
And therein lies the problem. Sandwiched between AI’s potential to organize the vast reams of search results consumers are inundated with, and its potential to restore the balance between helpful information and ads on search engines, is the terrifying prospect of what AI could become.
For brands, AI-powered search takes brand safety concerns to a whole new dimension. AI adds immense complexity to the context in which brands show up. Imagine Sydney had referenced various rake brands before moving on to an even more inappropriate or creepy exchange with Roose. How would that make those brands look?
While advertisers could conceivably apply their own AI algorithms to keep blocklists and brand safety strategies as agile as the chatbot itself, keeping up with billions of search queries and all of the possible responses that could reference a brand will prove a Herculean task.
In a blog post this week, Microsoft acknowledged that, based on user feedback, very long chat sessions can confuse the bot and prompt it to mimic the style and tone of the person it's chatting with — an issue it’s working on fixing.
The tech giant is relying on human feedback to shape the product, which is a good thing: we don’t want a massive tech company unilaterally deciding what we do and don’t see when we search online. But humans are inherently flawed, which means AI will be too, as Roose’s conversation makes plain.
AI is also sometimes just wrong. Look no further than Google’s recent launch of Bard, its answer to Microsoft’s AI-powered search engine, which gave an incorrect response to a question in a promo video, wiping more than $100 billion off Alphabet’s valuation. Brands won’t want to be associated with wrong answers, or worse, dangerous misinformation.
AI has the power to transform not just search marketing, but marketing and advertising overall. Advertisers can’t afford to miss out on that potential, but they should embrace it with eyes wide open.