The lesson from ‘Dear Sydney’? Adland and the public are miles apart on AI
Why did an ad about artificial intelligence helping a little girl make people feel sick? Research from Ipsos, Kantar and YouGov tells us advertisers may like AI, but to the public, it’s not a protagonist in the narrative. In fact, it’s barely on their radar.
Google’s ‘Dear Sydney’ campaign
The undisputed biggest misfire of the Olympics advertising blitz was Google’s ‘Dear Sydney’ campaign for the Gemini AI tool.
Google hoped to tug at heartstrings with a schmaltzy tale of a young girl using AI to find her voice and write a letter to her favorite Olympian. It was standard stuff, but it should have worked.
Instead, we saw a lazy parent outsourcing a teachable moment to a piece of untested tech.
When it comes to AI, research shows the majority of the public has issues with it – certainly more than the tech clients or the people making the ads do. What we have here is a business running before it can walk, unable to meet the public where they actually are on AI. It’s an issue that’s plagued big tech through NFTs, crypto and the metaverse. It’s happening again, and it may hinder the biggest technological leap in a generation.
Why ‘Dear Sydney’ was a dear miss
Advertisers are twice as likely as the consumers they are addressing to have positive feelings towards AI, according to research from Publicis Groupe and Yahoo. It’s a chasm that will claim many a misguided campaign. Like this one.
The first of the research companies I asked to help was Kantar.
Its senior vice-president of creative, Amanda Currell, ran the ad through a process that reveals how it landed with real people.
She said: “It’s tricky to get the tone right with an ad that has an AI-related storyline; industry commentators and keyboard warriors are on high alert and ready to jump on the smallest misstep.” And she’s right; The Drum boasts many industry commentators and a few keyboard warriors.
There are many reasons a campaign can fail, but the most unforgivable, and perhaps the easiest to avoid, is a failure to understand the audience. When executing a storyline about AI in a campaign that’s supposed to stir positive emotions, the smallest misstep can result in categorical failure.
The first question to answer: does anyone even understand what you are showing them? According to Kantar’s Global Monitor, 62% of US consumers aged 18 and over had never used genAI as of 2024.
Currell said: “With consumers on average knowing little about the technology, negative opinions can take hold quickly and it is easy for brands to miss the mark if they aren’t seeking out feedback from real people on their ad.”
The ad was likely conceived to educate people about what genAI could do, latched to an emotive subject and a hooky global event. It should have worked. But AI is young, much younger than our saturated LinkedIn feeds or the business pages would have you believe.
As part of her research, Currell tested numerous ads where the AI was a character or vital component of the narrative. “So far [they have] struggled to convey meaning and strong branding. Our data also indicates that the general public isn’t yet ready for emotional messaging in AI-related ads.”
In the testing, the longer Microsoft Copilot spot had more time to tell its story and bring the audience in on a complex product. It performed slightly better.
Not quite a recipe for disaster… but
Samira Brophy, senior director of creative excellence at Ipsos UK, believes that making AI the main character of the narrative is not necessarily a recipe for disaster. “In our experience, people do not fail ads that have AI in a featuring role, they fail ads that hit bum notes with them on how AI plays a role in their lives.”
She believes that the main issue for ‘Dear Sydney’ is a lack of empathy.
People who know what genAI can do see that it can enhance the writing capabilities of some people. To the layman, it replaces the human. And there are a lot of lay people: only 11% of the UK’s total internet population is thought to have visited OpenAI. That figure comes from Ipsos research on people’s attitudes to genAI, including an extensive piece of work for the BBC.
Some of the most anxiety-inducing aspects of AI are a sense of loss of creativity, humanity and control. The campaign caused “angst,” but it likely wouldn’t have if it had come out a few years later when we are further “along the adoption curve on genAI.”
Not all AI-focused ads fail.
People are interested in what it can do. Or at least they were in 2018, when the Lexus ES ‘Driven by Intuition’ ad was ‘written by AI and directed by an award-winning human.’ Tested in the US, it got a high brand attention score and a “very respectable” Creative Effect Index (CEI).
It wasn’t a heartwarming tale with AI jammed in, but it shows that audiences aren’t closed off to ads that lean into the strengths of AI. Deutsche Telekom’s ‘Without Consent’ (Nachricht von Ella), tested in Germany, also scored highly.
Brophy advises that it is better to illustrate the value of genAI to consumers, put people first, show it assisting rather than replacing, and demonstrate transparency by signposting genAI use where possible.
These insights came from work undertaken to inform content strategy at the BBC, but Brophy believes it is highly applicable to advertising.
What people really think about AI
YouGov research adds another layer to the argument. A segmentation study of 5,000 UK adults conducted in January 2024 split people into three groups: AI abstainers, AI optimists and the AI ignorant. Across these segments, only three in 10 think AI (a broader category than genAI) will have benefits that outweigh the drawbacks. People who use AI are more likely to think it can help them in daily life, and its use can evoke warmer sentiments.
But think not of how the individual uses it. What about the state, big business and bad actors? It may not surprise you that 39% distrust how AI is currently used. But what may raise your eyebrows is that 54% distrust how it will be used in the future. That’s a trajectory that will be hard to combat or argue against. This could be changed with a lot of big tech marketing capital, but it’s also a smoking gun. Right now, few will trust the AI in your emotive film to actually help the young girl.
A bit broader again, but worthy of inclusion: YouGov research across 17 markets found that around half of international consumers say they are “not very comfortable” or “not comfortable at all” with brands using AI to create brand ambassadors.
The following use cases trend down the discomfort scale:
- AI editing product images (in place of graphic designers) (48%)
- AI generating advertised product images (in place of product photography) (47%)
- Generating descriptions and taglines on what is being sold (in place of copywriters) (42%)
- Deciding placement of advertisements in media channels (in place of advertising professionals) (41%)
There is further nuance: some markets are more open to AI than others. The figures above combine the “not very comfortable” and “not comfortable at all” responses only.
Magic or mayhem?
In 1962, Arthur C Clarke, the famous sci-fi author, wrote: “Any sufficiently advanced technology is indistinguishable from magic.” You’ll have seen this quote a lot, and I don’t think it’s applicable any more.
A lot has changed since 1962. We are now immersed in powerful technology every day and AI offers a huge step – some say forward, some say back. There’s an undercurrent of fear; some think it represents a looming threat to the agency and utility of humankind. Years of sci-fi films inspired by Clarke’s work likely led us here. Rightly or wrongly.
Before advertisers can tell emotive tales about genAI, marketers have to meet us where we are. They need to be honest about the demand for the product in the first place. And that might be even harder than making us shed a tear when a girl decides to generate an email to spam an Olympian.