Experts decry adtech failures as brands’ ads appear alongside Haitian immigrant misinfo
Some of the world’s top brands have been unknowingly funding a dangerous conspiracy theory advanced by former President Donald Trump. A lack of transparency in the adtech ecosystem is partly to blame.
Ads for Mazda, AT&T, Amazon and other brands have been seen running next to a debunked conspiracy spread by Donald Trump / Adobe Stock
Ads for leading brands including Mazda, AT&T, Amazon, Adobe and Ticketmaster have appeared alongside content promoting a debunked conspiracy about Haitian immigrants stealing and eating pets in Springfield, Ohio, according to two new reports.
Fact-checking research organization NewsGuard observed ads for 36 major brands running on websites peddling the false narrative. These websites include Zero Hedge, PJ Media, Granite Grok and six others, all of which are rated poorly on NewsGuard’s barometer for trustworthiness.
The NewsGuard team conducted manual tests on sites it knew to host articles pushing the falsehood. “We repeatedly refreshed nine specific articles on these sites, averaging about 10 refreshes per page, to simulate the typical random ad experience users encounter,” explains Jack Brewster, NewsGuard’s enterprise editor.
In one case, ads for Boost Mobile, Ticketmaster and Temu popped up next to an article titled “‘Can’t Take It Anymore’: Residents of Springfield Ohio Beg For Help After 20,000 Haitians Overwhelm City, Eat Local Wildlife” on pro-Russia site Zero Hedge. NewsGuard rates the site a 15 out of 100 for credibility.
Grubhub and Temu responded to NewsGuard, saying they have taken action to prevent future ad placements on such sites, adding them to exclusion lists for their media buying partners.
The findings come just 11 days after the New York Times reported similar cases on YouTube. Citing research from corporate accountability organization Eko, the newspaper reports that ads for Adobe and Mazda ran before and adjacent to videos promoting the myth – an untruth that gained significant traction after former President and current Republican nominee Donald Trump amplified the narrative during his September 10 debate with Democratic rival Kamala Harris.
Even the Harris campaign itself has been caught up in the mess; The New York Times reported an instance in which a Harris campaign ad was served ahead of a video claiming that migrants are “going to parks, grabbing ducks, cutting their heads off and eating them.”
On the Google-owned video-sharing site, it’s no small issue. Videos pushing the false claim garnered some 1.6m views in just 72 hours after the Trump-Harris debate, Eko found, generating revenue for the creators behind the claims and sparking concerns over YouTube’s inability to prevent the spread of harmful misinformation.
Google had not responded to The Drum’s request for comment at the time of publication.
For the brands involved, the pattern represents a brand safety nightmare: they’re unwittingly funding misinformation in the heat of a high-stakes presidential election.
Advertisers often work with partners in the adtech ecosystem to mitigate the risk of their content appearing next to misinformation, hate speech and controversial or unsavory content. However, the findings from NewsGuard and The New York Times cast doubt on the effectiveness of the safeguards on which brands typically rely.
For Brewster, while NewsGuard’s experiment enabled the organization to observe a broad variety of ads served up programmatically, it also highlighted “the random and opaque nature of programmatic advertising.”
Indeed, it can be nearly impossible to identify a single point of failure in an ecosystem involving so many players transacting in ways that are often unclear.
While brands can set a number of their own parameters for programmatic partners – providing exclusion lists that bar specific domains and selecting keywords to avoid – things can still slip through the cracks.
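The gap between those parameters and what actually gets served is easier to see with a concrete illustration. The following is a minimal sketch of buyer-side pre-bid filtering; the domain list, keyword list and function names are all hypothetical, not any real platform's API:

```python
# Minimal sketch of buyer-side pre-bid filtering with a domain
# exclusion list and a keyword blocklist. All names and values
# here are hypothetical illustrations.

EXCLUDED_DOMAINS = {"example-lowtrust.com", "example-clickbait.net"}
BLOCKED_KEYWORDS = {"debunked conspiracy", "eating pets"}

def should_bid(domain: str, page_text: str) -> bool:
    """Return False if the placement hits the exclusion list or a blocked keyword."""
    if domain.lower() in EXCLUDED_DOMAINS:
        return False
    text = page_text.lower()
    return not any(kw in text for kw in BLOCKED_KEYWORDS)

print(should_bid("example-lowtrust.com", "local news roundup"))  # False: domain excluded
print(should_bid("trusted-news.example", "weather and sports"))  # True: passes both checks
```

Even in this toy version, the failure modes are visible: a new domain not yet on the list sails through, and exact-match keywords miss paraphrased versions of the same false claim.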
What’s more, supply-side platforms (SSPs), which help publishers sell their ad space, often fail to properly vet the publishers they monetize, and maintain lax policies, according to Arielle Garcia, director of intelligence at adtech watchdog Check My Ads.
And every SSP has its own set of standards. Popular SSP Pubmatic, for instance, works with NewsGuard to establish minimum thresholds for credibility and transparency. Another SSP that works with NewsGuard, Magnite, prohibits content designated as disinformation by the US government and prohibits harmful disinformation in general. A third platform, OpenX, explicitly bans sites with “a pattern of false or misleading information or news.”
Meanwhile, Google – one of the biggest adtech players on earth – shapes its policies narrowly, only barring ads from appearing next to content that “could significantly undermine participation or trust in an electoral or democratic process” or “promote harmful health claims,” according to its website.
Moreover, in cases where content clearly violates an SSP’s rules, providers tend to offer “insufficient ongoing monitoring and enforcement,” Garcia says.
It’s an observation also made by Dr. Krzysztof Franaszek, founder of ad quality firm Adalytics, who says that while SSPs often have a number of inventory quality policies – like limiting publishers with excessive ad-to-content ratios – enforcement of these rules appears to be somewhat inconsistent across the board.
Demand-side platforms (DSPs) have similar issues, where filters and inventory policies often promise more than they deliver, says Garcia. And brand safety tech too often lacks transparency, she contends, offering inconsistent blocking while withholding detailed reporting from marketers.
Plus, ad verification providers may be dropping the ball – a pattern well-documented in an explosive Adalytics report published in August, which outlined instances in which ads for Microsoft, Disney and Nestlé appeared on webpages rife with slurs and explicit sexual content.
All the while, brands’ agency partners may be failing to protect their clients as they plan and facilitate media buys.
Garcia sums it up neatly: “Disinformation is monetized not because of one single point of failure, but because of systemic broken incentives that continue to make it profitable and opacity that often helps adtech companies avoid accountability. When they do get caught, there’s finger-pointing at other entities in the supply chain, and too often very few consequences.”
It’s a set of circumstances that can lead to widespread failures of brand safety. In an ecosystem of opaque practices and low accountability, ads may appear not only adjacent to misinformation and misleading claims, but also on spammy ‘made for advertising’ sites, next to illegal and dangerous content and in environments where they may be exposed to fraud. And the rapid proliferation of AI-generated content online is likely to exacerbate these problems. (Though it’s worth noting that some providers – like Pubmatic – are also employing AI to help verify the quality of ad inventory and monitor brand safety issues.)
Managing programmatic buys can be “a really complicated process,” says Neal Thurman, cofounder and CEO of trade group the Brand Safety Institute and the director of industry initiative the Coalition for Better Ads.
So, for advertisers, combating the problem can feel overwhelming.
“Once an issue like this raises its head,” Thurman says, “a [media] buying team has to say, ‘Okay, I’ve processed that there is this thing that could be out there that’s bad. People are starting to make misinformation and disinformation content with it. It is sufficiently … threatening and prevalent that I want to take action against it.’ Now, [if I’m that buyer,] I’ve got all of these platforms that I do business with. I [probably] deal individually with all the walled gardens, and then I have a few open web paths to market and some direct relationships [with publishers] as well. I have to now adjust my settings for all of those things, right?”
Instead of working backward and playing Whac-A-Mole on every brand safety issue, Thurman suggests that brands take a more proactive approach to risk management.
“Playing defense is really hard,” he says. “That’s essentially what you’re doing if you’re buying everywhere and then using tools – whether they be keywords or verification vendors or what have you – to block things out reactively. My recommendation [for], ‘How do you make this job easier?’ is to proactively curate [a list of publishers] you know and trust. Focus on knowing who your partners are.” These publisher partners should be credible and highly unlikely to peddle misinformation or any other kind of undesirable content, he says.
Though the proactive approach may prove more resource-intensive than the reactive one, Thurman believes it’s worthwhile. “Financial calculus is what every brand has to do,” he says. “This is all about, ‘What do we want to spend [on] ahead of time and afterwards, related to our reputation?’ And there are so many sources of inventory that I think you can put together a credible, cost-competitive list [of publishers] if you want to put the effort into it.”
But other experts, like Adalytics’ Franaszek, suggest that it’s possible to work within the current norm – a paradigm that is inherently more reactive than proactive – to achieve better brand safety outcomes. It just requires more intensive effort and active supply chain management.
Franaszek advises brands to scrutinize their campaign setups carefully to check that domain exclusion lists are uniformly enforced. Advertisers should also demand more granular data from all of their media supply chain partners. In particular, they should seek out the specific URLs where ads have been displayed and the number of impressions generated, which help to paint a detailed picture of ad placement.
“This allows for a brand to independently evaluate their ad delivery and make adjustments so as to be consistent with their expectations,” he says.
Of course, not all partners will offer up full page URLs and impression-level data. In such cases, he says, brands should feel empowered to advocate for access to this information – positive pressure that may encourage vendors to update their technology and provide greater transparency.
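The kind of independent evaluation Franaszek describes can be sketched simply: reconcile a placement report of page URLs and impression counts against the brand's own exclusion list. A minimal illustration follows; the field names, domains and figures are hypothetical, not drawn from any real report:

```python
# Minimal sketch of auditing a placement report against a domain
# exclusion list. All domains and impression figures are hypothetical.
from collections import defaultdict
from urllib.parse import urlparse

EXCLUSION_LIST = {"example-lowtrust.com"}

# A placement report as it might come from a supply chain partner:
# (page URL where the ad ran, impressions served there).
placements = [
    ("https://trusted-news.example/story-1", 12000),
    ("https://example-lowtrust.com/conspiracy-article", 450),
    ("https://example-lowtrust.com/another-page", 300),
]

def audit(placements, exclusion_list):
    """Sum impressions served on domains that should have been excluded."""
    leaked = defaultdict(int)
    for url, impressions in placements:
        domain = urlparse(url).netloc.lower()
        if domain in exclusion_list:
            leaked[domain] += impressions
    return dict(leaked)

print(audit(placements, EXCLUSION_LIST))
# {'example-lowtrust.com': 750}
```

A non-empty result is exactly the signal Franaszek points to: evidence that an exclusion list was not uniformly enforced somewhere in the chain, which the brand can then raise with the responsible partner.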