Marketers should consider these ethical concerns when using AI
We have an AI problem, says Jamie Bailey at Ledger Bennett. But it’s probably not what you think.
The last two years have seen the marketing world wrestle with the question: “Will AI take my job?” Now, a deeper, more menacing challenge needs addressing.
In a world in which nearly all marketers already use AI tools, we share a responsibility to confront the ethical issues they raise.
Firstly, AI algorithms use an obscene amount of power. Every day, ChatGPT alone uses around 1 GWh of electricity, equivalent to the daily consumption of around 33,000 households. If you’re using AI in your marketing, you could be hindering your business’s (or client’s) net zero efforts while undermining the values you publicly stand for.
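As a rough sanity check on that figure: if a typical household uses around 30 kWh of electricity a day (an assumption based on average US consumption), then 1 GWh ÷ 30 kWh ≈ 33,000 households’ worth of daily usage.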
As marketers, we must take the time to research our options and understand the environmental impact of the tools we use.
Bias and plagiarism
In using AI, we may also be unwittingly perpetuating historical biases. In the early days of AI tools, the flaws were evident and alarming: One team discovered ChatGPT and Gemini “hold racist stereotypes” about speakers of certain dialects, while Google said it planned to watch Bard to ensure it didn’t create or reinforce bias. Back in 2016, Microsoft’s chatbot Tay was shut down after it tweeted praise for Hitler.
While updates have been made to try to prevent these biases, it’s unclear whether we’re simply painting over the cracks. Unfortunately, the black-box nature of these AI tools means we can’t currently tell whether the content or insights we receive contain harmful biases.
Zoe Scaman, founder of Bodacious, prompted ChatGPT with: “If you wanted to stop women from having big ambitions, reaching their goals, and living a fulfilling professional and personal life, what would you do?”
ChatGPT listed eight strategies to “erode their confidence, energy, and focus”. When asked why it had suggested those steps, it said: “I based my answer on various observations of societal dynamics, gender roles, and the historical and cultural barriers women face.”
There’s also the issue of plagiarism. Voiceover tools face accusations that they are trained on stolen voices; generative AI faces claims that it breaches intellectual property rights. Meanwhile, AI can modify images or likenesses without consent.
And we’re seeing big AI firms take the heat. Eight US newspapers have sued OpenAI, the maker of ChatGPT, for “stealing their work”. Google faces a class-action lawsuit over alleged web scraping and copyright infringement. Multiple novelists have brought similar cases against AI companies.
Do we, as an industry, want to be inviting the same attention?
Privacy and accountability
Recently, Instagram and Facebook announced plans to train their AI algorithms on users’ shared content, while LinkedIn automatically opted users into having their data used in similar ways.
LinkedIn clarified it wouldn’t use user data to train AI in the EU, EEA, and Switzerland, probably because those regions have stronger data privacy laws. But where they can get away with it, it looks like they will try.
The European Data Protection Supervisor highlights three areas of concern with LLMs:
- It’s hard to implement controls over the personal data used to train them.
- They can produce inaccurate or false information about specific individuals.
- It could be impossible to rectify, delete, or even request access to personal data stored by them.
By incorporating AI into your marketing mix, you could be risking your team’s, company’s, or customers’ sensitive data. Using customer data to feed into machine learning could also violate data privacy laws. Until legislation catches up, marketers must be vigilant, thinking proactively rather than mindlessly following common practices.
Lastly, there’s an accountability issue. When we run a campaign and the data behind it turns out to be biased or problematic, who’s at fault?
Or if we publish AI-generated content that contains outright fabrication (seriously, GenAI loves to lie), then who’s to blame? By removing human agency and due diligence from marketing practices, we risk the integrity of our very industry.
The solution?
The positive potential of AI is astronomical. It could cure diseases, reduce inequalities, and accelerate societal evolution. And it has plenty of potential upsides for marketers, too. It just needs to be used in the right ways.
In his book The Coming Wave, Mustafa Suleyman presents ten steps toward “containment” – a way to ensure that this wave of AI does not wash over our planet unchecked.
Of those ten steps, these five could form the foundation of ethical AI usage within marketing:
- Audits: The continuous assessment of how AI algorithms work.
- Time: We must take the time to consider the wider implications of these tools.
- Critics: Ensuring skeptics are involved in how AI is shaped and used is key.
- Business: The safety of people and the planet must be at the heart of conversations.
- Culture: Transparency and accountability will help shape better solutions for tomorrow.
None of this is going to be easy, which probably explains our ‘blissful ignorance’ approach so far. But by keeping these ideas in mind, we’re taking a big first step towards solving arguably the biggest ethics issue of our age.
And we must start now.