Trend

No AI without DEI

Hattie Matthews, Co-Founder and CEO of Uncharted, on why diversity is the cornerstone of responsible AI

Hattie Matthews

Co-Founder and CEO Uncharted

With each new advance in AI, headlines spotlight the issues of bias and the lack of Diversity, Equity, and Inclusion (DEI). Whether it's the amazing research by Dr Joy Buolamwini, Founder of the Algorithmic Justice League (you must watch her SXSW keynote on YouTube!) or last week’s "Queer in AI" findings on stereotypes, the recurrent theme is clear: AI systems, despite promises of impartial decision-making and heightened efficiency, are inherently flawed due to their reliance on imperfect training data.

My own early experiment with AI, an AI debate bot we created for SXSW in 2019, was fun but slightly flawed; it highlighted that the bot’s effectiveness was only as good as the quality of the ‘corpus’ of training data it was fed.

Reliance on historical training data (it’s from the past, so it looks backwards) often perpetuates generalised stereotypes and biases, raising profound concerns about fairness and equity in AI-generated decisions.

While politicians and policymakers have started to acknowledge the need for safeguards against AI discrimination, the recent EU Artificial Intelligence Act has been criticised for failing to set a gold standard for human rights. And legislative efforts such as the blueprint for an “AI Bill of Rights” in the USA are only in their infancy.

So, what actionable steps can agencies and brands take today?

Brands delving into AI-driven solutions need to be conscious of diversity in both data collection and the decision-making process.

Diverse data: The cornerstone of responsible AI

The bedrock of responsible AI lies in diverse data sets, crucial for mitigating biases and promoting equitable outcomes in decision-making processes. Margaret Heffernan, Entrepreneur and Author, aptly notes that AI “often trusts correlations that may turn out to be irrelevant or ill-informed,” highlighting the potential for discriminatory outcomes.

A poignant example is Amazon's discontinuation of an AI-based recruitment tool in 2018 due to its bias toward male candidates in technical positions. The incident underscored the importance of embedding ethical values into AI algorithms.

For instance, brands today may want to invest in constructing digital twins or AI personas of their ideal, diverse, future customer segments, so that AI systems are trained on representative data sets.
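As a purely illustrative sketch of what training on representative data can involve in practice, the Python snippet below compares the demographic make-up of a hypothetical training file (customers.csv, with an assumed gender column and made-up target shares) against the proportions a brand wants its future customer segments to reflect. The file, column and targets are assumptions for illustration, not part of any specific tool.

```python
# Illustrative sketch: audit a training set's demographic balance against
# target proportions for the customer segments a brand wants to reach.
# "customers.csv", its columns and the target shares are all hypothetical.
import pandas as pd

TARGET_GENDER_SHARE = {"female": 0.50, "male": 0.48, "non-binary": 0.02}  # assumed targets

def representation_gap(df: pd.DataFrame, column: str, targets: dict) -> pd.DataFrame:
    """Compare the actual share of each group in the data with the target share."""
    actual = df[column].str.lower().value_counts(normalize=True)
    rows = []
    for group, target in targets.items():
        share = float(actual.get(group, 0.0))
        rows.append({"group": group, "actual": round(share, 3),
                     "target": target, "gap": round(share - target, 3)})
    return pd.DataFrame(rows)

if __name__ == "__main__":
    customers = pd.read_csv("customers.csv")  # hypothetical training data
    report = representation_gap(customers, "gender", TARGET_GENDER_SHARE)
    print(report)  # large negative gaps flag under-represented groups before training
```

Large gaps in a report like this would be a cue to collect or re-weight data before any model is trained on it.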

Inclusive creators of AI: Embracing diversity in development

Embracing inclusive creators and diverse perspectives is essential in the development and use of AI. Organisations such as the Algorithmic Justice League (AJL), alongside the World Economic Forum’s work on diversity in AI, play pivotal roles in fostering equitable and accountable AI practices.

New initiatives such as “bias bounties” offer rewards to the public for finding bias in AI systems and suggesting improvements.

However, proactive involvement of a wider community at every stage of the machine-learning pipeline is necessary to ensure representative data and inclusive decision-making.

Transparent AI: Ingredients on the packet, please

Transparent AI is indispensable for fostering trust, accountability, and ethical use of artificial intelligence by brands and businesses. By providing insights into AI decision-making processes and underlying data, we can demystify the black-box nature of AI models and mitigate the risks of discriminatory outcomes. It’s likely more pressure will be put on businesses and brands to explain what ‘ingredients’ they put into their AI recipes in the future, akin to the ingredients lists found on food packaging.

Avoiding biases in our own use of AI

To avoid biases in our own use of AI, brands and creators need to think about their own ethical standards and DEI principles: what are we asking the AI to do? For example, we need to write better, more detailed, more consciously crafted prompts that enable AI to generate insightful and highly specific outputs. As OpenAI themselves say, “Crafting prompts for AI is an art and a science.”
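As a minimal sketch of what a more consciously crafted prompt can look like, the snippet below contrasts a vague request with one that spells out audience, representation and review requirements. It assumes the OpenAI Python SDK, a placeholder model name, and example prompt wording of my own; it is not an official OpenAI recipe.

```python
# Minimal sketch: a vague prompt versus a more consciously crafted one.
# Assumes the OpenAI Python SDK; the model name and prompt text are examples only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The kind of under-specified prompt we want to move away from.
vague_prompt = "Write an ad for our new running shoe."

# A more detailed prompt that encodes audience, representation and review needs.
crafted_prompt = (
    "Write three 30-word ad concepts for a new running shoe.\n"
    "Audience: first-time runners of all ages, body types and abilities.\n"
    "Representation: avoid stereotypes about who 'looks like' a runner; "
    "do not assume gender, age or income.\n"
    "Tone: encouraging, specific, free of jargon.\n"
    "Flag any assumption you had to make so a human reviewer can check it."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": crafted_prompt}],
)
print(response.choices[0].message.content)
```

The point is not the specific wording but that the prompt itself encodes the brand’s DEI standards and asks the model to surface its own assumptions for human review.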

Human oversight is essential in reviewing outcomes, challenging biases, and navigating the complexities of authentic representation in AI-generated content.

At Uncharted, an advanced creative agency aiming to be ‘deeply human-centred, deeply tech-driven’, we are very conscious of the way we approach using AI and of the importance of diverse, human oversight and collaboration.

As Cathy O’Neil explains in her book Weapons of Math Destruction: “Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide.”

The path to responsible AI begins with embracing DEI. By prioritising diverse perspectives, challenging biases, and advocating for equitable AI practices, we can harness the transformative potential of AI while mitigating its risks. As brands and agencies, as we navigate this wave of AI across our businesses, let us remember: there can be no AI without DEI.

Guest Author

Hattie Matthews

Co-Founder and CEO Uncharted

About

Hattie recently co-founded Uncharted, an advanced creative agency designed to give brands the confidence to set out without fear. She is an experienced business leader with a track record in marketing, growth, innovation and transformation from her time as Managing Partner at Karmarama and Accenture. Hattie has worked with leading global brands including eBay, Google, Unilever, LVMH and Diageo, as well as scale-ups including Just Eat, Secret Escapes and Seat Unique. She is an active member of WACL, helping drive gender equality in the marketing and communications industry, and a regular speaker at SXSW, CogX, BIMA and Ad Week on marketing, brands and technology. She is also an advisor to scale-up businesses and a Non-Exec Director at eargym, a hearing health app.
