How can brands tackle disinformation in the year of democracy?

With more than 2 billion people set to vote in elections in 2024, disinformation is rife.

Harriet Kingaby

Co-founder, The Conscious Advertising Network, and Insight Lead, Media Bounty

With Rishi Sunak’s announcement that the UK will hold a general election on 4 July, it’s time to talk politics. Because this is the year of democracy.

More than 2 billion people (40% of the world’s population) are eligible to vote in elections in 2024. That scale is unprecedented. And with it comes an unparalleled risk of harmful information spreading.

The rise of deepfakes, fake news, and disinformation online is causing real concern for democratic bodies. According to UN Secretary-General António Guterres, “the ability to disseminate large-scale disinformation to undermine scientifically established facts poses an existential risk to humanity.”

Some platforms actively try to prevent this: TikTok bans political advertising, and X (formerly Twitter) promotes Community Notes to call out misleading content. Yet there are ongoing debates over their effectiveness.

Users with huge influence over entire demographics can cause tidal waves in this space. You only need to look at the controversy surrounding figures like Andrew Tate, and the rapid rise of alt-right ideology, to see the need for regulation. When harmful beliefs like these, often rooted in misinformation, spread widely, the effect on how people think and how they vote is dangerous.

In response to these concerns, the EU has published election security guidance for social media giants under the Digital Services Act. It outlines measures for Very Large Online Platforms and Search Engines to mitigate risks related to electoral processes while safeguarding fundamental rights, including the right to freedom of expression.

While it’s essential to hold these platforms accountable for the content they monetise, social media giants aren’t the only part of this ecosystem with the power to effect change. Advertisers need to step up and take responsibility, ensuring they aren’t inadvertently funding or promoting harmful content in any form. By asking difficult questions about our industry, we can help keep our brands and consumers safe.

Funding quality reporting

To tackle disinformation, we need to make sure there’s funding for quality news.

As publishers struggle to monetise news, media buyers need to support legitimate outlets, and brands need to make sure they’re not drawing money away from reputable sources. Partnerships that promote reliable journalism are key to ensuring that the electorate is supplied with good information in the lead-up to the elections.

Within publishing sites, display ads need to be regulated. Display advertising that appears on the same page as, or adjacent to, editorial content has to be clearly marked and contextually appropriate for the publication’s readership. The ASA has published clear guidelines on advertorial content in publishing to avoid misleading readers. At a time when people will be following the news much more closely, it’s important that we don’t conflate editorial opinion with advertorial content.

Especially as more advertisers turn to new technologies like artificial intelligence (AI) for contextual targeting, systems that control where ads are placed can make the difference between good brand association and bad.

Regulate AI and MFA… now

The rise of open-source AI being used to create Made for Advertising (MFA) sites means we need to think even harder about where advertising is going and how it affects people.

MFA sites masquerade as prime real estate for online advertising, attracting people via clickbait. These amalgamations of cluttered ads and questionable content are, for me, the epitome of a digital nightmare, and AI-generated clickbait is only making the problem worse.

These sites often feature low-quality content, which can include fake news, conspiracy theories or dubious links. MFAs can also employ tactics like pop-up ads, auto-play videos and intrusive placements that make bad content hard for audiences to avoid. Big brands are being caught in this web, as when Amazon and Nike ads ran on Covid conspiracy sites.

The ANA has found that a worrying 21% of ad impressions in the global advertising market come from MFA sites. There is debate over whether all MFAs are bad, but with such a high level of risk, advertisers need to be more careful. Now that AI can generate MFAs as quickly as they can be taken down, the issue of quality advertising appearing alongside genuinely harmful content cannot be ignored.

We need to listen to bodies like the ANA and the Year of Democracy campaign and re-evaluate long-standing programmatic buying practices to ensure funding isn’t going into misinformation and disinformation during this crucial year for politics.

So what’s the solution?

Advertisers need to practise what they preach and make sure they’re taking the right steps to promote their work responsibly. Doing so breaks the cycle of funding harmful content in a year when we need to be on top of it more than ever.
