Why Generative AI isn’t necessarily a golden ticket in the world of advertising

Nick Breen offers a legal perspective on the use of AI in advertising, regulation and the ASA’s landmark investigation into the Willy Wonka experience

Nick Breen

Partner Reed Smith


Earlier this year, the Willy’s Chocolate Experience event that took place in Glasgow threatened to dislodge Roald Dahl’s creation in the zeitgeist. It captured national headlines, not just because of the disgruntled customers, but because the AI-generated imagery that advertised the experience led customers to feel so deceived that they even called the police.

This may well be the first time the ASA will (knowingly) be investigating ads made by generative AI, but it certainly won’t be the last. Generative AI is becoming commonplace in advertising.

While the public furore over Willy Wonka has since died down, stakeholders across the industry will no doubt be eagerly awaiting the ASA's decision. The investigation is set to offer a vital indication of how the ASA will approach the future of generative AI in advertising.

Regulating AI adverts – finding the sweet spot

The ASA’s approach

As generative AI systems become more advanced and accessible, there are naturally concerns that they could be used to mislead or spread misinformation. In the context of advertising, AI systems allow businesses to create ads quickly and cost-effectively. However, if not used responsibly, businesses could create ads that do not accurately represent their products and mislead prospective customers into buying them. More nefariously, scammers can also use AI-generated content to deceive vulnerable individuals.


In response to this growing concern, the ASA published an article last year on AI advertising, explaining that the CAP Code (which governs advertising in the UK) is largely technology-neutral. The ASA will review any ad that falls within its scope and apply the same regulatory principles regardless of how the ad was created, which includes AI-generated ads. Advertisers using AI-generated images in their campaigns will therefore need to ensure that their ads comply with all parts of the CAP Code, just as any traditional ad would. Perhaps most importantly, advertisers must still ensure that their ads do not mislead purchasers by misrepresenting their products, even where generated by AI.

International responses

We are now also starting to see regulatory responses to generative AI at governmental levels, with different countries taking a variety of approaches. On the "pro-regulation" end of the spectrum, the EU recently passed the AI Act, which imposes transparency and labelling requirements that may oblige certain users of AI-generated works to appropriately disclose the nature of such content. Little information has been provided on how the disclosure requirements can be satisfied, but it seems likely that this will affect advertisers who wish to take advantage of these technologies.

The UK, on the other hand, is following a "pro-innovation" approach. Rather than implementing laws that impose direct obligations on the use of AI, a patchwork of new regulatory principles, sector-specific best practices, and pre-existing legislation (such as the Consumer Rights Act) will be used to regulate the commercial use of AI. The Government has also announced various other initiatives, which are due to be published later this year.

Other potential pitfalls – avoid getting your just desserts

Ensuring that AI-generated ads do not mislead, and are properly labelled where necessary, is not the only concern for advertisers. They must also ensure that AI-generated ads do not infringe anyone's copyright, trade marks, or personality rights, as well as making sure they are not in breach of any elements of the CAP Code. Advertisers would also do well to keep implicit bias and stereotyping front of mind when reviewing AI-generated adverts.

AI systems are known to unintentionally produce content that perpetuates harmful stereotypes on matters such as race and gender. To avoid breaching advertising rules (such as rules 4.1 and 4.9 of the CAP Code) and the resulting reputational damage, advertisers must consider whether the generated content is in fact promoting or perpetuating damaging stereotypes. Further, some social media platforms already have their own generative AI policies in place which advertisers will need to follow. Many of the major players, for instance, have already put disclosure and labelling rules in place that apply to advertising on their services.

The ASA's position remains quite simple: continue to comply with the CAP Code when using AI-generated ads, just as you would for traditional ads. Best practice for advertisers, then, is to maintain human involvement at all stages to sense-check AI-generated content for anything potentially misleading, stereotypical or infringing. For now, further regulatory clarification is not necessary on the ASA's part, and could ultimately confuse the picture. As a result, the investigation into Willy's Chocolate Experience, and indeed any other similarly substandard AI-generated adverts, will likely just reinforce the ASA's current position.

Guest Author

Nick Breen

Partner Reed Smith


Nick is a partner in the Entertainment and Media Industry Group and a member of the On Chain: Reed Smith’s Crypto & Digital Assets Group. He focuses on digital media, music, blockchain and NFTs, advertising and video games.
