How to use AI in Ads: the ASA issues advice

The Advertising Standards Authority (ASA) has published helpful advice for marketers on using artificial intelligence, following recent guidance on when to disclose the use of AI in advertising, which we discussed here.

The ASA reiterates the point that it has made previously, namely that (as yet) there are no existing rules that specifically apply to AI. Instead, its technology-neutral rules are said to “already cover most of the risks” of AI, and therefore – just as with any ads – those that use AI cannot mislead, or be likely to cause harm or serious or widespread offence.

As for more AI-specific guidance, marketers are advised that any use of deepfake technology must comply with the rules on ‘misleadingness, testimonials, and endorsements’. Already, numerous online ads have surfaced featuring deepfake versions of celebrities who appear to endorse a particular product or service. As the ASA makes clear, “implying an endorsement that simply doesn’t exist will be especially problematic”.

Using AI to exaggerate the performance of products is also on the ASA’s radar. As it explains, “the ASA rules are clear; Marketing communications must not materially mislead or be likely to do so. Making claims in ads using AI is not necessarily problematic. The problem arises when these claims exaggerate the performance of the product. Marketers must be able to substantiate claims and the ASA Council have ruled on ads that have used AI to make misleading claims. If AI in your ad creates a misleading impression of your product, then it will likely hit a regulatory firewall”.

Finally, the ASA warns that harmful, offensive, or irresponsible content that would otherwise fall foul of the rules will not escape action simply because AI is used. For example, AI-generated ads featuring under-18s portrayed in a sexual way, or content that objectifies or sexualises women, will be treated just the same as if no AI had been used.

To read more, click here.