Deepfakes: Ofcom publishes Discussion Paper

Ofcom has published what it describes as a “deep dive into deepfakes that demean, defraud and disinform”. The paper reveals the results of recent Ofcom research suggesting that, as deepfakes become more prevalent online, few people are confident in their ability to identify them, and it sets out potential measures to tackle harmful deepfakes.

We have previously explored the legal challenges posed by deepfakes here. Since then, Generative AI has meant that the production and sophistication of deepfakes have increased dramatically. Added to this, in what Ofcom describes as a new ‘deepfake economy’, apps and websites have emerged that allow users to create deepfakes without the need for any particular technical skills. Whilst deepfakes are not always malicious (the paper notes that the technology can be used, for example, in post-production, for medical purposes, or merely to create satirical content), Ofcom’s research suggests that many are malign, and indeed one of the most common forms of deepfake shared online is non-consensual intimate content, often targeting women.

For the purposes of its paper, Ofcom divides harmful deepfakes into three categories: (1) those that demean by falsely depicting someone in a particular scenario (for example sexual activity) in order to humiliate them; (2) those that defraud by misrepresenting someone’s identity often as part of a scam; and (3) those that disinform by spreading falsehoods to influence opinion and sow distrust.

The discussion paper moves on to set out measures that can be taken to mitigate the creation and circulation of these forms of harmful deepfakes. It identifies “four broad categories of intervention, which can be applied by different actors at different stages in the technology supply chain”:

  1. Prevention

Ofcom explains that prevention measures consist of attempts to stop harmful deepfakes being created in the first place. For example, developers of Generative AI models can apply filters both to remove certain types of data from their training data sets and to prevent outputs that include harmful content. Filters can also be introduced to instruct a model to reject prompts that indicate that the user intends to create a malicious deepfake. However, the paper recognises that this can be challenging, since it will not always be straightforward to apply these types of filters without them becoming too restrictive, removing, for example, the ability to make benign or satirical content. There is also the challenge of applying such filters to open-source models, where third-party actors can often circumvent them and remove the preventative measures installed by the original developer of the model.
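
To make the idea concrete, a very simple prompt filter of this kind might look like the sketch below. The patterns and policy shown are invented purely for illustration (they are not drawn from Ofcom’s paper or from any particular provider’s safety system), and the example also hints at why such filters struggle to draw the line between malicious requests and benign or satirical ones.

```python
import re

# Illustrative only: a minimal prompt filter of the kind the paper describes.
# The patterns below are invented for this example and are deliberately crude.
BLOCKED_PATTERNS = [
    r"\bnude\b.*\breal person\b",            # non-consensual intimate imagery
    r"\bclone\b.*\bvoice\b.*\bwithout\b",    # identity fraud / impersonation
    r"\bfake\b.*\b(passport|id card)\b",     # document fraud
]

def should_reject(prompt: str) -> bool:
    """Return True if the prompt suggests intent to create a harmful deepfake."""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in BLOCKED_PATTERNS)

if __name__ == "__main__":
    print(should_reject("Generate a nude image of a real person from my office"))  # True
    print(should_reject("Draw a satirical cartoon of a politician as a teapot"))   # False
```

In practice, keyword rules of this kind are easily evaded by rephrasing, which is one reason developers tend to combine them with trained classifiers applied to both prompts and outputs.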

  2. Embedding

Embedding involves adding information to the content to indicate whether it is authentic. This includes, for example, labelling, watermarking, or adding information about the content’s provenance to its metadata. As Ofcom notes, online platforms are already starting to label content automatically as AI-generated or manipulated, or are requiring their users to do so when they upload content. Similarly, watermarking is employed using tools such as DeepMind’s SynthID, and a large number of organisations have signed up to a standard developed by the Coalition for Content Provenance and Authenticity (“C2PA”) for attaching metadata to media content. The paper acknowledges that embedding measures are not without their limitations: they require developers and users to employ them; they may be removed or obscured; they offer less comfort for those who are the subject of demeaning, sexually explicit deepfakes, since in those cases prevention is more important than labelling; and there is a risk that labels will be applied to content that is in fact genuine, so as to muddy the waters and cause users to distrust any form of content, real or fake (a feature of the so-called liar’s dividend, referred to in our previous article here).
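
By way of illustration only, the sketch below shows the general idea of attaching provenance information to an image as metadata, using Pillow’s PNG text chunks as a stand-in. It is not an implementation of the C2PA standard or of a watermarking tool such as SynthID, and the field names are invented for the example.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Illustrative only: records simple provenance fields as PNG text chunks.
# This is a stand-in for the general idea of provenance metadata; it is not
# the C2PA standard or a robust watermark, and the field names are invented.
def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("generator", generator)
    image.save(dst_path, pnginfo=metadata)

def read_provenance(path: str) -> dict:
    # PNG text chunks are exposed through the image's .text mapping.
    return dict(Image.open(path).text)
```

Because the information here lives in ordinary metadata, simply re-encoding or screenshotting the image will strip it out, which illustrates one of the limitations noted above and one reason watermarks designed to survive such transformations are used alongside metadata-based approaches.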

  3. Detection

Just as AI is used to create malicious deepfakes, so it is being used to detect them. As Ofcom explains, forensic techniques for detection involve the use of machine learning systems “to recognise tell-tale signs that content is wholly or partially synthetic”. Beyond these more sophisticated measures, organisations also employ humans to review content to assess its provenance, and also rely on user reporting. However, this brings inevitable challenges, not only in terms of the costs involved in having people assess content, but also the difficulty of ensuring that they are in fact able to identify what is a deepfake (Ofcom’s research suggests that only 10% of people are confident in doing so). Relying on machine learning alone also brings its own challenges, particularly in distinguishing between benign or satirical deepfakes and those that are harmful.
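
As a purely illustrative sketch of the forensic approach, the snippet below trains a binary classifier to separate “synthetic” from “authentic” content. The features are random placeholders standing in for whatever signal a real detector would extract, so no actual detection capability should be read into it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Illustrative only: a toy forensic detector. Real systems rely on far richer
# features (or deep networks trained end to end); the random "features" here
# merely stand in for whatever signal is extracted from each piece of content.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 32))    # placeholder feature vectors
labels = rng.integers(0, 2, size=1000)    # 1 = synthetic, 0 = authentic

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

detector = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, detector.predict(X_test)))
```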

  4. Enforcement

Ofcom notes that many online platforms have begun introducing provisions within their terms of service and community guidelines which set out clear rules about the types of synthetic content (if any) that can be created and disseminated on their sites. The paper states that Ofcom’s review of platform policies “shows that most now include specific reference to synthetic or manipulated media, although are often agnostic as to how that content has been created. Moreover, almost all platforms hold deliberate deception as an essential component of prohibited content”. Of course, rules are only effective if they are properly enforced, and Ofcom suggests that platforms need to take action where rules are breached by, for example, issuing warnings, taking down or labelling the offending content, and suspending or removing users.
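
As a purely hypothetical sketch (the thresholds and actions are invented, not taken from any platform’s actual rules), a graduated enforcement policy of the kind described could be expressed along the following lines:

```python
from enum import Enum, auto

# Illustrative only: a toy mapping from breaches of a deepfake policy to
# graduated enforcement actions. The thresholds and actions are invented.
class Action(Enum):
    LABEL_CONTENT = auto()
    ISSUE_WARNING = auto()
    REMOVE_CONTENT = auto()
    SUSPEND_USER = auto()

def enforcement_actions(prior_breaches: int, deliberately_deceptive: bool) -> list[Action]:
    if not deliberately_deceptive:
        return [Action.LABEL_CONTENT]
    if prior_breaches == 0:
        return [Action.REMOVE_CONTENT, Action.ISSUE_WARNING]
    return [Action.REMOVE_CONTENT, Action.SUSPEND_USER]
```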

Ultimately, Ofcom states that the forms of intervention outlined above are unlikely to make a significant difference on their own. Instead, it argues that a ‘multi-pronged approach’ should be adopted, and that “actors seeking to address the risks posed by deepfakes on their models or their platforms will likely find that they need to stand up a deepfake mitigation strategy that implements a combination of these interventions”. For its part, Ofcom states that it will take action to prevent malicious deepfakes where it is empowered to do so. It refers, for example, to measures in its Codes of Practice for illegal harms and the protection of children (commented upon here) and to work that it will undertake in the future, including with the Government “to identify potential regulatory gaps in relation to deepfakes and generative AI”.

To read the paper in full, click here.