Deepfakes: Ofcom publishes paper on ‘attribution measures’

Ofcom has published a paper exploring how different tools and techniques could be used to identify deepfakes.

We have previously discussed the risks posed by deepfakes here, and also reported here on an earlier Ofcom paper, ‘Deepfake Defences’, published last year, which set out a number of interventions that could be taken to address the sharing of deepfake content. The latest publication goes further and examines so-called ‘attribution measures’, which provide information about how AI-generated content has been created.

Four attribution measures are discussed: (1) watermarking; (2) provenance metadata; (3) AI labelling; and (4) context annotations (i.e. annotations from users of a platform or trusted organisations that provide context for particularly controversial or sensitive content). Each measure is examined in considerable detail: the paper sets out their relative strengths and weaknesses, as well as the steps platforms could take should they wish to deploy them in the future.

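By way of illustration only, the sketch below shows what a simple provenance record of the kind the paper describes might look like. This is a hypothetical simplification in Python: the field names, the generator identifier and the hashing scheme are all assumptions made for this example, not the C2PA standard, the paper's own specification, or anything prescribed by Ofcom.

```python
# Illustrative sketch only: a minimal, C2PA-inspired provenance record.
# All field names are hypothetical simplifications for this example.
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_record(content_bytes: bytes, generator: str) -> dict:
    """Bind a provenance claim to specific content via a hash digest."""
    return {
        "claim": {
            "generator": generator,  # the tool that produced the content
            "created_at": datetime.now(timezone.utc).isoformat(),
            "ai_generated": True,    # basis for an AI label downstream
        },
        # Hashing the content ties the claim to these exact bytes:
        # any later edit to the content invalidates the recorded digest.
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
    }

record = build_provenance_record(b"\x89PNG...example bytes", "example-image-model")
print(json.dumps(record, indent=2))
```

Binding the claim to a hash of the content is what allows later edits to be detected; it is also why, as the paper's fifth takeaway notes, stripping or manipulating such records remains a weakness of these measures.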
The paper is also accompanied by eight ‘key takeaways’, as follows:

  1. Evidence shows that attribution measures can help users engage with content more critically.
  2. Users should not be left to identify deepfakes on their own – AI labels and context annotations are helpful, but users should not bear the full burden of identifying misleading content. Platforms should use these tools to inform their own content moderation policies.
  3. Striking the right balance between simplicity and detail is crucial when communicating information to users – too much information can be overwhelming, but too little can be confusing.
  4. Attribution measures need to accommodate content that is neither wholly real nor synthetic – they will be more effective if they can communicate not just whether AI has been used, but how it has been used.
  5. Attribution measures can be susceptible to removal and manipulation – for example, the removal of watermarks.
  6. Greater standardisation could boost the efficacy and adoption of attribution measures – interoperable watermarking standards, for example, would make it easier for detection algorithms to operate, whilst uniformity across platforms in the use of AI labels would make it less likely that users are confused.
  7. The pace of change means it would be unwise to make sweeping claims about attribution measures.
  8. Attribution measures should be used in combination to tackle the greatest range of deepfakes.

To read the paper in full, click here.