Ofcom’s 2024/25 Strategic Approach: Balancing AI Risks and Benefits

On 26 March 2024, Ofcom published its strategic approach to AI for 2024/25 (although the link to the document came ‘bundled’ into the webpage for Ofcom’s Plan of Work for 2024/25, published on the same day). The document outlines Ofcom’s plan to harness the benefits of AI whilst mitigating the risks it poses to the sectors it regulates, including telecoms, TV and radio broadcasting, on-demand services, video-sharing platforms, and online safety (under the Online Safety Act 2023).

The strategy follows on from the Department for Science, Innovation and Technology’s (“DSIT”) March 2023 White Paper, in which DSIT outlined five cross-sectoral principles for existing UK regulators to interpret and apply within their remits in order to drive safe, responsible AI innovation. The five principles are: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. For further information on the White Paper, see here.

While Ofcom states that it recognises the potential benefits of AI, such as enhancing network planning, optimising network building, and detecting fraudulent behaviour, it also acknowledges the challenges and risks AI poses, such as the creation and spread of illegal or harmful content, an increased risk of misinformation and disinformation, and AI’s capacity to enable more sophisticated fraud and scams.

The document highlights the work Ofcom has been doing to identify, understand and tackle AI risks, focusing on three cross-cutting issues – synthetic media[1], personalisation[2] and security and resilience[3]. Examples of the steps Ofcom has taken include:

  • publishing draft Illegal Harms Codes of Practice under the online safety regime, including proposals relating to safety metrics, accountability, and governance to mitigate harm.
  • working on projects examining malicious uses of AI and how to tackle them, including synthetic content detection methods, the use of AI to develop malware, and how AI-driven recommender systems can be tested by platforms to understand their impacts online.
  • engaging with standards bodies, stakeholders, and service providers to improve awareness and to understand how AI is both being used and regulated.

For the 2024/25 period, Ofcom’s planned AI work includes:

  • Drawing up and consulting on Codes of Practice measures to help regulated services tackle risks and protect users from illegal and harmful content under the online safety regime, as well as consulting on its information gathering powers.
  • Researching the merits of various tools and methods to detect vulnerabilities in AI models, such as red teaming, synthetic media detection, and automated content classifiers, as well as the merits of using generative AI itself for content moderation.
  • Engaging with and issuing guidance to broadcasters to clarify their obligations in relation to AI, while considering the implications of AI-driven recommender systems for media plurality, sustainability, and the discoverability of public service media content, including whether these systems may lead to a decline in viewership of UK broadcasting content or otherwise impact public service media.
  • Monitoring AI-related fraud and scams, as well as cybersecurity risks, in the telecoms sector (and across its vendor supply chain) by engaging with industry.
  • Researching how AI can also be used to tackle cybersecurity risks across the vendor supply chain.
  • Engaging with domestic and international regulatory forums, carrying out horizon scanning for emerging AI developments, and monitoring markets to understand AI’s impact, including in areas such as competition.
  • Continuing to build AI capabilities within Ofcom in order to leverage AI in its own operations.

While Ofcom’s strategic approach to AI for 2024/25 outlines a number of important means of tackling issues around AI, the plan at present remains somewhat high-level, lacking detail on how much of it will actually be implemented. While it acknowledges the potential benefits and risks of AI across the sectors Ofcom regulates, specific mechanisms for translating these insights into concrete measures are largely absent.

Section 5 of the document also outlines the various domestic and international AI-related initiatives in which Ofcom is involved. These include the Global Online Safety Regulators Network (“GOSRN”), which facilitates the exchange of insights on AI-related risks, particularly in relation to online safety for children, and networks such as the European Platform of Regulatory Authorities (“EPRA”), which enables knowledge sharing on AI tools in broadcasting regulation. While these collaborations aim to enhance regulatory efforts and promote innovation, they can also create uncertainty regarding the harmonisation of regulatory approaches globally. This poses a separate challenge for the effective governance of AI technologies, particularly where disparate approaches to regulation in and of themselves make co-operation difficult.

It therefore remains to be seen how Ofcom will actually carry out many of these proposals effectively.

There is also limited emphasis on raising consumer awareness (a case in point being that the document was not even given its own news item on Ofcom’s website, instead being hidden alongside Ofcom’s overarching Plan of Work). Initiatives focused on empowering consumers’ understanding of AI-related issues are crucial for fostering a resilient ecosystem. Educational campaigns and user-friendly resources that clearly set out the risks and benefits of AI technologies are important in enabling UK users (particularly those most vulnerable to the risks AI poses) to make informed choices. Importantly, this would not only enhance consumer protection but would also promote responsible AI use among service providers, as enhanced end-user awareness of risks often increases the priority providers place on addressing them.

References

[1] An umbrella term for video, image, text, or voice that has been generated in whole or in part by AI.

[2] The use of AI to personalise services for UK users, which can affect the discoverability of UK and public service content and amplify illegal and harmful online content.

[3] This relates to the use of generative AI to develop malware, identify network vulnerabilities, or create guidance on how to breach security. Conversely, overreliance on generative AI can itself lead to outages, particularly where poorly trained AI is used or where the widespread use of AI-generated code leads to vulnerabilities.