August 25, 2023
In an open letter published in March 2023, technology leaders including Tesla’s Elon Musk, Apple co-founder Steve Wozniak and 33,000 other signatories called for a six-month moratorium on the development of advanced AI systems. The letter urged AI developers and policymakers to work together to develop robust AI governance structures and capable regulatory authorities dedicated to AI that provide appropriate oversight of AI systems and ensure transparency, particularly in relation to AI training data.
While no government has heeded the call for an absolute moratorium (nor, indeed, have many of the industry players who called for one), the months since the open letter have seen several significant regulatory developments in the UK, the US and at EU level, embodying distinct approaches to implementing AI regulation. The initial headline-grabbing call for a moratorium gave way to a second open letter, signed by OpenAI CEO Sam Altman, calling for joined-up regulation and forward thinking.
Despite the differences in approach between governments and regulators on either side of the Atlantic, a common theme has been to identify and foster data transparency, both in relation to the data used to train AI models and to the outputs of those models, to protect against AI systems that could perpetuate “wrong” decisions, misinformation, distortion, bias and discrimination.
Transparency has been at the forefront of AI regulatory discourse since its earliest days, so this is not news for AI developers. For example, the House of Lords Select Committee on Artificial Intelligence’s 2018 report, ‘AI in the UK: Ready, Willing and Able?’, identified trust and transparency as going hand in hand, observing that widescale adoption of AI systems would not occur unless those systems were trusted. The committee also recognised that while complete transparency may not always be possible, it should be required where fundamental rights are put at risk.
Italy’s temporary ban on ChatGPT, imposed on 1 April 2023 over privacy concerns, was one example of a regulator addressing the impact of AI systems on fundamental rights. The Italian data protection authority said OpenAI had no lawful basis to collect and store individuals’ personal data to train its AI models, and highlighted concerns around data accuracy in the context of the GDPR, the ePrivacy Directive and the Charter of Fundamental Rights. In late April 2023, the authority lifted the ban after OpenAI implemented changes addressing the most pressing data privacy concerns.
UK developments
The UK’s Pro-Innovation Approach to AI regulation is underpinned by five guiding principles: (1) safety, security and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress. In contrast to the approach taken in the EU, UK policymakers want to establish a non-statutory framework and make use of regulators’ domain-specific expertise to tailor the implementation of these principles to the specific contexts in which AI is used. The intent is a less rigid and onerous approach to AI regulation, promoting innovation through proportionate requirements.
On 29 March 2023, the UK Government issued a call for comments on its AI White Paper. We comment here particularly on the responses from the UK’s Information Commissioner’s Office (ICO) and Competition and Markets Authority (CMA).
The ICO, in its response, recognised the importance of close collaboration with the UK Government so that the AI principles are interpreted and applied in a way that is compatible with, and supports, UK data protection law, and agreed with the government’s context-specific, risk-based, coherent, proportionate and adaptable approach to AI governance. In the absence of proposed AI-specific legislation in the UK, data protection and privacy are likely to be key areas of concern for businesses providing or using AI systems in the UK market, and the ICO will therefore need to play a leading role in providing clarity as the business ecosystem adopts these new technologies.
The CMA, in its initial review, explained how it would apply the AI regulatory principles in line with existing competition and consumer rules: by supervising how AI developers implement appropriate security and testing to ensure robust systems with transparent and explainable decision-making processes, by ensuring a competitively fair market in which persons are held accountable for the effects of AI systems, and by protecting consumers. While the CMA recognises that other institutions may be better placed to address some of these issues, its focus is to ensure that AI systems develop in a way that benefits consumers, businesses and the UK economy. The CMA expects to intervene where consumers cannot assess the technical functionality or security of an AI system well enough to exercise their consumer rights, or where AI systems develop in a way that harms the UK market.
On 4 May 2023, the CMA launched its own consultation regarding the development of AI foundation models, with the objectives of examining how competition in the market could evolve, exploring the opportunities and risks these scenarios could bring for competition and consumer protection, and producing guiding principles to support competition and protect consumers as AI foundation models develop. The CMA is expected to report on its findings in September 2023.
The UK’s approach could be described as leaving regulators to take the lead in regulating. The views regulators have expressed at this stage might therefore be seen as their way of establishing their roles in the emerging AI economy, albeit viewed through their own familiar lenses; it is unsurprising that a data privacy regulator sees privacy as the greatest risk arising from AI systems. Two key unanswered questions for the UK are therefore: (1) who will have final oversight, and (2) what happens when a data-centric approach to AI conflicts with one concerned with market distortion?
The EU AI Act
In April 2021, the European Commission published its proposal for a Regulation laying down harmonised rules on artificial intelligence, intended to create a horizontal, risk-based approach to regulating AI. The Council of the European Union reached consensus on its “general approach” to the draft legislation on 6 December 2022, and the European Parliament adopted its negotiating position on the AI Act on 14 June 2023.
The EU AI Act will classify AI systems into unacceptable-, high-, limited- and minimal-risk categories, and introduces restrictions and compliance requirements for high-risk AI systems and tools, which must be submitted for review before they are placed on the market, including:
- adequate risk assessments and having appropriate mitigation mechanisms in place;
- record-keeping of all documentation necessary to operate the AI system, supporting compliance and accountability;
- data quality controls, using trusted and accurate data to minimise risks of bias and discrimination;
- appropriate human oversight measures in place to minimise risk, including ongoing monitoring and evaluation to identify and prevent potential biases and errors; and
- a high level of robustness and security to minimise cybersecurity risks or potential data breaches.
The EU AI Act is likely to be agreed by the end of 2023, with a transitional period during which standards will be mandated and developed and with the new rules expected to take effect across EU member states in the second half of 2024 (with likely commercial and reputational pressure for early adoption). The EU AI Act will have extraterritorial reach: any AI system placed on the market or put into service in the EU, or whose output is used within the EU, will be subject to the proposed rules, covering both users and providers of such systems regardless of their location.
The scope of regulated systems under the EU AI Act is also intentionally broad, to keep pace with the rate of innovation in the sector. Rather than defining AI systems by reference to specific technologies, the definition is approach-led, encompassing any software developed using machine learning, logic/knowledge-based or statistical approaches. As a case in point, despite being drafted before the explosive growth in ‘generative’ AI, the wording captures recent advances in transformer-based models because of their reliance on deep learning architectures.
Developers and adopters of AI systems across many sectors have raised serious concerns about the proposed EU AI Act. Over 150 high-ranking executives from various companies potentially impacted by the proposed rules, including major European telecoms operators such as Deutsche Telekom and Orange, have signed an open letter expressing concerns that the proposed rules will impose prohibitive compliance costs and liability risks that they argue will likely hinder AI developments in the EU. While they recognise the need for regulation, their concern is that the prescriptive approach set out in the EU AI Act will harm the EU’s competitiveness in the AI space compared to other jurisdictions that afford relatively more leeway for participants to explore and use AI systems.
Meanwhile, the content sector has expressed concerns about the potential for AI-generated visual and audio effects to be classified as “deep fakes” and/or made subject to burdensome transparency requirements that might disrupt the viewer experience. At the same time, copyright owners have expressed concerns about transparency in the use of training data. MEPs and many rightholders argue that the opt-out right in the text and data mining exception established by Article 4 of the DSM Copyright Directive is unenforceable without a transparency obligation on AI developers. The lead committees decided to address this issue by introducing a new Article 28b(4) into the AI Act; their report, adopted on 14 June 2023, would impose a number of obligations on providers of foundation models used in AI systems specifically intended for generative AI, and on “providers who specialise a foundation model into a generative AI system”.
The EU AI Act is now subject to negotiation in the so-called trilogue between the three institutions, following which it will proceed to final approval by the Council and the European Parliament. The next trilogue meeting is currently scheduled for 3 October 2023. A number of EU member states, including Germany and Spain, are already taking steps to establish controlled environments for testing compliance with the proposed EU AI Act.
Standards for content provenance
The Coalition for Content Provenance and Authenticity (C2PA), formed out of an alliance between Adobe, Arm, Intel, Microsoft and Truepic, seeks to address the prevalence of misinformation online through technical standards that can be used to certify the provenance of media content. The C2PA specification supports a range of media formats, including images and video, which can be published with cryptographically secured content provenance information based on the C2PA standard. In an environment where it is becoming increasingly difficult to determine content provenance, particularly as AI tools can scrape publicly available content and generate new content at scale, initiatives such as the C2PA will be important in giving the AI industry the common standards it needs to enable compliance and transparency.
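To illustrate the underlying idea, the sketch below shows, in simplified form, how provenance claims can be cryptographically bound to a media asset: a manifest recording the claims is tied to the asset by its hash and then signed, so that tampering with either the asset or the claims becomes detectable. This is a minimal sketch only; the field names, manifest structure and key handling here are assumptions for the example and do not follow the actual C2PA specification, which uses structured assertions and X.509 certificate chains.

```python
# Illustrative sketch only: a simplified stand-in for the C2PA approach,
# not the actual C2PA manifest format or toolchain. Field names and the
# manifest structure are hypothetical. Requires the 'cryptography' package.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def build_manifest(asset_bytes: bytes, creator: str, ai_generated: bool) -> dict:
    """Bind provenance claims to the asset via its cryptographic hash."""
    return {
        "claim_generator": creator,  # hypothetical field names
        "ai_generated": ai_generated,
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }


# The publisher signs the manifest so any later edit to the asset or its
# claims invalidates the signature or the embedded hash.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

asset = b"...image or video bytes..."
manifest = build_manifest(asset, creator="ExampleCam v1.0", ai_generated=False)
payload = json.dumps(manifest, sort_keys=True).encode()
signature = private_key.sign(payload)

# A consumer re-hashes the asset and verifies the signature against the
# publisher's public key (in C2PA, keys chain to X.509 certificates).
try:
    public_key.verify(signature, payload)
    intact = manifest["asset_sha256"] == hashlib.sha256(asset).hexdigest()
    print("provenance verified" if intact else "asset was modified")
except InvalidSignature:
    print("manifest has been tampered with")
```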
It will be important for industry to monitor closely further legislative developments, regulators’ reactions to these proposed AI rules, and any further guidance on the development and use of AI systems.
The Wiggin team continuously monitors regulatory developments affecting telecoms operators and service providers across the world, including those concerning the deployment and supply of AI-driven products and services. Get in touch if you’d like to discuss your AI-related projects further; we’d be delighted to assist.