European Union reaches provisional agreement on the Artificial Intelligence Act.
MEPs and negotiators for the Council of the EU have reached a provisional agreement on the Artificial Intelligence Act, concluding the trilogue negotiations after a “marathon” round of talks.
Background
The proposal has been a long time coming: the European Strategy on AI and the Coordinated Plan on AI were published in 2018, the Commission’s White Paper on AI in 2020 and the first proposal for the AI Act in April 2021. The agreed text still has to be formally adopted by both the Parliament and the Council before becoming law, a process that is unlikely to be smooth. French negotiators attempted to modify the provisional text during the discussions, and there are indications they will look to block its adoption into law.
Summary of regulation
The AI Act has been developed around a hierarchy of risk with four levels: ‘unacceptable risk’, ‘high risk’, ‘limited risk’, and ‘low & minimal risk’. Regulation is matched to the level of risk: unacceptably risky uses of AI are prohibited, high-risk applications are regulated, limited-risk applications are subject to transparency requirements, and no obligations attach to low & minimal-risk use. The EU suggests that the “vast majority” of AI systems fall into this last category, but notes that companies may nonetheless commit to additional codes of conduct for these AI systems.
Should the Act be adopted in its present form, the following applications of AI would be banned for being unacceptably risky:
- biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- emotion recognition in the workplace and educational institutions;
- social scoring based on social behaviour or personal characteristics;
- AI systems that manipulate human behaviour to circumvent their free will;
- AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).
There are, however, some narrowly defined exceptions for the use of biometric identification systems by law enforcement, limited to cases where individuals are suspected of specific crimes named in the regulation.
Transparency requirements would be introduced for general-purpose AI (GPAI) systems. These include producing technical documentation, complying with EU copyright law and communicating what data the models are ‘trained’ on. The final point addresses an uncertainty we previously raised in our article on Copyright & AI: the requirement to disseminate “detailed summaries about the content used for training” may work to mitigate the use of copyrighted material without consent, and thereby encourage licensing and the flow of remuneration to creators and rights holders. Rights holders may be pleased to see this requirement, but questions remain about whether it goes far enough to prevent infringement effectively.
A stricter regime is proposed for ‘high-impact’ foundation models, to prevent systemic risks from propagating along the value chain. A ‘fundamental rights impact assessment’ will be required before a high-risk AI system is put on the market, and some users of high-risk AI systems will have to register in the EU database for high-risk AI systems.
Enforcement
Breaches of the regulation are to be punished by fines, set at whichever is the higher of a fixed sum or a percentage of global annual turnover:
- €35m or 7% of global annual turnover for violations of banned AI applications,
- €15m or 3% of global annual turnover for violations of the Act’s obligations,
- €5m or 1.5% of global annual turnover for the supply of incorrect information.
More proportionate caps are proposed for SMEs and startups in the event of regulatory breaches.
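To make the arithmetic concrete, the short sketch below computes the applicable ceiling under the “whichever is higher” rule reported for the provisional agreement. The function name and the example figures are our own, purely illustrative, and the final text may alter the tiers.

```python
# Illustrative sketch only: tier values are those reported for the
# provisional agreement, and the "whichever is higher" rule follows the
# Council's announcement; the final adopted text may differ.

def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Return the higher of the fixed cap and the given percentage
    of global annual turnover."""
    return max(fixed_cap_eur, pct_cap * turnover_eur)

# Hypothetical example: a firm with EUR 2bn global annual turnover that
# deploys a banned AI application faces max(EUR 35m, 7% of EUR 2bn) = EUR 140m.
print(fine_ceiling(2_000_000_000, 35_000_000, 0.07))  # 140000000.0
```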
The Act establishes an AI Office and an AI Board. The Office will sit within the Commission and will oversee the most advanced AI models, develop standards and enforce the regulation. Advice on foundation models will be provided by a scientific panel of independent experts. The Board is to be composed of member states’ representatives and will act as a coordination platform and as an advisory body to the Commission, with technical expertise provided by an advisory forum of stakeholders. Citizens will be able to lodge complaints about AI systems with the relevant market surveillance authority.
Promoting innovation
Alongside restriction, the proposal also seeks to create a legal framework that is “more innovation friendly” and will promote “evidence-based regulatory learning”. AI regulatory sandboxes, outlined in the initial Commission proposal, are controlled environments in which AI products can be tested in conditions that mimic those of the end user. The proposal provides that these sandboxes should be permitted and that testing of AI in real-world conditions (i.e. outside the sandbox) should also be allowed, subject to specific conditions and safeguards. This is an effort to ensure that the proposed regulation does not impair the development of AI systems in the EU relative to the rest of the world.
Criticism
It is notable that President Macron’s criticism of the Act focussed on reductions to EU competitiveness: he noted that the UK and France are currently “neck and neck” on AI development, that the UK will not be subject to regulation of foundation models, and that “we are all very far behind the Chinese and the Americans.”
His criticism summarises one side of the reaction to the Act. On the other, it has so far been criticised for introducing loopholes that may, for example, allow high-risk applications to fall outside the regulation’s scope, or encourage larger companies to house their riskiest AI projects in startups so as to limit exposure to substantial fines.
Assessment
The Act is the first effort to establish an EU-wide regulatory framework for AI, and in being first the EU has taken on a difficult challenge. The Union has sought to strike a balance between protecting the general public (who will constitute the bulk of end users of AI systems) and not rendering European AI development uncompetitive against the rest of the world. The global AI market was worth just over $450Bn in 2022 and is projected to exceed $1,800Bn by 2030; that is simply too large a market for the EU to lose out on. In this regard, the risk-based approach seeks to regulate uses of AI rather than the technology itself, and the provision for sandbox and regulated real-world testing illustrates the effort to reconcile innovation with the control of risk. Attention is focussed on datasets as the foundational element of AI systems, avoiding the futile task of predicting the course of AI development.
It remains to be seen to what extent the text will be further modified as it passes through Parliament. Should the proposal progress into law, the efficacy and nature of enforcement processes will, naturally, be crucial. So too will be the international response: without similar regulation or collaboration elsewhere, European AI development may be impeded to the detriment of EU companies. On the other hand, there is a clear effort to protect the creative industries and content producers in Europe. Work at the technical level, which will proceed in the coming weeks, will be vital to the effective forward development of the Act.