
AI Act Negotiators Shouldn’t Forget Critical Provisions, As Foundation Models Overshadow Debate

Main takeaways

  1. EU negotiators are running out of time to discuss key AI Act rules, rushing to a half-baked deal
  2. The rules and use cases to classify high-risk AI systems remain ill-defined and flawed
  3. AI bans could kill off useful services that are beneficial to European citizens

It appears the European institutions want to reach a final political deal on the Artificial Intelligence (AI) Act on 6 December at any cost. The problem, however, is that in recent weeks negotiations have focussed almost exclusively on regulating foundation models. But those powerful AI models – on top of which many downstream applications are built – were not even part of the original AI Act proposal. 

Foundation models have been overshadowing the debate mainly because countries such as France, Germany, and Italy finally came out against the very strict requirements proposed for such models. Better late than never, of course. For months, industry groups have been insisting on the need to focus on AI applications rather than on the technology itself, and emphasising the disastrous effects of overly strict rules targeting the underlying technology rather than its use cases. 

While it is encouraging to see that policymakers have embarked on an intense, but meaningful, discussion on this important issue, they should not forget all the other critical parts of the AI Act that also require thorough debate as we get closer to 6 December. 

1. Classification of high-risk AI systems

The key rules for the classification of high-risk AI systems, which are at the very heart of the AI Act and subject to strict requirements, should also top the agenda. 

Negotiators have agreed on a framework that allows companies to (exceptionally) be exempted from the requirements if they meet a restricted set of justifications, for example by demonstrating that a system performs only narrow and uncomplicated procedural tasks. 

However, EU negotiators have unfortunately decided to exclude all systems that perform profiling from the scope of this exemption. This questionable last-minute change risks classifying low-risk AI systems, such as those used for delivery and ride-sharing services, as high-risk systems without any real justification. Furthermore, this change directly conflicts with the General Data Protection Regulation (GDPR), the EU’s overarching privacy framework.

2. List of high-risk use cases

The list of high-risk use cases is another crucial part of the AI Act, yet it still awaits further discussion. In its current shape, this list includes very worrying and unnecessary use cases that overlap with existing rules and would only create more bureaucracy and confusion. 

For example, very large online platforms’ recommender systems are already regulated by the Digital Services Act (DSA), and new EU rules for political advertising were recently agreed. The same applies to the Platform Work Directive (PWD), which covers AI and algorithms used by digital platforms to a large extent. 

Why should EU lawmakers even invest time and energy in defining sectoral rules, when others will almost immediately rewrite them anyway? How are businesses supposed to adapt to and implement overlapping and conflicting rules? The EU is supposedly committed to “better regulation”, but these amendments would clearly run counter to its most basic principles.

3. Controversial prohibitions

Last but not least, negotiators should not forget about the very controversial list of prohibitions. Bans on products and services should remain exceptional and clearly targeted at unacceptable use cases. It is as simple as that.

Nevertheless, biometric-categorisation systems – used to infer the age of internet users in order to protect children and fight against child sexual abuse – would simply be banned outright by the AI Act text currently on the table. Remote biometric-identification systems, which can be used for security but also entertainment purposes (such as voice recognition), would be banned as well. 

Such prohibitions need to be discussed in detail in order to avoid killing off useful services that are clearly beneficial to European citizens. But will there be any time left between now and 6 December?

The discussions on foundation models and general-purpose AI of recent weeks are important and can only be welcomed. However, EU negotiators should not forget all other important parts of the text, especially as it looks like they are rushing towards a deal at any price. 

Adopting the AI Act just for the sake of politically opportune headlines would be a waste of time for all parties involved and would seriously damage Europe’s economy. As we like to say in Brussels, speed should not prevail over quality. It might be a cliché catchphrase, but it is a mantra that negotiators should nevertheless take to heart.
