
Balancing AI Innovation and Regulation: Why the EU (Still) Needs a True Risk-Based Approach

Credit: EyeEm Mobile GmbH

Main takeaways

  1. EU policymakers should avoid a one-size-fits-all approach to regulating AI 
  2. Overlapping enforcement structures hinder innovation and damage competitiveness
  3. Fortunately, EU lawmakers have several tools at their disposal to fix these issues 

The EU’s AI Act is the first global policy framework to regulate artificial intelligence (AI) comprehensively, aiming to ensure AI systems are safe and trustworthy. It was originally supposed to introduce a risk-based approach, tailoring the strictness of rules to the risk level posed by certain AI applications. 

In reality, however, the agreed legal text suffers from shortcomings. These could become even more apparent in 2025, as implementation of the AI Act takes proper shape in the coming months, including the finalisation of missing elements such as the code of practice for providers of general-purpose AI (GPAI) models. 

Indeed, if done properly, implementation can solve many of the Act’s flaws. This is also something that Martin Ebers explains very well in his study Truly Risk-Based Regulation of Artificial Intelligence: How to Implement the EU’s AI Act, in which Ebers calls for better risk/benefit analysis as well as a clearer and simplified approach to enforcement.

1. No comprehensive risk/benefit analysis

A central flaw in the final AI Act that was adopted last year is its failure to incorporate a proper risk/benefit analysis – a crucial component of any genuine risk-based regulation. As noted by Ebers, the AI Act focuses heavily on mitigating risks related to health, safety, and fundamental rights, without considering the potential benefits of AI in these areas. 

This one-sided approach is likely to stifle innovation and progress. For instance, in healthcare, an overly cautious stance towards AI-based medical devices could delay or block the introduction of innovative solutions that might save many lives. The explosion of use cases for GPAI models illustrates how the Act’s one-size-fits-all approach to AI regulation could do more harm than good. 

Still in the drafting phase today, the code of practice for providers of GPAI models will have to balance risks appropriately, and give both providers and deployers the legal certainty they need to innovate and adopt AI. Additionally, the European AI Office’s forthcoming guidelines on the rules and obligations for GPAI deployers who adapt existing models to their specific needs (commonly referred to as ‘fine-tuning’) should ensure that the AI Act’s obligations do not overburden a much broader range of companies than originally intended.

2. Overlapping enforcement

The broad scope of the AI Act also comes with complex enforcement structures. As Ebers explains, the Act operates alongside existing EU laws, creating potential regulatory redundancies. A single AI application could be subject to multiple regulations, enforced by different national authorities or even by newly created administrative bodies. This overlap will lead to confusion, inefficiencies, and duplication of effort, and will ultimately deter AI innovation within the EU. 

This notion was also reflected in the landmark Draghi report, which notes that inconsistencies with the provisions of the AI Act create the risk of European companies being excluded from early AI innovations because of uncertainty surrounding the EU regulatory framework. Without clear mechanisms for cooperation between these bodies, businesses may face double regulatory burdens – increasing costs and slowing innovation.

One way to tackle overregulation and ensure a truly risk-based approach to AI would be to develop an adaptive and responsive regulatory framework, keeping regulations proportionate to the risks posed by AI systems. 

What Ebers proposes is a sector-specific approach that would allow for the tailored assessment and adoption of regulations, based on the specific risks associated with different AI applications in various sectors. Such an approach would provide clarifications on which governing body should serve as the lead authority and how the various bodies should cooperate to help streamline the complex AI enforcement structure.

3. Proposed improvements

To truly reflect a risk-based approach, the AI Act needs substantial refinement. One critical solution is the introduction of a comprehensive risk/benefit analysis that assesses not only the potential risks AI poses, but also the societal benefits it can bring. 

Ebers highlights that the European Commission can address this by issuing clearer guidelines for classifying AI systems as ‘high-risk’ and by revisiting the use cases listed in the AI Act. This would enable the Commission to evaluate not only the possible risks that certain AI systems might pose, but also their economic and social advantages. Such a balanced approach is essential to weigh potential risks against the innovation AI can unlock.

Ebers also urges the Commission to carry out an in-depth analysis to identify overlaps and contradictions with other digital regulations. This would avoid duplicative regulatory burdens and excessive enforcement. Clarifying the interaction between the AI Act and existing EU laws is crucial for eliminating regulatory barriers that currently hinder AI innovation in Europe.

Conclusion

While the EU AI Act is an ambitious step towards regulating artificial intelligence, it requires substantial refinement to fulfil its promise as a risk-based framework. With 2025 being a pivotal year for the Act’s implementation, the EU can still fix its policy approach to AI. 

Ebers argues that the Commission is equipped to make these necessary adjustments through delegated acts, harmonised standards, and codes of practice, which could clarify and amend the Act’s provisions. This flexibility is essential to align the AI Act with the rapidly evolving AI landscape, ensuring it supports both innovation and public trust. Now is the time for a strategic shift that embraces innovation while safeguarding public interests.
