Disruptive Competition Project


A Look Into the Global AI Regulatory Landscape

February 16, 2024

Artificial intelligence (AI) is a highly versatile technology. From one program’s application in a pastry shop to its later employment in detecting cancer cells, AI is truly integrating itself into many diverse aspects of our world. However, the rapid rise of AI has also intensified the need for governments to strengthen their capabilities to understand, leverage, operate, and, when circumstances call for it, regulate this technology. As a result, a regulatory tension is growing between jurisdictions, spanning comprehensive and sector-specific legislation as well as voluntary guidelines and standards.

In June of last year, Josh Landau discussed AI legislation in the U.S. and the effects it could have on the industry. And in late October, the Biden-Harris Administration finally released its long-anticipated Executive Order (EO) on AI. In the wake of the NIST Risk Management Framework, as well as the recently released EO and the Blueprint for an AI Bill of Rights, it appears the White House is taking a thoughtful, risk-based approach to regulating AI. While the U.S. is taking a cautiously optimistic stance when it comes to AI, other regimes are moving forward with broader and more burdensome proposals that could very well hinder this burgeoning technology’s development. In Europe, EU member states recently approved the final wording of the AI Act. However, while the EU sees the potential of AI through its own competitive AI ecosystem, many of the new AI rules remain unclear and could obstruct the development and roll-out of innovative AI applications across Europe.

Moreover, Europe and the U.S. are by no means the only two players in this regulatory rivalry. The UK, Canada, Israel, and many other countries and territories are all pursuing some form of AI regulation.


The Biden-Harris Administration is acutely aware of this technology’s importance and the growing regulatory rivalries, and took action on the matter before the newest EO by publishing the Blueprint for an AI Bill of Rights and issuing an EO on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. The White House EO on AI does raise some concerns, but it also represents a positive step forward in AI governance and bodes well for future AI policy.

The EO on AI has quite a broad scope, directing a number of actions to maximize AI’s potential. Among its many provisions, it proposes new privacy, safety, and security standards, assigns agencies to develop and share reports and guidelines for the responsible use of AI, and encourages attracting global talent. Commenting on the Biden Administration’s EO on AI, Senators Schumer and Young announced they are also working on legislation that would build upon the Executive Order from the President. Furthermore, the White House recently announced that “top federal officials working to carry out the AI EO say their agencies have completed all of the 90-day actions asked of them.” And most recently, at State of the Net 2024, Congressional AI Caucus Vice Chair Don Beyer said that there is a bipartisan plan from House leaders to create an informal AI task force and pass several AI bills in 2024. These efforts spotlight how quickly officials are moving to keep pace with the growth of the technology.

In addition to the federal government, many states are pursuing AI legislation as well. States are introducing legislation to regulate AI in a number of sectors, with pieces of legislation often falling into one of five key categories:

  1. Establishing AI development standards;
  2. Labeling various content generated by AI applications;
  3. Regulating suggestive algorithmic feeds;
  4. Pursuing studies or creating task forces to determine how best to regulate AI; and
  5. Enhancing oversight of algorithmically-informed decision-making across broad sectors via comprehensive AI Bills of Rights.

The focus of these pieces of legislation ranges from protecting children online to ensuring the integrity of our elections. Several states are taking an especially close look at this topic. For example, last year Connecticut Senator James Maroney led a multi-state AI task force, which expanded to include lawmakers from state legislatures across the country. Last year, California, which tends to be among the first states to propose regulations on technology, began contemplating legislation such as AB 331, regarding automated decision tools. Now, the California legislature has over ten proposals centered on AI, and Governor Gavin Newsom signed an executive order “to study the development, use, and risks of artificial intelligence (AI) technology throughout the state and to develop a deliberate and responsible process for evaluation and deployment of AI within state government.” As most states are only at the beginning of their 2024 legislative sessions, the legislative landscape at the state level is likely to continue to evolve before most sessions adjourn this summer.

Overregulating this sector risks stagnating AI’s evolution in the United States. AI governance must strike a balance that ensures the safety and security of citizens while promoting innovation and competition, advancing U.S. AI leadership around the world.


Besides the U.S., another regulatory model contributing to the current landscape is the European Union’s AI Act. EU member states recently approved the final wording of the AI Act, which was proposed by the European Commission three years ago. The AI Act aims to create risk-based rules on how AI is developed and used in the EU. Certain practices deemed unacceptable will be entirely prohibited, while AI systems deemed high-risk will be subject to strict requirements. The AI Act deviates from its risk-based approach by imposing strict obligations on developers of foundation models, the powerful models that underpin all kinds of different applications, including generative AI. The vote for the Act’s formal adoption could occur in March or April.

Boniface de Champris has previously addressed the European AI Act, noting the rules lack clarity and could seriously damage Europe’s economy. Despite improvements to the final text, many of the new AI rules remain unclear and could slow down the development and roll-out of innovative AI applications in Europe. Europe has a highly competitive AI ecosystem; the Act’s proper implementation will therefore be crucial to ensuring that AI rules do not overburden companies in their quest to innovate and compete in a thriving, highly dynamic market.  


The UK also aspires to hold a positive, leadership role in artificial intelligence. Michelle Donelan, Secretary of State at the Department for Science, Innovation and Technology (DSIT), told a House of Lords Committee that the Government has taken a “pro-innovation” approach to AI, addressing risks transparently rather than turning away from AI. She pointed to steps to that end including the AI Opportunities Forum, the AI Safety Summit and international partnerships such as a Memorandum of Understanding with Canada around compute capacity.

The UK does not yet have a landmark AI law. However, Donelan and other Ministers point to the Summit and the AI Safety Institute as firsts that show the UK is at the cutting edge of addressing emerging risks. Instead of setting out new legislation or creating new regulators, the intent is to use existing laws and existing regulatory bodies to respond to diverse risks as they emerge.

However, this does not mean the UK faces no challenges. The Competition and Markets Authority is being given extensive new powers to intervene in digital markets and, if the right checks and balances are not included, there is a risk that premature or overly broad regulation might inhibit the development of new services with new business models that could benefit consumers. And while the UK does have a fair dealing exception, it lacks a more general provision akin to the “fair use” doctrine in the U.S., so updating UK copyright law depends on formal changes that are struggling to move forward.

More generally, it is still not clear whether the UK’s intention to provide a flexible, pro-innovation approach to AI will survive a slow accretion of restrictions from different regulations and regulators, or pressure for a larger comprehensive regulatory response. Some parliamentarians are conscious of the risks here. Baroness Stowell, Chairman of the House of Lords Communications and Digital Committee, warned that “we need to be proportionate and practical” in addressing risks and “not miss out on opportunities”. She noted that excessive caution might “exclude smaller players from developing AI services.”


In September, Canada’s Minister of Innovation, Science and Industry, François-Philippe Champagne, voiced an ambition for Canada to be the first country to have AI regulations in place to “inspire the rest of the world.” Although that goal is now unreachable after the EU’s agreement on the AI Act, Canada’s Parliament continues its work to advance what it hopes would be a template for global AI regulation in Bill C-27, the Digital Charter Implementation Act, which itself has been under consideration since June 2022.

Bill C-27, however, introduces new challenges and concerns. The bill is split into two parts—the first section overhauls the privacy rules in Canada, and the second incorporates the Artificial Intelligence and Data Act, which seeks to establish “common requirements, applicable across Canada, for the design, development and use” of AI systems. 

Artificial intelligence systems are defined with a broad brush as any technological system that, “autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.” Many of the bill’s definitions, such as “high-impact AI systems” or “person responsible,” are left opaque or undefined, leaving room for interpretations that could lead to disclosure of trade secrets, excessive punishments for innovators, and restrictions on services trade for online programs.

Even more concerning, in October 2023, the government stated its intent to include AI used in the “moderation of content that is found on an online communications platform, including a search engine and a social media service” or the “prioritization of the presentation of such content” under “high-impact” AI systems. This unique inclusion is notable, as it could undermine online services providers’ activity in the Canadian market given the potential broad-sweeping applicability of such a category. 

Bill C-27 is still being studied by the House of Commons Standing Committee on Industry and Technology. Once it is voted out of committee, it will require approval from the House of Commons and the Senate (which would need to hold a committee process of its own, during which the bill could undergo further amendment) before it can receive Royal Assent.

The lack of clarity in Bill C-27 and its sweeping application raises concerns that this legislation will introduce an overly burdensome regulatory framework, which would in turn endanger interoperability across the continent for services subject to its obligations. Some of the companies leading innovations in AI warned the Canadian Parliament of these concerns earlier this month, with Meta’s representative telling the Committee that the law could result in the company being unable to roll out certain services and products in Canada. As such, if it remains as is, Bill C-27 could undermine the development of a growing and innovative field by creating regulatory uncertainty and a marketplace hostile to innovative practices.


The pressures and potential of AI are pushing many regulators forward in a race to regulate, often due to ill-defined and unproven fears of AI’s harms. As such, varying degrees of AI regulation are being pursued in jurisdictions across the globe. Some jurisdictions are taking a measured approach to artificial intelligence; others a more active yet still thoughtful one. Still other jurisdictions, however, are crafting AI regulations with overly broad and burdensome provisions. This may hamper the factors necessary for AI’s development and prop up legacy industries, sheltering them from the need to innovate. Hobbling AI could cement incumbents’ positions in the market, hamper new and innovative businesses, and harm global competition.

Certain regulatory actions regarding the future of artificial intelligence are of course quite reasonable and necessary, as AI has the potential to reshape society in countless known and unknown ways. The growth of AI will continue to raise questions; however, any regulation must be tempered with thoughtfulness and a clear understanding of this technology to ensure that markets can continue to facilitate competition and innovation.


New technologies are constantly emerging that promise to change our lives for the better. These disruptive technologies increase choice, make technologies more accessible, make things more affordable, and give consumers a voice. And the pace of innovation has only quickened in recent years, as the Internet has enabled a wave of new, inter-connected devices that have benefited consumers around the world, seemingly in all aspects of their lives. Preserving an innovation-friendly market is, therefore, vital not only to businesses but to society at large.