AI Safety or Innovation Freeze? – What New York’s RAISE Act Gets Wrong
The Responsible AI Safety and Education Act (RAISE Act), recently passed by the New York State legislature, attempts to address fundamental risks posed by frontier AI models. Yet by imposing obligations solely on the developers of such models, it neglects the broader AI value chain, chilling innovation without meaningfully improving safety.
On 12 June 2025, the New York State Legislature passed the RAISE Act, a bill now awaiting approval from Gov. Kathy Hochul. The measure follows the federal One Big Beautiful Bill Act, which originally contemplated a ten-year moratorium on state-level AI regulation but ultimately dropped the freeze. Against this backdrop, the RAISE Act marks the first major foray into the regulation of frontier AI in New York, and indeed across the US, since it applies not only to all frontier models developed in the state but also to any model made available to New York residents.
Taking inspiration from the EU’s AI Act and California’s SB 1047—vetoed by Governor Gavin Newsom in 2024 over concerns that it would stifle innovation—the RAISE Act seeks to ensure the safety of “frontier models.” These are defined as AI models that cost more than $100 million to train and exceed a specified computational threshold. The statute aims to prevent misuse that could cause “critical harm,” defined as the death or serious injury of at least 100 people, or economic damage of $1 billion or more. Particular concern is directed towards potential use in chemical, biological, radiological, or nuclear weapons development, as well as (semi-)autonomous conduct by frontier models that, if committed by a human, would constitute serious crimes under New York’s penal code.
Under the RAISE Act, before releasing a frontier model, a developer must:
- implement safety and security protocols, including for risks created by third-party uses of the model outside of the developer’s control,
- conspicuously publish those protocols and transmit them to the Division of Homeland Security and Emergency Services, and
- implement safeguards to prevent unreasonable risks of critical harm, even though the Act offers no criteria for what counts as an appropriate safeguard and risks arising from speculative third-party uses are all but impossible to assess.
If unreasonable risks cannot be prevented through safeguards, the Act prohibits the release of the model altogether. Enforcement lies with the Attorney General, who may impose civil penalties of up to $10 million for an initial violation and up to $30 million for repeat violations.
A One-Sided Allocation of Responsibility
The RAISE Act imposes all obligations on frontier developers, without extending duties to other actors in the AI value chain, such as those who fine-tune or modify models, integrate them into AI systems, or use them as end users.
This stands in contrast with the EU’s AI Act, which—though criticized for its complexity, its heavy compliance costs and for not following a truly risk-based approach—at least distributes obligations across multiple categories of actors. By regulating only model developers and holding them liable even for third-party misuse, the RAISE Act creates an unreasonable standard of responsibility. Developers cannot anticipate, let alone control, the vast range of possible downstream applications and the ways in which their models may be used or misused. This is particularly damaging for open-source developers, who have little or no contact with users and therefore limited ability to police misuse.
The Problem of “Critical Harm”
The concept of a “risk of critical harm” is equally flawed. Frontier models are inherently general-purpose; the risks they pose depend on their downstream uses, which vary widely across sectors and contexts. Yet the Act requires developers to assess and mitigate these risks ex ante, at the time of release, when such risks are largely speculative, diffuse, and unmeasurable.
At the same time, the RAISE Act provides no meaningful criteria for determining when such risks exist or what qualifies as “appropriate safeguards.” By leaving these determinations to future interpretation, the law creates legal uncertainty and exposes developers to ex post enforcement without clear ex ante guidance.
A Step Beyond the EU—and Too Far
Some New York lawmakers are lobbying to expand the scope of the Act beyond that of the EU’s AI Act by mandating third-party audits. Under the EU’s new Code of Practice, providers of general-purpose AI models with systemic risk may opt for internal assessments instead of external evaluations. However, a perpetual independent audit requirement would create rigid compliance costs without any demonstrable safety improvement, as third-party auditors face the same challenges as upstream developers in detecting and mitigating critical harm ex ante.
The Need for A More Balanced Approach
Criticism of the RAISE Act should not be mistaken for a rejection of all regulation of frontier or general-purpose AI model providers. In fact, there are strong arguments for imposing baseline obligations, such as red teaming and prompt moderation to prevent outputs facilitating bomb-making, child exploitation or other clearly unlawful activities. Developers should also be required to share sufficient information with downstream actors, enabling them to meet their own obligations and implement safeguards.
But meaningful regulation must extend further. Downstream providers — including fine-tuners, deployers, and end users — are better placed to assess context-specific risks. They can amplify existing risks or create new ones by enhancing a model’s capabilities, disabling built-in safety features, or using a model in unforeseen ways. Concentrating all obligations on upstream developers alone is therefore ineffective.
Conclusion
The RAISE Act fails to establish a balanced framework for AI governance. By targeting only frontier developers, relying on the vague concept of “critical harm,” contemplating perpetual audit mandates, and overriding contractual arrangements, it disregards the differentiated capacities of actors across the AI value chain.
Rather than allocating duties according to principles such as the “cheapest (or least-cost) avoider” or the “superior insurer,” the Act centralizes liability in upstream developers, despite their limited ability to predict and mitigate downstream risks.
If enacted, the RAISE Act may achieve little in terms of safety, while substantially discouraging innovation and open-source development in New York and beyond. Far from setting a model for responsible governance, it risks becoming a cautionary tale of how well-intentioned but disproportionate regulation can hinder the very progress it seeks to protect. A more effective path would either regulate the entire value chain to ensure a fair distribution of responsibilities, or hold off on prescriptive rules altogether until the risks posed by frontier AI models can be more clearly identified, evaluated, and legally addressed.
Martin Ebers is the President of the Robotics & AI Law Society (RAILS), Germany, and a Professor of IT Law at the University of Tartu, Estonia. He is also a permanent fellow at the Faculty of Law of Humboldt University of Berlin, Germany. He has taught and presented at over 100 international conferences and is a member of several national and international research networks. He has published 24 books and over 120 articles in the fields of law and technology, especially artificial intelligence, as well as commercial, private, European, comparative and international law.
His latest books include “Algorithms and Law” (Cambridge University Press, 2020), “Contracting and Contract Law in the Age of Artificial Intelligence” (Hart Publishing, 2022), the “Stichwortkommentar Legal Tech” (Nomos Publishing, 2023), “Privacy, Data Protection and Data-driven Technologies” (Routledge, 2024), “Rechtshandbuch ChatGPT” (Nomos Publishing, 2024) and “The Cambridge Handbook of Generative AI and the Law” (Cambridge University Press, 2025). Since 2024, he has been Editor-in-Chief of the new open access journal “Cambridge Forum on AI: Law and Governance” at Cambridge University Press.