A $30 Million Fine for “Unreasonable” AI? Inside New York’s Radical New Law.
The battle to control the future of artificial intelligence has a new front line: Albany. New York lawmakers passed a pioneering bill last week to regulate the world’s most powerful AI, directly challenging Silicon Valley’s most prominent players and a federal push to halt such laws in their tracks.
The Responsible AI Safety and Education (RAISE) Act, which cleared the state senate in a near-unanimous 58-1 vote, now awaits the signature of Governor Kathy Hochul. If signed, it will become the first law in the nation to impose a legally mandated safety framework on the developers of so-called “frontier” AI models, the same technology that powers tools from companies like OpenAI and Google.
The legislation is backed by overwhelming public support, with one poll showing 84% of New Yorkers in favor. But it faces a perilous path, with fierce opposition from the tech industry and the looming threat of being nullified by a potential federal ban on state-level AI regulation.
A New Philosophy On AI Risk
For years, AI regulation has focused on present-day problems. New York City’s own Local Law 144, for instance, tackles algorithmic bias in hiring tools. The RAISE Act represents a dramatic pivot. Its stated purpose is to prevent speculative but potentially catastrophic future events, such as AI-assisted bio-terrorism, devastating cyberattacks, or the creation of uncontrollable autonomous systems.
The law defines the “critical harm” it seeks to prevent in precise terms: any event enabled by an AI model that causes at least $1 billion in economic damages or results in the deaths of 100 or more people. This shift from managing known biases to preventing future disasters lies at the heart of the new legislation.
Who The Law Targets
The bill’s sponsors have designed it as a surgical strike, not a blanket regulation. It applies only to “Large Developers,” a category so narrowly defined that it exempts almost every company except a handful of global tech giants. To qualify, a company must have spent more than $100 million on the computing power used to train an AI model, or have trained a model using more than 100 quintillion computational operations.
This high threshold was a deliberate political choice. “This bill is designed to not chill innovation,” said Assemblymember Alex Bores, a co-sponsor, noting it explicitly targets companies with the resources to build potentially dangerous systems while leaving startups and small businesses untouched. The law also expressly exempts academic research conducted at accredited universities.
What The Giants Must Do
Under the Act, before a company like Meta or DeepSeek can offer a new frontier model to New Yorkers, it must first implement a written Safety and Security Protocol. This isn’t just a paperwork exercise. The protocol must detail the company’s specific technical safeguards, cybersecurity measures to prevent model theft, and the results of extensive risk testing.
A redacted version of this safety plan must be published, and an unredacted copy made available to state authorities upon request. This has become a significant point of contention. Industry groups, such as the Business Software Alliance (BSA), argue that publishing these protocols, even in part, creates a “roadmap for bad actors to exploit” by revealing a company’s security strategies.
Furthermore, developers must report any serious “safety incident” to the state within 72 hours. This includes not only actual harm but also events that reveal an increased risk of harm, such as a model behaving autonomously in a dangerous way or a critical failure of its safety controls.
Perhaps most powerfully, the law directly prohibits a company from deploying a model if it would create an “unreasonable risk of critical harm.” What constitutes an “unreasonable risk” is not defined, giving the state’s top law enforcement officer immense discretionary power to challenge a company’s judgment.
The Attorney General’s New Powers
The RAISE Act vests all enforcement power exclusively with the New York Attorney General (AG). It explicitly states that the law does not create a private right of action, a key concession to industry that bars private individuals and groups from suing under the law, including through class actions.
The AG can levy massive civil penalties: up to $10 million for a first violation and up to $30 million for subsequent ones. Beyond fines, the AG can ask a court to issue an injunction to halt the deployment of a model deemed too dangerous.
To prevent evasion, the law also includes a rare provision that allows a court to “pierce the corporate veil.” The AG could thus pursue a parent company’s assets even if the model was deployed through a shielded subsidiary, a tool designed to ensure penalties can actually be collected.
Protecting The Insiders
Reacting to recent high-profile resignations and warnings from employees at top AI labs, lawmakers wrote strong whistleblower protections into the bill. The Act makes it illegal for a large developer to fire, demote, or otherwise retaliate against an employee who reports a safety concern they reasonably believe could lead to critical harm.
Proponents, including some of the world’s most prominent AI scientists, argued these protections are vital. They cited accounts of “reckless” development and a culture of prioritizing profit over safety as clear evidence that internal dissent is a critical early warning system that must be protected.
A Clash Of Worldviews
The debate over the bill is a proxy war for the larger global argument about AI risk. Supporters frame it as a common-sense measure. “Would we let automakers sell a car with no brakes?” asked State Senator Andrew Gounardes, the bill’s lead sponsor. “Of course not. So why would we let developers release incredibly powerful AI tools without basic safeguards in place?” AI pioneers Geoffrey Hinton and Yoshua Bengio have called similar legislation in California the “bare minimum” for effective regulation.
Opponents paint a very different picture. A coalition of tech industry groups and venture capitalists has lobbied hard against the bill. The Chamber of Progress, a tech-funded group, claims it will saddle companies with “excessive compliance costs” and was “rushed through without a single public hearing.” The BSA has called its reporting requirements a “vague and unworkable” scheme. Their central argument is that New York is regulating speculative “doomer worries” that will ultimately stifle American innovation.
Collision Course With Washington
The biggest threat to the RAISE Act may come from Washington, D.C. In May, the U.S. House of Representatives passed a proposal to place a 10-year moratorium on all state and local laws that regulate artificial intelligence.
If the federal ban is enacted, it would likely render the RAISE Act unenforceable for at least a decade. Proponents of the federal action argue it is necessary to prevent a confusing “patchwork” of 50 different state laws that could hobble the U.S. tech industry. Opponents argue that states are being compelled to act as policy laboratories precisely because of federal gridlock on the issue.
This federal threat turns Governor Hochul’s decision into a high-stakes calculation. Signing the bill would be a bold assertion of state power, an attempt to set a national standard before Washington does. A veto could be seen as bowing to industry pressure or a pragmatic choice in the face of a likely federal preemption. Her decision will echo far beyond Albany, marking a defining moment in how America governs its most transformative technology.
Key Takeaways:
- New York lawmakers passed the RAISE Act, which would be the first U.S. law requiring safety standards for the most advanced “frontier” AI models.
- The law targets only “Large Developers” spending over $100M on computing, exempting startups and academia.
- It requires developers to implement and publish safety protocols and report incidents within 72 hours, and it empowers the New York Attorney General to enforce the law with fines of up to $30 million.
- The Act includes strong whistleblower protections for employees who report safety risks.
- The law faces intense opposition from the tech industry and could be nullified by a proposed 10-year federal moratorium on state AI laws.
