On 29 September 2025, California enacted the Transparency in Frontier Artificial Intelligence Act (SB-53), marking a major turning point in the regulation of artificial intelligence (AI) in the United States.
From self-regulation to legal framework
Before its adoption, tech giants relied on self-regulation mechanisms based on voluntary commitments without any real binding legal framework. However, this system has long been criticized for its lack of effectiveness and transparency.
Thus, after several attempts by Senator Scott Wiener, California has finally adopted its first legal framework to regulate powerful AI models and limit their potential abuses. The new law comes into force after the Trump administration failed in its attempt to block state-level regulation of AI on the grounds that it would slow the country down in its competition with China.
It comes at a time when the United States is seeking to catch up with the European Union (EU), which is already implementing the Artificial Intelligence Act (AI Act), while tens of billions of dollars are pouring into AI technologies and concerns about potential abuses are mounting.
Under the leadership of Democratic Governor Gavin Newsom, the birthplace of tech giants is becoming the first state to impose transparency and security requirements on "frontier AI" models, considered to be the most powerful systems.
By establishing a clear legal framework, California intends to move from voluntary self-regulation to a binding legislative framework imposing obligations on Silicon Valley players, thus illustrating a genuine compromise between security and innovation.
Unprecedented obligations
Although the law marks a victory for transparency, it is unpopular with the sector: tech giants fear that overly strict regulation will stifle innovation. The new text imposes a series of obligations on companies developing the most advanced AI models.
This initiative aims to increase transparency by requiring the publication of a report outlining the risks identified and the corrective measures implemented. It also strengthens developer accountability and seeks to prevent serious risks, such as loss of control over a model or misleading and dangerous AI behavior, by requiring serious incidents to be reported to the competent authority within 15 days.
The text also ensures the protection of whistleblowers, in particular through new guarantees to facilitate the disclosure of potentially risky practices. The law therefore prohibits any corporate practice aimed at restricting reports of activities that pose a serious danger to public health or safety.
An American precedent with international implications
For Senator Wiener, SB-53 goes further than the AI Act: the Californian law requires the public disclosure of security protocols, whereas the EU reserves this information for its supervisory authorities. By imposing these obligations, California is positioning itself as a legal laboratory for AI regulation.
It is also attracting the attention of European companies for several reasons, notably because it foreshadows a transatlantic dialogue on the security standards applicable to these AI models, which is likely to interact with the AI Act. But it also highlights the importance for any player developing these models to put in place robust organizational mechanisms to meet future governance obligations.
A framework that inspires and divides
Although this is a Californian law, certain obligations could apply to international developers offering their models to users located in California. SB-53 is set to become a benchmark for regulators around the world seeking to address the risks inherent in AI technologies. The law marks a turning point in global AI regulation and positions the Golden State at the crossroads of innovation and law.
Sources:
- https://www.lemonde.fr/intelligence-artificielle-une-loi-majeure-de-regulation-promulguee-en-californie.com
- https://www.ddg.fr/actualite/la-loi-californienne-sb-53
- https://www.01net.com/actualites/la-toute-premiere-loi-sur-lia-a-ete-promulguee-aux-etats-unis-mais-a-quoi-va-t-elle-servir.html
