Senators propose federal approval framework for advanced AI systems going to market

Sen. Josh Hawley (R-MO) and Sen. Richard Blumenthal (D-CT) are seen at the start of a hearing on “Artificial Intelligence and The Future Of Journalism” at the U.S. Capitol on January 10, 2024 in Washington, DC. Kent Nishimura/Getty Images

By Alexandra Kelley, Staff Correspondent, Nextgov/FCW

Bipartisan legislation from Sens. Hawley and Blumenthal would create a rigorous safety evaluation program and require developers of advanced AI models to participate before their systems can be used in interstate or foreign commerce.

Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., introduced legislation on Monday that would create a risk evaluation program for advanced artificial intelligence products, notably prohibiting a model's deployment to consumers in interstate or foreign commerce until it meets the bill's required safety criteria.

The Artificial Intelligence Risk Evaluation Act of 2025 would establish a new AI risk evaluation framework, helmed by the Department of Energy and built around a secure testing and evaluation program. The program's safety criteria would examine multiple intrinsic components of a given advanced AI system, such as the data on which it is trained and the model weights it uses to process that data into outputs.

Some of the program's testing components would include red-teaming an AI model to search for vulnerabilities and facilitating third-party evaluations. These evaluations would both provide feedback to participating developers and inform future AI regulations, specifically the permanent evaluation framework to be developed by the Energy secretary.

Risk mitigation is at the heart of establishing this advanced AI risk evaluation program. The bill stipulates that the program would protect against AI risks to national security, public safety and civil liberties, as AI and machine learning are further integrated into nearly all societal sectors.

Hawley, who has been critical of tech companies' overreach and vocal about AI safety concerns, said the bill takes steps to verify the safety and efficacy of rapidly growing AI technologies.

“Simply stated, Congress must not allow our national security, civil liberties, and labor protections to take a back seat to AI,” Hawley said in a press release. “This bipartisan legislation would guarantee common-sense testing and oversight of the most advanced AI systems, so Congress and the American people can be better informed about potential risks.”

Not all AI models are subject to the program. The bill specifically requires advanced AI models, defined as those trained with computing power greater than 10^26 floating-point operations, to participate in the evaluation process.

Developers who deploy their products without participating in the program face a $1,000,000 fine for every day the AI tool operates without government clearance.

This isn't the first time Hawley and Blumenthal have teamed up to regulate the booming AI industry. In July, the bipartisan duo introduced the AI Accountability and Personal Data Protection Act to apply oversight to the sensitive consumer data tech companies use to train their AI models.