The Artificial Intelligence Risk Evaluation Act
Reviewing the status of a bipartisan bill to enact oversight on advanced AI models and how we can get it passed
The Artificial Intelligence Risk Evaluation Act (S.2938) would require AI companies to submit their most powerful systems to the Department of Energy for safety testing before releasing them to the public, with fines of $1 million per day for noncompliance.
It’s actually a very good bill, with a positive review on MIRI’s website. For me, it’s extra important because Josh Hawley is one of my senators, giving me a better opportunity than most to make a difference. Here is the current state of the bill, as I understand it:
Current Status
As of today, February 16th, S.2938 is sitting at “Introduced” status, the earliest stage of the legislative process. It was read twice and referred to the Senate Committee on Commerce, Science, and Transportation on September 29, 2025, and nothing has happened since: no committee hearing has been scheduled, no markup, no additional action in nearly five months.
Support Level
The support picture is thin. The bill has just its two primary sponsors, Hawley (R-MO) and Blumenthal (D-CT), with no other cosponsors. On the outside, Americans for Responsible Innovation praised it, with ARI president Brad Carson calling it “a welcome show of bipartisan support for creating rules of the road to protect the public.” But formal endorsements from major AI safety organizations haven’t materialized publicly.
The Committee Problem
This is the biggest obstacle. The Commerce Committee is chaired by Ted Cruz, who has a fundamentally different AI philosophy. Cruz introduced his own AI policy framework advocating a “light-touch regulatory approach” and a SANDBOX Act focused on reducing regulatory barriers to AI innovation; essentially the opposite impulse from Hawley’s pre-deployment evaluation mandate.
Looking at recent Commerce Committee activity, Cruz has been scheduling hearings on autonomous vehicles, media ownership, FAA safety, and children’s screen time, but no hearings on AI safety evaluation frameworks. S.2938 was notably absent from the committee’s recent executive session agendas, which have focused on transportation and broadband bills.
In short: the bill is stalled in a committee whose chairman is ideologically opposed to its regulatory approach.
Strategies
This is where I need help and advice; I would love anyone’s feedback on how we can get this bill unstuck. Here are some ideas I had:
1. General Senate support. This bill needs cosponsors and any other support senators can throw at it. Contact your senators and use this script:
"Hi, my name is [name], and I'm a constituent from [city/town]. I'm calling about S.2938, the AI Risk Evaluation Act, a bipartisan bill introduced by Senators Hawley and Blumenthal. It would require the most powerful AI systems to undergo safety testing through the Department of Energy before they can be released to the public — covering risks like national security threats, impacts on jobs, and loss-of-control scenarios. I'm asking that [Senator's name] cosponsor this bill and support bringing it to a hearing in the Commerce Committee. I think most Americans would agree that the most powerful AI systems should be tested for safety before they're deployed, the same way we test drugs and aircraft. Could you let me know the Senator's position on this?"
2. Commerce Committee support. This is where the bill is stuck. If one of your senators sits on the Commerce Committee, your voice is extra important.
(Missouri friends: Our other senator, Eric Schmitt, is also on the committee, so we have additional leverage!)
(Kansas friends: Kansas senator Jerry Moran is also on this committee!)
Use the same script as above but add:
"I understand [Senator's name] serves on the Commerce Committee, where this bill is currently sitting, so their support would be especially impactful."
3. Support for existing sponsors. Hawley and Blumenthal already support the bill, but they need to hear that their constituents really want them to keep pushing for it. Use this script for them instead:
"Hi, my name is [name], and I'm a constituent from [city/town]. I'm calling to thank Senator [Hawley/Blumenthal] for introducing S.2938, the AI Risk Evaluation Act. I think requiring safety testing for the most powerful AI systems before they're released to the public is common sense, and I'm glad my senator is leading on this. I know the bill has been in the Commerce Committee since September without a hearing — is there anything constituents can do to help move it forward? And is there a staffer working on this issue I could stay in touch with?"
I plan to reference the upcoming PauseCon in DC when I contact my senators, to help secure a meeting with Senator Hawley's office during the event.
4. Pressure Ted Cruz. Texans are the most important constituents at this stage.
The bill’s best chance of a hearing is if it gets reframed in terms Cruz already cares about: national security competition with China, protecting American workers, and preventing Big Tech monopoly power. The bill already has these elements (the “foreign adversary weaponization” provisions and labor market protections) but they need to be emphasized over the “regulate AI companies” framing that Cruz instinctively resists.
Use this script:
"Hi, my name is [name], and I’m a constituent from [city/town]. I’m calling to ask Chairman Cruz to schedule a Commerce Committee hearing on S.2938, the AI Risk Evaluation Act, introduced by Senators Hawley and Blumenthal.

I know the Chairman has said he wants a light-touch approach to AI, and I respect that — but I think this bill is actually consistent with that goal. It doesn’t create a new regulatory agency or ban anything. It just requires the Department of Energy to test the most powerful AI systems for national security threats before deployment — things like weaponization by foreign adversaries like China, and loss-of-control scenarios. That’s not red tape, that’s the same kind of due diligence we’d expect for any technology that could threaten American security.

Thank you."
5. Build the endorsement ecosystem. One reason the bill is stalling is that it lacks a visible constituency. We need organizations on the record supporting it, so please use your networks to help make that happen. I can use my PauseAI and EA networks. While at PauseCon in DC in April, I hope to connect with ARI and other DC-based groups who’ve already praised the bill and help build a formal coalition letter to the Commerce Committee.
My honest assessment
Most bills that stall in committee for five months without a hearing don’t make it. But bipartisan AI safety legislation with a national security angle is one of the categories that I think can get unstuck if there’s enough outside pressure or if a triggering event (an AI incident, a major report) creates political urgency. My goal is to make sure the infrastructure exists to capitalize on that moment.