Will State Senator Scott Wiener Bring AI Regulation to Washington, D.C.?

California State Senator Scott Wiener talked about his legislative record for 2025 at his office’s annual pumpkin carving on Oct. 25, 2025. (Anders Eidesvik/Peninsula Press)

As October came to an end, California State Senator Scott Wiener hosted his yearly pumpkin carving event. Surrounded by constituents – as well as some anti-transgender protestors who showed up for the occasion – Wiener talked through his legislative record for 2025. 

Among his achievements was SB 53, the Transparency in Frontier Artificial Intelligence Act. Signed by Gov. Gavin Newsom just a month earlier, it made California the first state to regulate frontier AI – a term for the most advanced AI models available, such as ChatGPT and Claude.

Because California is home to almost all of the world’s major AI companies, the law functions as de facto regulation for the whole country, meaning it may be as close to federal regulation as the U.S. gets in the near future.

The law requires companies like OpenAI, Anthropic, and Google DeepMind to publish safety frameworks, report critical incidents, and protect whistleblowers who raise concerns about catastrophic risks.

Now, Wiener is running for Congress. He is one of many contenders aiming to fill the seat Nancy Pelosi held for nearly 40 years before announcing her retirement on Nov. 6. For people in the AI space, a question arises: Will the architect of California’s AI law attempt to bring similar regulation to Washington, D.C.?

When asked about his ambitions for AI regulation at the national level, Wiener said that, ideally, SB 53 would become a federal standard, but he acknowledged that passing such legislation is difficult.

“Congress has struggled with strong comprehensive technology regulation,” he said. “I hope that changes. I hope to be a part of that change.” 

When asked what specifically he would do if elected to Congress, Wiener listed issues other than AI: protecting democracy, housing policy, healthcare access, public transportation, and clean energy. AI is also not listed among the priorities on his campaign website.

To understand why Wiener might be cautious about bringing AI regulation from San Francisco to Washington, D.C., it helps to look at how SB 53 came to be in the first place.

The Price of Compromise

SB 53 was not Wiener’s first attempt at AI regulation. His earlier bill, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was commonly known as SB 1047.

The Peninsula Press spoke to Seve Christian, then the legislative director in Wiener’s office and considered the brains behind SB 1047. Christian said the main difference between the bills was that SB 1047 had more tangible requirements for companies.

“SB 53 is just a transparency measure to say, ‘We are going to believe you when you say that you are doing your homework,’” they said. “We’re requiring you to publish that homework to show how you did your work.”

SB 1047 caused an uproar when it was first introduced. Proponents of the bill – which included many AI safety groups, academics like Geoffrey Hinton, Stuart Russell and Yoshua Bengio, and, surprisingly, Elon Musk – argued that the bill would provide “clear, predictable, common-sense safety standards.”

Seve Christian recently left the role of legislative director in Wiener’s office to work for the AI safety advocacy group Encode as its California policy director. (Photo courtesy of the Office of Senator Scott Wiener)

Opponents argued it was too stringent and could hamper U.S. innovation in the AI race with China. The opposition camp included almost all of the major tech players, such as OpenAI, Meta, Microsoft, Y Combinator and Andreessen Horowitz. Anthropic lobbied to water down the bill before ultimately becoming a supporter.

SB 1047 passed both the State Assembly and the Senate but was ultimately vetoed by Newsom in 2024.

Differences between SB 1047 and SB 53.

SB 1047: Applied to companies whose models cost more than $100 million to train.
SB 53: Applies to companies with more than $500 million in annual revenue.

SB 1047: Required developers to build “kill switches” into their AI models.
SB 53: Excludes the “kill switch” requirement.

SB 1047: Required large AI companies to develop safety plans.
SB 53: Requires large AI companies to develop safety plans.

SB 1047: Introduced liability if an AI company was responsible for a “mass casualty event” or more than $500 million in damages in a single incident or set of closely linked incidents.
SB 53: Removes the liability provision.

SB 1047: Included whistleblower protections.
SB 53: Includes whistleblower protections.

SB 1047: Included third-party audits.
SB 53: Excludes third-party audits.

SB 1047: Included pre-deployment testing.
SB 53: Excludes pre-deployment testing.

SB 1047: Included the creation of a public cloud computing cluster, “CalCompute.”
SB 53: Includes the creation of a public cloud computing cluster, “CalCompute.”

The watering down of SB 1047 into SB 53 illustrates a pattern: even in California, where most AI companies are headquartered and the state has unique leverage, it is difficult to pass regulations that impose real restrictions on the day-to-day operations of AI companies.

The question then becomes: if California could barely pass transparency requirements, what hope does federal regulation have?

AI and Congress

Riana Pfefferkorn, a policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), said the question of federal AI regulation is a complex one.

“I would say there’s two stories going,” she said. “On the one hand, there’s the story of attempts to regulate specific applications of AI or specific harms that are caused by AI.” This type of regulation addresses specific harms of AI through targeted measures, such as watermarking AI content and banning non-consensual deepfake porn or election manipulation.

Riana Pfefferkorn is a policy fellow at Stanford’s HAI. (Anders Eidesvik/Peninsula Press)

“The other strand is trying to regulate AI itself in a big picture [sense],” Pfefferkorn said. “More in the way that the EU AI Act is trying to be very comprehensive.”

This type of broader regulation aims to set principles that should guide the development and deployment of AI. While the EU has the world’s most extensive AI regulation, with its risk-based classification system, none of the major AI labs are European.

So far, Congress has taken only the narrower approach, as when lawmakers this year passed the TAKE IT DOWN Act, a law making it illegal to publish nonconsensual deepfakes in certain circumstances.

Another reason federal AI regulation is unlikely anytime soon is the political landscape on Capitol Hill.

“There is a lot of resistance in the federal Congress right now for any sort of larger scale regulation around AI for fear of impeding innovation and national security competition and competitiveness with China and entrepreneurship,” Pfefferkorn said.

Currently, there is no comprehensive federal regulation of frontier AI models. (Anders Eidesvik/Peninsula Press)

She points out that Republicans – who currently control the House, the Senate and the presidency – are “extremely pro-business” due to AI’s potential to boost the economy and support national defense.

When it comes to Democrats, views on AI regulation are more mixed. While many share concerns that heavy-handed regulation might stifle AI innovation in the race with China, there’s also significant wariness about appearing too cozy with Silicon Valley.

“In the Democratic Party, there’s a lot of hesitation about being seen as being pro big tech,” Pfefferkorn said, noting that policymakers are trying to avoid repeating what many see as the mistakes of the social media era, when a lack of regulation allowed a handful of tech companies to grow unchecked despite a range of reported harms connected to social media.

But beyond politics, Pfefferkorn points to another powerful obstacle: money. With AI investment and spending driving much of U.S. economic growth, it is hard to convince people that development should slow down.

“At least at the moment, where we’re looking at valuations of companies like OpenAI in the hundreds of billions, it becomes a much more uphill battle to convince everybody outside of that organization that it’s worth slowing down and moving carefully in order to try and make things as safe as possible,” she said.

For Christian, who recently left Wiener’s office to become California policy director for Encode, an AI safety advocacy organization, the compromise required to pass even SB 53 made it an important first step, though they had hoped the original bill would pass. “I’ll always in my heart have a soft spot for 1047,” they said.

Author

  • Anders Eidesvik

    Anders Eidesvik is a Norwegian journalist from Bergen. He graduated from the University of Exeter with a bachelor’s degree in Politics, Philosophy & Economics, then spent the next four years reporting for the national outlets Klassekampen, Dagens Næringsliv and NRK. In February 2022 he joined Norway’s UN delegation in New York, working on sustainability during the country’s Security Council tenure. At Stanford, he hopes to sharpen his data and investigative journalism skills to explore how artificial intelligence is reshaping society.
