Is AI Now ABOVE the Law – Who Decides?

The “One Big Beautiful Bill Act” threatens state governance by blocking AI regulation for a decade while centralizing federal power and cutting critical cybersecurity resources.

At a Glance 

  • The controversial bill includes a 10-year moratorium on state and local AI regulation, sparking opposition from lawmakers and tech leaders
  • Cybersecurity funding shifts heavily toward military applications while cutting over 1,000 positions at CISA
  • The bill’s broad definition of “artificial intelligence system” effectively immunizes AI from local oversight
  • Vice President JD Vance has expressed concerns about AI’s impact on teenagers despite the administration’s push for the bill
  • Critics argue states should serve as testing grounds for AI regulations to protect vulnerable populations

Federal Power Grab Alarms States and Tech Leaders

The “One Big Beautiful Bill Act” has created unexpected divisions even among conservative allies, with Elon Musk publicly condemning the legislation as a “disgusting abomination,” citing excessive spending and regulatory overreach. The bill represents a dramatic shift in federal digital infrastructure and AI governance, favoring large tech companies while stripping states of regulatory authority. Of particular concern is the 10-year moratorium on state and local AI regulation, which has united opponents from across the political spectrum who worry it would leave harms at the community level unaddressed.

The legislation has even drawn criticism from Republican lawmakers, including Rep. Marjorie Taylor Greene, who has announced opposition to the bill despite her strong support for President Trump. At issue is the fundamental question of whether states should retain their traditional role as laboratories of democracy, especially for emerging technologies that could impact everything from election integrity to community safety. By centralizing power at the federal level, critics argue the bill undermines conservative principles of limited government and federalism.

National Security Risks and Budget Concerns

While administration officials defend the bill as supporting their “core mission” and funding “cost-effective” approaches to cybersecurity, the legislation makes dramatic changes to national cyber defense strategy. CISA, the Cybersecurity and Infrastructure Security Agency, faces deep budget cuts that would eliminate over 1,000 positions and downsize or defund key civilian protection programs. Meanwhile, cybersecurity funding heavily favors military and defense applications, with significant investments directed to the Department of Defense.

“I’m concerned about the millions of American teenagers who are talking to chatbots who don’t have their best interests at heart,” said Vice President JD Vance. 

The bill also includes a controversial clause limiting federal courts’ ability to enforce contempt rulings against government officials, raising serious concerns about the rule of law and technology oversight. These provisions have particularly alarmed those focused on maintaining constitutional checks and balances. Defense contractors and large AI infrastructure providers stand to benefit significantly, while state governments and CISA emerge as clear losers in the new funding paradigm.

Broad AI Immunity Threatens Local Control

Critics are particularly troubled by the bill’s expansive definition of “artificial intelligence system,” which covers virtually any system using AI in any capacity. This sweeping language effectively grants AI systems immunity from state and local regulation, similar to the Section 230 immunity internet platforms currently enjoy. The Senate version goes even further by offering financial incentives to states that agree not to enforce AI regulations, raising constitutional questions about federal coercion of state governments. 

Vice President JD Vance has warned that “human beings have wants and needs” that AI systems are designed to exploit to “give you a dopamine rush,” highlighting the contradiction between his public concerns and the administration’s legislative priorities.

Concerns about AI’s negative impacts span a wide range, from harmful social media algorithms and chatbots that influence impressionable teenagers to autonomous weapons systems. With Congress historically slow to develop comprehensive technology policies, states and localities have traditionally served as crucial laboratories for experimenting with regulations that protect vulnerable populations. The Vatican has also entered the debate, calling for regulatory frameworks to ensure accountability and transparency in AI development and deployment.

Alternative Approaches to AI Regulation

While few dispute the need for some form of national regulatory framework for AI, many experts doubt Congress’s ability to enact effective legislation on its own. The complex and rapidly evolving nature of artificial intelligence demands flexible, responsive governance approaches that can adapt to emerging challenges. States have historically served as testing grounds for new regulatory models before federal adoption, a tradition that would be halted under the proposed moratorium.

Defenders of state-level innovation point to successful models in areas like data privacy, where states like California developed frameworks that later influenced national standards. Without this experimentation, they argue, vulnerable communities could face years of unchecked harm from predatory AI applications while waiting for comprehensive federal action. As Congress continues to debate the bill, the fundamental tension between innovation and protection remains unresolved.