White House AI framework calls for preemption of state laws
WASHINGTON — The White House on Friday proposed its framework for a national artificial intelligence policy, pushing for broad preemption of state AI laws and against “open-ended liability” for AI firms.
The proposal urges Congress to take some steps to protect kids, contain energy costs and shield copyright holders, while also requesting streamlined permitting for data centers, regulatory “sandboxes” that would allow exemptions from federal regulations and no new regulatory body to oversee the fast-spreading technology.
The four-page document fulfilled a request from President Donald Trump’s December executive order on state AI laws, which directed White House science and technology adviser Michael Kratsios, along with special adviser for AI and crypto David Sacks, to develop a national policy to preempt state laws.
House leadership immediately offered their support for the proposal.
Speaker Mike Johnson, R-La., and Majority Leader Steve Scalise, R-La., released a statement, along with key committee leaders Brett Guthrie, R-Ky., who heads the Energy and Commerce panel; Jim Jordan, R-Ohio, chair of the Judiciary Committee; and Brian Babin, R-Texas, who leads the Science, Space and Technology Committee, urging Congress to “take action” in order to “ensure we continue to harness (AI’s) potential and beat China in the global AI race.”
Sen. Marsha Blackburn, R-Tenn., who released her own sweeping draft AI legislation earlier this week, emphasized focusing on legislation that can pass both chambers.
“Over the last few months, I have worked diligently with the White House, conservative leaders, child safety advocates, members of the creative community, and AI innovators to develop legislation that can garner bipartisan support and accomplish the president’s goals,” Blackburn said.
Details of the framework
The framework lays out seven broad categories for Congress to address: kids’ safety, community effects from AI, copyright, indirect government censorship, federal regulation, jobs and state preemption.
Those priorities won praise from the AI industry, which has pushed Congress and the administration to move toward preemption.
Patrick Hedger, director of policy for industry group NetChoice, said in a statement that the framework shows that the White House knows “what is at stake and what it will take to win the future,” going on to add that “a light-touch regulatory environment” is required for AI innovation.
Daniel Castro, director of the Center for Data Innovation, a group whose supporters include several major tech firms, said in a statement that the framework avoids the “worst instincts in today’s AI debate” including “alarmism” about unemployment and worries that AI training infringes on copyright.
The framework, which leans toward nonregulatory solutions and away from making AI companies liable for potential harms, was less popular with groups warning of risks from AI.
Brad Carson, president of nonprofit advocacy group Americans for Responsible Innovation and a former Democratic House lawmaker, said the framework would offer “another chance for tech companies to launch harmful products with no accountability.”
Kids’ safety
The kids’ safety section roughly follows a path recently set by House Republicans in an Energy and Commerce markup of kids’ online safety legislation, calling for greater parental controls for things like privacy, content exposure and screen time.
It also asks Congress to establish “commercially reasonable, privacy protective, age assurance requirements” for AI platforms “likely to be accessed by minors,” rather than age verification.
Industry and privacy experts have pushed back against requirements that platforms, app stores or device manufacturers collect government IDs or facial scans to verify age. The framework instead suggests that parents could attest to their child’s age.
It also says Congress should require AI platforms to “implement features” to reduce risks of sexual exploitation or self-harm to minors — two issues that lawmakers in both chambers have worked to address in recent months.
The framework, however, does not offer specifics on what those features would look like, and it advises legislators to “avoid setting ambiguous standards about permissible content, or open-ended liability, that could give rise to excessive litigation.”
That would seem to contradict Blackburn’s bill, which would put a “duty of care” on AI developers and social media platforms in designing their technology to prevent harms to their users.
The Trump administration document also does not call for expansion of data privacy protections for kids, a bipartisan priority that has moved in various forms in both the House and Senate this Congress.
The framework instead suggests that Congress “affirm that existing child privacy protections apply to AI systems.” Data protections under the Children’s Online Privacy Protection Act currently limit data collection for users under the age of 13.
Energy costs and safety
The community safety section of the framework incorporates several priorities, including asking Congress to ensure that residential ratepayers don’t see increased energy costs from data center buildouts while streamlining the permitting process for those centers.
The framework also calls for Congress to “augment” law enforcement efforts against AI scams and to make sure national security agencies have the technical capability to understand potential concerns from frontier AI models.
The framework largely defers copyright questions regarding AI and the training of large language models to the courts, urging Congress not to interfere with pending cases on whether AI training constitutes fair use.
It does call for Congress to consider creating a system for collective licensing for rights holders to negotiate with AI providers, but says that legislators “should not address when or whether such licensing is required.”
Deepfakes
It also asks Congress to consider a framework to protect people from unauthorized deepfakes, including those used in a commercial setting. However, it makes clear that carve-outs would be needed for satire, news reporting and “other expressive works protected by the First Amendment.”
The framework repeats Republican plans for legislation to address indirect government censorship, known as jawboning. It directs Congress to “prevent” such interference aimed at changing or removing partisan content and to allow individuals to sue if they have been censored.
Senate Commerce Chair Ted Cruz, R-Texas, has long promised to introduce legislation on the issue, but has been short on specifics about what lines might be drawn between appropriate contact by the government with tech companies and impermissible pressure.
The document also calls for regulatory sandboxes that would allow AI companies to apply for exemptions from federal regulations over a certain period of time.
Cruz introduced a bill last fall to do just that. The legislation would allow companies to apply for waivers of up to 10 years, with the White House — through the Office of Science and Technology Policy — designated to oversee the program.
The framework also says that Congress should make federal datasets more accessible for AI training and should not create a new rulemaking body for AI, instead supporting existing subject matter bodies and “industry-led” standards.
On jobs, the Trump administration wants Congress to use “non-regulatory methods” to ensure that education and workforce training programs include training on AI.
A February YouGov poll found that 63% of Americans think AI will lead to fewer jobs. The framework directs Congress to expand efforts to study AI-driven job trends.
The framework asks for broad preemption of state laws on AI, a long-standing priority of the AI industry and the Trump administration. That priority has fallen short of legislative backing twice this Congress; it was removed from the GOP budget reconciliation bill last summer and never officially made it into the annual defense policy bill.
The framework suggests that states should still have power over their generally applicable laws, zoning for data centers and state procurement. But it says that states should not regulate development or penalize AI developers for third-party use of their products.
©2026 CQ-Roll Call, Inc., All Rights Reserved. Visit cqrollcall.com. Distributed by Tribune Content Agency, LLC.