The White House has announced a new AI policy framework that calls on Congress to craft federal regulation that would preempt state AI laws. The Trump administration has made multiple attempts to override more restrictive state-level AI regulation, but has so far failed, most notably during the passage of the “One Big Beautiful Bill.”
The framework covers a variety of topics, everything from child privacy to the use of AI in the workforce. “Importantly, this framework can succeed only if it is applied uniformly across the United States,” the White House writes. “A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race.”
In terms of child privacy protections, the framework asks Congress to require companies to provide tools like “screen time, content exposure and account controls” while also affirming that “existing child privacy protections apply to AI systems,” including limits on how data is collected and used for AI training. The framework also carves out room for states to enforce “their own generally applicable laws protecting children, such as prohibitions on child sexual abuse material, even where such material is generated by AI.”
The energy use and environmental impact of AI infrastructure is a growing concern, but the White House’s policy proposals are primarily worried about the cost of data centers. The framework suggests federal AI regulation should ensure that higher electricity costs aren’t passed on to people living near data centers, while streamlining the permitting process for AI infrastructure construction so companies can pursue “on-site and behind-the-meter power generation.” The framework also calls for fewer restrictions on the software side of AI development, proposing “regulatory sandboxes for AI applications” and asking Congress to “provide resources to make federal datasets accessible to industry and academia in AI-ready formats.”
While a recent AI bill from Senator Marsha Blackburn (R-Tenn.) attempts to eliminate Section 230, a piece of a larger law that says platforms can’t be held responsible for the speech they host, the framework appears to propose the opposite. “Congress should prevent the United States government from coercing technology providers, including AI providers, to ban, compel or alter content based on partisan or ideological agendas,” the White House writes. The framework is similarly hands-off when it comes to copyright and the use of intellectual property to train AI. “Although the Administration believes that training of AI models on copyrighted material does not violate copyright laws,” the White House writes, it supports the issue being settled in court rather than by legislation. The White House does, however, think Congress should “consider enabling licensing frameworks” so IP holders can bargain for compensation from AI providers.
The clincher in the White House’s proposal is the idea that federal regulation should preempt state law, specifically so that states don’t “regulate AI development,” don’t “unduly burden Americans’ use of AI for activity that would be lawful if performed without AI” and don’t punish AI companies “for a third party’s unlawful conduct involving their models.” The idea that AI companies aren’t liable for the illegal or harmful uses of their products is particularly problematic because it lies at the heart of multiple intersecting issues with AI right now, including its use to generate sexually explicit images of children and its alleged role in the suicides of users.
Ultimately, though, the framework might be too contradictory to be useful, Samir Jain, vice president of policy at the Center for Democracy and Technology, writes in a statement to Engadget:
The White House’s high-level AI framework contains some sound statements of principles, but its usefulness to lawmakers is limited by its internal contradictions and failure to grapple with key tensions between various approaches to important topics like kids’ online safety. It rightly says that the government should not coerce AI companies to ban or alter content based on ‘partisan or ideological agendas,’ yet the Administration’s ‘woke AI’ Executive Order this summer does exactly that. On preemption, the framework asserts that states should not be permitted to regulate AI development, but at the same time rightly notes that federal law should not undermine states’ traditional powers to enforce their own laws against AI developers. States are currently leading the fight to protect Americans from harms that AI systems can create, and Congress has twice correctly decided not to pursue broad preemption.
President Donald Trump has attempted to play an active role in how AI is developed and regulated in the US, with mixed results, primarily because, as Jain notes, Congress has been unwilling to strip states of their right to regulate the technology on their own terms. Without that, it’s hard to say how much of the framework will actually make it into federal law.
This article originally appeared on Engadget at https://www.engadget.com/ai/the-white-house-proposes-new-ai-policy-framework-that-supersedes-state-laws-192251995.html?src=rss
