White House plan bids farewell to fair and responsible AI
“America’s AI Action Plan,” unveiled by the White House on July 23, aims to accelerate artificial intelligence innovation by dismantling regulations and privatizing infrastructure. The plan conflates innovation with deregulation and frames AI as a race to be won rather than a technology to be governed.
President Donald Trump signed three executive orders to ensure that the federal government approves data centers as quickly as possible, promote the export of AI models for the sake of American dominance and guarantee that federally supported AI systems are “ideologically neutral” and reject “wokeism and critical race theory.”
In its 24 pages, the plan does not mention “ethics” at all and cites “responsibility” once, in the context of securing AI systems against adversarial attacks. The “Build World-Class Scientific Datasets” section is the only part of the action plan that explicitly mentions human rights: “The United States must lead the creation of the world’s largest and highest quality AI-ready scientific datasets, while maintaining respect for individual rights and ensuring civil liberties, privacy, and confidentiality protections.” Absent concrete protection measures, however, the plan offers no incentive for responsible use and deployment.
For example, the plan prioritizes a narrow interpretation of national security without addressing critical ethical needs such as the protection of vulnerable populations, children, neurodivergent individuals and minorities — issues that the European Union’s AI Act addresses.
And the plan’s only nod to misinformation is framed as a free speech issue. Instead of trying to address it, the plan suggests that references to it should be eliminated: “Revise the NIST AI Risk Management Framework to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.” Placing misinformation, DEI and climate change in one bucket suggests that these very different things can be treated the same way. One implication of this policy is that AI-enabled services such as Google Search might censor references to these topics.
The plan also contains significant accountability gaps. By rejecting “onerous regulation,” the administration effectively green-lights opaque AI systems, prioritizing deregulation over transparency. It offers no incentives for processes that help us understand the results AI produces, no enforceable standards and no oversight mechanisms.
For example, when AI systems discriminate in hiring or health care, there is no clear answer to questions such as: How did this happen? Who is responsible? And how can we prevent this in the future?
The plan delegates oversight to private corporations, relying on self-policing as a substitute for governance. This hands-off approach mirrors a broader deregulatory playbook: During a May 8 Senate hearing led by U.S. Sen. Ted Cruz, the Republican from Texas hailed “a light-touch regulatory style” as a key strategy.
This approach to data governance also raises serious concerns about fairness. While the plan calls “open-weight” and “open-source” AI the engines of innovation, it mandates that federally funded researchers disclose the “non-proprietary, non-sensitive datasets” used in AI research. This creates a double standard: Academic researchers and institutions must share data in the name of transparency, while private corporations are free to hoard proprietary datasets in their ever-expanding data centers. The result is an ecosystem in which public research fuels private profit, reinforcing the dominance of tech giants.
Indeed, rather than leveling the playing field, the plan risks entrenching imbalances in access, ownership, possession and control over the data that powers AI.
Furthermore, by ignoring copyright, the plan invites the unchecked scraping of creative and scientific work, which risks normalizing the extraction of data without attribution and creating a chilling effect on open scholarship. Researchers might ask themselves: Why publish clean and reusable data if it becomes free training material for for-profit companies such as Meta or OpenAI?
During his introductory remarks at a White House AI summit, Trump provided the rationale: “You can’t be expected to have a successful AI program when every single article, book or anything else that you’ve read or studied you’re supposed to pay for.” However, before the recent wave of deregulation, AI companies had begun forming licensing agreements with publishers. For instance, OpenAI’s two-year agreement with The Associated Press signed in 2023 showed that publishers could license high-quality, fact-checked archives for training purposes and also allow their content to be displayed with proper attribution in AI-generated outputs.
Without a doubt, the plan can turbocharge corporate American AI — but likely at the expense of the democratic values the U.S. has long worked to uphold. The document positions AI as a tool of national self-interest and a driver of global divides. While Americans have the right to want to win the AI race, the greater danger is that they might win it on terms that erode the very values the nation has long pledged to defend.
_____
Mohammad Hosseini, Ph.D., is an assistant professor in the Department of Preventive Medicine at Northwestern University’s Feinberg School of Medicine.
_____
©2025 Chicago Tribune. Visit at chicagotribune.com. Distributed by Tribune Content Agency, LLC.