White House seeks industry input as it crafts “anti-woke” AI guidelines


By Natalie Alms,
Staff Reporter, Nextgov/FCW

Critics have said that the July executive order on “woke” AI could chill free speech, despite the administration’s stated focus on protecting it.

The White House Office of Management and Budget is hosting listening sessions with industry as it creates its anti-woke guidance for artificial intelligence. 

In July, President Donald Trump signed an executive order on “Preventing Woke AI in the Federal Government,” requiring federal agencies to purchase only large language models deemed “truth-seeking” and exhibiting “ideological neutrality.” 

The White House says that diversity, equity and inclusion are the main target, although how exactly the government will screen for DEI among AI models isn’t clear.

OMB wants to hear how industry is approaching AI transparency and auditable risk management, and how companies handle politically sticky topics, including whether their models carry instructions about political or other sensitive issues. 

The executive order “basically says that if you want access to taxpayer money — if you want the government to buy your model — you can’t inject an ideology in it, and we don’t care which ideology, you just can’t have political ideology in it,” Sriram Krishnan, a senior White House policy advisor on AI, said at a POLITICO event earlier this month. 

He, like many Republicans, referenced an instance last year in which Google’s AI image generator produced pictures depicting the U.S. founding fathers and Nazi soldiers as Black.

Still, researchers at Stanford University’s Institute for Human-Centered AI say that achieving “true political neutrality” in systems is “theoretically and practically impossible,” despite the real risks stemming from AI models’ influence on people’s opinions and actions. 

“Neutrality is inherently subjective when what seems neutral to one person might seem biased to someone else,” they write, proposing that policymakers need to recognize this as they consider potential safeguards like third-party evaluations of AI systems for political bias.

OMB has until late November to issue guidance. Whatever the White House does next on the issue could have a large-scale impact, given the size of the government’s purchasing power and the fact that the White House wants those contracting with the government to agree to comply with the administration’s “Unbiased AI Principles” of “truth-seeking” and “ideological neutrality.” 

Critics have said that the executive order could have wide-ranging consequences for free speech, even though protecting free speech is a stated AI priority for the Trump White House.

They’ve also noted that the administration’s own AI Action Plan recommends that the National Institute of Standards and Technology remove references to misinformation, DEI and climate change from its AI Risk Management Framework.

Asked if this executive order was a way of regulating AI, despite the administration’s anti-regulation stance on the technology, Krishnan said that the order was only about “making sure that any model that goes into the Defense Department, goes into the U.S. government, doesn’t have a hidden agenda.”

All this comes as the General Services Administration and the Pentagon have drawn criticism from lawmakers, civil society groups and advocacy groups for buying AI tools from Elon Musk’s xAI after its Grok chatbot called itself “MechaHitler.” 

The bot also posted antisemitic comments following an update instructing it not to shy away from politically incorrect claims. xAI has since apologized and said it updated the model.

“Grok has generated racist, antisemitic, conspiratorial, and false content, along with deepfakes and other abuses,” J.B. Branch, a Big Tech accountability advocate with the nonprofit Public Citizen, said in a statement. “Grok does not meet standards of accuracy, neutrality, and risk mitigation required for federal procurement. The American people demand tools that are factual, safe, and ideologically neutral.”