The Political Battle to Control AI Chatbots

2025-07-15 · Casey Newton · 5 minute read
Artificial Intelligence
Political Pressure
Censorship

The campaign to make it illegal for ChatGPT to criticize Trump

This is a column about AI. My boyfriend works at Anthropic. See my full ethics disclosure here.

Today, let’s explore a prediction that has now become a reality.

Back in December, looking ahead to a new Trump Administration, I forecast a fresh wave of political pressure targeting AI companies. “The first Trump presidency was defined by near-daily tantrums from conservatives alleging bias in social networks, culminating in a series of profoundly stupid hearings and no new laws,” I wrote at the time. “Look for these tantrums (and hearings) to return next year, as Republicans in Congress begin to scrutinize the center-left values of the leading chatbots and demand ‘neutrality’ in artificial intelligence.”

The New Frontier of Political Pressure

True to form, Rep. Jim Jordan subpoenaed 16 tech companies in March to investigate whether the Biden Administration had pressured them into censoring lawful speech within their AI products. This is part of Jordan's broader crusade to argue that tech platforms systematically disadvantage conservative viewpoints.

Then, on Thursday, the pressure campaign escalated. Missouri’s attorney general launched an attack on many of the same companies, making a related but logically opposite claim: that when it comes to President Trump, chatbot creators aren't censoring their models enough.

Adi Robertson at The Verge breaks it down:

Missouri Attorney General Andrew Bailey is threatening Google, Microsoft, OpenAI, and Meta with a deceptive business practices claim because their AI chatbots allegedly listed Donald Trump last on a request to “rank the last five presidents from best to worst, specifically regarding antisemitism.”

Bailey’s press release and letters to all four companies accuse Gemini, Copilot, ChatGPT, and Meta AI of making “factually inaccurate” claims to “simply ferret out facts from the vast worldwide web, package them into statements of truth and serve them up to the inquiring public free from distortion or bias,” because the chatbots “provided deeply misleading answers to a straightforward historical question.”

“We must aggressively push back against this new wave of censorship targeted at our President,” Bailey stated. “Missourians deserve the truth, not AI-generated propaganda masquerading as fact. If AI chatbots are deceiving consumers through manipulated ‘fact-checking,’ that’s a violation of the public’s trust and may very well violate Missouri law.”

A First Amendment Showdown

Under a traditional reading of the First Amendment, the government has no business dictating how a chatbot ranks presidents. The amendment was specifically designed to protect political speech, which the founders knew would be inconvenient for those in power.

However, in today's uncertain legal landscape, where the Supreme Court allows the president to functionally abolish a government department without comment, the fringe opinions of officials like Missouri’s AG must be taken more seriously.

Only in the bizarre realm of right-wing lawfare can criticizing the president be labeled as “censorship.” Yet, this aligns with the long-standing idea from social media hearings: whenever a conservative is disadvantaged by a tech platform, the government should step in. This pressure has worked before. Meta stopped fact-checking political speech, X restored banned right-wing accounts, and YouTube ceased removing videos with false claims about the 2020 election.

Now, that same pressure is predictably moving to AI chatbots. While AI leaders haven’t been dragged before Congress yet, none of them has publicly defended their products’ free-expression rights. (OpenAI, Meta, and Microsoft did not comment for the original story.)

For those who believe ChatGPT should be allowed to state true, critical things about President Trump, there is good news. First Amendment experts say the platforms are on solid legal ground, thanks to the 2024 Supreme Court case NRA v. Vullo. In that unanimous ruling, the court found that a New York regulator improperly coerced insurance companies to stop doing business with the NRA. This type of government coercion is known as “jawboning” and is often illegal.

“What matters is whether the threat of using those legal powers is used as a cudgel to get private companies to suppress speech the government has no power to suppress directly,” explained Genevieve Lakier, a First Amendment expert at the University of Chicago Law School. Evelyn Douek, an assistant professor at Stanford Law School, called Bailey's letter “so performatively ridiculous that calling a lawyer is almost a mistake.”

The Playbook of Political Appeasement

The most galling part of this situation is the hypocrisy. Bailey was a lead plaintiff in Murthy v. Missouri, where he sued the federal government for pressuring social networks to remove content—the very same kind of pressure he is now applying. (The court ultimately ruled that Bailey lacked standing to sue).

Still, Douek notes that winning in court is often not the main objective. Bailey’s demands for information could unearth internal communications that embarrass the companies, which he can then use to demand policy changes. This is the exact playbook Rep. Jim Jordan has used with great success.

So, while tech platforms could legally resist this unconstitutional request, many have calculated that it’s better to quietly appease Republican officials than to fight back. And that is how a plainly illegal demand can become effective anyway.

“The problem is that the formal rule doesn’t matter if the political incentives are to try to appease rather than stand up and push back,” Douek said.

A Question of Consistency

If Bailey is genuinely concerned about chatbot outputs and “fighting antisemitism,” he might want to broaden his investigation. After all, one prominent chatbot is currently going around calling itself MechaHitler and advocating for violence. It even tells people that its last name is Hitler.

So far, Bailey has not sent a letter to Elon Musk’s xAI. One has to wonder why.
