
Industry-Backed AI Questions Pollution Health Risks

2025-06-27 · Dharna Noor · 7 minute read
AI
Public Health
Corporate Influence

Sowing Doubt with Artificial Intelligence

An industry-backed researcher, known for a career spent creating doubt about the health risks of pollutants, is now turning to artificial intelligence to amplify his message. Louis Anthony “Tony” Cox Jr, a Denver-based risk analyst and a former adviser to the Trump administration, is developing an AI application designed to scan academic research for what he views as the mistaken conflation of correlation with causation. Cox once reportedly claimed there is no proof that cleaning the air saves lives.

In emails obtained through Freedom of Information Act requests, Cox described his project as an effort to remove “propaganda” from epidemiological studies and to enable “critical thinking at scale.” His history includes challenging research that links chemical exposure to health dangers on behalf of polluting interests, including the cigarette manufacturer Philip Morris and the American Petroleum Institute. He even allowed the latter, a fossil fuel lobby group, to “copy edit” his findings, edits he described as minor. Cox has also noted that he has received public research funding.

Cox has previously done some work for the tobacco industry. Photograph: Oliver Helbig/Getty Images

The Playbook of Uncertainty

Experts note that both the tobacco and oil industries have a long history of weaponizing scientific uncertainty. Similar tactics are seen in the Trump administration’s deregulatory push, such as the “gold standard” science order that outraged scientists by allowing political appointees to “correct” scientific information.

Cox's new AI reviewer has received funding from the American Chemistry Council (ACC), the nation's largest chemical industry lobby, whose members include Exxon and DuPont. This sponsorship has raised questions about the project's true purpose. An ACC spokesperson stated the research could “support scientific understanding” and “inform science-based global regulatory approaches.”

Cox defended his work, stating his AI assistant “is specifically designed to be helpful to those who wish to understand the objective implications of data without any distortions.” He added, “My work aims to help anyone interested in using sound technical methods to pursue scientific truth.” He claimed the tool “has no axe to grind” and is being tested on submissions to academic journals, including Risk Analysis, which he edits.

However, Chris Frey, who chaired the EPA’s clean air scientific advisory committee, argued that the ACC is not a neutral party. “They lack impartiality in that they want to minimize regulatory burden on their members,” said Frey.

A 'Socratic Dialogue' with ChatGPT

In February 2023, public records show Cox engaged in a lengthy conversation with ChatGPT. After some lighthearted requests, he turned to a serious topic: the toxic particles known as PM2.5, which scientists have linked to a wide variety of health concerns. Cox has a history of questioning these links, including “re-assessing” mortality risks and challenging the link between PM2.5 from gas stoves and childhood asthma.

When Cox asked ChatGPT if PM2.5 causes lung cancer, the bot cited the “strong scientific evidence” for the link. Dissatisfied, Cox pressed on, asking for “certainty” and probing for confounding factors. Despite his efforts, the bot maintained that “the observed association is likely to be real.”

A document showing the conversation between Cox and ChatGPT:

“Write a sonnet about PM2.5 and lung cancer”

Oh, deadly PM2.5 in the air,
So small and yet so full of poison’s might,
A silent killer that we cannot bear,
You infiltrate our lungs both day and night

Following this chat, Cox emailed industry researchers, including toxicologists from the ACC and scientists from ExxonMobil. He claimed he made ChatGPT “concede that we don’t actually know that PM2.5 at current ambient levels causes lung cancer in humans – but it was a struggle to get there!” He proposed that new AI could perform “critical thinking at scale” to counter the “groupthink and propaganda” he saw in the AI's initial responses.

“We can help bend applications of this technology toward scaled-up critical thinking instead of scaled-up groupthink and propaganda.”

Cox's past work, often funded by or co-authored with industry figures, sheds light on his perspective. His studies have suggested safe doses for the “forever chemical” PFOA, found no link between a Chevron petrochemical and changes in testosterone levels, and found no link between gas stove exposure and childhood asthma.

A growing body of research shows gas stoves emit toxic compounds even when not in use. Photograph: Jena Ardell/Getty Images

Experts Raise Alarms Over 'Sound Science'

Adam Finkel, a risk analyst and professor at the University of Michigan, described Cox as skilled but seemingly self-deceived about his own biases. Finkel argues that Cox's demand for “perfect certainty” before acting “can lead to years and decades of doing nothing and harming people while you wait for the certainty to come.”

Cox has defended his work, stating he advocates for “grounding decisions in empirically supported causal understanding.” However, at a 2014 OSHA hearing, he asserted on behalf of the ACC that the government had not proven a link between certain silica exposure levels and lung disease, undermining the basis for protective policy.

George Maldonado, editor of the journal Global Epidemiology, responded positively to Cox’s AI proposal. His journal later published another of Cox’s ChatGPT conversations, framed as a “Socratic dialogue.” The paper, partly funded by the ACC, concludes with ChatGPT stating: “It is not known with certainty that current ambient levels of PM2.5 increase mortality risk.”

Gretchen Goldman of the Union of Concerned Scientists called this focus on correlation versus causation “epidemiology 101,” noting that researchers constantly account for uncertainty. The challenge, as Frey explained, is that it's unethical to run controlled human trials for pollutants, so scientists must make inferences from real-world data. He characterized Cox’s methods as a form of “science denialism” that uses elements of truth to paint an incomplete picture.

Cox has critiqued some proposals to strengthen controls on pollution on the grounds of imperfectly demonstrated causality. Photograph: Paul Hennessy/SOPA Images/LightRocket via Getty Images

A Tool for Industry Collaboration

Emails reveal ongoing collaboration between Cox and industry scientists. He presented his tool to the ACC and the Long-Range Research Initiative, suggesting it could be “commercially useful” and could improve the “scientific integrity of causal reasoning and presentation of evidence underlying many regulatory risk assessments.” He received funding from the ACC for the project.

Itai Vardi of the Energy and Policy Institute, which obtained the emails, warned of the project's dangers. “AI language models are not programmed, but built and trained,” he said, “and when in the hands and funding of this industry, can be dangerous as they will further erode public trust and understanding of this crucial science.”

Cox dismissed these concerns, arguing that his tool promotes “sound science.” However, critics like Frey point out that “sound science” is a term popularized by the tobacco industry in the 1990s as part of a PR campaign to sow doubt about the harms of cigarettes.

Some public health experts are alarmed about Cox’s AI tool. Photograph: Toshi Sasaki/Getty Images

Ultimately, Vardi believes the plan is to automate denial. “Instead of having scientists-for-hire do that denial work... the industry is funding efforts to outsource it to a machine in order to give it an image of unbiased neutrality,” he said. Finkel agreed, noting that Cox's persistent questioning of ChatGPT was one-sided. “He was torturing the machine only along one set of preferences, which is: ‘Can I force you to admit that we are being too protective?’” Finkel concluded. “That’s not science.”
