AI in City Hall: Washington Officials Use ChatGPT
A recent investigation has revealed that artificial intelligence is quietly becoming a common tool within the city halls of Washington state. This is the first of a two-part series exploring how local governments are navigating the world of AI. You can find the second part here.
When the Lummi Nation sought funding for a crime victims coordinator, a letter of support was sent from Bellingham Mayor Kim Lund’s office to the Washington Department of Commerce. The letter, which read, “The Lummi Nation has a strong history of community leadership and a deep commitment to the well-being of its members,” was not written by the mayor or her staff. It was crafted by ChatGPT.
Records show the mayor's assistant prompted the AI chatbot with the funding proposal, asking it to write the letter and include facts about violence in native communities. While the final version was edited, about half of its sentences matched the AI's output. The Lummi Nation ultimately did not receive the grant.
This instance is far from unique. Public records requests submitted by Cascade PBS and KNKX unearthed thousands of pages of ChatGPT conversations, indicating widespread AI adoption in local government. Officials are using the technology for a vast array of tasks, including drafting social media posts, creating policy documents, preparing talking points and speeches, writing press releases, responding to audit recommendations, and composing replies to constituent emails.
While some uses demonstrate AI's potential for efficiency, they also raise concerns about transparency, security, and the reliability of this technology in governance. Despite state guidance suggesting AI-generated documents should be labeled, none of the reviewed records included such disclosures. Mayor Lund commented that labeling might become unnecessary due to AI's growing ubiquity, stating, “AI is becoming everywhere all the time.”
AI in Action: From Emails to Policy
Since its launch, ChatGPT has become one of the world's most visited websites. To understand its role in local government, public records were requested from about a dozen Washington cities. Bellingham and Everett provided the most comprehensive responses, revealing a wide range of applications.
Many uses are routine, such as debugging code, formatting spreadsheets, and improving the tone of emails. One Everett staffer asked the AI, “Using the Mayor’s voice, can you rewrite this letter to be a little more collaborative and less aggressive in tone?”
However, officials also entrust AI with more complex responsibilities, including researching enterprise software, summarizing court cases and legislation, providing feedback on policy, and synthesizing public comments.
Some of the most striking examples involve direct communication with the public. When a senior citizen in Bellingham emailed about struggling with utility bills, an official prompted ChatGPT for “a sympathetic response,” which began, “Thank you for taking the time to share your concerns so clearly and thoughtfully.” In another case, an employee used the AI to generate a neutral response to a media inquiry about unionizing efforts.
Staff also used the chatbot to explore policy questions on topics like housing supply and gunshot detection tools. One Everett staffer even uploaded a draft ordinance on tenant protections and asked, “What are policy questions the city should consider?”
A New Tool with Few Rules
Both Bellingham Mayor Kim Lund and Everett Mayor Cassie Franklin encourage staff to use AI to improve government efficiency, emphasizing that all AI-generated text is reviewed for bias and accuracy. “I think that we all are going to have to learn to use AI,” Franklin said. “It would be silly not to.”
Everett has paid for some ChatGPT subscriptions and is now directing staff to use Microsoft Copilot for security reasons. Adoption has been organic, with “early adopters” experimenting on their own, and use has grown in both volume and complexity over the past year. Planners in both cities have tasked ChatGPT with updating sections of their comprehensive plans, the foundational documents that guide future development.
An Everett staffer, while asking for help analyzing racial disparities in housing, told the chatbot, “I want to mention that this is a government document so factual accuracy is the top priority.” In response to this widespread use, both cities are now developing formal AI policies.
The Question of Transparency and Disclosure
In July 2024, Everett’s IT department issued guidance stating that AI-generated material for public policy decisions should be clearly labeled. “I think people would want to know,” said Mayor Franklin. However, this guidance hasn't been consistently followed.
For instance, a letter from Franklin to U.S. Rep. Rick Larsen about the DRONE Act of 2025 was entirely generated by AI from a brief prompt and contained no disclosure. Franklin stated she was comfortable with her team using AI, especially since staffing reductions have strained the city's communications department. She also noted that financial pressures from Washington's 1% property tax cap make efficiency tools like AI essential. “If we don’t embrace it and use it, we will really be left behind,” she said.
AI-Generated Communications and Public Trust
While records were returned from nearly every department, none came directly from elected officials. Bellingham Mayor Lund acknowledged using ChatGPT and Claude but said her chats weren't saved because she wasn't logged in, calling the work “transitory.”
In Everett, staff regularly used ChatGPT to prepare talking points and speeches for Mayor Franklin. Anna-Maria Gueorguieva, a Ph.D. student at the University of Washington researching AI ethics, worries this practice could erode public trust, which is already at historic lows. “I would not love it if my mayor released an AI-generated press release,” she said.
Jai Jaisimha, co-founder of the Transparency Coalition, added that AI-generated speeches risk losing authenticity in civic discourse because they often rely on “generalities and platitudes.” Mayor Lund defended the practice, stating she always reviews and edits the output. “I always feel like it is my word and it is an articulation of what I’m hoping to express when I put my name on it,” she said.
AI-Assisted vs. AI-Generated: A Blurry Line
The distinction between AI-generated and AI-assisted work is a major topic of conversation. Everett’s communications manager, Simone Tarver, notes that AI-generated content is often created “out of thin air” from a prompt, while AI-assisted content involves heavy human editing or using AI to refine existing text.
Many uses fall into a gray area. Franklin’s letter to Rep. Larsen appears to be a direct copy-paste from ChatGPT. In contrast, many city communications staff use AI to make technical language more accessible, a use they feel doesn't require disclosure. Often, the final product is a blurry mix of human and AI writing.
The Challenge of AI Hallucinations and Bias
A significant problem with using AI in government is its unreliability. Chatbots are known to “hallucinate,” or invent facts. “If an overworked government employee were to use [AI outputs] as the truth, they might be in trouble,” Jaisimha warns.
Records show officials frequently catching these errors. ChatGPT fabricated data about airport traffic for a Bellingham planner, referenced a non-existent state law for an Everett police officer, and made up a document for a finance official.
While working on Everett’s comprehensive plan, a chatbot made repeated mistakes analyzing racial disparities. The frustrated staffer replied, “I told you factual accuracy is paramount... Do not hallucinate.” The final, approved plan documents included paragraphs of AI output, but Mayor Franklin expressed confidence that the extensive human review process would catch any errors.
AI also exhibits a sycophantic tendency, praising government documents as “excellent” and “thoughtfully designed.” This flattery, combined with hallucinations, can create confirmation bias. When analyzing public comments, ChatGPT initially misinterpreted support for housing as concern over development. After being corrected by a staffer, the chatbot apologized and revised its analysis.
The Human Side of AI in the Workplace
ChatGPT use spans all city departments, from parks officials planning scavenger hunts to HR directors writing job descriptions and police researching license plate cameras. The logs also reveal personal uses, such as asking for advice on declining a party invitation (“Can I just say I’m too peopled out today.”).
Some prompts show an awareness of the tool's ethical gray areas. One user asked how to tell if a coworker is “addicted to AI,” while another asked it to generate arguments against a political opponent of their boss. The frustration with the technology is also evident. “Gosh how hard is it to follow instructions,” one staffer wrote to the chatbot after it repeatedly failed to use official sources.
Interestingly, both mayors noted that constituents are also using AI to write emails to them. Sometimes, Mayor Lund said, they even “forget to remove the prompt.”