Washington Cities Adopt AI While Policies Lag Behind
When heavy snow hit Bellingham in February, resident Bre Garcia emailed the city, concerned that major streets weren't being plowed. The response she received felt dismissive, as if her email hadn't been read at all. As it turns out, her instincts were right. Public records revealed the city's reply was almost entirely generated by ChatGPT, with a city staffer adding only four words.
Garcia's experience highlights a growing trend in Washington's local governments: the rapid adoption of artificial intelligence is far outpacing the development of ethical guardrails and official policies. When she learned her complaint had been handled by a chatbot, Garcia felt brushed aside. "It makes me wonder if anybody actually sat down to seriously think about what I was saying," she said.
The Wild West of AI in Local Government
Bellingham Mayor Kim Lund confirmed that responding to constituent emails is a "high-use case" for AI, viewing it as a tool for efficiency. However, the city is still developing a formal AI policy, leaving questions about transparency—like whether the public should be notified of AI use—unanswered. This policy vacuum isn't unique to Bellingham. A nationwide survey of government IT directors found that nearly 80% were concerned about the lack of clear AI regulations.
While the state of Washington has issued interim guidelines for its employees—mandating human review, security precautions, and disclosure of AI use—local adoption is inconsistent. According to Jai Jaisimha of the Transparency Coalition, an organization advocating for AI regulation, "There’s quite a bit of work to do."
State guidance requires that AI-generated content be labeled with the model used, the prompt, and the name of the human reviewer. However, this rule does not appear to be widely followed.
Two Cities, Two Different AI Strategies
Everett and Bellingham are taking notably different paths. Everett's IT department is developing a robust policy modeled on a template from the GovAI Coalition. IT director Chris Fadden described their strategy as a "very cautious approach." Going forward, Everett staff will be directed to use Microsoft Copilot, a more secure, government-focused chatbot, while other chatbots will require a special exemption. The city's new policy will also recommend citing AI if it's used for more than simple language refinement.
In contrast, Bellingham has taken a more "permissive approach," according to IT Director Don Burdick. Staff are encouraged to explore various AI tools, including ChatGPT, while being told to remain "skeptical and humble." Burdick suggested that if an employee reviews a chatbot's output, disclosure isn't necessary because the human assumes responsibility for the content. Ironically, records show that Bellingham's own draft AI policy was written with significant help from ChatGPT, with seven of its ten guiding principles copied from the chatbot's output.
From Policy Questions to Personal Fun
While officials agree AI shouldn't be used for high-stakes decisions like screening job applicants or operating autonomous weapons, records show employees are experimenting widely. Staff have asked ChatGPT for advice on policy matters, such as the best location for Everett's light-rail station. Many uses are less serious, with employees generating images and minion memes, or even asking it to "generate a picture of me looking awesome."
The High Risk of Sharing Sensitive City Data
A major concern is data privacy. Bellingham City Councilmember Jace Cotton expressed worry about "the privacy implications of feeding city information into algorithms that are black boxes." These concerns are well-founded. Records had to be redacted after a Bellingham IT staffer fed the city's GIS computer code for tracking homeless encampments into ChatGPT for debugging. Other redactions were made to protect employee performance reviews and information about an open police case, all of which had been entered into the non-secure platform.
Burdick stated that training clarifies that staff should not input confidential information, but acknowledged that "training staff to use generative AI tools well will take time."
AI Writes Grant Applications for Millions in Funding
One of the most consequential uses of AI has been in generating applications for state and federal funding. In Bellingham, a planner used ChatGPT to write a racial equity narrative for a grant application requesting funding for pedestrian safety projects. In Everett, a staffer used the chatbot to fill out large sections of a $7 million affordable-housing grant application for HUD, including generating over 20 individualized letters of support from local leaders and organizations. The final application included 23 letters, almost all of which were identical to those created by ChatGPT.
This practice is raising alarms. The National Institutes of Health recently announced it will no longer consider applications substantially developed by AI, citing fairness concerns.
Experts Weigh In on Government AI Ethics
Experts are divided on whether AI can be used ethically in government. Emily Bender, a linguist at the University of Washington and a prominent AI critic, calls large language models "synthetic text-extruding machines" that produce "text that nobody has accountability for." She argues the solution to overworked civil servants is more resources, not unaccountable technology.
Jai Jaisimha of the Transparency Coalition believes ethical use is possible but requires significant guardrails for security and transparency.
The Inevitable Future of AI in City Hall
Despite the challenges, the mayors of both Bellingham and Everett see AI as part of their future. Everett is creating a team of "AI champions" to support staff, while Bellingham's draft policy—itself copied from ChatGPT—encourages its use to improve efficiency. Bellingham's IT director envisions a future with a city website chatbot to answer resident questions.
But for residents like Bre Garcia, the focus on efficiency misses the point. As a customer service professional, she values human connection. "I don’t know why I would replace that," she said.