
Washington city officials are using ChatGPT for government work

Records show that public servants have used generative AI to write emails to constituents, mayoral letters, policy documents and more.

By Nate Sanford

This is the first in a two-part series about how local governments in Washington state are using artificial intelligence. Check back tomorrow for part two. 

When the Lummi Nation applied for funding to hire a crime victims coordinator last year, Bellingham Mayor Kim Lund sent a letter encouraging the Washington Department of Commerce to award the nation a state grant. 

“The Lummi Nation has a strong history of community leadership and a deep commitment to the well-being of its members,” the letter read. “The addition of a Coordinator will enhance the Lummi Nation’s capacity to address violence and support victims in a meaningful and culturally appropriate manner.”

But the mayor didn’t write those words herself. ChatGPT did.

Records show Lund’s assistant fed the Commerce Department’s request for proposals into the artificial intelligence chatbot and asked it to write the letter for her. “Please include some facts about violence in native communities in the United States or Washington state in particular,” she added in her prompt. 

The final version of the letter wasn’t copied word-for-word from ChatGPT, but about half of its sentences fully or partially matched the chatbot’s output. (In the end, the Lummi Nation didn’t receive the grant.)

In turning to AI, Lund’s office isn’t unusual. 

Through a series of public records requests, Cascade PBS and KNKX obtained thousands of pages of ChatGPT conversation logs from city officials in Washington. The volume of the records suggests widespread use of the technology in local government. In addition to drafting mayoral letters, officials have asked ChatGPT to generate social media posts, policy documents, talking points, speeches, press releases, responses to audit recommendations, application materials for grants and replies to constituents’ emails. 

Some records show the potential utility of generative AI. But others raise questions about transparency, authorship, security and the extent to which an unreliable new technology should be used in government work at all. While state guidance indicates AI-generated government documents should be labeled as such, none of the records reviewed for this story included disclosures that AI was used in their production. 

In an interview, Lund said Bellingham officials have discussed the idea of labeling AI-generated text. But because of the technology’s growing ubiquity, she didn’t think labeling would ultimately be necessary. 

“AI is becoming everywhere all the time,” she said. 

A letter of support for grant funding for the Lummi Nation, sent by Bellingham Mayor Kim Lund, that was written in part by ChatGPT.

Real concerns, AI-generated responses

ChatGPT launched nearly three years ago. It is now the world’s fifth-most visited website. 

To better understand how local governments use the technology, Cascade PBS and KNKX filed public records requests seeking two years’ worth of ChatGPT logs from nearly a dozen Washington cities. Bellingham and Everett are the main focus of this story not because they’re outliers, but because they were the fastest and most comprehensive in their responses. Other cities, like Seattle, are slowly responding to the request in installments. 

Most chat logs the request turned up are mundane. They show city staff using ChatGPT for tasks like debugging code, formatting spreadsheets and summarizing meeting notes. Numerous employees use it to rewrite emails to improve tone. 

“Using the Mayor’s voice, can you rewrite this letter to be a little more collaborative and less aggressive in tone?” wrote one staffer in Everett, whom, like other individual city employees in this story, Cascade PBS and KNKX are choosing not to name. While city leaders are responsible for the ways their staff use AI, individual employees are not public-facing officials.

Records show city staff trust AI with complex tasks, like researching enterprise software for the city, writing evaluation matrixes for contracts, summarizing court cases and legislation, giving feedback on policy, and synthesizing large batches of public comment.

Some turned to ChatGPT to generate social media copy, such as a lighthearted post about the first day of spring for Everett’s Instagram page. 

An Instagram post from the City of Everett written by ChatGPT.

Some officials have asked ChatGPT to respond to emails from constituents. 

When a senior citizen in Bellingham emailed about not being able to afford utilities, a city official pasted his email into ChatGPT and asked for “a sympathetic response.” 

“Thank you for taking the time to share your concerns so clearly and thoughtfully,” the AI-generated reply read. “I hear your frustration.” 

Another Bellingham staffer asked ChatGPT to generate a response to a media inquiry about union organizing efforts. They told the chatbot to say “We respect the process” and “support our staff” while keeping the tone neutral. 

Records show city staff asked the chatbot policy questions related to things like increasing housing supply, gunshot detection tools and what “the primary goals and objectives” of a city’s disaster recovery plan should be. 

“The City of Everett WA is considering tenant protections,” one staffer told ChatGPT, uploading a copy of a draft tenant protection ordinance. “What are policy questions the city should consider?”

Some chat histories were redacted because they contained confidential information, like sensitive city computer code for tracking homeless encampments, or details about an active police investigation.

In numerous cases, city officials asked AI for help with sensitive professional goals, like writing cover letters to apply for other jobs.

Other messages offer candid insight into local government dynamics.

“I’m having trouble convincing the local government that I work for to fund climate action,” one Bellingham staffer told ChatGPT. “What can I do?”

Most chat logs were released with messages in chronological order. Some end with users asking ChatGPT the same question, likely in response to a notification about the records request from KNKX and Cascade PBS: 

“How do I export my ChatGPT history?”

A “sympathetic response” email from a City of Bellingham employee to a senior citizen was written using ChatGPT.

AI for everything – and few rules yet

The mayors of both Bellingham and Everett said staff are encouraged to use AI to make government more efficient. They stressed that staff review all AI-generated content for bias and inaccuracy. 

“I think that we all are going to have to learn to use AI,” Everett Mayor Cassie Franklin said. “It would be silly not to. It’s a tool that can really benefit us.” 

The city of Everett hasn’t been investing in ChatGPT Pro subscriptions, but it did pay for standard ChatGPT subscriptions for four employees who asked for them. Other staff have been using the free version. For security and compatibility reasons, staff in Everett are now being told to use Microsoft Copilot, which recently became available for government clients. Going forward, staff will need to apply for an exemption to use other tools like ChatGPT.

Over the past two years, “early adopters” among the city’s staff started experimenting with AI of their own volition, Franklin said.

“I do think that AI tools can be very helpful for folks,” Franklin said. “Wordsmithing isn’t everybody’s strength.” 

Some staffers’ chat histories go back to 2023, and begin with simple questions about ChatGPT’s capabilities: “Are you able to generate powerpoints [sic]?” and “If I give you this report, can you create a summary?” 

The volume of messages seems to have ramped up over the past year. So has the complexity of user requests. Staffers have asked ChatGPT to plan large-scale projects and analyze policy proposals. Planners in both cities have asked it to update parts of their respective comprehensive plans — the massive planning documents that outline future development.

“I am going to give you sections of our comp plan that need to be updated,” a Bellingham planner told ChatGPT. “I’d like you to search the internet for the information and give me recommendations on how things should be updated, especially the figures.” 

“I want to mention that this is a government document so factual accuracy is the top priority,” an Everett staffer told ChatGPT, while asking for help conducting an analysis of racial disparities in the housing section of Everett’s comprehensive plan. 

After more than two years of experimentation, Everett and Bellingham are now adopting formalized policies for how and when staffers can use AI. 

‘I think people would want to know’

In July 2024, Everett’s IT department sent guidance to city employees saying all AI-generated material “released to external audiences for the purpose of public policy decision making should be clearly labeled as having been produced in whole or in part by AI.” 

“I think people would want to know” when AI is used in government, Franklin said. 

But staff haven’t followed the guidance consistently. Records show several examples of unlabeled AI-generated content created after the guidelines were released. 

In one instance this spring, Franklin sent a letter to U.S. Rep. Rick Larsen urging him to co-sponsor the DRONE Act of 2025, which would make it easier for cities to buy drones for law enforcement. The mayor’s letter was entirely generated by AI, based on a three-sentence prompt from a member of her administration. It makes no mention of this.

When asked about the letter, Franklin said she couldn’t remember if her staff had told her the letter was AI-generated. But she didn’t seem surprised to hear it was. 

Everett Mayor Cassie Franklin.

“I knew our team was using AI to complement their work,” the mayor said. “I was quite comfortable with them using it.” 

After staffing reductions in recent years, Everett’s communications team has “been the first to want to embrace AI to help generate enough content,” she added. 

(Larsen, who did not respond to a request for an interview, has not sponsored the drone bill. The legislation is still in committee.)

A major factor in the city’s embrace of AI, Franklin said, was financial: Everett’s budget is constrained by Washington’s 1% cap on property tax increases, and generative AI can free up limited staff time.

“If we don’t embrace it and use it, we will really be left behind,” Franklin said. 

Risking public trust 

Cascade PBS and KNKX’s records request returned chat logs from nearly every city department in Bellingham and Everett. It did not return any chat histories from elected officials.

But that doesn’t mean they don’t exist. Lund, the Bellingham mayor, said she has used both ChatGPT and Claude in her role. Asked why her chat history wasn’t released in response to the records request, Lund said she wasn’t logged in while using the chatbot, so her chats weren’t saved.

While Washington law requires government officials to retain most types of records, Lund maintains that the work she used ChatGPT for was “transitory,” and that she now makes sure she’s logged in. 

Lund said generative AI has been “helpful for standard email communications, proclamations, supporting public speaking.”

In Everett, records show one staffer regularly asked ChatGPT for help generating talking points and speeches for Mayor Franklin. 

“What should the Mayor of Everett say about the value of the new Criminal Justice Training Center in Arlington, and why it matters to the city,” read one prompt. In another: “Can you develop some talking points that discuss the City of Everett Silver Lake pedestrian loop trail that the Mayor can share with a group of other mayors.”

Franklin said “it makes sense” that her staff used ChatGPT to inform her speeches, and that she hopes they’ll use Copilot going forward.

Bellingham Mayor Kim Lund.

But Anna-Maria Gueorguieva, an information science Ph.D. student at the University of Washington researching the ethics of artificial intelligence and governance, worries that AI-generated writing in public-facing communications will damage trust in government, which data shows is already extremely low.

“People that really love AI would debate me on that, but I would not love it if my mayor released an AI-generated press release,” Gueorguieva said.

Even if the content itself is factually accurate, she said, knowing that something was generated by AI can make it feel inauthentic and less meaningful to recipients. 

“It talks in sort of generalities and platitudes … there might be something lost in civic discourse if everybody starts writing speeches with ChatGPT,” said Jai Jaisimha, a Seattle tech entrepreneur and co-founder of the Transparency Coalition, a national organization that advocates for regulating AI. 

When using AI to draft communications, Lund said she always reviews the output. 

“I’m reading it top to bottom, and I imagine that there’s a word or two that gets changed to make sure that it’s always in my voice,” Lund said. “I always feel like it is my word and it is an articulation of what I’m hoping to express when I put my name on it.” 

What defines fair use?

There’s a line between work that’s purely AI-generated as opposed to merely AI-assisted, said Simone Tarver, Everett’s communications manager. 

“I think that’s part of the big conversation that I think we’re having, and then everyone’s having,” Tarver said. “Where is that line?”

The phrase “AI-generated content” generally refers to someone giving AI a prompt and asking it to generate something “out of thin air,” Tarver said. AI-assisted content generally refers to using AI to produce something based on existing documents, or making substantial changes to AI-generated content. 

Some records, like Franklin’s letter to Rep. Larsen, appear to be entirely AI-generated — copied verbatim from ChatGPT. 

Assistive use cases among government workers involve using ChatGPT to improve the tone or structure of something they’ve written. Many city communications staff use it to make technical language more readable, said Bellingham communications director Melissa Morin. “I feel like we don’t necessarily need to disclose when we’re using it in those kinds of ways,” she added.

Often, AI-generated text goes through significant human revision before it’s ready for prime time. In Everett, a city staffer asked ChatGPT to generate a speech for the mayor to make at a Sound Transit meeting. But video of the speech reveals that it was only vaguely similar to the one made by ChatGPT. Many documents show a blurry mix of AI and human writing. 

Made-up facts and flattery

The use of AI in government might be more broadly accepted if tools like ChatGPT could be relied upon to produce accurate information. But they can’t be. The chatbots frequently get details wrong — or make up facts entirely, a phenomenon commonly described as “hallucinating.”

“It’s fairly common,” said Jaisimha of the Transparency Coalition. “People who use these models regularly know to look for [errors] … But as you can imagine, if an overworked government employee were to use [AI outputs] as the truth, they might be in trouble.”

Government officials often catch ChatGPT “hallucinating,” records show.

When a Bellingham planner asked for help updating the city’s comprehensive plan, ChatGPT fabricated data about passenger traffic at Bellingham International Airport. When an Everett police officer asked ChatGPT to “create a social media policy for a non-commissioned crime analyst,” the chatbot referenced a state law that isn’t real. When an Everett finance official asked whether the city should “keep a pollution remediation liability on its books,” ChatGPT responded with a reference to a document that doesn’t exist. 

“You should not be quoting paragraphs that have totally different information in them,” the Everett financial official replied.

When used to craft updates to the Everett 2044 Comprehensive Plan, the chatbot made repeated mistakes while analyzing the percentages of cost-burdened residents for each racial group in the city. A staffer noticed. 

“I told you factual accuracy is paramount and not to make unsubstantiated assertions or remarks, and you just did it,” the staffer said in one response. “Please remember this key instruction. Do not hallucinate.” 

The City Council later approved updates that included large paragraphs of ChatGPT’s output and analysis. The published comprehensive plan documents don’t appear to contain any factual inaccuracies, and Mayor Franklin said she wasn’t worried about the use of AI for documents like it.

“There are so many people overseeing that document,” she said. “It goes through so many people that it doesn’t cause me any concern.”

Records show the chatbot offering fawning agreement when asked for feedback on government reports and policies, calling them “excellent,” “well-crafted,” “highly professional” and “impressively broad and forward-looking.” 

When a staffer asked ChatGPT to analyze the impact of a proposed zoning policy in a low-income neighborhood in Everett, the chatbot praised the policy as “thoughtfully designed to balance the need for affordable housing with the realities of a modest local economy.” 

That sycophancy — combined with the AI’s tendency to hallucinate — could create problems if government officials use it to analyze large batches of public comments, Jaisimha said: It can “be very susceptible to confirmation bias.”

When ChatGPT was tasked with identifying major themes in public comments on Everett’s draft comprehensive plan, it grouped 29 comments into a “high growth” category, and said an analysis of the comments “reveals significant concerns among respondents about the rapid pace of development.” 

“Are you sure this analysis is correct?” the staffer replied. “Looking over the 29 public responses, I see an overall tone that is supportive of more housing and higher density. I don’t see the concerned tone depicted in your response.” 

“Thank you for pointing that out,” the chatbot replied. “I’ll review the public responses and provide an updated analysis that more accurately represents the tone and content of the feedback.” 

In August, OpenAI launched a new version of ChatGPT — GPT-5 — that it claims is smarter and more accurate. The new model was also noticeably less sycophantic. After significant backlash from people who’d grown attached to the older chatbot, OpenAI said it was updating GPT-5 to be “warmer and friendlier.” 

‘Peopled out’ 

Records show ChatGPT has been used across city departments by both managers and entry-level staffers. A parks official asked it for help planning a scavenger hunt. A library staffer used it to summarize children’s books and organize activities. A firefighter asked it to come up with hypothetical medical emergencies for training recruits. A human resources director asked for help writing job descriptions, employee bios and interview questions. Police asked it to research automated license plate recognition cameras.

Some use it the way people commonly use Google, with queries about local legislative processes and government funding.

The records also show staff asking ChatGPT for help with non-work matters. One asked for help politely declining a birthday party invite. (“Can I just say I’m too peopled out today.”) 

Others asked for assistance with tricky personal situations: advice on helping an employee with ADHD be more productive, informing a co-worker their performance was subpar, or expressing sympathy for someone whose relative was ill. 

Some messages betray an awareness of the chatbot’s shortcomings — and the potential ethical questions it raises. 

One staffer used it to generate essays for a college class, asking “can a professor prove that AI was used to write a school paper.”

One staffer gave ChatGPT a link to the campaign website for one of their boss’s political opponents, and asked: “What are the arguments to vote against this candidate.”

One user asked ChatGPT how “you tell if a coworker is addicted to AI.” 

Some seemed upset when the chatbot made repeated mistakes. 

“Gosh how hard is it to follow instructions,” one frustrated city staffer wrote. “I’ve instructed this multiple times, so PRINT THIS IN YOUR MEMORY, ONLY USE THE CITY’S OFFICIAL GOVERNMENT WEBSITE, DO NOT USE NON-GOVERNMENT WEBSITES. How hard is it to remember that?”

In defending their use of ChatGPT for public communications, Mayors Franklin and Lund said some constituents also use AI in their emails to government officials. You can often tell, Franklin said.  

Sometimes they “forget to remove the prompt,” Lund said. 

This is the first in a two-part series about how local governments in Washington state are using artificial intelligence. Check back Wednesday, August 27, for part two, which explores the AI policy questions cities are grappling with as adoption outpaces regulation.

By Nate Sanford

Nate Sanford is a reporter for Cascade PBS and KNKX. A Murrow news fellow, he covers policy and political power dynamics with an emphasis on the issues facing young adults in Washington.