Should your city embrace GenAI? It’s complicated
Betsy Montiel is a policy analyst for the League of California Cities. She can be reached at bmontiel@calcities.org. Additional contributions by Brian Lee-Mounger Hendershot, Western City managing editor.
Is generative artificial intelligence — or GenAI — merely a dubiously useful, or worse, arguably harmful product? Or is it a profound breakthrough that will revolutionize how we work, live, and co-create?
Many local government officials are finding that it’s somewhere in between. The Harvard Business Review found that cities in the United States use artificial intelligence to help automate tasks, make data-driven decisions, and engage residents. But as with any new tool or policy, there are benefits and costs to using AI.
“With any use case, there needs to be a real problem and a real need before we immediately jump to a tool or a solution,” said Omar Moncayo, a data privacy analyst for the city of Long Beach.
How are cities starting to use GenAI?
Traditional AI models comb through data to make decisions or predictions. GenAI uses data to produce, synthesize, or reorganize text, images, music, and even computer code. It’s the latter that has sparked widespread interest and investment in recent years.
According to Chris McMasters, the chief information officer for Corona, many cities already used traditional AI for routine tasks before GenAI took off. Corona uses AI to analyze images captured by cameras on city-owned vehicles to detect and grade potholes. The system generates automated work orders for street and road repairs, which city staff review and prioritize. Some of the city’s fire hydrants also have AI-enabled sensors that help staff identify leaks.
Corona, like many other cities, leaned into GenAI to generate staff reports, which helps increase efficiency across departments. “When you’re doing a staff report, AI can go years back and pull up that data, cite it, and help you build out a staff report — or at least 80% [of it],” McMasters said.
Long Beach has also long used AI for routine tasks, such as monitoring trending topics on social media. According to Ryan Kurtzman, the city’s technology partnerships officer, Long Beach mostly uses GenAI for elementary tasks, such as writing emails or developing memos.
In fact, many cities are only now laying the groundwork for GenAI-powered workflows. Ed Miranda is Newark’s first-ever IT director and a former president of the Municipal Information Systems Association of California. Newark is using state and federal dollars to grow its technical infrastructure and cybersecurity. According to Miranda, AI allows the city to prevent cyberattacks with a relatively small department.
Newark is also part of the Government AI Coalition, which promotes responsible and purposeful AI in the public sector. Miranda said that sharing knowledge and opportunities with other cities, especially through associations, is critical to success.
“There is an abundance of knowledge available, and we aim to share our experiences,” he said. “I’ve worked in the private sector, and it’s common for companies to keep their information private, often not sharing it with their competitors.”
Other cities use AI to help create agendas, take notes, process public records requests, translate documents, search databases, and communicate with residents through chatbots.
Experts — including those interviewed for this story — agree that it’s crucial for cities to put guardrails around anything AI produces and have a human verify it. “There are times when I review certain outputs from platforms like ChatGPT, and it’s evident that the quality is lacking,” Miranda said.
What do GenAI guardrails look like?
When implementing GenAI, cities need to think both internally and externally. UNESCO found that engaging residents is crucial to ensuring equal access to the potential benefits and opportunities of AI. The Long Beach Technology and Innovation Department has been hosting community workshops on data privacy, both to better understand stakeholders’ perceptions of the city’s data privacy practices and to share practical tips for securing personal information.
“Because of the nature of AI and potential privacy impacts, it landed on our desk to be the ones to steward the safe usage and responsible usage of it,” Moncayo said.
The city also created a Digital Rights Platform that discloses how it uses public data collected by smart technologies, some of which involve AI. It translated the platform into multiple languages using GenAI and the city’s language access translators to further promote accessibility and visibility.
Melanie Chaney, managing partner with Liebert Cassidy Whitmore, recommended that cities ensure AI companies do not use their data to train other models. Cities, like other employers, must comply with privacy and confidentiality laws that are already in place. Many GenAI models are trained on proprietary information without the owner’s or user’s knowledge. When a city contracts with an AI company, it may expose its data to the large language models the company deploys, which could pose both privacy and safety concerns.
However, employers should also recognize that people are already using AI in their work. “We need to at least make sure that we are having human eyes on things,” Chaney said.
What other challenges do cities face when implementing GenAI?
When it comes to adopting any sort of AI tool — but especially GenAI — cities must contend with two other sets of challenges. The first set should sound familiar. “Anytime you introduce new technologies in government, everyone freaks out … so you’re always dealing with sort of, resistance to change,” McMasters said.
The second set is more concrete: budgets and data. A city’s budget is often the biggest obstacle to adopting and implementing new technologies, closely followed by data management. Cities may have decades of data, and cataloging it is not a task for the faint of heart. This is where GenAI can either help or harm cities.
Many GenAI models can clean and sort through data at the speed of light — or at least faster than the new intern. But those tools are only as good as the data they’re trained on. If your city has poor data management practices and weak oversight of a new AI model, that cleanup could cause more harm than good.
“I think the other component is understanding when [Gen]AI is making its own opinion … and where it is citing the information from so that it doesn’t hallucinate,” McMasters said.
Hallucinations — erroneous AI responses that seem credible — are just the tip of the iceberg. Public agencies are notoriously slow to move, often for good reason. But many tech startups have long embraced a “move fast and break things” model, and GenAI is no exception. It’s crucial that cities avoid an “overeager” approach to GenAI, Kurtzman argued.
“I’m hearing new use cases every day, but I think there is a real risk, not only to city data but also to public trust,” he said.
Unique to GenAI are its massive power costs. The rapid growth of AI is straining existing grid infrastructure and driving up demand for data centers. The Los Angeles Times reported that data centers consume over 60% of one California city’s electricity. Goldman Sachs found that one ChatGPT query uses nearly 10 times as much energy as a typical Google search.
“Nothing is free,” Chaney said. “Everything comes with a cost, and it’s pretty remarkable how much energy these things take up.”
What’s the best way to implement GenAI?
When it comes to integrating GenAI into your workflow, trust your bureaucratic instincts: Pilot, pilot, and pilot some more. Every person interviewed for this story stressed the importance of piloting AI and GenAI tools before committing to them long-term.
“I know it’s not going away and it’s just going to become bigger,” McMasters said. “You need a proof of concept to show results, versus just unleashing AI into an organization and letting it be a free-for-all. Keep it small, keep it concise, keep it cheap, and then produce a result.”