Harnessing AI’s power for cities while combating misinformation
Doug Levy is a communications strategist and author of “The Communications Golden Hour: The Essential Guide to Public Information When Every Minute Counts.” He can be reached at doug@douglevy.com. David Oro is the vice mayor of American Canyon; he can be reached at david.oro@americancanyon.gov. Brian Baker is the president and founder of Big Sky Crisis Communications; he can be reached at brian@bigskycrisiscommunications.com. Alexa Davis is the assistant city manager for Rolling Hills Estates; she can be reached at alexad@rollinghillsestates.gov.
Just as public officials once had to master social media, today they must master artificial intelligence. The public launch of ChatGPT in November 2022 put generative AI in almost everyone’s hands. However, while we can use it for good, those with darker intentions can use it to advance nefarious goals. Generative AI is already helping foreign agents, internet trolls, and misguided political advocates spread misinformation faster and more convincingly than ever.
As with other technological advances, we must approach generative AI with an open mind. Many people say they hate automated customer service. But the data shows that AI-driven chatbots improve customer — and staff — satisfaction. They can answer questions faster, spot hidden trends, and improve overall efficiency. However, few would argue that AI chatbots can take the place of all human agents. It’s a question of balance and of knowing which roles humans must keep.
For government officials, the potential for generative AI to create misinformation on a large scale is only part of the calculus. AI can distribute and amplify information, especially if it is trained to follow previously established patterns.
Just what can generative AI do?
Looking at AI strategically, government officials must know the pros and cons. We can use AI tools to proactively produce high-quality content, organize outlines, search for context or contacts, and model possible reactions from different audiences. AI tools also can help city officials spot topics on social media important to constituents, as well as find and respond to misleading or inaccurate content.
“It’s almost a superpower that we can use to enhance our thinking,” says Christine Townsend, the founder of PIO Toolkit. Townsend teaches weekly sessions about ChatGPT. She has had to update her material every week over the past year because the technology changes so fast.
“The absolute speed with which it can evolve and change and provide new perspectives on things that you can balance out with your own knowledge, skills, and intuition is powerful,” she said.
Perhaps the single most useful way to integrate generative AI into everyday government operations is to create content — lots of it. Instead of spending several minutes creating a social media post, city officials can use AI to create 10 posts tailored to specific audiences in seconds.
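As a sketch of how that batching can work in practice, the short script below builds one tailored prompt per audience from a single announcement. The announcement, audience list, and wording are illustrative assumptions, not official guidance; each generated prompt would then be pasted into whichever AI tool your agency has approved.

```python
# Build audience-tailored prompts for a single announcement.
# The announcement and audiences below are made up for illustration.

ANNOUNCEMENT = "City Hall will close early on Friday for scheduled maintenance."

# One entry per target audience; add or remove entries to fit your community.
AUDIENCES = {
    "commuters": "residents who drive or take transit to City Hall",
    "seniors": "older residents who may visit in person for services",
    "small businesses": "local business owners who file permits and licenses",
}

def build_prompts(announcement: str, audiences: dict[str, str]) -> list[str]:
    """Return one generative-AI prompt per audience, ready to paste into
    whichever AI tool your agency's policy has approved."""
    prompts = []
    for name, description in audiences.items():
        prompts.append(
            f"Write a short, plain-language social media post for {description} "
            f"({name}) about the following announcement: {announcement}"
        )
    return prompts

for prompt in build_prompts(ANNOUNCEMENT, AUDIENCES):
    print(prompt)
```

Keeping the prompts in code rather than retyping them means one announcement can fan out to ten audiences in seconds, and every version still gets a human review before it is published.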
Be more creative, not just because you can, but because you must: If there is an emergency, the faster and better you get your content out, the harder it is for misinformation to dominate and confuse.
Generative AI also can help sort, analyze, and find content faster. Do you have comments from 300 people about an issue? AI can help figure out the common points and draft a summary. Is it hard for residents to find city council minutes? Put them into an online database so that anyone can search without knowing specific keywords or meeting dates. The tricky part is ensuring that your AI tool knows it can search only within a specific set of files.
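The idea of restricting a tool to a specific set of files can be sketched in a few lines. The example below is a plain keyword search over an allowlisted set of documents (the file names and contents are invented for illustration); a real deployment would hand the same kind of allowlist to a retrieval-augmented AI tool so it cannot draw on outside sources.

```python
# Minimal sketch of "search only within a specific set of files":
# a keyword search restricted to an allowlisted document set.
# File names and contents are made up for illustration.

ALLOWED_DOCS = {
    "minutes_2024_03.txt": "Council approved the parks budget and a crosswalk study.",
    "minutes_2024_04.txt": "Council discussed the library renovation and street repaving.",
}

def search_allowed(query: str, docs: dict[str, str]) -> list[str]:
    """Return the names of allowlisted documents containing every query word.
    Only documents inside the allowlist can ever be returned."""
    words = query.lower().split()
    return [
        name for name, text in docs.items()
        if all(w in text.lower() for w in words)
    ]

print(search_allowed("library renovation", ALLOWED_DOCS))
# → ['minutes_2024_04.txt']
```

The design point is the boundary, not the search algorithm: because the function only ever reads from `ALLOWED_DOCS`, it cannot surface anything outside the set of files you chose, which is exactly the guarantee you want from an AI tool answering questions about city records.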
While we would love to say that there’s a single tool that everyone can use, that’s simply not the case. Microsoft Copilot may be the right tool if your organization uses the Microsoft Office environment. Google Gemini is a good solution if your organization uses Google Workspace. However, there are many others — and variations within each one. Also, if you give the same prompt twice, you will almost certainly get two different answers.
For example, let’s say you want some help thinking of a headline for this article. We used the exact same prompt in each case. There were differences from platform to platform, but all were in the ballpark. ChatGPT 4 suggested “Navigating the AI Frontier: Essential Strategies for Public Information Officers and Government Officials.” A ChatGPT 4 model trained on Doug Levy’s writing proposed headlines like “Mastering AI Before It Masters Us: What PIOs Need to Know Now.” Microsoft Copilot and Google Gemini recommended similar variations.
Creating an AI policy
Before you start incorporating generative AI into your workflows, it’s important to know about several potential drawbacks. For starters, AI sometimes “hallucinates.” It can generate information that looks and sounds convincing but is inaccurate or made up. It’s like when you dream about something that isn’t real, but you are certain, even momentarily, that it happened.
ChatGPT has created fake quotes attributed to real people using words relevant to their expertise. There are even examples of AI tools fabricating citations. If you ask an AI tool about an event that is not in its training data, it may write that the event did not occur.
“We have no real regulatory framework for AI — nothing about privacy, nothing about transparency, nothing about trustworthiness … nothing,” says former New York City Chief Digital Officer Sree Sreenivasan, who leads workshops he calls “A Non-scary, Practical Guide to Generative AI.”
This is why human oversight remains critical and where policies matter: You need to know exactly what data your AI tool uses and carefully review AI-generated content before publishing it.
Before adopting generative AI, establish a policy that specifies who can use AI, when, and how. The policy should permit AI use within clear, ethical boundaries; promote awareness of both its risks and its benefits; and address privacy, public records, copyright, and other permissions. Some organizations also require labeling of any content that was produced using AI. With an appropriate AI policy in place, agencies can responsibly enhance public services, improve efficiency, and drive innovation while maintaining standards, values, and legal compliance.
Another concern about AI is the absence of any standardized method for evaluating it. Not only does this complicate side-by-side comparisons, but it also makes bias and other flaws harder to detect. Moreover, AI tools are only as good as the data used to train them.
We have seen many examples of AI tools generating content that reflects stereotypes, bias, or other improper language. For example, one official recently asked ChatGPT to create a fake news article about a fictional emergency for a staff drill. The article reflected stereotypes we see too often in other settings: mayors and CEOs were white men, nurses were women, and immigrants lived in squalor. The official rewrote the item himself.
Finally, generative AI programs use a lot of energy. An AI prompt uses roughly ten times more power than a traditional search query, and experts expect that number to grow. Cities with ambitious climate goals may want to take that into account when crafting their AI policies.
Keeping track of the almost daily advances in AI technology isn’t easy, but there are good resources. You can even use AI to point you in the right direction.
“You need to be curious; you need to seek out information,” says PIO Toolkit’s Townsend, a former police officer. “First thing every Monday, go into ChatGPT and type, ‘What’s new in the AI world?’ And you’ll get the answers.”
Just be sure to verify everything before believing it.
To learn more about whole community preparedness, attend “Battling Misinformation in the Age of Artificial Intelligence” at the League of California Cities Annual Conference and Expo, Oct. 16-18. Be sure to check out the expo hall, which includes over 240 service providers.