Federal agencies are exploring ways to extract and standardize unstructured data, analyze high volumes of text-based feedback and improve customer experience
by Intelliworx
Federal agencies spend more than $100 billion annually on technology, including products and services, according to reporting by Jory Heckman. About 80% of that is spent maintaining the sprawling, and in some cases dated, IT infrastructure that already exists.
Historically, modernizing those systems has been a necessarily slow and methodical process, because changes to IT infrastructure can have unintended consequences. Yet, as Heckman notes in a story for the Federal News Network, artificial intelligence (AI) may offer a better way: a continuous, iterative and agile approach.
At the time of this writing, 41 federal agencies are exploring 2,133 publicly disclosed AI use cases. As a software company that serves the federal government, we are experimenting with AI as well, so we thought it would be useful to take a closer look at some of those use cases.
1. Standardizing data fields for analysis
The Department of Health and Human Services (HHS) has a number of use cases in play. One that stands out is “Manufacturer Name Standardization” by the HHS Supply Chain Control Tower (SCCT), a public-private partnership intended to help protect the healthcare supply chain.
In a sense, it’s a crowdsourced model. Stakeholders from across the government, commercial and private sectors input data about companies and products. The problem is that many brand names have variations. Analysts need these data fields to be standardized before they can properly analyze them, and doing that manually is painstaking work.
SCCT is using “an advanced grouping algorithm [that] groups similar manufacturer names together, and a Large Language Model recommends a standardized name based on all variants.”
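While SCCT hasn’t published the details of its algorithm, the general pattern is easy to picture. Here’s a minimal Python sketch that groups name variants by string similarity and, in place of the LLM recommendation step, simply picks the most common variant as the canonical name; the sample names are hypothetical:

```python
# A minimal sketch of the two-step pattern described above: group similar
# manufacturer names, then recommend one standardized name per group.
# The LLM step is stubbed out with a frequency heuristic for illustration.
from collections import Counter
from difflib import SequenceMatcher


def group_variants(names, threshold=0.85):
    """Group manufacturer-name variants by pairwise string similarity."""
    groups = []
    for name in names:
        for group in groups:
            if SequenceMatcher(None, name.lower(), group[0].lower()).ratio() >= threshold:
                group.append(name)
                break
        else:
            groups.append([name])
    return groups


def recommend_standard_name(variants):
    """Stand-in for the LLM recommendation: pick the most common variant."""
    return Counter(variants).most_common(1)[0][0]


raw = ["Acme Medical Inc.", "ACME Medical Inc", "Acme Medical Incorporated", "Beta Surgical Co"]
for variants in group_variants(raw):
    print(recommend_standard_name(variants), "<-", variants)
```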
2. Chatbot for customer service
Another example at HHS is a chatbot to improve customer service. Earlier this year, Edward Graham wrote about it for Government Executive:
“[Use cases] HHS reported include an initiated effort to create a chatbot that can help with applying for grants, an initiated chatbot to help Division of Global Migration Health personnel ‘with developing an initial draft response to inquiries where the response could have been found on our website’ and a chatbot to help researchers find data sets for environmental health research efforts, which is in the acquisition and/or development phase.”
These projects hold promise for expansion into other domains as well; the HHS inventory of AI use cases shows at least four chatbot-related initiatives at the agency.
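The details of these chatbots haven’t been disclosed, but a draft-response assistant of this kind typically starts with retrieval: find the published page that best answers an inquiry, then draft from it. The Python sketch below illustrates just that retrieval step using TF-IDF similarity; the page snippets and question are hypothetical placeholders:

```python
# A minimal retrieval sketch: rank existing website content against an
# incoming question so a draft response can be grounded in it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pages = [
    "How to apply for a grant: eligibility, registration, and submission steps.",
    "Travel health notices and migration health guidance for clinicians.",
    "Environmental health data sets available for research use.",
]

question = "What are the steps to apply for a grant?"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(pages + [question])
scores = cosine_similarity(matrix[-1], matrix[:-1]).flatten()

best_page = pages[scores.argmax()]
print("Draft response could be grounded in:", best_page)
```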
3. Screening out PII from digital records
Generative AI solves a unique problem for the National Archives and Records Administration (NARA): unstructured data, according to Matt Bracken in a report for FedScoop. NARA Chief Technology Officer Gulam Shakir says the agency “has gone all in on the technology, with pilots on auto-filling metadata, PII redaction, FOIA processing and more.”
Among the use cases NARA is piloting is a project to flag personally identifiable information (PII) for redaction in digitized historical records. The agency describes the project this way:
“NARA is using AI to automatically find and remove sensitive personal information from its digitized historical records. This will make the process faster, more accurate, and better protect privacy while allowing for greater public access to these records.”
The agency is testing this project on two separate cloud platforms:
“The AI-powered PII detection and redaction project is underway, with a custom AWS model currently in development and being prepared for user acceptance testing. A parallel effort is evaluating Google Cloud Platform’s out-of-the-box PII detection service.”
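NARA’s models are custom-built, but the basic find-and-redact flow can be illustrated in a few lines of Python. The patterns and sample record below are simplified placeholders, not the agency’s actual approach:

```python
# A minimal illustration of PII detection and redaction using regular
# expressions; real pipelines use trained models, not pattern lists.
import re

PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}


def redact(text):
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text


record = "Contact John at 555-867-5309 or john.doe@example.gov, SSN 123-45-6789."
print(redact(record))
```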
The project may hold promise for separate but similar uses as well. For example, NARA is also piloting this style of functionality to support Freedom of Information Act (FOIA) requests.
4. Synthesizing feedback from veterans
The Department of Veterans Affairs (VA) serves more than 9 million veterans across 1,380 facilities around the country. The agency continuously seeks feedback from its constituents to improve the services it provides.
A significant volume of that feedback comes in the form of text commentary from a variety of channels the VA has put in place. Like the NARA example above, that data is unstructured, making it difficult to analyze.
To solve this challenge, the agency “is using AI to synthesize veteran feedback on the agency’s services to identify performance trends and issues for detailed analysis,” according to a paper published by Mark Fagan of the Harvard Kennedy School.
Indeed, the project is listed in row 67 of the VA’s AI Use Case Inventory, which describes it this way:
“The objective is to utilize Natural Language Processing (NLP) with comment reviews for App feedback, specifically to identify named entities (NER), profanity, and stop words, and provide an automated approach to pre-processing and cleaning text for downstream analytics tools.”
In practice, that means transforming this massive stream of text into tangible measures of feedback that “developers and other internal stakeholders” can use to improve the services offered.
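As a rough illustration of that pre-processing step, the Python sketch below tokenizes a comment and strips stop words and flagged terms. The word lists are hypothetical, and a production pipeline would rely on an NLP library such as spaCy or NLTK for tokenization and named entity recognition:

```python
# A minimal text-cleaning sketch: lowercase, tokenize, and drop stop words
# and flagged terms so downstream analytics tools see cleaner input.
import re

STOP_WORDS = {"the", "a", "an", "is", "it", "and", "to", "of"}
PROFANITY = {"darn", "heck"}  # hypothetical placeholder list


def clean_comment(comment):
    """Return the tokens worth passing to downstream analytics."""
    tokens = re.findall(r"[a-z']+", comment.lower())
    return [t for t in tokens if t not in STOP_WORDS and t not in PROFANITY]


feedback = "The appointment scheduling screen is confusing and it keeps timing out."
print(clean_comment(feedback))
# ['appointment', 'scheduling', 'screen', 'confusing', 'keeps', 'timing', 'out']
```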
The VA has more than 200 use cases in its inventory as of this writing. When the inventory was published, Charles Worthington, the agency’s chief technology officer and chief AI officer, noted in a LinkedIn post:
“We’re piloting an on-network generative AI chat interface approved for use with VA data that employees are using to assist with basic administrative tasks (drafting emails, summarizing documents, summarizing meeting notes, etc.). More than 72% of users agree or strongly agree that the tool has made them more efficient.”
5. Extracting unstructured data for analysis
The U.S. Department of Agriculture (USDA) has 89 use cases listed in its AI inventory. In an interview with FedTech Magazine last year, Christopher Alvares, the department’s chief data and artificial intelligence officer, noted that the experiments range from strengthening crop estimates to text mining.
“Our focus has been on developing predictive or classification models, but there are also examples of text mining in our inventory,” he said. “With generative AI tools, we’re seeing a lot of interest in applying those models to make our business functions more consistent and efficient, and improve how we modernize legacy IT systems.”
One such example of text mining that improves efficiency is a project titled “Automated PDF Document Processing and Information Extraction.” According to the inventory description:
“This use case takes program and workforce-related information stored in thousands of PDFs and converts the information into data tables that can be used for analytics and dashboards. This makes information that is difficult to find available in real-time to support decision making and saves large amounts of time compared to previous methods used.”
Although not noted in the description, one inherent benefit of this experiment is unlocking data that’s buried across the network. Many organizations have all sorts of data effectively locked up in documents hosted on shared drives and collaboration platforms; AI provides a way to surface that data so it can be put to use.
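The USDA hasn’t published its pipeline, but the first step in any such project is getting text out of the PDFs and into a table. A minimal Python sketch, assuming the pypdf and pandas libraries and a hypothetical folder of documents, might look like this:

```python
# A minimal sketch of extracting text from a folder of PDFs and loading it
# into a flat table that can feed analytics and dashboards.
from pathlib import Path

import pandas as pd
from pypdf import PdfReader

rows = []
for pdf_path in Path("program_documents").glob("*.pdf"):  # hypothetical directory
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    rows.append({"document": pdf_path.name, "pages": len(reader.pages), "text": text})

df = pd.DataFrame(rows)
print(df[["document", "pages"]].head())
```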
6. Improving visitor experience for national parks
The National Park Service (NPS), part of the Department of the Interior, is aiming to improve customer experience (CX) for visitors planning a trip. The agency implemented a proof of concept that curates relevant content during trip planning.
The agency’s AI use case inventory explains it this way:
“This is a proof of concept to use generative AI to extract data from NPS.gov and the NPS API to bring forth relevant content to visitors based on topics of interest, allowing for improved trip planning. This proof of concept enriches structured data without requiring parks to create new content and reduces the time and labor cost of re-creating content.”
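The proof of concept itself isn’t public, but NPS does operate a public API at developer.nps.gov that exposes park content. The Python sketch below shows how a trip-planning assistant might pull that structured content; the endpoint and field names should be verified against the API documentation, and the API key is a placeholder:

```python
# A minimal sketch of retrieving park content from the public NPS API,
# the kind of structured data a trip-planning assistant could summarize.
import os

import requests

API_KEY = os.environ.get("NPS_API_KEY", "demo-key")  # hypothetical placeholder

response = requests.get(
    "https://developer.nps.gov/api/v1/parks",
    params={"parkCode": "yell", "api_key": API_KEY},
    timeout=30,
)
response.raise_for_status()

for park in response.json().get("data", []):
    # Surface the fields a visitor planning a trip might care about.
    print(park.get("fullName"))
    print(park.get("description", "")[:200])
```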
A new report by the DOI Inspector General (IG), published in July 2025, notes that the project may improve the employee experience as well:
“In addition, NPS reported that the prototype reads through unstructured, publicly available NPS data to provide recommendations to content authors to help ease the burden of data entry.”
This could prove to be one of the most publicly visible use cases across the federal government, given that the NPS reported a record-breaking 331.9 million visits in 2024.
Safety is paramount
Safety is a common thread throughout the government’s AI documentation. That’s important because, as a recent Pew Research Center survey shows, about one-third of U.S. adults have reservations about AI.
While we need to assuage public concerns and take tangible steps to ensure the veracity of AI-generated information, we also can’t afford to put off experimentation. Indeed, a separate survey of government chief AI officers (CAIOs) found that 100% believe the benefits of AI outweigh the risks.
What’s even more promising is the creative application of new technologies: 85% of CAIOs believe AI will transform agency operations “in ways they haven’t yet imagined.”
* * *
Intelliworx serves federal agencies, large and small, with a range of solutions including application management, government workflow and financial disclosure. As a FedRAMP-authorized solution, we welcome the opportunity to show rather than tell: request a no-obligation demo.