Aug 06 2024

Federal Agencies Experiment with Generative AI While Incorporating Safeguards

U.S. Air Force, GAO and OPM are testing chatbots and other generative artificial intelligence applications.

The Air Force has embraced artificial intelligence for years, from using analytics to perform predictive maintenance on aircraft to testing and developing autonomous fighter jets. Now, with generative AI bursting onto the scene, it’s no wonder the Air Force is exploring ways to benefit from the emerging technology.

The military service branch is piloting an AI-powered chatbot that allows Guardians, Airmen, civilian employees and contractors to experiment with generative AI. Through natural language, users can ask the tool to quickly summarize reports, get answers on topics it’s trained on and even write software code.

“Think about the volume of data we have. Policies can be thousands of pages, so to be able to upload content, search through it, ask questions to find out exactly what information is in it and do more in-depth analysis is powerful,” says Chandra Donelson, the Air Force’s acting chief data and AI officer.  


With generative AI (a form of AI that creates new content, including text, images, audio and video) going mainstream thanks to popular tools such as OpenAI’s ChatGPT, federal agencies are testing and beginning to deploy the technology.

Organizations in every sector, including government, are developing policies to manage risk and ensure they get good, accurate results from their queries, says Matt Kimball, a data center analyst with Moor Insights & Strategy.

“Everybody gets that generative AI can change their world, but now they are grappling with how to actually implement it,” Kimball says. “Gen AI is only as good as the data you feed it. So, they’re wrestling with a number of questions: What’s the right data to use? What tools and infrastructure do I use? How do I make sure I set up the right guardrails? How do I make sure my data stays within my organization? That’s where a lot of enterprises are.”

Agency AI leaders say governance is a work in progress: As they pursue their initial generative AI implementations, they are learning how to improve policies and ensure appropriate safeguards are in place.

 


Air Force Pilots Generative AI to Improve Analysis

The Air Force’s CIO Office collaborated with the Air Force Research Laboratory to build its AI chatbot by using an open-source large language model in an on-premises data center.

The generative AI application, called NIPRGPT, is housed inside the Non-classified Internet Protocol Router Network, or NIPRNet, environment, enabling Department of Defense users to work with controlled unclassified information. It’s a secure, closed environment: No data reaches the internet to train other models, Donelson says.
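The article doesn’t describe NIPRGPT’s stack in detail beyond an open-source LLM hosted on-premises, but the underlying pattern is simple: the model weights live on local hardware, so prompts and responses never transit the public internet. Here is a minimal sketch of that pattern in Python, assuming the Hugging Face Transformers library and an illustrative open-source model, not the Air Force’s actual configuration:

# Minimal sketch: serve an open-source LLM entirely on local hardware,
# so prompts and responses never leave the closed network.
# The model name is an illustrative assumption, not NIPRGPT's actual model.
from transformers import pipeline

# Load a locally cached open-source model. Setting HF_HUB_OFFLINE=1 in the
# environment makes the library refuse to reach the internet at all.
generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # any local open-source LLM
    device_map="auto",
)

prompt = "Summarize the key points of this maintenance policy excerpt: ..."
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])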

The Air Force began developing NIPRGPT last year and launched it in June with two capabilities. The first offers ChatGPT-like features that let users search and ask for information, with results based on the data the LLM is trained on.

The second capability takes advantage of retrieval-augmented generation, which allows users to upload their own content so they can ask questions about it.
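Neither capability’s pipeline is public, but conceptually RAG fetches the passages most relevant to a question from the uploaded content and prepends them to the model’s prompt. Below is a toy sketch with a deliberately simple keyword-overlap retriever; real systems use vector embeddings, and call_llm is a stub standing in for whatever model the Air Force actually runs:

# Toy sketch of retrieval-augmented generation (RAG): fetch the uploaded
# passages most relevant to the question, then prepend them to the prompt.
# Keyword overlap keeps this self-contained; real systems use embeddings.

def call_llm(prompt: str) -> str:
    # Stub standing in for a real model call (e.g., the local pipeline above).
    return f"[model response to a {len(prompt)}-character prompt]"

def retrieve(question: str, passages: list[str], k: int = 2) -> list[str]:
    q_words = set(question.lower().split())
    ranked = sorted(passages,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def answer(question: str, passages: list[str]) -> str:
    context = "\n".join(retrieve(question, passages))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

uploads = [
    "Aircrew must complete egress training every 12 months.",
    "Leave requests are routed through the unit orderly room.",
]
print(answer("How often is egress training required?", uploads))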

 

“We see use cases that can help us save time and increase productivity across the board.”

— Chandra Donelson, Acting Chief Data and AI Officer, U.S. Air Force

“Think about an intel analyst being able to identify where a particular document is, and then being able to home in and do further analysis,” Donelson says. “We see use cases that can help us save time and increase productivity across the board.”

The Air Force has made the chatbot available to a limited number of users and will slowly roll it out to more over time. Prospective users get online training when they sign on to use the service and can experiment and discover what the possible use cases might be, she says.

The armed service has put safeguards in place to protect data. Information that users upload and query is available only to them. Users won’t be able to pull information from other users’ workspaces, Donelson says.   
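Donelson doesn’t describe how that isolation is enforced; one straightforward pattern is to key every document store by user identity so a query can only ever search the requester’s own workspace. A hypothetical sketch:

# Hypothetical sketch of per-user workspace isolation: every upload and
# query is scoped to the authenticated user's own store, so one user's
# documents can never surface in another user's results.
from collections import defaultdict

class WorkspaceStore:
    def __init__(self) -> None:
        self._stores: dict[str, list[str]] = defaultdict(list)

    def upload(self, user_id: str, document: str) -> None:
        self._stores[user_id].append(document)

    def search(self, user_id: str, query: str) -> list[str]:
        # Only the requesting user's documents are ever searched.
        return [d for d in self._stores[user_id] if query.lower() in d.lower()]

store = WorkspaceStore()
store.upload("airman_a", "Draft readiness report, FY24.")
store.upload("airman_b", "Contract review notes.")
assert store.search("airman_b", "readiness") == []  # no cross-user leakage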

The Air Force will take lessons learned to improve the system’s guardrails, determine what the appropriate use cases are and make sure users have the right data to answer their questions, she says. The project will also help the Air Force figure out how it wants to use generative AI in the future so it can make acquisition and investment decisions.

“It’s new, and we are learning a lot,” Donelson says. “We are very early in the process.”

DISCOVER: USDA has big plans for AI.

GAO’s Project Galileo Is a Knowledge Management Tool

When it comes to generative AI, the Government Accountability Office has a dual mission: explore how to use the technology for its own benefit, but also evaluate it to provide oversight of its use across government.

GAO’s own generative AI pilot project helps inform the agency on its role as government watchdog, and vice versa, says Taka Ariga, the agency’s chief data scientist and director of the GAO Innovation Lab.

“The more use cases we develop, the more of a practitioner’s point of view we bring to audits. And on the flip side, the more evaluations we do, the more conscientious we can be with our own use cases,” he says.

Last October, GAO launched Project Galileo, an initiative to use generative AI as a knowledge management tool. In three months, the agency created a prototype of a chatbot that allows employees — and possibly external users, such as congressional staff, in the future — to more easily find GAO’s internal reports on specific topics, such as nuclear safety, Ariga says.

The generative AI application can provide comprehensive results that are much more detailed and accurate than an internet search, he says.

EXPLORE: Government should brace for AI disruption.

GAO takes a multicloud approach, but for Project Galileo, the agency uses a pre-trained LLM inside GAO’s own secure cloud environment on Amazon Web Services. The agency applied RAG, pointing the model at GAO’s own data, such as published reports and internal policy documents, as the primary trusted sources for its outputs.

To ensure accuracy, GAO tweaked the LLM’s behavior so it would not be creative in its responses. If internal GAO data doesn’t have an answer, the LLM won’t make up an answer, Ariga says. GAO has also done preliminary red-teaming exercises to refine security controls.  
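Ariga doesn’t specify the mechanics, but two common levers for that behavior are deterministic decoding (temperature zero) and a retrieval-confidence gate that returns a refusal rather than invoking the model when no trusted document matches. A hedged sketch, in which the threshold, document scores and model call are all assumptions:

# Hedged sketch of a "don't make things up" guard: if no trusted GAO
# document scores above a threshold, refuse instead of letting the model
# improvise. The threshold, scores and model call are all illustrative.

REFUSAL = "No answer found in GAO's published reports or internal policies."

def call_llm(prompt: str, temperature: float) -> str:
    return "[grounded model response]"  # stub for the real model call

def grounded_answer(question: str,
                    scored_docs: list[tuple[float, str]],
                    min_score: float = 0.5) -> str:
    relevant = [doc for score, doc in scored_docs if score >= min_score]
    if not relevant:
        return REFUSAL  # never ask the model to guess
    prompt = ("Answer strictly from these GAO sources; say you don't know "
              "if they are insufficient.\n"
              + "\n".join(relevant)
              + f"\nQuestion: {question}")
    # Deterministic decoding (temperature=0) curbs "creative" output.
    return call_llm(prompt, temperature=0.0)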

GAO hopes to launch the chatbot during the 2025 fiscal year. In the meantime, the agency will test a new version with 100 users this summer. New features in the test version include the ability for staff to upload their own documents so they can summarize content.

It will also allow users to query the internet in limited use cases; for example, to get the current date. The tool will cite the internet source so that users can validate the information against GAO’s audit standards, Ariga says.

LEARN MORE: Agencies’ AI plans are coming into focus.

OPM Takes a Phased Approach to Generative AI Pilots

Office of Personnel Management CIO Guy Cavallo says he expects generative AI to have a big, positive impact on human capital planning and operations. To make that happen, OPM has created a steering committee to vet projects and is hiring a team of AI experts to oversee implementation.

The agency will deploy generative AI in two ways, Cavallo says: by using its own proprietary data on open-source or custom-built generative AI models and through commercial software with embedded AI capabilities, such as Microsoft Copilot.

In December, an AI steering committee — led by OPM Acting Director Rob Shriver and including Cavallo, the agency’s chief data officer and the heads of its larger program offices — developed a three-phased process for testing and approving generative AI initiatives.

The first phase is a proof of concept that tests ideas in a sandbox environment. If an idea is successful, it moves to a pilot phase, during which the agency provides more resources and puts more data into AI models. If the pilot is successful, the steering committee may choose to go live or conduct some more testing.
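In code terms, the committee’s process is a one-way state machine: a project advances a single phase only when its current review passes. A sketch of that gating logic, with the phase names taken from Cavallo’s description and the review mechanics assumed:

# Sketch of OPM's three-phase gating as a simple state machine.
# Phase names come from the article; the review logic is an assumption.
from enum import Enum

class Phase(Enum):
    PROOF_OF_CONCEPT = 1  # ideas tested in a sandbox environment
    PILOT = 2             # more resources, more data in the models
    LIVE = 3              # approved for production use

def advance(phase: Phase, review_passed: bool) -> Phase:
    """Move a project forward one phase only if its review passed."""
    if not review_passed or phase is Phase.LIVE:
        return phase  # failed reviews (and live projects) stay put
    return Phase(phase.value + 1)

project = Phase.PROOF_OF_CONCEPT
project = advance(project, review_passed=True)   # -> Phase.PILOT
project = advance(project, review_passed=False)  # stays at Phase.PILOT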

“We feel these three phases allow us to measure whether it’s safe and effective to use,” Cavallo says.

RELATED: Government can make AI work for its mission in six ways.

So far, OPM employees have suggested about 18 generative AI projects, 10 of which the agency is testing. All are in the proof-of-concept phase, including a chatbot for citizens and federal workers.

Employees can get answers to HR-related questions, such as: “I’m looking to retire soon. What are the benefits, and what do I do to get ready?” Citizens can ask about benefits of working for the federal government and how to apply for jobs, he says.

OPM is also testing a generative AI proof of concept to help hiring managers across government create new job listings, including writing job descriptions and classifying jobs.

OPM is experimenting with open-source LLMs and building its own, but to ensure results are accurate, the agency is using only internal OPM data on the AI models, Cavallo says. It is testing its generative AI solutions primarily within the OPM cloud on Microsoft Azure.  

Overall, Cavallo believes generative AI will make a huge impact. “It will allow us to provide better services across government and will impact all of the areas we serve,” he says.
