May 02 2024

6 Ways Agencies Can Make AI Work for Their Missions

Implementing artificial intelligence effectively requires managing models and infrastructure. That’s a tall order, but a pragmatic approach helps.

Consensus on where agencies should start their artificial intelligence journeys is shifting as federal officials continue to evaluate how the technology can work for them.

Some agencies already use AI to fight fraud. Work with AI at the Department of Defense ranges from identifying cybersecurity threats to analyzing data from drones.

Current AI use cases are “flattening” as the technology evolves, and increasingly, the ability to scrape data from documents and databases for analysis is table stakes, says Jeff Winterich, distinguished technologist for HPE Federal.

“Almost every day, I talk to people — from special operations organizations to local governments — about using technology to do knowledge-based search and to chat with their data,” Winterich says. “The challenge is how to make it easy.”
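To make the idea of knowledge-based search concrete, here is a minimal sketch of the pattern Winterich describes: ranking an agency's documents against a user's question. This toy example scores documents by simple term overlap; a production system would typically use embeddings and pair the retrieval step with a language model, and the sample documents below are invented for illustration.

```python
from collections import Counter

def score(query: str, doc: str) -> int:
    """Count how many times the query's terms appear in the document."""
    query_terms = set(query.lower().split())
    doc_counts = Counter(doc.lower().split())
    return sum(doc_counts[term] for term in query_terms)

def search(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents ranked by term overlap with the query."""
    ranked = sorted(docs, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:top_k]

# Hypothetical agency documents, for illustration only.
docs = [
    "Visa applications must include a valid passport photo.",
    "Drone telemetry data is archived nightly for analysis.",
    "Report the birth of a child abroad at the nearest consulate.",
]

print(search("visa passport", docs, top_k=1))
```

The point of the sketch is that the retrieval step itself is conceptually simple; the hard part, as Winterich notes, is making it easy to apply across an organization's real documents and databases.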

Winterich says he wants to show agencies how to use AI to empower “people who put their lives on the line every day.” He offers federal officials six recommendations for putting AI to work in support of their missions, including steps to take and pitfalls to avoid.

The Don’ts: Taking on More Work Than Necessary

Avoid models that try to do everything. The largest generative AI models, such as OpenAI’s GPT series, are reported to have as many as 1 trillion parameters. That’s more than most organizations can consume: such models demand enormous computing resources and far exceed what many predictive models need, Winterich says.

Fortunately, a groundswell of smaller, more task-specific open-source models has lowered the barrier to entry for organizations looking to put generative AI into practice.

“That makes it easy to say, this model is good for finance, translation, search and so on,” Winterich says. “That’s where the rubber hits the road.”

Approach infrastructure with caution

Along the same lines, the evolution of purpose-built predictive models means agencies don’t need to max out their infrastructure deployments. For example, large language models such as Mistral 7B can run on a single graphics processing unit.

“You don’t need to rent a colocated data center or a room full of computers to do knowledge-based search,” Winterich says.
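A back-of-the-envelope calculation shows why a model in this size class fits on one GPU. The figures below cover model weights only; actual inference also needs memory for activations and the attention cache, so treat this as a rough sketch rather than a sizing guide.

```python
def model_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to hold a model's weights, in GiB."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

# A 7-billion-parameter model at 16-bit precision (2 bytes per parameter)
# needs roughly 13 GiB of weights, within reach of a single modern GPU.
fp16 = model_memory_gb(7, 2)

# Quantized to 4 bits (0.5 bytes per parameter), the same model shrinks
# to roughly 3 GiB, small enough for far more modest hardware.
int4 = model_memory_gb(7, 0.5)

print(f"fp16: {fp16:.1f} GiB, int4: {int4:.1f} GiB")
```

By contrast, a trillion-parameter model at the same 16-bit precision would need on the order of 1,800 GiB for its weights alone, which is why it can’t run outside a large multi-GPU deployment.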

With agencies increasingly interested in training models on their own data sets, on-premises infrastructure can provide privacy and peace of mind, he adds. The scalability of exascale supercomputers such as HPE Cray can further help agencies leverage the right resources at the right time.


Perfect can’t be the enemy of good

Winterich’s background is in engineering, so he’s familiar with the tendency to get hung up on a model’s performance and accuracy metrics. That’s especially true when comparing AI with agents or soldiers who have decades of experience in the field, he says.

Agencies may benefit from focusing less on building a perfect model and more on building a model that alleviates a user’s burden.

“Think of all the data they need to look at that’s not contextually relevant,” Winterich says. “If we can filter the data, and they can pick the right things out of the analysis, then we can take those 16-hour workdays away from them.”

The Do’s: Exploring Partnerships That Go Beyond Generative AI

Get comfortable with experimentation 

Agencies and private entities alike are “hesitant to peel back the onion” when it comes to AI, Winterich says. This may be an unintended consequence of the government’s focus on AI risks and ongoing work to develop AI safety principles.

While valid, those efforts shouldn’t stop agencies from trying new things.

“We know people can use AI responsibly and safely,” Winterich says. “They just need to weigh the risks; they need to get comfortable experimenting with AI models before they’re going to be able to operationalize them.”

Build knowledge through partnerships if necessary

Because many of today’s models can run on a single server with a single accelerator, acquiring affordable infrastructure poses less of an obstacle. The challenge lies in keeping pace with open-source models as they’re released and updated, along with the latest advances in accelerators from different vendors.


This takes on added importance as agencies do more sophisticated work with AI. Here, agencies benefit from partnerships with universities and federally funded research and development centers, Winterich says.

Focus on workflows beyond generative AI

Much like their private sector counterparts, agencies have latched on to generative AI. For instance, the State Department envisions customer experience improvements to help citizens abroad solve problems such as obtaining a visa or reporting the birth of a child.

Though these are valuable use cases, agencies should also look at smaller workflows that may benefit from automation, such as those that get contextual insights into the field.

“Pick and choose the models that make sense for your organization,” Winterich says. “That way, you’re not using AI to replace that person in the field; you’re getting them to the point where they find the detail that’s extremely hard to find otherwise.”

