
Jul 03 2023
Data Analytics

How the Federal Government Is Using Artificial Intelligence So Far

The Department of Defense is an early adopter, and agencies that follow would benefit from leveraging popular, crowdsourced AI models.

Agencies should leverage foundational artificial intelligence models being crowdsourced by top universities and large tech companies so that they’re not starting from scratch as they embrace the technology.

Open source paved the way for crowdsourced AI development, which means, aside from bespoke use cases, agencies simply need to determine whether they can use and scale the best models on the market.

Transformer deep learning models — which learn context, and thereby meaning, by tracking relationships in sequential data like words in a sentence — are producing increasingly sophisticated AI. For instance, rather than representing language as text to feed these models, biomedical researchers can train them with genetic code.
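
To make that mechanism concrete, the sketch below implements the scaled dot-product self-attention step at the heart of transformer models in plain NumPy. The sequence length, dimensions and embeddings are toy placeholders, not a real model or real agency data.

```python
# A minimal sketch of scaled dot-product self-attention, the core operation
# transformer models use to track relationships across a sequence (words in a
# sentence, or tokens of genetic code). Toy sizes and random embeddings only.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (sequence_length, d_model) token embeddings."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise relevance of tokens
    weights = softmax(scores, axis=-1)        # each token attends to every other
    return weights @ V                        # context-aware representations

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 6, 16, 8               # toy sizes
X = rng.normal(size=(seq_len, d_model))           # stand-in token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)        # (6, 8)
```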

An agency committed to defining an advanced AI use case can train such a model on its own data, which it first needs to inspect, and have the model performing intelligently within its specific domain in 18 to 24 months.


Early Department of Defense AI Use Cases

The Department of Defense is maturing AI models for predictive maintenance, using reinforcement learning to catch equipment failures before they occur and schedule maintenance proactively. Eventually, humans will no longer need to review aircraft sensor data, for example; AI will be able to recommend when to take a plane out of service and when it can withstand more hours of flight time before being repaired.

AI can get even more granular, informing mechanics of the likelihood that a component will fail within a set time frame once a given sequence of events occurs. Such analysis paves the way for self-healing, in which AI automates repairs before a problem manifests.
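
The Pentagon's actual models are not public. As a simplified stand-in, the sketch below frames the problem as supervised classification rather than the reinforcement learning the department is pursuing: given recent sensor readings, estimate the probability that a component fails within a set window. The data, feature names and threshold are illustrative assumptions.

```python
# Illustrative predictive-maintenance sketch: estimate the probability that a
# component fails within a set window, given recent sensor readings. Synthetic
# data and hypothetical features only -- not DOD data or the Pentagon's models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5_000
X = np.column_stack([
    rng.normal(1.0, 0.3, n),     # vibration (g)
    rng.normal(90, 10, n),       # oil temperature (C)
    rng.integers(0, 2_000, n),   # cycles since overhaul
])
# Synthetic label: did the component fail within the next maintenance window?
risk = 0.002 * X[:, 2] + 2.0 * (X[:, 0] - 1.0) + 0.02 * (X[:, 1] - 90)
y = (risk + rng.normal(0, 1, n) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Probability of failure in the window -> flag the aircraft for proactive work
failure_prob = model.predict_proba(X_test)[:, 1]
print("flag for maintenance:", (failure_prob > 0.5).sum(), "of", len(X_test))
```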

In the cybersecurity space, the military is interested in using AI to identify insider threats and zero-day vulnerabilities faster than the inexperienced, underpaid contract personnel it often relies on to scour log data. The goal is to reach a point where the system can use a news feed to identify vulnerabilities a foreign adversary such as Iran is exploiting, look for them across the network and suggest security controls.
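
The military's tooling here is not public. One common way to approach that kind of log analysis is unsupervised anomaly detection; the sketch below flags unusual activity with scikit-learn's IsolationForest, using made-up features derived from log data rather than any actual data set.

```python
# Rough sketch of flagging unusual activity in log data with unsupervised
# anomaly detection. The features (logins per hour, data transferred, distinct
# hosts touched) are hypothetical stand-ins for real log fields.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
normal = np.column_stack([
    rng.poisson(5, 2_000),           # logins per hour
    rng.normal(50, 15, 2_000),       # MB transferred
    rng.poisson(3, 2_000),           # distinct hosts touched
])
suspicious = np.array([[40, 900.0, 60]])   # e.g., a bulk-exfiltration pattern

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(suspicious))        # -1 means flagged as anomalous
```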

There are also AI applications for autonomous unmanned aerial vehicles. A UAV could be positioned over a target so that the model can process the situation on the ground and recommend a course of action. Another model could prompt the drone to start monitoring mobile traffic if a subject picks up a cell phone. The current method sees human-piloted UAVs collecting all data all the time, which is too much information for analysts to digest.

AI could be used for geospatial analysis of images to assess building damage or the landscape of an unknown area. The Pentagon also wants AI to compare images and determine whether aircraft have moved in a way that indicates takeoffs or landings, something the human eye might miss but a machine analyzing images pixel by pixel would not.
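
As a toy illustration of that pixel-by-pixel comparison, the sketch below differences two synthetic, already-aligned images and reports where the scene changed. Real geospatial pipelines add georeferencing, registration and lighting correction; the arrays and threshold here are stand-ins.

```python
# Toy pixel-level change detection between two registered images of the same
# scene. Synthetic arrays only; a real pipeline would align and normalize
# actual imagery before differencing.
import numpy as np

rng = np.random.default_rng(3)
before = rng.integers(0, 256, size=(64, 64)).astype(float)   # image at t0
after = before.copy()
after[20:28, 30:44] += 80            # simulate an aircraft appearing on the apron
after = np.clip(after + rng.normal(0, 2, after.shape), 0, 255)

diff = np.abs(after - before)
changed = diff > 25                  # threshold out sensor noise
ys, xs = np.nonzero(changed)
print(f"changed pixels: {changed.sum()}, bounding box: "
      f"rows {ys.min()}-{ys.max()}, cols {xs.min()}-{xs.max()}")
```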


Civilian Agencies Also See the Need for AI

Healthcare agencies find that catching Medicaid fraud is easier for AI than for a human because the breadcrumbs are often buried within disconnected data sets.
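
The agencies' actual analytics are not public, but the value of connecting disconnected data sets is easy to illustrate. The pandas sketch below joins hypothetical claims and enrollment files and surfaces claims billed against beneficiaries who were never enrolled; all names and records are invented.

```python
# Simple illustration of linking two disconnected data sets to surface a fraud
# signal: claims billed against beneficiaries absent from the enrollment file.
# Column names and records are invented for the example.
import pandas as pd

claims = pd.DataFrame({
    "claim_id": [1, 2, 3, 4],
    "provider_id": ["P10", "P10", "P22", "P22"],
    "beneficiary_id": ["B1", "B2", "B9", "B3"],
    "amount": [120.0, 95.0, 4_300.0, 150.0],
})
enrollment = pd.DataFrame({
    "beneficiary_id": ["B1", "B2", "B3"],
    "enrolled": [True, True, True],
})

merged = claims.merge(enrollment, on="beneficiary_id", how="left")
suspect = merged[merged["enrolled"].isna()]     # billed, but never enrolled
print(suspect[["claim_id", "provider_id", "amount"]])
```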

Similarly, AI can analyze disconnected data sources and recommend how agencies should reroute resources ahead of natural disasters.

The Department of Justice is using AI to predict events in which a law enforcement presence might be needed. The challenge is the sheer volume of data that agencies must parse through; the National Security Agency’s Utah Data Center alone grows its volume by a petabyte a month.

While some agencies might want their websites to be discoverable by ChatGPT and other generative AI models, others will no doubt take issue with handing over their data for third parties to profit from. Those agencies will likely develop their own language models to track metrics such as citizen engagement with their programs and services.

This article is part of FedTech’s CapITal blog series.
