Early DOD AI Use Cases
The Department of Defense is maturing AI models that use reinforcement learning for predictive maintenance, catching equipment failures before they occur so that maintenance can be performed proactively. Eventually, humans will no longer need to review aircraft sensor data, for example; AI will be able to recommend when to take a plane out of service and when it can stand more hours of flight time before being repaired.
AI can get even more granular, informing mechanics of the likelihood that a component will fail within a set time frame after a series of events occurs. Such analysis paves the way for self-healing, in which AI automates repairs before a problem manifests.
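To make the "likelihood of failure within a set time frame" idea concrete, the sketch below trains a simple supervised classifier (a simpler stand-in for the reinforcement learning approach mentioned above) on hypothetical sensor features and reports a failure probability per aircraft. The features, labels and 50-flight-hour window are illustrative assumptions, not a DOD dataset or model.

```python
# Illustrative sketch only: a supervised classifier that estimates the
# probability a component fails within a set window, given recent sensor
# readings. All features, labels and thresholds are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-aircraft features: vibration level, oil temperature,
# cycles since last overhaul. Label: 1 if the part failed within the next
# 50 flight hours, 0 otherwise (generated synthetically for illustration).
X = rng.normal(size=(5000, 3))
y = (0.8 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=5000) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Per-aircraft failure probabilities a maintainer could use to decide
# whether to pull a plane from the flight schedule.
failure_prob = model.predict_proba(X_test)[:, 1]
print("Mean predicted failure risk:", round(failure_prob.mean(), 3))
```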
In the cybersecurity space, the military is interested in using AI to identify insider threats and zero-day vulnerabilities faster than the inexperienced, underpaid contract personnel it often relies on to scour log data. The goal is to reach a point where the system can use a news feed to identify vulnerabilities a foreign adversary such as Iran is exploiting, look for them across the network and suggest security controls.
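As a rough illustration of how AI can triage log data faster than a human reviewer, the sketch below fits an unsupervised anomaly detector over hypothetical per-user activity features and scores one suspicious record. The feature set and values are assumptions for illustration; real insider threat tooling would draw on far richer signals.

```python
# Illustrative sketch only: unsupervised anomaly detection over log-derived
# features, so analysts review a short ranked list instead of raw logs.
# The three features below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical per-user, per-day features: login hour, gigabytes
# transferred out, failed authentication attempts.
normal_activity = np.column_stack([
    rng.normal(13, 2, size=2000),        # logins clustered around midday
    rng.exponential(0.5, size=2000),     # small outbound transfers
    rng.poisson(0.2, size=2000),         # rare failed logins
])
suspect = np.array([[3.0, 40.0, 9.0]])   # 3 a.m. login, 40 GB out, 9 failures

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)
print("Anomaly score (lower is more anomalous):",
      round(float(detector.decision_function(suspect)[0]), 3))
```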
There are also AI applications for autonomous unmanned aerial vehicles. A UAV could be positioned over a target so that a model can process the situation on the ground and recommend a course of action. Another model could prompt the drone to start monitoring mobile traffic if a subject picks up a cell phone. Today, human-piloted UAVs collect all data all the time, far more information than analysts can digest.
AI could be used for geospatial analysis of images to assess building damage or the landscape of an unknown area. The Pentagon also wants AI to compare images and determine whether aircraft have moved in a way that indicates takeoffs or landings, something the human eye might miss but a machine analyzing images pixel by pixel would not.
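A bare-bones version of that pixel-by-pixel comparison might look like the sketch below, which diffs two co-registered overhead images and reports how much of the scene changed. The file names and the 2 percent threshold are hypothetical; an operational pipeline would orthorectify and align imagery first and would likely use learned object detectors rather than raw differencing.

```python
# Illustrative sketch only: compare two overhead images of the same ramp
# pixel by pixel and report the fraction of the scene that changed, as a
# rough cue that aircraft may have moved. File names are placeholders.
import numpy as np
from PIL import Image

def changed_fraction(path_before: str, path_after: str, threshold: int = 40) -> float:
    """Fraction of pixels whose brightness changed by more than `threshold`."""
    before = np.asarray(Image.open(path_before).convert("L"), dtype=np.int16)
    after = np.asarray(Image.open(path_after).convert("L"), dtype=np.int16)
    return float((np.abs(after - before) > threshold).mean())

# Hypothetical usage: queue the image pair for an analyst if enough changed.
# if changed_fraction("ramp_monday.png", "ramp_tuesday.png") > 0.02:
#     print("Possible aircraft movement; flag for review")
```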
Civilian Agencies Also See the Need for AI
Healthcare agencies find that catching Medicaid fraud is easier for AI than for a human because the breadcrumbs are often buried within disconnected data sets.
Similarly, AI can analyze disconnected data sources to recommend how agencies should reroute resources ahead of natural disasters.
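Both of those civilian examples boil down to joining records that live in separate systems and then flagging statistical outliers in the combined view. The sketch below shows that pattern with made-up claims and provider tables; every name, figure and threshold is hypothetical.

```python
# Illustrative sketch only: join disconnected data sets, then flag providers
# whose average billing is far above their specialty's norm. Table and
# column names, amounts and the 3x threshold are hypothetical.
import pandas as pd

claims = pd.DataFrame({
    "provider_id": ["P1", "P1", "P2", "P2", "P3", "P3", "P3"],
    "amount": [120, 130, 115, 125, 900, 950, 880],
})
providers = pd.DataFrame({
    "provider_id": ["P1", "P2", "P3"],
    "specialty": ["family", "family", "family"],
})

# The "breadcrumbs" only line up once the two systems are joined.
merged = claims.merge(providers, on="provider_id")
per_provider = (merged.groupby(["provider_id", "specialty"])["amount"]
                .mean().rename("avg_claim").reset_index())

# Flag providers billing more than three times their specialty's median.
specialty_median = per_provider.groupby("specialty")["avg_claim"].transform("median")
print(per_provider[per_provider["avg_claim"] > 3 * specialty_median])
```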
The Department of Justice is using AI to predict events in which a law enforcement presence might be needed. The challenge is the sheer volume of data that agencies must parse through; the National Security Agency’s Utah Data Center alone grows its volume by a petabyte a month.
While some agencies might want their websites to be discoverable by ChatGPT and other generative AI models, others will no doubt take issue with handing over their data for third parties to profit from. Those agencies will likely develop their own language models to track metrics such as citizen engagement with their programs and services.
This article is part of FedTech’s CapITal blog series.