“The U.S. government is already using off-the-shelf technology integrated with AI tools and systems. A widely used example includes the use of AI-enabled spam filters, which can enhance email security by identifying unwanted emails and protecting users from phishing attempts and other malicious content,” the white paper notes.
AI-enabled spam filters improve efficiency, handle high volumes of repetitive tasks on behalf of humans, increase accuracy, pinpoint data using clearly defined criteria and reduce false positives, the report says.
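To make the pattern concrete, the sketch below shows one common way such a filter can work: a text classifier scores each message, and a probability threshold is tuned to favor fewer false positives. This is an illustrative example only, not the government's actual tooling; the sample emails, labels and threshold are hypothetical, and it assumes scikit-learn is available.

```python
# Illustrative only: a minimal text-classification spam filter.
# Assumes scikit-learn is installed; the training sample and threshold are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labeled sample: email text paired with spam (1) / legitimate (0).
emails = [
    "Verify your account now to avoid suspension",
    "Quarterly budget review scheduled for Thursday",
    "You have won a prize, click this link immediately",
    "Minutes from yesterday's interagency working group",
]
labels = [1, 0, 1, 0]

# Vectorize the text and fit a simple Naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Score a new message. Requiring a high spam probability before quarantining
# is one way to trade some recall for fewer false positives.
incoming = "Click here to confirm your password"
spam_probability = model.predict_proba([incoming])[0][1]
if spam_probability > 0.8:
    print("Quarantine:", incoming)
else:
    print("Deliver:", incoming)
```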
Federal agencies will continue to expand their use of AI for threat detection, but they aren’t yet ready to harness it for threat response, Dunn says.
“The problem is that there’s too much of a trust factor for allowing automatic response. It’s not like a firewall. A firewall is an open door or a closed door. When you start allowing AI to manipulate systems, you are allowing it to manipulate data. And many organizations do not have mature data management,” he says. “If there’s a threat detected by an AI system, the AI system can perhaps trigger notifications. It can bring people in disparate locations together in a collaborative way in response.”
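The sketch below illustrates the detect-and-notify pattern Dunn describes: the AI system flags the event and pulls people in, but it does not change systems or data on its own. The event structure, threshold and notify_analysts helper are assumptions for illustration; a real deployment would route alerts into an agency's own ticketing or collaboration tools.

```python
# A minimal sketch of "detect and notify, don't auto-remediate."
# The threshold, event fields and notify_analysts helper are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("threat-detection")

ANOMALY_THRESHOLD = 0.9  # assumed score above which analysts are alerted


def notify_analysts(alert: dict) -> None:
    """Bring responders together rather than letting the model act on its own."""
    log.info("Notifying on-call analysts: %s", alert)


def handle_detection(event_id: str, anomaly_score: float) -> None:
    # The AI system only raises the flag; humans decide whether and how to respond.
    if anomaly_score >= ANOMALY_THRESHOLD:
        notify_analysts({"event": event_id, "score": anomaly_score, "action": "review"})
    else:
        log.info("Event %s below threshold (%.2f); logged for later analysis.",
                 event_id, anomaly_score)


handle_detection("evt-1027", 0.95)
```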
READ MORE: This is how agencies should select a next-gen firewall.
Upskill Employees and Foster Interagency Cooperation
The biggest things any federal agency can do to improve its capacity to harness AI are to invest in AI skills and to hold robust AI conversations with other agencies, Dunn recommends.
“Federal agencies must keep investing in AI talent. They’re not going to fully leverage AI-powered systems unless they upskill their workforce, whether that’s internally or externally,” he says.
“Federal agencies must foster interagency AI collaboration. Many government organizations operate in a vacuum. Each organization across the board deploys an individual solution. But shadow IT arises when an organization wants to do something and they cannot wait due to time or money or communications, and so they build it themselves. To avoid some of these historical challenges, federal organizations must foster interagency collaboration.”
The Center for Cybersecurity Policy and Law shares Dunn’s outlook. It suggests that one area of interagency cooperation should be agreeing on the roles and responsibilities of a chief AI officer, who can help set AI priorities and synchronize AI applications across the federal enterprise.
UP NEXT: Shadow AI presents real threats to agencies.