
Mar 04 2025
Artificial Intelligence

AI-Powered Threat Detection Takes the Burden Off Agencies’ Cyber Analysts

Simple behavioral analysis has given way to network modeling to monitor and predict threats.

Artificial intelligence-powered threat detection allows agencies to “stop the bleeding” when they’re under cyberattack by automating responses to anomalous behavior.

The technology identifies not only where data is coming from and where it is heading, but also what that traffic implies, before responding by, say, blocking an account and flagging it for a cyber analyst.
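The block-and-flag response described above can be sketched as a small triage function. This is a minimal illustration, not any vendor's implementation; the event fields, the exfiltration threshold and the internal-address prefix are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class NetworkEvent:
    account: str
    source_ip: str
    dest_ip: str
    bytes_out: int

# Hypothetical policy: any single outbound transfer over 500 MB
# to a host outside the agency's address space is anomalous.
EXFIL_THRESHOLD_BYTES = 500 * 1024 * 1024
INTERNAL_PREFIX = "10."

blocked_accounts = set()
analyst_queue = []   # events held for human review

def triage(event: NetworkEvent) -> str:
    """Automatically block an anomalous transfer and flag it for an analyst."""
    external = not event.dest_ip.startswith(INTERNAL_PREFIX)
    if external and event.bytes_out > EXFIL_THRESHOLD_BYTES:
        blocked_accounts.add(event.account)   # automated "stop the bleeding"
        analyst_queue.append(event)           # flag for a cyber analyst
        return "blocked"
    return "allowed"
```

The point of the sketch is the division of labor: the automated step contains the damage immediately, while the flagged event preserves context for the analyst's follow-up.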

Security operations centers have long relied on behavioral analysis tools that reduce the ground their analysts have to cover, limiting false positives when dealing with massive data sets. In the past, analysts relied on tools such as Splunk User Behavior Analytics or Splunk Phantom to examine an individual's network activity for signs of an insider threat; now, it's possible to model an entire network with AI.

AI monitoring can then identify if changes have been made to the network or if someone has plugged in, and machine learning can produce a predictive threat analysis for analysts to examine.


AI Is a Useful Tool for Analysts and Adversaries Alike

AI-powered threat detection tools adhere to both the National Institute of Standards and Technology's Cybersecurity Framework and the MITRE ATT&CK matrix. Individual tools monitor the network, hosts or users, but no single tool does it all because threat vectors have increased significantly.
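One common way tools line up with both frameworks is to tag each detection with an ATT&CK technique ID and a NIST CSF function. The technique IDs below are real ATT&CK entries, but the detection names and the mapping itself are illustrative assumptions, not any product's catalog.

```python
# Illustrative alert enrichment: route each detection type to the
# MITRE ATT&CK technique and NIST CSF function it corresponds to.
# T1078 = Valid Accounts, T1110 = Brute Force,
# T1041 = Exfiltration Over C2 Channel (all real ATT&CK IDs).
DETECTION_MAP = {
    "off_hours_login":   {"attack": "T1078", "csf": "Detect"},
    "password_spray":    {"attack": "T1110", "csf": "Detect"},
    "bulk_exfiltration": {"attack": "T1041", "csf": "Respond"},
}

def enrich_alert(detection: str) -> dict:
    """Attach framework context so analysts see one alert in both vocabularies."""
    tags = DETECTION_MAP.get(detection, {"attack": "unmapped", "csf": "Detect"})
    return {"detection": detection, **tags}
```

Enriching alerts this way lets a SOC report coverage against either framework without maintaining two separate alert pipelines.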

As useful as AI is, it has also increased the number of threats coming at cyber analysts. The technology is readily available and, in most cases, free to advanced persistent threat actors, giving their capabilities a significant boost.

Add to that the millions of devices on some agencies' networks and the resulting traffic — billions or trillions of packets per second — and analysts are bound to have their hands full creating rules for older devices, firewalls and security orchestration, automation and response (SOAR) platforms.

Machine learning models can be trained on network behavior patterns to flag certain events, such as a user logging on at 2 a.m. and accessing data they shouldn't.
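The 2 a.m. login example can be sketched with a simple learned baseline: fit each user's typical login hours, then flag logins that deviate far from that norm. This is a toy z-score model under assumed historical data, not the statistical approach any particular product uses.

```python
import statistics

def fit_baselines(history):
    """Learn each user's mean login hour and its spread from past logins."""
    baselines = {}
    for user, hours in history.items():
        spread = statistics.pstdev(hours) or 1.0  # avoid dividing by zero
        baselines[user] = (statistics.mean(hours), spread)
    return baselines

def is_anomalous(baselines, user, hour, z_cutoff=3.0):
    """Flag a login whose hour deviates more than z_cutoff spreads from the norm."""
    if user not in baselines:
        return True  # no history for this user: escalate to an analyst
    mean, spread = baselines[user]
    return abs(hour - mean) / spread > z_cutoff

# Hypothetical training data: this user normally logs in mid-morning.
history = {"jdoe": [8, 9, 9, 10, 8, 9, 10, 9]}
baselines = fit_baselines(history)
```

A real deployment would combine many such features (hour, host, data volume, geography), but the shape is the same: learn normal, then score deviation.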

DISCOVER: Operationalizing cyber defense is as important as zero trust.

Agencies Have AI Options Even in Legacy Environments

Agency leaders shouldn’t fear AI, but that requires education.

For instance, Microsoft Copilot consistently looks at what users are doing on a machine to determine how to assist or provide a real-time interface — which is both a great help and a significant threat. A hacker could reverse engineer that capability to see everything a user is doing, effectively capturing every keystroke.

Analysts need to understand the issues and concerns with AI to reduce cyber incident response time and limit threats. They also need to accept that the Trump administration’s requirement that all executive branch employees return to the office is increasing attack vectors due to the amount of legacy equipment and hardware onsite.

Racing to replace legacy technologies will only introduce more vulnerabilities, but targeted procurement of a digital twin from a company such as Forward Networks can enable teams to quickly respond to threats in legacy environments.

Agencies should create a plan for AI-powered threat detection before making major changes, because improper implementation can lead to more problems. CDW Government offers workshops that educate users on properly adopting the technology.

UP NEXT: Cyber resilience hinges on user intelligence.

This article is part of FedTech’s CapITal blog series.


Dragos Condrea/Getty Images