
Dec 20 2024
Security

Why Agencies Must Be Proactive in Securing Data from AI Threats

Lacking comprehensive federal guidance, public and private sector employees are dumping their organizations’ data into unvetted artificial intelligence models.

Government can’t afford the delayed response to artificial intelligence cyberthreats that it had to recent compromises: the emerging technology is both improving rapidly and becoming easier for bad actors to deploy.

AI-powered bots are aggressors that can stage brute-force phishing attacks on agency email and infiltrate targets that once took years to breach. They adapt to their environments, using machine learning to pass information between agents and plan their next moves.

A prime example of the AI threat is the flood of deceptive text messages that hit cellphones during the 2024 election. Couple those tactics with new deepfake technologies producing realistic memes and videos, including one of Presidents Biden and Trump going on a fun outing, and things can get confusing, even dangerous.


Understanding the Threat AI Poses to Agency Data

Some agencies lack AI protections because they fail to understand the technology, how their employees are using it and the data it puts at risk. Industry is just as guilty of this, with workers in both sectors dumping their data into ChatGPT or one of the many other generative and augmented AI solutions available.

Many of these tools are glorified data mining operations: a model built to assist with a task lures users into feeding large-scale databases that collect their information. For agencies, the risk is sensitive data going public.
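To make that risk concrete, here is a minimal sketch of one proactive control: screening outbound prompts before they ever reach an external generative AI service. The patterns and the redact_prompt() helper are illustrative assumptions, not a vetted CUI detection ruleset.

# Hypothetical sketch: screen outbound prompts for likely-sensitive
# patterns before they reach an external generative AI service.
import re

# Example patterns an agency might flag; a real deployment would use a
# vetted data-loss-prevention ruleset, not a short regex list.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "coordinates": re.compile(r"\b-?\d{1,3}\.\d{4,},\s*-?\d{1,3}\.\d{4,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Return the prompt with flagged spans masked, plus the rule names hit."""
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, hits

clean, flags = redact_prompt(
    "Summarize the incident at 38.8977, -77.0365 reported by jdoe@agency.gov"
)
print(clean)   # coordinates and email address are masked before submission
print(flags)   # ['email', 'coordinates']

In practice, a filter like this would sit in a gateway in front of any sanctioned AI tool and log what it flags, giving security teams visibility into the kinds of data employees are trying to submit.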

Controlled unclassified information (CUI) isn’t tied solely to national security; it can also reveal the locations of critical assets.

Cybersecurity around AI is no different from dealing with any other threat: at-risk data that the technology touches must be protected. Some agencies are being more proactive about this than others that are waiting for clearer federal guidance.

DISCOVER: Artificial intelligence may augment diplomatic data security.

Properly Vetting AI Solutions

Agencies must vet the AI solutions their employees can use the same way they assess threat vectors and validate vulnerabilities: by leveraging their intelligence network.
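As a rough illustration of how those vetting decisions might be enforced once they are made, the sketch below allows traffic only to AI services an agency has approved. The allowlist contents and the check_ai_request() helper are hypothetical.

# Hypothetical sketch: enforce an agency's vetting decisions at the
# network edge by allowing traffic only to approved AI services.
from urllib.parse import urlparse

# Populated from the agency's vetting process, not hardcoded in practice.
APPROVED_AI_HOSTS = {
    "approved-model.agency.gov",
    "vetted-vendor.example.com",
}

def check_ai_request(url: str) -> bool:
    """Allow the request only if its host passed the agency's vetting."""
    host = urlparse(url).hostname or ""
    return host.lower() in APPROVED_AI_HOSTS

for url in (
    "https://approved-model.agency.gov/v1/chat",
    "https://random-chatbot.example.net/api",
):
    verdict = "allow" if check_ai_request(url) else "block and log"
    print(f"{url} -> {verdict}")

The design point is that the vetting decision lives in one auditable place, and anything not explicitly approved is blocked and logged by default.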

The Department of Defense went so far as to create NIPRGPT to secure CUI while still allowing personnel to query the Non-classified Internet Protocol Router Network, or NIPRNet.

While an executive order around AI security was rumored for the fall, the election result may have changed plans, as Trump already has vowed to repeal Biden’s AI executive order.

In the interim, it’s best for agencies to watch the National Institute of Standards and Technology and the larger agencies with rapid capabilities that it works with (the departments of Energy, Health and Human Services, and Homeland Security) for guidance and best practices.

While it remains to be seen what shape NIST will take in the new administration, the creation of a national AI center of excellence to push out vetted models to agencies and industry, which face the same threats, could dramatically improve collaboration around security.

UP NEXT: The USDA has big plans for artificial intelligence.

This article is part of FedTech’s CapITal blog series.

