
Jan 10 2025
Security

Shadow AI: Shining Light on a Growing Security Threat

While not top of mind for agencies, the unauthorized use of artificial intelligence presents real cybersecurity risks that require action now.

Agencies need to confront the growing problem of shadow artificial intelligence through policy updates and the implementation of new security measures.

Microsoft and LinkedIn found that 78% of AI users bring their own tools to work and that 52% are reluctant to admit they use AI, according to the 2024 Work Trend Index Annual Report.

This is the reality of shadow AI: Many workers already use the technology covertly.


Defining Shadow AI and Identifying Its Causes

“Workers, in the absence of structured advice or oversight, are looking to gain access to something that has a strategic benefit to them being able to perform their jobs every day,” says Barracuda CIO Siroui Mushegian.

The challenge that agency leaders face is that AI use is advancing rapidly, and governance can’t keep up.

“Different teams across an organization may adopt AI tools independently to enhance productivity, analyze data or drive innovation without going through formal IT approval processes,” says Cristian Rodriguez, field CTO for the Americas at CrowdStrike. “This can stem from pressure to stay competitive, a lack of awareness about security protocols or insufficient enterprise AI solutions. Without visibility into these deployments, organizations lose control over how data is accessed, processed and stored.”


Why Shadow AI Is a Unique Security Problem

An additional challenge is the pace of AI development: It is a new technology that is constantly evolving, with few established security norms.

“We’ve been dealing with cloud security for a long time, so we know what secure looks like with that,” says Mitch Herckis, global head of government affairs at Wiz. “With AI, we are much less certain. The vulnerabilities and risks aren’t as well known. There’s not that common understanding of the risk it presents.”

The newness of AI in the workplace also means that unsanctioned use of the technology may not be on the radar of agency leaders and security teams.

“The understanding of shadow AI as an issue has not broken through at the C-level,” Herckis says. “It hasn’t received the attention it deserves because people are still adopting it; it’s still novel. Leaders are busy struggling with many of the traditional problems.”

Unnecessary Risks and Costs Stemming From Shadow AI

When onboarding new technologies, agencies typically put them through a thorough vetting process in which IT and procurement teams weigh many factors, including security. That vetting doesn’t happen when staffers, unaware of the risk, use their own AI tools on agency projects. Any time staff members use an unapproved AI tool for work, they introduce potential vulnerabilities.

“These tools may lack encryption, secure data storage or compliance with regulatory standards, exposing sensitive information,” Rodriguez says. “With adversaries increasingly targeting AI models and the sensitive data they process, shadow AI can accelerate risks of breaches and leaks.”

In addition to the security risk, the use of unapproved AI tools can lead to a wasteful duplication of efforts across teams and unnecessary expenditures.

“This duplication not only increases licensing fees and support costs but also complicates the organization’s ability to standardize AI operations,” Rodriguez says. “Moreover, disparate AI systems may produce conflicting results, requiring additional effort to reconcile outputs. This inefficiency hampers productivity and delays decision-making.”


Tools and Strategies for Managing the Shadow AI Threat

Having some familiarity with the threat of shadow IT and how to defend against it, security teams may look to deploy similar tactics to combat shadow AI. The first step is a complete inventory of the agency’s IT environment, including its AI tools and models.

“Getting a technical inventory of AI models and technology through automated means is critical for understanding the situation,” Herckis says. “There are AI security posture management tools available, such as Wiz’s AI-SPM, that can help agencies with this.”
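That discovery step can begin with data agencies already collect. As a minimal sketch (not a description of Wiz’s or any vendor’s product), the Python below scans a web proxy log for traffic to a watchlist of generative AI services; the log columns and the domain list are illustrative assumptions.

```python
# Minimal sketch of automated shadow AI discovery: scan a web proxy log
# for traffic to known generative AI services. The CSV columns ('user',
# 'host') and the domain watchlist are assumptions for illustration.
import csv
from collections import Counter

# Hypothetical watchlist of generative AI service domains.
AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def inventory_ai_usage(proxy_log_path: str) -> Counter:
    """Count requests per (user, AI domain) pair in the proxy log."""
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].strip().lower()
            if host in AI_DOMAINS:
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in inventory_ai_usage("proxy_log.csv").most_common():
        print(f"{user} -> {host}: {count} requests")
```

A dedicated AI-SPM platform would extend this with discovery of models and AI APIs across cloud accounts, but the principle is the same: You cannot govern tools you haven’t found.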

Once a technical inventory has cataloged all AI technologies in the environment, security teams then need to establish controls around them — just as they would for any other technology.

LEARN MORE: Agencies must proactively secure data from AI threats.

“Agencies can then set up correct permissions, giving the right people access to the right data sources,” Herckis says. “There is sensitive data that they may not want moving into those AI environments and mixing in with other data or other outcomes. Ensuring that you identify AI technologies, and then appropriately isolating them, is critical. AI security posture management is a way to continuously do that.”
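One way to picture the permission and isolation controls Herckis describes is a deny-by-default registry that maps each approved AI tool to the data classifications it is cleared to handle. The sketch below is a hypothetical illustration; the tool names and classification labels are invented for the example, not drawn from any product.

```python
# Illustrative guardrail, assuming each approved AI tool is registered
# with the data classifications it may handle. All names are hypothetical.
APPROVED_AI_TOOLS = {
    "internal-summarizer": {"public", "internal"},
    "vendor-chatbot": {"public"},  # never sees internal or sensitive data
}

def can_send_to_ai(tool: str, data_classification: str) -> bool:
    """Deny by default: unregistered tools and unlisted classifications
    are blocked, keeping sensitive data out of AI environments."""
    allowed = APPROVED_AI_TOOLS.get(tool)
    return allowed is not None and data_classification in allowed

assert can_send_to_ai("internal-summarizer", "internal")
assert not can_send_to_ai("vendor-chatbot", "sensitive")
assert not can_send_to_ai("unknown-tool", "public")  # shadow AI blocked
```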

“By automating threat detection, vulnerability management and policy enforcement, AI-SPM solutions like CrowdStrike Falcon Cloud Security AI-SPM allow organizations to remediate risks in real time, while preventing unauthorized tools from entering the ecosystem,” Rodriguez says. “As highlighted in our recent 2024 State of AI in Cybersecurity survey, the vast majority of cybersecurity professionals believe that integrating safety and privacy controls is essential for unlocking generative AI’s full potential, underscoring the importance of governance in creating a secure and innovative AI environment.”
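Stripped of vendor specifics, the continuous loop that AI-SPM products automate reduces to a reconciliation check: Compare what discovery finds in the environment against what has been formally approved, and flag the difference for remediation. The function below is a generic, hypothetical sketch of that check, not CrowdStrike’s or Wiz’s API.

```python
# Generic sketch of the loop an AI-SPM tool runs continuously: reconcile
# discovered AI usage against an approved registry and flag the rest.
def posture_check(discovered_tools: set[str], approved: set[str]) -> list[str]:
    """Return remediation findings for AI tools seen in the environment
    but never onboarded through a formal approval process."""
    return [f"BLOCK + review: {t}" for t in sorted(discovered_tools - approved)]

findings = posture_check(
    discovered_tools={"claude.ai", "internal-summarizer", "chat.openai.com"},
    approved={"internal-summarizer"},
)
for finding in findings:
    print(finding)  # e.g. "BLOCK + review: chat.openai.com"
```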

Establishing AI Use Policies and Understanding Use Cases

In addition to deploying security tools to monitor and manage the environment against AI threats, agencies should put policies and governance in place to manage AI use. The goal should not be a blanket ban on artificial intelligence but rather guardrails for AI access and use that let the agency gain the most benefit from this powerful technology.

“Agencies should have an AI policy that is written and socialized within the organization so people understand the framework they should be operating in,” Mushegian says. “This should be written with a risk-based approach from the agency’s legal head outlining how AI should be used in the organization. Furthermore, the policy should stipulate that to access AI tools, users have to follow the agency’s supply chain policy.”

Agencies should also consider setting up a steering committee on AI to get a better understanding of the use cases and concerns that exist in the organization.

“Introducing an AI council lets you gather the thoughts and feelings of people around the agency, allowing leaders to understand what teams and employees need,” Mushegian says. “You’ll want to include leadership from the security and product teams, as well as general counsel.”

In considering how to manage shadow AI, agency leaders should keep in mind that AI is a valuable resource as well as a potential threat. Whether through proper channels or otherwise, teams will seek out the best tools to do their jobs and deliver the most value to the agency. Rather than avoiding AI and its complications, it is incumbent on agency leaders to instead offer a safe way to harness it.

“It is the responsibility of leadership to provide the right tools to workers. It is similar to shadow IT: If you don’t supply the right tools for people to do their job, they will seek alternatives, and you will end up with an unfavorable risk posture,” Herckis says. “Onboarding tools the right way, ensuring they are properly secure, is critical. People get the right tools to do their job, and you get some peace of mind.”

UP NEXT: AI pervades early ARPA-H medical moonshot projects.
