Tools and Strategies for Managing the Shadow AI Threat
Security teams already familiar with the threat of shadow IT, and how to defend against it, may look to deploy similar tactics against shadow AI. The first step is a complete inventory of the agency’s IT environment, including its AI models and tools.
“Getting a technical inventory of AI models and technology through automated means is critical for understanding the situation,” Herckis says. “There are AI security posture management tools available, such as Wiz’s AI-SPM, that can help agencies with this.”
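As a rough illustration of what automated discovery can look like, the sketch below matches outbound proxy-log traffic against known AI service domains to surface unsanctioned use. The log format, file path and domain list are all illustrative assumptions; a dedicated AI-SPM product of the kind Herckis mentions would inventory far more (models, training pipelines, SDKs), but the underlying pattern is the same.

```python
# Minimal sketch: flag shadow AI use by matching outbound requests in a
# proxy log against known AI service domains.
# The CSV columns, file path and domain list are illustrative assumptions.

import csv
from collections import Counter

# Hypothetical set of domains associated with public AI services.
AI_SERVICE_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_shadow_ai(log_path: str) -> Counter:
    """Count requests per (user, AI domain) from a CSV proxy log
    with columns: timestamp, user, destination_host."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            if host in AI_SERVICE_DOMAINS:
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in find_shadow_ai("proxy_log.csv").most_common():
        print(f"{user} -> {host}: {count} requests")
```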
Once a technical inventory has cataloged all AI technologies in the environment, security teams then need to establish controls around them — just as they would for any other technology.
LEARN MORE: Agencies must proactively secure data from AI threats.
“Agencies can then set up correct permissions, giving the right people access to the right data sources,” Herckis says. “There is sensitive data that they may not want moving into those AI environments and mixing in with other data or other outcomes. Ensuring that you identify AI technologies, and then appropriately isolate them, is critical. AI security posture management is a way to continuously do that.”
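A minimal sketch of the permission model Herckis describes, assuming hypothetical roles, tools and data classifications rather than any particular product’s schema: a request is allowed only when the user’s role is cleared for the AI tool and the tool is cleared for the data classification, and sensitive classes never flow to AI at all.

```python
# Minimal sketch of a permission gate between data sources and AI tools.
# Roles, tool names and data classifications are illustrative assumptions.

SENSITIVE = {"pii", "law_enforcement", "health"}

# Which data classifications each sanctioned AI tool is cleared to receive.
TOOL_CLEARANCE = {
    "approved-chatbot": {"public"},
    "internal-summarizer": {"public", "internal"},
}

# Which tools each role may invoke.
ROLE_TOOLS = {
    "analyst": {"approved-chatbot", "internal-summarizer"},
    "contractor": {"approved-chatbot"},
}

def may_send(role: str, tool: str, data_class: str) -> bool:
    """Allow a request only if the role may use the tool and the tool is
    cleared for the data classification; sensitive data never flows to AI."""
    if data_class in SENSITIVE:
        return False
    return (tool in ROLE_TOOLS.get(role, set())
            and data_class in TOOL_CLEARANCE.get(tool, set()))

assert may_send("analyst", "internal-summarizer", "internal")
assert not may_send("contractor", "internal-summarizer", "internal")
assert not may_send("analyst", "approved-chatbot", "pii")
```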
“By automating threat detection, vulnerability management and policy enforcement, AI-SPM solutions like CrowdStrike Falcon Cloud Security AI-SPM allow organizations to remediate risks in real time, while preventing unauthorized tools from entering the ecosystem,” Rodriguez says. “As highlighted in our recent 2024 State of AI in Cybersecurity survey, the vast majority of cybersecurity professionals believe that integrating safety and privacy controls is essential for unlocking generative AI’s full potential, underscoring the importance of governance in creating a secure and innovative AI environment.”
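The enforcement loop Rodriguez describes can be pictured as a continuous diff between what discovery finds and what the agency has sanctioned. The sketch below is generic and assumes a hypothetical discovery feed; it does not reflect CrowdStrike Falcon Cloud Security’s actual API.

```python
# Generic sketch of the enforcement loop an AI-SPM tool automates:
# compare discovered AI tools against a sanctioned inventory and flag drift.
# discover_ai_tools() is a stand-in for whatever discovery feed exists
# (e.g., the proxy-log scan above); no vendor API is implied.

from datetime import datetime, timezone

SANCTIONED = {"approved-chatbot", "internal-summarizer"}

def discover_ai_tools() -> set[str]:
    # Stand-in for an automated discovery feed.
    return {"approved-chatbot", "freemium-transcriber"}

def enforce(discovered: set[str]) -> list[str]:
    """Return alert messages for any AI tool found outside the inventory."""
    ts = datetime.now(timezone.utc).isoformat()
    return [f"{ts} ALERT unsanctioned AI tool in use: {tool}"
            for tool in sorted(discovered - SANCTIONED)]

for alert in enforce(discover_ai_tools()):
    print(alert)  # in practice: open a ticket, block egress, notify the owner
```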
Establishing AI Use Policies and Understanding Use Cases
In addition to deploying security tools to monitor and manage the environment against AI threats, agencies should put policies and governance in place to manage AI use. The goal should not be a blanket ban on artificial intelligence but rather guardrails for AI access and use, so the agency gains the most benefit from this powerful technology.
“Agencies should have an AI policy that is written and socialized within the organization so people understand the framework they should be operating in,” Mushegian says. “This should be written with a risk-based approach from the agency’s legal head outlining how AI should be used in the organization. Furthermore, the policy should stipulate that to access AI tools, users have to follow the agency’s supply chain policy.”
Agencies should also consider setting up a steering committee on AI to get a better understanding of the use cases and concerns that exist in the organization.
“Introducing an AI council lets you gather the thoughts and feelings of people around the agency, allowing leaders to understand what teams and employees need,” Mushegian says. “You’ll want to include leadership from the security and product teams, as well as general counsel.”
In considering how to manage shadow AI, agency leaders should keep in mind that AI is a valuable resource as well as a potential threat. Whether through proper channels or otherwise, teams will seek out the best tools to do their jobs and deliver the most value to the agency. Rather than avoiding AI and its complications, agency leaders should offer a safe way to harness it.
“It is the responsibility of leadership to provide the right tools to workers. It is similar to shadow IT: If you don’t supply the right tools for people to do their job, they will seek alternatives, and you will end up with an unfavorable risk posture,” Herckis says. “Onboarding tools the right way, ensuring they are properly secure, is critical. People get the right tools to do their job, and you get some peace of mind.”
UP NEXT: AI pervades early ARPA-H medical moonshot projects.