Microsoft Copilot Already Has Access to Multiple AI Solutions
Agencies that use Microsoft endpoints, applications and cloud instances may already have access to multiple Microsoft AI solutions, such as Copilot.
Microsoft frames Copilot as an “AI-powered assistant” that can help individual employees perform their daily tasks. For example, agencies can use Microsoft 365 Copilot by itself, and they can also add role-based agents or create their own agents designed to help workers in particular roles. As part of Microsoft 365, Copilot is already subject to all of Microsoft 365’s cybersecurity and privacy policies and requirements.
Using Azure AI and ML to Customize and Deploy Models
Microsoft offers a range of extensible AI solutions under its Azure AI brand. For agencies that want to create AI-powered apps, Microsoft provides the Azure AI Foundry toolkit. As of this writing, there are over 1,800 AI models available for use with AI Foundry, most developed by third parties.
The same AI models that are used with Azure AI Foundry are also available within Azure Machine Learning workspaces. Here, agencies can customize and deploy machine learning models, such as large language models.
Ensuring the security of agency-developed AI apps or models, especially with such a wide variety of starting models to choose from, is bound to be a much larger undertaking than securing the internal use of a Copilot agent. It will require the use of several other tools, such as those discussed below.
WATCH: NSF is growing a national AI innovation ecosystem.
Azure AI Content Safety Enforces Agency Policies
Microsoft’s Azure AI Content Safety serves several purposes, such as blocking content that violates agency policies. One of the service’s features, Prompt Shields, is of particular interest for AI environment security. Prompt Shields can monitor all prompts and other inputs to Azure-based LLMs and carefully analyze them to identify attacks and any other attempts to circumvent the model’s protections.
For example, Prompt Shields could identify someone attempting to steal sensitive information contained in an LLM or to cause the LLM to produce output that violates the agency’s policies, such as using inappropriate language or directing the LLM to ignore its existing security and safety policies.
Groundedness Detection, another service offered as part of Azure AI Content Safety, essentially looks for AI-generated output that is not solidly based in reliable data. In other words, it can identify and stop some AI hallucinations.
Microsoft Defender Monitors and Maintains Azure Environments
Microsoft provides Defender for Cloud (formerly Azure Security Center) to assist agencies with monitoring and maintaining the security of their Azure environments. This includes any Azure workloads being used to develop or host an agency’s AI apps. Defender for Cloud can help safeguard AI apps by ensuring the platforms under them are patched and configured to eliminate known security vulnerabilities. Also, Defender for Cloud can identify the latest cyberthreats and detect and stop attacks against those platforms and the AI apps running on them. These are all important elements of safeguarding an agency’s AI environments and usage.
Microsoft offers other forms of Defender, including Defender for Cloud App Security (formerly Microsoft Cloud App Security), which identifies cloud app use and reports how risky each app is. This information can be useful in finding unauthorized usage of third-party AI apps and services. Defender for Cloud App Security is also capable of monitoring your agency’s Copilot use for suspicious activity.
Microsoft’s Defender for Endpoint and Defender for Servers provide additional security protection for other components of your agency’s AI environments outside of Azure, such as developer and user workstations and servers.
DISCOVER: Self-healing networks offer enhanced security.
Ensure Data Governance Is Within Your Agency's Purview
Microsoft Purview is a suite of tools and services that work together to help agencies with data governance, management and protection. Existing Purview components, such as Compliance Manager, have been enhanced to include assessments of compliance with certain AI regulations.
Components specific to AI have also been added to Purview. The Purview AI Hub can help agencies monitor for and identify sensitive data in AI prompts, particularly with Copilot use. The AI Hub also monitors which files are accessed through Copilot to look for attempts to access sensitive data in files. The intent of AI Hub is to ensure compliance with policies and requirements by identifying possible violations as they are occurring.