Close

New AI Research From CDW

See how IT leaders are tackling AI opportunities and challenges.

Apr 22 2025
Security

4 Primary Security Risks to Mitigate in GenAI Solutions

Defense officials and other federal authorities should work to reduce these artificial intelligence dangers.

As the use of artificial intelligence by agencies continues to grow, defense officials and other federal authorities must take care to mitigate security risks in generative AI.

Under the Biden White House, the Office of Management and Budget directed agencies “to implement concrete safeguards when using AI in a way that could impact Americans’ rights or safety. These safeguards include a range of mandatory actions to reliably assess, test and monitor AI’s impacts on the public, mitigate the risks of algorithmic discrimination and provide the public with transparency into how the government uses AI.”

Meanwhile, the Defense Department’s Chief Digital and Artificial Intelligence Office established Task Force Lima to evaluate generative AI capabilities. At the end of 2024, the task force spotlighted four important GenAI limitations that create risks for applying the technology in specific use cases.

Those risks include hallucinations, the lack of explainability, security vulnerabilities, and limited test and evaluation techniques. While the Trump administration will certainly go its own way when it comes to AI adoption, these limitations merit consideration, especially in the defense environment.

Click the banner below to get a read on the current AI landscape.

 

Reducing AI Hallucinations with Retrieval-Augmented Generation

Hallucinations are a major challenge for all organizations adopting GenAI solutions. In a hallucination, a large language model produces a result that sounds reasonable but is based on factually incorrect information. In a study by Carnegie Mellon University, researchers found that LLMs hallucinate as frequently as 1 in 10 responses.

The military potentially could reduce hallucinations with retrieval-augmented generation (RAG), a technique that ensures LLMs receive the most current and relevant data.

Lack of Explainability Hinders Trust in AI

In a Ponemon Institute survey, researchers found that 57% of cybersecurity professionals cited “lack of explainability” as a barrier to trusting AI solutions. Lack of explainability stems from the sometimes-opaque design-making processes of GenAI. While processing data, LLMs may not exercise correct judgement in characterizing it and may identify benign behavior as malicious. In response, AI solutions providers have developed explainable AI frameworks to enhance transparency.

DISCOVER: New cyber solutions are found by looking for hidden patterns.

Prompt Injection, Jailbreaking, “Cloudborne” Attacks and Cloud-Jacking

Among the security vulnerabilities inherent in LLMs are prompt injection and jailbreaking. According to the Open Web Application Security Project, “Prompt injection involves manipulating model responses through specific inputs to alter its behavior,” while jailbreaking “is a form of prompt injection where the attacker provides inputs that cause the model to disregard its safety protocols entirely.” RAG and other methods can guard against these vulnerabilities.

The Defense Department also must protect against “Cloudborne” attacks, which exploit a vulnerability in bare-metal cloud servers to implant a malicious backdoor in its firmware, and cloud-jacking, in which bad actors exercise control over cloud resources, possibly including cloud-based LLM deployments.

Expanding GenAI Test and Evaluation Techniques

The National Institute of Standards and Technology has moved to address limited avenues for GenAI test and evaluation by establishing the NIST GenAI evaluation program through the NIST Information Technology Laboratory. The program provides a platform for test and evaluation with the goal of assessing GenAI technologies. Program goals include creation of benchmark datasets; developing detection technologies that can authenticate content; conducting comparative analyses with relevant metrics; and promoting technologies that can identify bad information.

LEARN MORE: Predictive AI is essential to zero-trust security.

Plan and Manage AI Initiatives With Third-Party Partners

The CDW Artificial Intelligence Research Report describes additional challenges with AI management, noting that organizations may face difficulties in finding highly skilled IT staff, whether building their own AI capabilities or adopting cloud services. Government agencies and other groups also may find it challenging to ensure data quality and availability in addition to implementing and scaling AI solutions.

Third-party partners can help federal officials overcome these obstacles when planning and managing their AI initiatives. An experienced managed services provider can assist agencies in navigating the four prominent GenAI risks identified by Task Force Lima, at the Pentagon and beyond.

kontekbrothers/Getty Images