
Apr 30 2025
Artificial Intelligence

Building Responsible AI for Agencies

The Department of Defense’s toolkit puts ethics into action with AI development.

The Department of Defense’s Responsible Artificial Intelligence (RAI) Toolkit is informing increased adoption of AI governmentwide.

Developed by the DOD Chief Digital and AI Office (CDAO) in 2023, the toolkit compiles resources for processes such as identifying useful data sets and flagging bias, organized so that developers can intuitively demonstrate to leadership that their capabilities are responsible.

The White House revised two policies guiding agencies’ procurement and use of AI at the beginning of April, on the heels of agencies increasing their AI use cases threefold in 2024. Given that momentum, the contents of DOD’s RAI Toolkit are worth revisiting.


A Cornerstone for Federal AI Development

The RAI Toolkit aims to accelerate the Department of Defense’s adoption of AI by building on the Defense Innovation Unit’s Responsible AI (RAI) Guidelines and Worksheets; the National Institute of Standards and Technology’s AI Risk Management Framework and Toolkit; and the Institute of Electrical and Electronics Engineers’ IEEE 7000-2021 Standard Model Process for Addressing Ethical Concerns during System Design. The toolkit also establishes guidelines for operationalizing DOD’s AI Ethical Principles and provides technical tools and guidance to help personnel design, develop, deploy and use AI systems responsibly.

“Federal agencies are increasingly using AI for important tasks such as predictive analytics, cybersecurity and decision-making in defense and healthcare,” says Riccardo Di Blasio, senior vice president of North America sales at NetApp. “The DOD’s Responsible AI Toolkit is crucial because it helps tackle issues like bias, transparency and security flaws in AI systems.”

“The toolkit gives agencies a structured way to vet AI systems, document their development choices and demonstrate compliance with ethical and legal standards,” says Hodan Omaar, senior policy manager for the Information Technology and Innovation Foundation. “That’s helpful internally, especially for risk management, but it also builds trust externally. Whether you’re communicating to the public or to allies, being able to point to a common process for responsible development helps build trust.”

“Agencies are sometimes in too much of a hurry to get AI running.”

Terry Halvorsen, Vice President of Federal Client Development, IBM

Ethical Guardrails for Future AI Generations

The role of ethics in AI development is deeply embedded in the RAI Toolkit, which emphasizes setting up guardrails early in the process. This is a necessity for DOD, given the rapid iteration of AI tools and their potential use in a variety of situations including combat, where life-or-death decision-making occurs.

“The first thing you have to have is baseline ethics to say what you're going to do with the AI,” says Terry Halvorsen, vice president of federal client development at IBM. “Right now, the Defense Department is correct in keeping a human in any lethal decision, but the next generation of AI is going to have more automated decision capability needing to keep up with the speed of combat and the speed of mission. That's where I think you’re going to need more guidance.”

“Ethics in AI is often talked about in broad strokes, but the real challenge is operationalizing it,” Omaar says. “The toolkit helps bridge that gap. It ties ethical principles to concrete development practices like requiring explainability for high-risk decisions or ensuring human oversight in specific contexts. That makes ethics something teams can implement and measure, not just aspire to.”
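
To illustrate what operationalizing a principle like human oversight can look like, here is a minimal sketch in Python of a risk-tiered decision gate that always routes high-risk outputs to a human reviewer. The tiers, threshold and function names are hypothetical illustrations, not drawn from the toolkit itself:

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    HIGH = "high"  # e.g., decisions affecting safety, rights or legal status


@dataclass
class Decision:
    action: str
    confidence: float
    risk_tier: RiskTier


def requires_human_review(decision: Decision, confidence_floor: float = 0.9) -> bool:
    """Route a decision to a human when policy demands oversight.

    Illustrative policy (not DOD's actual rules):
    - Any HIGH-risk decision always needs a human in the loop.
    - LOW-risk decisions escalate only when model confidence is weak.
    """
    if decision.risk_tier is RiskTier.HIGH:
        return True
    return decision.confidence < confidence_floor


# A high-risk recommendation is flagged for review regardless of confidence.
rec = Decision(action="deny_clearance", confidence=0.97, risk_tier=RiskTier.HIGH)
print(requires_human_review(rec))  # True
```

Encoding the rule in code rather than in a policy memo is what makes it measurable: teams can count how many decisions were escalated and audit whether the gate was ever bypassed.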

RELATED: Agencies still need to keep AI governance in mind.

Staying Accountable by Tracing AI Data Back to Its Origins

Traceability is another element of responsible AI development underscored by the RAI Toolkit; that is, the ability to track and document all data and decisions of an AI tool, including how it was trained and how it processes information.

“This helps ensure data security in the biggest way. Making sure it's traceable back to the root source is a key characteristic of good, valid data,” Halvorsen says. “You also want attribution of that data for all kinds of other reasons, both political and economic.”

“Traceability ensures accountability, aids in debugging and improvement, and builds transparency,” Di Blasio says. “This is crucial for diagnosing failures and ensuring compliance with regulations.”

When an AI system makes an incorrect decision, traceability allows developers to identify and correct the source of the error. Comprehensive data management solutions that facilitate detailed logging and auditing of AI processes are essential to traceability.
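
As a concrete illustration, the sketch below shows one way a team might build a traceability record for each model decision: hashing the input and tying the output back to a model version and training data set. The field names and identifiers are invented for this example:

```python
import hashlib
import json
from datetime import datetime, timezone


def audit_record(model_version: str, training_data_id: str,
                 model_input: dict, model_output: dict) -> str:
    """Build one traceability record for a single AI decision.

    Hashing the input (rather than storing it raw) lets auditors verify
    what the model saw without duplicating sensitive data in the log.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,        # which model made the call
        "training_data_id": training_data_id,  # lineage back to the root data set
        "input_sha256": hashlib.sha256(
            json.dumps(model_input, sort_keys=True).encode()
        ).hexdigest(),
        "output": model_output,
    }
    return json.dumps(record)


# Log a single prediction so it can be traced and debugged later.
print(audit_record("threat-model-1.4.2", "dataset-2024-q3",
                   {"sensor": "radar", "reading": 0.82},
                   {"label": "benign", "score": 0.91}))
```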

LEARN MORE: These are the four biggest security risks to generative AI.

Building Trust by Explaining AI Decision-Making

AI bias, which can lead to unfair outcomes for groups that are disproportionately targeted or underrepresented in data, is one problem where the RAI Toolkit can make a difference. Testing and validating AI data sets helps ensure they are diverse and representative.

“Developers use the Responsible AI Toolkit to assess bias in data sets and ensure transparency through explainability modules,” Di Blasio says. “For instance, they might use specific algorithms to detect and correct biases in training data.”
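
A minimal sketch of what such a bias check might look like: computing per-group positive-outcome rates and the gap between them, a simple demographic-parity measure. The data and field names here are hypothetical and do not reflect any specific toolkit module:

```python
from collections import defaultdict


def selection_rates(records: list[dict], group_key: str, label_key: str) -> dict:
    """Compute the positive-outcome rate for each group in a labeled data set."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        g = row[group_key]
        totals[g] += 1
        positives[g] += int(row[label_key])
    return {g: positives[g] / totals[g] for g in totals}


def demographic_parity_gap(rates: dict) -> float:
    """Gap between the most- and least-favored groups; 0.0 means parity."""
    return max(rates.values()) - min(rates.values())


# A wide gap signals the training data may need rebalancing or review.
data = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
rates = selection_rates(data, "group", "approved")
print(rates, demographic_parity_gap(rates))  # gap of ~0.33 here
```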

Robust data management capabilities with a unified, intelligent approach to storage, such as NetApp ONTAP, support explainability tools that show how models reach decisions, maintaining accountability, Di Blasio says.
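
One common, model-agnostic explainability technique is permutation importance, sketched below: shuffle one feature at a time and measure how much accuracy drops. The toy model and data are hypothetical, and the toolkit does not prescribe this particular method:

```python
import numpy as np

rng = np.random.default_rng(0)


def permutation_importance(predict, X, y, n_repeats: int = 10):
    """Score each feature by how much shuffling it degrades accuracy.

    A large drop means the model leans on that feature, which gives
    reviewers a concrete, model-agnostic explanation to document.
    """
    baseline = np.mean(predict(X) == y)
    drops = []
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break this feature's relationship to y
            scores.append(np.mean(predict(Xp) == y))
        drops.append(baseline - np.mean(scores))
    return baseline, drops


# Toy model: predicts 1 when feature 0 exceeds a threshold; feature 1 is noise.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)


def predict(M):
    return (M[:, 0] > 0).astype(int)


baseline, drops = permutation_importance(predict, X, y)
print(baseline, drops)  # feature 0 shows a large drop; feature 1 shows ~0
```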

DISCOVER: FedRAMP 20x aims to leverage automation like never before.

The RAI Toolkit, Part of a Broader AI Strategy

The high velocity of AI development makes it challenging for agencies to keep pace with the latest breakthroughs and their repercussions. The CDAO designed the RAI Toolkit to be a living document that will be continually enhanced, but that may not be enough.

“Agencies need a broader strategy for staying current, including partnerships with researchers, feedback loops from deployments and workforce training,” Omaar says. “Tools like this are part of the infrastructure, but they need to be supported by a culture of learning and adaptation.”

Stronger interoperability among federal AI systems and better data exchange across agencies are also key to success.

“I would say the final area that will become more critical is global alignment with international AI standards,” Di Blasio says. “This alignment ensures that advancements in AI are consistent, safe and beneficial on a global scale.”

UP NEXT: Here is what to expect from the White House’s AI Action Plan.

Responsible AI Through Data Curation

Agencies need to apply the RAI Toolkit’s principles from the very beginning, starting with data curation. IBM stresses this as the most important step, one that ensures every data credibility box is checked.

“Agencies are sometimes in too much of a hurry to get AI running,” Halvorsen says. “We all know the cliche, ‘Bad data in, bad decisions out.’ With AI, this is absolutely critical to remember.”
