Sep 18 2023
Data Analytics

NIST’s Newest Research Center Helping Agencies Understand AI Risks

AI Risk Management Framework use cases are on the horizon.

The government’s new Trustworthy and Responsible AI Resource Center is expanding IT leaders’ understanding of artificial intelligence and its risks, says Reva Schwartz, research scientist at the National Institute of Standards and Technology.

Techniques for mitigating AI risk remain a small segment of the technology market, which is why the AIRC houses NIST’s AI Risk Management Framework and supporting documents authored by the agency and others, provided they fall within the center’s scope.

NIST launched the AIRC in March to guide the safe development and deployment of AI in government and industry, and the repository is growing with resources that will turn it into an interactive, role-based experience.

“People want to see content in different formats; they want to be able to search for things better,” Schwartz tells FedTech. “They love the AI RMF explainer video, which is there.”


What’s Included in the AIRC?

Schwartz is part of NIST’s Trustworthy and Responsible AI team that worked on the AI RMF and a companion playbook, which suggests actions to achieve outcomes for the framework’s four functions: govern, map, measure and manage. Her team works with the AIRC design team to ensure new content is properly contextualized and easy to find.
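The playbook organizes its suggested actions under the framework’s four functions. As a minimal sketch, an organization might track which suggested actions it has adopted per function; the function names come from the AI RMF itself, but the data structure and sample actions below are purely illustrative, not part of NIST’s playbook.

```python
from dataclasses import dataclass, field

# The four functions named in the AI RMF; everything else here is hypothetical.
RMF_FUNCTIONS = ("govern", "map", "measure", "manage")


@dataclass
class PlaybookTracker:
    """Tracks which suggested actions an organization has adopted, per function."""
    actions: dict = field(
        default_factory=lambda: {f: [] for f in RMF_FUNCTIONS}
    )

    def add_action(self, function: str, description: str) -> None:
        # Reject anything outside the framework's four functions.
        if function not in RMF_FUNCTIONS:
            raise ValueError(f"unknown AI RMF function: {function}")
        self.actions[function].append(description)


tracker = PlaybookTracker()
tracker.add_action("govern", "Establish an AI risk policy")
tracker.add_action("map", "Document the AI system's intended context of use")
print(sorted(tracker.actions))  # ['govern', 'manage', 'map', 'measure']
```

Keying the structure on the four functions mirrors how the playbook itself is organized, so a team can dip into one function at a time rather than reading all 210 pages front to back.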

Unlike NIST’s programmatic AI RMF website, the AIRC is a community effort, Schwartz says. For instance, multiple users requested a PDF version of the Excel-formatted playbook, so one was released. Coming in at a hefty 210 pages, the playbook remains a reference that technologists need not read cover to cover.

The AI team quickly realized the AIRC could be used to communicate the work it’s doing, Schwartz says. NIST started a Generative AI Public Working Group in July that shares events and other engagement opportunities via the AIRC.


Aside from the AI RMF and playbook, the AIRC features a glossary of more than 800 terms intended to promote a shared understanding of trustworthy and responsible AI among practitioners and organizations.

“Most of the thinking in AI is really about AI performance, but trustworthy and responsible AI is a whole different ballgame — a lot of different language and terms there,” Schwartz says. “We wanted to make sure that people could be familiar with those terms.”

The AIRC also includes what it calls crosswalks, which map concepts and terms in NIST’s framework with other international standards and guidelines.
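A crosswalk can be thought of as a mapping from an AI RMF concept to the places where related guidance appears in other documents. The sketch below is illustrative only; the standard names and clause labels are placeholders, not entries from any actual NIST crosswalk.

```python
# Hypothetical crosswalk: AI RMF concept -> related passages elsewhere.
# "Standard A" and "Guideline B" are placeholder names, not real documents.
crosswalk = {
    "govern": [("Standard A", "Clause 4"), ("Guideline B", "Section 2")],
    "map": [("Standard A", "Clause 6")],
}


def lookup(concept: str) -> list:
    """Return the external references mapped to an AI RMF concept."""
    return crosswalk.get(concept, [])


print(lookup("map"))      # [('Standard A', 'Clause 6')]
print(lookup("unknown"))  # []
```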

There’s also a roadmap outlining where NIST is headed next with its AI guidance.


Introducing AI RMF Use Cases from Stakeholders

Coming soon to the AIRC are a metrics hub and use cases. The latter are expected in a matter of weeks and will consist of AI RMF implementations by agencies, companies and foreign governments that have been submitted to and reviewed by NIST.

Some use cases will be high-level overviews; others will be detailed papers about submitters’ experiences with the AI RMF for specific processes, unlike profiles that simply explain what was done to achieve AI roadmap outcomes.

“Profiles are much broader,” Schwartz says. “And use cases will vary depending on what the stakeholder wants to provide us.”

