
Nov 06 2020

In Critical Infrastructure, ‘Trustworthy’ AI Is Necessary

Artificial intelligence systems must earn the confidence of their human counterparts to be worthwhile.

One of the challenges with defining artificial intelligence is that if you put 10 people in a room, you will get 11 different definitions. From our perspective at the National Institute of Standards and Technology, an AI system exhibits reasoning and performs some automated decision-making without the aid of a human.

It’s generally accepted that AI promises to grow the economy and improve our lives. But with these benefits, it also brings new risks. How can we be sure this technology is not just innovative and helpful, but also trustworthy, unbiased and resilient in the face of attack? 

Trustworthy AI systems will need to exhibit characteristics such as resilience, security and privacy if they’re going to be useful and if people are going to adopt them without fear — that’s what we mean by trustworthy. 

Our aim at NIST is to ensure these desirable characteristics become reality. We want systems that can either combat cybersecurity attacks or at least recognize when they are being attacked. We need to protect people’s privacy.

If systems are going to operate in life-or-death environments such as medicine or transportation, people need to be able to trust that AI will make the right decisions and not jeopardize their health or well-being.

Agencies Need to Prioritize Resilience with AI Deployments

Resilience is key. An AI system needs to be able to fail gracefully. Let’s say you train an AI system to operate in a certain environment. What if the system is taken out of its comfort zone? Catastrophic failure is absolutely unacceptable, especially if the AI is deployed in systems that operate critical infrastructure.

So if the AI is outside the boundaries of its nominal operating environment, can it fail in such a way that it doesn’t cause a disaster? And can it recover from that in a way that allows it to continue to operate? These are the characteristics that we look for in a trustworthy artificial intelligence system. 
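To make “failing gracefully” concrete, here is a minimal sketch in Python (not a NIST-specified design; names such as NominalEnvelope and guarded_predict are hypothetical) of a wrapper that only trusts a model’s output when the input falls inside the envelope of conditions it was trained on, and otherwise degrades to a safe fallback such as deferring to a human operator.

```python
# Minimal sketch of graceful failure: check whether an input lies inside the
# model's nominal operating envelope before trusting its prediction.
# All names here are illustrative assumptions, not a prescribed design.

import numpy as np

class NominalEnvelope:
    """Tracks the range of feature values seen during training."""
    def __init__(self, training_data: np.ndarray, tolerance: float = 0.1):
        self.low = training_data.min(axis=0)
        self.high = training_data.max(axis=0)
        # Widen the envelope slightly so near-boundary inputs still pass.
        margin = tolerance * (self.high - self.low)
        self.low -= margin
        self.high += margin

    def contains(self, x: np.ndarray) -> bool:
        return bool(np.all(x >= self.low) and np.all(x <= self.high))

def guarded_predict(model, envelope, x, fallback):
    """Use the model only inside its nominal envelope; otherwise degrade
    to a safe fallback instead of failing catastrophically."""
    if not envelope.contains(x):
        # Out of the comfort zone: log, defer to a human, or take a
        # conservative default action -- but keep operating.
        return fallback(x)
    return model(x)

# Usage: build the envelope from in-range data, then test two inputs.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 3))
envelope = NominalEnvelope(train)

model = lambda x: "proceed"            # stand-in for a real AI decision
fallback = lambda x: "defer to human"  # graceful degradation path

print(guarded_predict(model, envelope, np.array([0.2, -0.5, 1.0]), fallback))  # proceed
print(guarded_predict(model, envelope, np.array([9.0, 0.0, 0.0]), fallback))   # defer to human
```

In a real deployment the range check would be replaced by a proper out-of-distribution or anomaly detector, but the structural point is the same: when pushed outside its nominal environment, the system keeps operating; it just stops trusting itself.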


While industry has a remarkable ability to innovate and provide new capabilities that people don’t realize they need or want (and it’s doing that now in the AI consumer space), it sometimes fails to combine that push to market with deep thought about how to measure the characteristics that will matter in the future: privacy, security, resilience — and trustworthiness. What we can do is provide the foundational work that the consumer space needs to manage those risks.

We’re going to need a little more assurance, especially at the point when AI starts to operate critical infrastructure. That’s where NIST can come together with industry to think about those things. I’m often asked how it is possible to influence a multitrillion-dollar industry on a budget of $150 million. If we were working apart from industry, we would never be able to. But we can work in partnership with them, and we routinely do. They trust us, they’re thrilled when we show up, and they’re eager to work with us.


Guidelines Are Necessary for AI Rollouts

I think some of the science-fiction fears of AI have been overhyped. At the same time, it’s important to acknowledge that risks are there, and that they can be pretty severe if they’re not managed in advance. For the foreseeable future, however, these systems are too fragile and too dependent on us to take over. 

One thing that will be necessary is to identify desirable characteristics such as usability, interoperability, resilience, security and privacy, all of which will require a certain amount of care to build into systems, and to get innovators to start incorporating them. Guidance and standards can help do that.

Last year, NIST published “U.S. Leadership in AI,” our plan for how the federal government should engage in the AI standards development process. I think there’s general agreement that guidance will be needed for interoperability, security, reliability, robustness and all these characteristics that we want AI systems to exhibit if they’re going to be trusted.

