With artificial intelligence now firmly in the mainstream, federal agencies that want to deploy the technology must make sure that AI remains adaptable and transparent to the humans who use it.
Improperly designed AI can scale a minor error that a human would catch into a systemic problem that is difficult to correct, said Anil Chaudhry, director of AI implementations for the General Services Administration’s IT Modernization Centers of Excellence program.
“We’re thinking through these issues of how to reduce bias when replicating AI,” he said. “A model shouldn’t be so set in stone that we can’t fix it.”
Chaudhry was part of a panel of federal AI experts who spoke Monday at the virtual ReImagine Nation ELC 2020 conference, discussing new and exciting use cases for AI while also warning that the technology still needs human involvement to work best.
Introducing AI into situations that are ever-evolving — such as a hospital environment where doctors are always using new medicines and seeing new illnesses — takes a certain amount of care, said Wanmei Ou, a 2020 Presidential Innovation Fellow working with the Department of Veterans Affairs on IT projects.
“We must adapt the model in an ongoing way to make sure it achieves high performance in new conditions,” she said.
National AI Strategy Calls for More Government Use
In 2019, President Trump issued an executive order creating a national strategy on artificial intelligence. The American AI Initiative calls for additional investment in AI research, increased use of AI as a resource and more federal programs that leverage AI to improve services.
Much of the focus has been on AI-powered robotic process automation to remove the burden of rote, repetitive tasks from human workers. But AI itself needs more training, especially when it comes to natural language processing — understanding not just formal language, but the way humans actually speak.
For instance, the VA receives thousands of disability claims from veterans that need to be organized into various categories. The challenge, said Ou, is that the veterans describe their disabilities in language that does not always match the official medical terminology used by doctors.
The agency recently developed an algorithm that can quickly and automatically sort through veterans’ claims, understand their contents and classify them into the proper category for resolution, speeding up the notoriously slow process, she said.
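The VA has not published the details of its algorithm, but the core idea — matching a veteran's informal description against the vocabulary of each official category — can be loosely illustrated with a toy bag-of-words classifier. The category names and vocabulary terms below are hypothetical, chosen only for illustration:

```python
from collections import Counter
import math

# Hypothetical mapping from official claim categories to lay terms
# a veteran might use; real systems would learn this from data.
CATEGORIES = {
    "hearing loss": ["ears", "ringing", "hearing", "tinnitus", "deaf"],
    "ptsd": ["nightmares", "flashbacks", "anxiety", "sleep", "stress"],
    "back injury": ["back", "spine", "lifting", "pain", "disc"],
}

def tokenize(text: str) -> list[str]:
    """Lowercase the text and strip basic punctuation."""
    return [w.strip(".,!?").lower() for w in text.split()]

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(claim_text: str) -> str:
    """Return the category whose vocabulary best matches the claim."""
    claim_vec = Counter(tokenize(claim_text))
    scores = {
        cat: cosine(claim_vec, Counter(vocab))
        for cat, vocab in CATEGORIES.items()
    }
    return max(scores, key=scores.get)

print(classify("constant ringing in my ears since deployment"))
# → hearing loss
```

A production system would replace the hand-built vocabularies with a model trained on labeled claims, but the sketch shows why the language gap Ou describes matters: classification only works when the system can bridge informal phrasing and official terminology.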
Supply Chain Safety Can Improve with AI
AI can also help keep the supply chain safe and secure, said Vincent Annunziato, director of the business transformation and innovation division of U.S. Customs and Border Protection’s Office of Trade.
Automated systems can monitor the paperwork tied to imports faster and more thoroughly than humans can — the mill certificates verifying that imported steel is the correct grade, food inspection reports, or documents on the provenance of imported oil sold on the open market, he said.
“We can get data we’ve never seen before,” he said. “And it’s the first time in my career that we are not sacrificing security for facilitation.”
While employees tend to worry that the increased use of AI could reduce the need for human workers, Annunziato doesn’t think that will be an issue for the federal workforce.
“The government is more interested in making work more efficient than eliminating jobs,” he said.
Follow FedTech coverage for more articles and videos from ReImagine Nation ELC 2020.