
Jun 26 2019
Software

New Guidance Strengthens Government AI Framework

Agencies report early successes with artificial intelligence programs and begin planning for future capabilities.

Agencies must make artificial intelligence a top priority in future plans and budgets, says an updated version of the government’s research and development strategic plan for artificial intelligence released June 21.

The new report, a follow-up to an executive order signed in February stressing the importance of AI, places additional emphasis on partnerships with academia, industry and even international allies to create technological breakthroughs in AI that can be quickly put into practice.

The new R&D plan does not specify particular research agendas, but it does set the tone for how agencies should budget and plan their research, says Michael Kratsios, deputy assistant to the president for technology policy, in the new report’s introduction.

“Advances in AI are crucial for the U.S. science and engineering enterprise, and nearly all sectors of our 21st-century economy,” France A. Córdova, director of the National Science Foundation, said in a statement when the executive order was issued. “Many of the transformative uses of AI that we are witnessing today are founded in federal government investments in fundamental AI research that reach back over decades.”


Worker Training and Acceptance of AI Is Critical

In addition to the new emphasis on outside partnerships, the report calls for long-term investments in AI; effective methods for human-AI collaboration; an understanding of the legal, ethical and societal implications of AI; safe and secure AI systems; shared public data and environments for training and testing; standards and benchmarks by which to evaluate AI technology; and more development for workers so that they’re ready to use AI.

“While no official AI workforce data currently exist, numerous recent reports from the commercial and academic sectors are indicating an increased shortage of available experts in AI,” the updated report says. “AI experts are reportedly in short supply, with demand expected to continue to escalate.”

At its most basic, the technology can help cut down paperwork backlogs, search archival material, analyze complex data and assist personnel with physical security. Many agencies are already piloting AI programs or incorporating the technology into their workflows.

But for all the excitement about the potential of artificial intelligence, the technology won’t truly take hold unless human workers buy in to its use, experts said at recent events in Washington.

“We’re still very, very much in the early days. We should all be careful not to overly hype AI,” warns Defense Department CIO Dana Deasy in a Government Matters documentary.

The February executive order launched what the White House calls The American AI Initiative. “Even now at the earliest stages of commercializing these technologies, we have seen the power and potential of AI to support workers, diagnose diseases, and improve our national security,” said a statement accompanying the order.

Guiding AI Principles for Government Are Under Development

The administration is already conscious of the need for principles surrounding the use of AI, and earlier this year signed on to the Organisation for Economic Co-operation and Development’s Principles of Artificial Intelligence along with 41 other countries.

At the U.S. level, “our regulatory approach will be a risk-based approach — not all AI is the same. Some AI use cases will require more oversight, other AI cases don’t,” said Lynne Parker, assistant director for AI in the White House Office of Science and Technology Policy, who spoke at FedTalks 2019 earlier this month. 

She expects final guidance later this summer, and agencies will then have six months to comply.

The vision of AI in government in the next few years is fairly consistent across agencies. 

“For us, it means taking all the things we do manually now and, to the extent possible, converting them to AI,” said Bill Pratt, director of strategic technology management at the Department of Homeland Security, who also spoke at FedTalks. “It’s doing the job we’re doing now, but we can do it better and with more value with AI.”

At DOD, the Joint Artificial Intelligence Center is the key to that agency’s AI strategy. The goal of the JAIC is to accelerate the pace of AI adoption, mostly through small, narrowly focused projects — for example, predictive maintenance for the H-60 helicopters used to help fight forest fires, JAIC Director Lt. Gen. Jack Shanahan says in the Government Matters documentary.

“Small-scale projects lead to big lessons,” he said. “The users will decide if it’s a success or not.”
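Shanahan’s example hints at what such a narrowly scoped project can look like in practice: predictive maintenance is typically framed as a supervised learning problem, with sensor readings used to predict whether a component will fail soon. The sketch below illustrates that framing on synthetic data; the feature names and failure labels are invented for illustration and are not drawn from the JAIC’s actual H-60 work.

```python
# Minimal sketch: predictive maintenance framed as supervised classification.
# The sensor features and failure labels below are synthetic placeholders,
# not data from the JAIC's H-60 program.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-flight sensor summaries: e.g., engine temperature, vibration, hours flown.
X = rng.normal(size=(1000, 3))
# Synthetic label: 1 means the part failed within some window after the flight.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Rank examples by predicted failure risk so maintainers know what to inspect first.
risk = model.predict_proba(X_test)[:, 1]
print("Highest-risk examples:", np.argsort(risk)[::-1][:5])
```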

Steve Perry, enterprise data integration manager at the Small Business Administration, outlined one AI success at the recent FCW Data & Analytics Summit. The agency realized that some intermediaries weren’t providing the number of loans that the SBA expected from them. Concerned that discrimination might explain the gap, the SBA used AI to search the data for patterns.

“It turned out that the data was correct, it was just missing supplemental information,” he said. “It made sense. The risks in those locations were just higher by default.”
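Perry did not describe the agency’s tooling, but the kind of pattern search he outlines can be sketched simply: flag intermediaries whose loan volume falls well short of expectations, then test whether a legitimate factor accounts for the gap. Everything in the example below, including the lender counts and the local risk index, is invented for illustration.

```python
# Minimal sketch of the pattern search described above, on invented data.
import numpy as np

rng = np.random.default_rng(1)
expected = rng.integers(80, 120, size=50)                   # expected loans per intermediary
risk_index = rng.uniform(0, 1, size=50)                     # hypothetical local risk score
actual = (expected * (1 - 0.4 * risk_index)).astype(int)    # shortfall driven by risk here

# Flag intermediaries whose shortfall is unusually large relative to peers.
shortfall = (expected - actual) / expected
flagged = np.where(shortfall > shortfall.mean() + shortfall.std())[0]
print("Intermediaries flagged for a closer look:", flagged)

# A strong correlation between shortfall and the risk index suggests the gap
# reflects legitimate local risk rather than discriminatory lending.
print("Correlation with risk index:", round(np.corrcoef(shortfall, risk_index)[0, 1], 2))
```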


Researchers Attempt to Make AI More Flexible

But the opacity sometimes baked into AI — even scientists don’t always understand how the algorithms behind AI programs reach their conclusions — can be a sticking point, many experts say.

AI must develop in ways that make it more flexible in the face of new information; once a system has been trained on a process, the rules it learned tend to stick, says David Gunning, program manager in DARPA’s Information Innovation Office, in the Government Matters documentary.

“If the world changes, the AI doesn’t change, like a person changes his behavior,” he says. 
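One common way practitioners guard against the problem Gunning describes is to monitor incoming data for distribution shift and retrain when the world has moved away from the training set. The sketch below uses a two-sample Kolmogorov-Smirnov test on a single synthetic feature; the data and the alert threshold are assumptions, not a method attributed to DARPA.

```python
# Minimal sketch: detect when live data has drifted from the training distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)   # what the model was trained on
live_feature = rng.normal(loc=0.8, scale=1.0, size=1000)       # the world has since shifted

stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:                                             # arbitrary alert threshold
    print(f"Drift detected (KS statistic {stat:.2f}); consider retraining the model.")
else:
    print("No significant drift detected.")
```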

Michael Amori, the co-founder and CEO of Virtualitics, says his company is developing a way to make the decision-making process within an algorithm more visible — letting people actually see what’s going on behind the scenes in the algorithm.

“If the human doesn’t understand how the answer came about, how confident the algorithm is in that answer and why did it happen, it’s hard to get people to do things,” he said at an AFCEA Emerging Technology event. “That’s really important, or people will not buy into what the AI is telling them.”
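Virtualitics has not published the details of its approach here, but one generic way to make a model’s reasoning more visible is permutation importance: shuffle each input in turn and measure how much the model’s accuracy suffers. The example below is a minimal, self-contained illustration of that idea on synthetic data, not the company’s product.

```python
# Minimal sketch: permutation importance as a window into what a model relies on.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 2 * X[:, 2] > 0).astype(int)   # only features 0 and 2 actually matter

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Features whose shuffling hurts accuracy most are the ones the model depends on.
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```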
