How AI Will Reshape the Federal Workforce
The Office of Management and Budget is preparing new policy guidance on how federal agencies should use automation technologies, paving the way for wider adoption of artificial intelligence in government. As agencies deploy more AI tools, however, a recent report argues, they should do so responsibly.
The report, “Responsible AI: A Framework for Building Trust in Your AI Solutions,” from Accenture Federal Services, recommends that as agencies continue to roll out AI-based solutions, they should get buy-in from employees and ensure these tools can be effectively managed.
According to a 2018 report from Accenture Research, 82 percent of federal executives believe that “AI will work next to humans as a co-worker, collaborator and trusted advisor” within the next two-plus years.
“A high degree of trust will be required for the workforce to increase their reliance on automated systems for often life-impacting decisions,” the new report says. “This trust can develop from a widespread understanding for how these decisions are made, ability to guide the machine as it learns, as well as knowledge about how humans and machines augment each other for improved outcome.”
To build that trust, agency IT leaders will need to treat AI algorithms the same way they treat software, and determine how they will be developed, tested, maintained and monitored, according to Dominic Delmolino, CTO of Accenture Federal Services. “Don’t let anyone tell you otherwise,” he says.
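To make the software analogy concrete, here is a minimal sketch, in Python, of what one such lifecycle check might look like: a release test that fails if the model’s accuracy on held-out cases drops below a baseline. The model, data and threshold are illustrative assumptions, not anything prescribed by the report.

```python
# A minimal sketch (illustrative, not Accenture's process) of treating an AI
# model like software: a release gate that tests the model before deployment.

def accuracy(model, cases) -> float:
    """Fraction of labeled cases the model gets right."""
    correct = sum(model(features) == label for features, label in cases)
    return correct / len(cases)

def check_model_before_release(model, holdout_cases, baseline=0.90) -> None:
    # Hypothetical threshold: block deployment if the model regresses.
    score = accuracy(model, holdout_cases)
    assert score >= baseline, f"model accuracy {score:.2f} is below {baseline}"

# Example with a trivial stand-in "model" and hypothetical labeled data.
toy_model = lambda case: "approve" if case["risk"] < 0.5 else "escalate"
holdout = [({"risk": 0.2}, "approve"), ({"risk": 0.8}, "escalate")]

check_model_before_release(toy_model, holdout)
print("model passed release checks")
```

Run as part of a build pipeline, a check like this gives an AI tool the same pass/fail discipline that unit tests give ordinary software.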
How Agencies Can Create Responsible AI Tools
There are some nuances unique to AI tools, he adds. The data used to train an algorithm becomes part of the AI tool, shaping how it learns and operates, just as a person’s background and experience do. It’s also important for agencies to be able to explain why AI tools make certain recommendations and take certain actions. For example, Delmolino says, an AI tool could tell an employee that the agency had taken a particular action over the past two years when circumstances were similar.
This is the concept of explainable AI, which the report defines as “systems with the ability to explain their rationale for decisions, characterize the strengths and weaknesses of their decision-making process, and convey an understanding of how they will behave in the future.”
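As a rough illustration of what such a rationale could look like in code, the Python sketch below recommends an action and cites the most similar past cases behind it, echoing Delmolino’s precedent example above. Every field name, case and similarity measure here is a hypothetical assumption, not something drawn from the report.

```python
# A minimal sketch of precedent-based explainability: recommend an action
# and show the similar past cases that drove it. All data is hypothetical.
from dataclasses import dataclass

@dataclass
class PastCase:
    year: int
    features: dict[str, float]  # e.g., claim amount, assessed risk
    action_taken: str

def similarity(a: dict[str, float], b: dict[str, float]) -> float:
    # Inverse Euclidean distance over shared numeric features.
    dist = sum((a[k] - b[k]) ** 2 for k in a if k in b) ** 0.5
    return 1.0 / (1.0 + dist)

def recommend_with_rationale(current, history):
    ranked = sorted(history, key=lambda c: similarity(current, c.features),
                    reverse=True)
    top = ranked[:3]  # the closest precedents
    # Recommend the action most common among the closest precedents.
    recommendation = max({c.action_taken for c in top},
                         key=lambda act: sum(c.action_taken == act for c in top))
    rationale = [f"{c.year}: agency took '{c.action_taken}' "
                 f"(similarity {similarity(current, c.features):.2f})"
                 for c in top]
    return recommendation, rationale

history = [
    PastCase(2017, {"amount": 1200.0, "risk": 0.2}, "approve"),
    PastCase(2018, {"amount": 1150.0, "risk": 0.3}, "approve"),
    PastCase(2018, {"amount": 9000.0, "risk": 0.9}, "escalate"),
]
action, why = recommend_with_rationale({"amount": 1100.0, "risk": 0.25}, history)
print(action)        # approve
for line in why:     # the precedents an employee would see
    print(" -", line)
```

The point is not the particular similarity measure but that the tool hands the employee its evidence along with its answer.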
Such explainable AI is “super important” for the government, Delmolino says. Some federal employees may have an inherent distrust of an algorithm that takes over part of their jobs and decision-making, he adds. “If you are looking to get impact from AI, it’s really important to get your folks on board,” he says. To do that, agency IT leaders need to ensure that their AI tools surface the rationale behind their decisions.
Users can retain agency by either agreeing with or overriding an AI solution’s decision. If the user does override the system, managers and IT leaders should want to know so that the AI can be retrained, according to Delmolino.
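One plausible mechanism for that feedback loop, sketched below in Python, is to log every decision, flag cases where the user overrode the AI, and treat the user’s decision as the corrected label for retraining. The file name and field names are assumptions for illustration, not anything the report specifies.

```python
# A minimal sketch of an override log that feeds retraining. Hypothetical
# file name and schema; a real system would also record who overrode and why.
import datetime
import json

OVERRIDE_LOG = "overrides.jsonl"

def record_decision(case_id: str, ai_decision: str, user_decision: str) -> None:
    """Log every case; disagreements are flagged for managers and retraining."""
    entry = {
        "case_id": case_id,
        "ai_decision": ai_decision,
        "user_decision": user_decision,
        "overridden": ai_decision != user_decision,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(OVERRIDE_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

def retraining_examples() -> list:
    """Overridden cases, where the user's decision becomes the new label."""
    with open(OVERRIDE_LOG) as f:
        return [e for e in map(json.loads, f) if e["overridden"]]

record_decision("claim-001", ai_decision="deny", user_decision="approve")
print(retraining_examples())  # cases ready for relabeling and retraining
```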
The report recommends that agencies build workforce trust in the smart machines that employees will increasingly rely upon. “They can do so by teaching them how to interact, train and augment these systems,” the report says. “Organizations that fail to take these steps will find many of the benefits of AI elusive and may encounter a talent crunch within the next few years.”
How can agencies do that? Delmolino says that agency IT leaders need to talk with employees who will be using AI tools. They should discuss how workers actually want to use AI to augment or enhance their work.
Delmolino says he would like to see OMB’s automation policy include language that “says when you are effectively asking an AI to perform an activity that would normally be performed in the course of work by one of your workers, here are the supervisory elements you should include.”
Agencies hold workers and contractors accountable for their work and have rules for when and how that work is performed and justified, Delmolino says. The same should be true for AI-based tools.
“I think we will see some codification of, when you delegate that authority to automation or AI algorithms, you will be required to have it abide by similar compliance requirements,” he says.