What Is Intelligent Automation and How Is It Being Used?
As the IBM report notes, intelligent automation can be used for a variety of tasks that federal employees “must spend hours to complete, particularly those involving paper forms and other written information.”
IA tools can enable users to “quickly analyze data by reading and interpreting information on documents faster than people can” and also plan ahead. For example, the report notes, by analyzing the way agencies use the internet and internal networks, IA tools can “predict the computer bandwidth needed to support agency employees working remotely on any given day and adjust the agency’s cloud computing capacity to meet these workers’ needs.”
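The bandwidth-prediction idea the report describes can be illustrated with a minimal sketch: forecast tomorrow's demand from recent usage, then provision capacity with a safety margin. The function names, the moving-average forecast, and the 25 percent headroom figure are all illustrative assumptions, not details from the report.

```python
from statistics import mean

def forecast_bandwidth(daily_gbps: list[float], window: int = 7) -> float:
    """Forecast tomorrow's demand as the mean of the last `window` days of usage."""
    return mean(daily_gbps[-window:])

def capacity_to_provision(forecast_gbps: float, headroom: float = 0.25) -> float:
    """Provision the forecast plus a margin so remote workers are not throttled
    on an unexpectedly busy day (headroom fraction is a hypothetical policy)."""
    return forecast_gbps * (1 + headroom)

# One week of hypothetical agency-wide usage, in Gbps
usage = [40.0, 42.5, 44.0, 43.0, 45.5, 47.0, 46.5]
needed = capacity_to_provision(forecast_bandwidth(usage))
```

A production system would use a richer model (seasonality, day-of-week effects) and would call a cloud provider's autoscaling API with the result, but the shape of the loop — observe usage, forecast demand, adjust capacity — is the same.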
IA tools can also help an agency simulate future trends, enabling finance departments to better predict spending, and can create a permanent record of financial transactions.
A blog post from the IBM Center for the Business of Government accompanying the report notes that intelligent automation, and especially artificial intelligence and machine learning, “can be used to improve disaster prediction models and disaster response mechanisms across the country.”
IA can also help with data governance. The post notes that the Defense Department’s Defense Innovation Unit has “focused on access to critical data sets, such as remotely sensed satellite imagery.” DIU has trained machine learning models to label unstructured data with subject-matter experts to produce more reliable data.
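The workflow DIU describes — subject-matter experts label a small seed set, then a trained model labels the rest of the unstructured data — can be sketched with a toy nearest-centroid classifier. Everything here (the two-feature vectors, the "vessel"/"cloud" classes, the function names) is a hypothetical illustration, not DIU's actual pipeline.

```python
from collections import defaultdict

def train_centroids(labeled):
    """Compute a per-class mean feature vector from expert-labeled examples."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (x, y), label in labeled:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    return {lbl: (s[0] / counts[lbl], s[1] / counts[lbl]) for lbl, s in sums.items()}

def predict(centroids, features):
    """Assign the class whose centroid is nearest (squared Euclidean distance)."""
    return min(
        centroids,
        key=lambda lbl: (features[0] - centroids[lbl][0]) ** 2
        + (features[1] - centroids[lbl][1]) ** 2,
    )

# Small seed set labeled by subject-matter experts (hypothetical image features)
expert_labeled = [
    ((0.9, 0.1), "vessel"), ((0.8, 0.2), "vessel"),
    ((0.1, 0.9), "cloud"), ((0.2, 0.8), "cloud"),
]
centroids = train_centroids(expert_labeled)
label = predict(centroids, (0.85, 0.15))  # model labels an unreviewed image
```

The payoff is the one the blog post points to: experts hand-label only the seed set, and the model extends those labels across the much larger body of unstructured imagery.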
Additionally, the blog notes, IA can help with stakeholder engagement. The Energy Department “has moved forward with an initiative around getting data to first responders, which involves 14 partners across both government, industry and academia.” IA tools help cut down the “time it takes to move key information assets, such as satellite imagery, into the hands of decision-makers working during or after a disaster,” the blog says.
Best Practices for Deploying Intelligent Automation
The report details several best practices IT leaders should keep in mind before rolling out IA tools.
A key one is to acknowledge that IA “is not a silver bullet, and it is not appropriate for every challenge,” and that leaders should aim not simply to automate work but to “redesign and mature the process of doing work.”
IT leaders should also seek input from users “during technology design, implementation and use to help ensure products are functional and useful to them.” In addition, IT leaders should evaluate how employees can be redirected to other tasks once IA tools free up their time.
“Agencies can redirect employees’ time to focus on complex tasks only people can do,” the report states. “For example, purchasing goods and services requires agencies to deal with reams of data, and the Department of Homeland Security turned to artificial intelligence to redirect acquisition professionals’ time from sifting through databases to find information about contractors to being able to evaluate information that might affect a contractor’s performance.”
IT leaders should also recognize that technology does not replace people, the report says, and human workers must complete tasks that technologies cannot accomplish.
Agencies should also work to establish standards for how data is formatted, stored, used, accessed and shared, the report says. To minimize bias, agencies should encourage diversity of thought.
“Intelligent automation tools are only as good as the data they use. Therefore, checking to make sure data is accurate, and free of historical biases, should be a priority for agencies,” the report notes. “DOD’s Joint Artificial Intelligence Center and its data governance council engage a diverse group of stakeholders when making data-related decisions, tapping at any given time the expertise of engineers, security experts, data scientists, ethicists and other specialists from government, industry and academia.”