Open Standards Enable Stability and Future-Proofing for AI
Because they involve a still-emerging technology, AI solutions are often built around proprietary designs. That makes them difficult for anyone who isn’t a data scientist to understand or modify. Agencies are left with few options but to invest in high-priced experts, which limits their exploration of AI use cases.
Just as significant, it leaves AI solutions vulnerable to technology change. That’s a huge issue at the edge, where organizations might operate tens of thousands of Internet of Things devices. It can be time-consuming and costly to update AI software in edge devices to take advantage of the latest technology — say, 5G for transmitting edge data.
Creating open standards on which vendors and agencies can base their AI solutions would deliver tremendous advantages. It could act almost as a virtualization layer that would let agencies plug in and reuse AI code. It would make AI solutions cloud ready and allow agencies to focus on data capture and analysis, not on constantly updating AI software.
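To make the idea concrete, one existing open format of this kind is ONNX, the Open Neural Network Exchange. The sketch below is an illustration, not a prescription: it trains a toy model and exports it to ONNX, after which any compliant runtime, in the cloud or on an edge device, can execute the same artifact. The toy data, file name and feature count are hypothetical.

```python
# Sketch: exporting a model to an open interchange format (ONNX).
# The toy data, file name and feature count are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as ort

# Train a toy model, standing in for an agency's existing AI workload.
rng = np.random.default_rng(0)
X = rng.random((100, 4), dtype=np.float32)
y = (X[:, 0] > 0.5).astype(int)
model = LogisticRegression().fit(X, y)

# Export to ONNX; the model is now decoupled from the library that built it.
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, 4]))]
)
with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

# Any ONNX-compliant runtime, cloud or edge, can execute the same artifact.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
print(session.run(None, {"input": X[:1]}))
```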
RELATED: How will agencies make use of artificial intelligence and predictive analytics in 2022?
Portability and Reusability Can Deliver Efficiency and Cost Savings
Designing AI solutions to leverage a common data format and containerization would take these advantages even further. A containerized microservices architecture would enable the reusability, portability and longevity of AI solutions. Microservices are well suited to edge applications because they reduce the software footprint on low-powered, purpose-specific edge devices.
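As a rough sketch of how small such a service can be, the example below wraps the exported model from the previous sketch in a single-purpose inference endpoint that could be containerized and deployed to an edge device. The /predict route and JSON payload shape are hypothetical.

```python
# Sketch: a minimal single-purpose inference microservice for the edge.
# The route name and payload shape are hypothetical illustrations.
import numpy as np
import onnxruntime as ort
from flask import Flask, jsonify, request

app = Flask(__name__)
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a body like {"features": [[0.1, 0.2, 0.3, 0.4]]}.
    features = np.array(request.get_json()["features"], dtype=np.float32)
    labels = session.run(None, {"input": features})[0]
    return jsonify({"predictions": labels.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Because the service does one job and carries only a small runtime, it keeps the software footprint on a low-powered, purpose-specific device correspondingly small.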
Additionally, complex edge deployments, especially for the military, can involve multiple applications for gathering a broad range of data. Often, these applications simply don’t work together. But with open standards and microservices, agencies could integrate these applications at the edge to combine the data they need to make fast, accurate decisions.
Likewise, designing for a service mesh — an infrastructure layer for service-to-service communication over a network — would enable one-click updates to a large ecosystem of IoT devices. That’s especially important for rapidly addressing new security concerns as they emerge.
LEARN MORE: How are agencies making use of edge computing in the field?
Explainability Is Key for Building Public Trust in AI Models
AI solutions can involve very large data sets, and they can perform analysis intended to drive consequential decisions. An organization’s IT teams, users and the general public must be able to understand AI models and trust their outputs. Because of this, explainability is key.
Explainability goes beyond the infrastructure-related issues of standards and portability to address how AI models are designed. Constraining the types of models used can make AI solutions more understandable and more trusted. It can also minimize the hardware required, which is beneficial at the edge, where low cost, high efficiency and small form factor are crucial.
What’s more, many potential AI applications are “brownfield” opportunities where data is already being captured, as in the traffic light example. Some applications of AI at the edge require little to no additional hardware. Simple logistic regression, linear regression or other standard machine learning models can power decision-making at the edge without having to rip and replace existing infrastructure.
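As a hedged illustration of that point, the sketch below fits a plain linear regression to the kind of equipment readings an agency might already capture and makes a simple maintenance decision at the edge. Every field name, value and threshold here is hypothetical.

```python
# Sketch: predictive maintenance with a standard linear regression.
# All readings and thresholds below are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical history: [temperature, vibration] readings paired with the
# number of hours until the unit next required maintenance.
readings = np.array([[68.0, 0.21], [71.5, 0.30], [75.2, 0.42], [79.8, 0.55]])
hours_to_failure = np.array([400.0, 310.0, 220.0, 130.0])
model = LinearRegression().fit(readings, hours_to_failure)

# At the edge: score the latest reading and flag it if the predicted time
# to failure falls inside a (hypothetical) 150-hour maintenance window.
latest = np.array([[78.1, 0.50]])
predicted = model.predict(latest)[0]
if predicted < 150.0:
    print(f"Schedule maintenance: ~{predicted:.0f} hours to failure")
```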
DIVE DEEPER: How is NASA bringing edge computing and artificial intelligence to space?
How Standards-Based Data Collection Can Enhance AI for Feds
There are many areas where agencies are already collecting data at the edge, such as infrastructure, construction, facilities and environmental monitoring. AI can be applied to these scenarios for use cases such as predictive maintenance of machinery and equipment; real-time management of public-safety issues such as water consumption and wildfire risk; and continual assessment of roads, bridges and ports.
Standards-based data collection will make it easier to leverage information that’s already being captured, and standards-based AI analysis will make it easier to identify new opportunities for real-time insights, decisions and action. All of these advantages could be achieved without significant investments in highly specialized technology or in scarce data science expertise.
That’s not to say that open standards, portability and explainability will themselves be easy to achieve. They’ll require commitment and collaboration between government and industry. But open standards will spur investment and bring the promise of AI to more government applications that can advance missions and benefit the public.