Mar 14 2022
Software

Leveraging Standards to Optimize AI at the Edge: What Agencies Need to Know

Government and industry must partner to fulfill the promise artificial intelligence holds for analyzing and acting on data captured at the edge.

Imagine you’re waiting at the light at a busy intersection. Because there have been a lot of accidents there, the light is timed to hold red in both directions for an extra 15 seconds. But that delay contributes to traffic backups, which is why you’re still sitting at the light.

What if visual analysis could sense in real time that there’s currently no cross traffic, making it safe to temporarily shorten the red phase? What if it could determine that traffic is heavy and cars are speeding, making it safer to hold the red even longer?
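
In code, that decision could be as simple as the following sketch; the sensor inputs and thresholds are hypothetical, chosen only to illustrate the idea.

```python
# A minimal sketch of the adaptive timing logic described above. The
# sensor inputs (cross_traffic_count, average_speed_mph) and every
# threshold are hypothetical values chosen only for illustration.

BASE_HOLD_SECONDS = 15  # the extra all-red hold from the example

def red_hold_seconds(cross_traffic_count: int, average_speed_mph: float) -> int:
    """Return how long to hold the extra red phase given live conditions."""
    if cross_traffic_count == 0:
        # No cross traffic detected: safe to shorten the hold.
        return 5
    if cross_traffic_count > 10 and average_speed_mph > 45:
        # Heavy, fast-moving traffic: extend the hold for safety.
        return BASE_HOLD_SECONDS + 10
    return BASE_HOLD_SECONDS

print(red_hold_seconds(0, 0.0))    # 5: no cross traffic, speed the cycle up
print(red_hold_seconds(12, 50.0))  # 25: heavy, fast traffic, slow it down
```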

That’s a simple example of the power of artificial intelligence at the edge. And it’s why both civilian and military government agencies are eager to take advantage of AI at the edge to increase safety, improve services, avoid equipment outages, better manage the environment, make predictions and take informed, real-time actions.

And yet, AI at the edge has been held back by a lack of open standards. Different edge devices use different and occasionally incompatible software, impeding interoperability.

Additionally, with many potential hardware manufacturers come differences in how devices capture data, and any variation in data formats can make information harder to analyze. The rapid pace of technological change also means proprietary solutions can quickly become outmoded, requiring costly remediation in the field.

Active collaboration between government and industry on developing and promoting open standards can result in a solid foundation on which to build the future of AI at the edge.


Open Standards Enable Stability and Future-Proofing for AI

Because they involve a still-emerging technology, AI solutions are often built around proprietary designs. That makes them difficult for anyone who isn’t a data scientist to understand and manipulate, so agencies have few options but to invest in high-priced experts, which limits exploration of AI use cases.

Just as significant, it leaves AI solutions vulnerable to technology change. That’s a huge issue at the edge, where organizations might operate tens of thousands of Internet of Things devices. It can be time-consuming and costly to update AI software in edge devices to take advantage of the latest technology — say, 5G for transmitting edge data.

Creating open standards on which vendors and agencies can base their AI solutions would deliver tremendous advantages. It could act almost as a virtualization layer that would let agencies plug in and reuse AI code. It would make AI solutions cloud ready and allow agencies to focus on data capture and analysis, not on constantly updating AI software.
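
The ONNX model format is one existing open standard that illustrates the idea. In the rough sketch below (which assumes the scikit-learn, skl2onnx and onnxruntime packages, with an illustrative model and data), a model trained in one framework is exported once and can then be executed by any ONNX-compatible runtime on an edge device:

```python
# A minimal sketch of standards-based model portability using ONNX, an
# existing open interchange format. Assumes the scikit-learn, skl2onnx
# and onnxruntime packages; the model and data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import to_onnx
import onnxruntime as ort

# Train a model in whatever framework the data science team prefers...
X = np.random.rand(100, 4).astype(np.float32)
y = (X[:, 0] > 0.5).astype(int)
model = RandomForestClassifier(n_estimators=10).fit(X, y)

# ...export it once to the open ONNX format...
with open("edge_model.onnx", "wb") as f:
    f.write(to_onnx(model, X[:5]).SerializeToString())

# ...and run it with any ONNX-compatible runtime on the edge device,
# independent of the framework used for training.
session = ort.InferenceSession("edge_model.onnx")
input_name = session.get_inputs()[0].name
labels = session.run(None, {input_name: X[:1]})[0]
print(labels)
```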

RELATED: How will agencies make use of artificial intelligence and predictive analytics in 2022?

Portability and Reusability Can Deliver Efficiency and Cost Savings

Designing AI solutions to leverage a common data format and containerization would take these advantages even further. A containerized microservices architecture would enable the reusability, portability and longevity of AI solutions. Microservices are well suited to edge applications because they reduce the software footprint on low-powered, purpose-specific edge devices.

Additionally, complex edge deployments, especially for the military, can involve multiple applications for gathering a broad range of data. Often, these applications simply don’t work together. But with open standards and microservices, agencies could integrate these applications at the edge to combine the data they need to make fast, accurate decisions.
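
As a concrete sketch, a single containerized AI microservice might look like the snippet below, which assumes the FastAPI and uvicorn packages; the endpoint name, payload fields and decision logic are illustrative stand-ins for a real model:

```python
# A minimal sketch of one AI microservice in such an architecture,
# assuming the FastAPI and uvicorn packages. The endpoint, payload and
# decision logic are illustrative; in practice, each service would be
# packaged into its own container image and deployed to edge devices.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="edge-traffic-analyzer")

class SensorReading(BaseModel):
    cross_traffic_count: int
    average_speed_mph: float

@app.post("/analyze")
def analyze(reading: SensorReading) -> dict:
    # Stand-in for a real model call. Because the API is small and
    # standard, other edge services (a camera feed processor, say)
    # can consume it without knowing anything about its internals.
    extend_red = reading.cross_traffic_count > 10 and reading.average_speed_mph > 45
    return {"extend_red": extend_red}

# Run with: uvicorn service:app --host 0.0.0.0 --port 8080
```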

Likewise, designing for a service mesh — an infrastructure layer for service-to-service communication over a network — would enable one-click updates to a large ecosystem of IoT devices. That’s especially important for rapidly addressing new security concerns as they emerge.

LEARN MORE: How are agencies making use of edge computing in the field?

Explainability Is Key for Building Public Trust in AI Models

AI solutions can involve very large data sets, and they can perform analysis intended to drive consequential decisions. An organization’s IT teams, users and the general public must be able to understand AI models and trust their outputs. Because of this, explainability is key.

Explainability goes beyond the infrastructure-related issues of standards and portability to address how AI models are designed. Constraining the types of intelligence that are used can make AI models more understandable and trusted. It can also minimize the hardware required, which is beneficial at the edge, where low cost, high efficiency and small form factor are crucial.

What’s more, many potential AI applications are “brownfield” opportunities where data is already being captured, as in the traffic light example. Some applications of AI at the edge require little to no additional hardware. Simple logistic regression, linear regression or other standard machine learning models can power decision-making at the edge without having to rip and replace existing infrastructure.
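
As a sketch of how lightweight such a model can be, the snippet below (assuming scikit-learn) fits a logistic regression to hypothetical readings of the kind an intersection already captures:

```python
# A minimal sketch of the kind of simple, explainable model described
# above, assuming scikit-learn. The features and labels are hypothetical
# stand-ins for signals an intersection is already capturing.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [cross_traffic_count, average_speed_mph]; label 1 = extend red.
X = np.array([[0, 0], [2, 20], [8, 35], [12, 50], [15, 55], [1, 15]])
y = np.array([0, 0, 0, 1, 1, 0])

model = LogisticRegression().fit(X, y)

# The coefficients can be read directly, which is what makes the model's
# behavior inspectable by IT teams, users and the public.
print(model.coef_, model.intercept_)
print(model.predict([[13, 52]]))  # [1] -> extend the red phase
```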

DIVE DEEPER: How is NASA bringing edge computing and artificial intelligence to space?

How Standards-Based Data Collection Can Enhance AI for Feds

There are many areas where agencies are already collecting data at the edge, such as infrastructure, construction, facilities and environmental monitoring. AI can be applied to these scenarios for use cases such as predictive maintenance of machinery and equipment; real-time management of public safety issues such as water consumption and wildfire risk; and continual assessment of roads, bridges and ports.

Standards-based data collection will make it easier to leverage information that’s already being captured, and standards-based AI analysis will make it easier to identify new opportunities for real-time insights, decisions and action. All these advantages could be achieved without significant investments in highly specialized technology and data scientists.
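
As a sketch of what standards-based collection might look like in practice, the snippet below normalizes readings from two hypothetical vendors’ devices into one common record format before analysis; the vendor payloads and field names are invented, and a real deployment would follow a published schema:

```python
# A minimal sketch of standards-based data collection: readings from
# different vendors' devices are normalized into one common record
# format before analysis. The vendor payloads and field names are
# hypothetical; a real deployment would follow a published schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class StandardReading:
    device_id: str
    quantity: str      # e.g., "vibration", "water_flow"
    value: float
    unit: str
    timestamp: str     # ISO 8601

def from_vendor_a(payload: dict) -> StandardReading:
    # Vendor A reports vibration in mm/s with a Unix epoch timestamp.
    return StandardReading(
        device_id=payload["id"],
        quantity="vibration",
        value=payload["vib_mm_s"],
        unit="mm/s",
        timestamp=datetime.fromtimestamp(payload["ts"], tz=timezone.utc).isoformat(),
    )

def from_vendor_b(payload: dict) -> StandardReading:
    # Vendor B reports the same quantity with different names and units.
    return StandardReading(
        device_id=payload["serial"],
        quantity="vibration",
        value=payload["vibration_in_s"] * 25.4,  # convert in/s to mm/s
        unit="mm/s",
        timestamp=payload["time_iso"],
    )

print(from_vendor_a({"id": "a-01", "vib_mm_s": 3.2, "ts": 1647216000}))
print(from_vendor_b({"serial": "b-77", "vibration_in_s": 0.13,
                     "time_iso": "2022-03-14T00:00:00+00:00"}))
```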

That’s not to say that open standards, portability and explainability will themselves be easy to achieve. They’ll require commitment and collaboration between government and industry. But open standards will spur investment and bring the promise of AI to more government applications that can advance missions and benefit the public.
