Aug 23 2024
Data Analytics

TechNet Augusta 2024: AI Challenges at the Tactical Edge and How to Overcome Them

DDIL environments make running analytics daunting for soldiers, but not impossible.

Industry is increasingly working to make machine learning and artificial intelligence useful to warfighters at the tactical edge in what are known as denied, disrupted, intermittent and limited impact (DDIL) environments.

Oracle has begun running ML over its database solution to help soldiers in forward deployments or war zones faced with technical limitations (such as limited connectivity and power) analyze electronic warfare data. The analytics tag and classify potential devices and vehicles belonging to friendly forces or adversaries in the field, allowing commanders to quickly allocate resources and decide whether to fire on a unit or simply hit it with a radio frequency burst to disable it.

The tech company’s highly available enterprise cloud extends to its smallest backpackable, disconnected edge devices, which connect to RF and satellite radios, as well as anything with an Ethernet port, to transmit this electronic warfare data.

 Oracle is “looking forward, looking disconnected,” said Jeff Fleming, technical cloud account executive for the company, speaking at AFCEA’s TechNet Augusta 2024 conference on Tuesday.

The Challenges DDIL Environments Present for Modeling

Soldiers in the field must have access to all the data needed to train a model, but defense intelligence data sets are often sparse because certain enemy signatures may never have been encountered before. As a result, ML and AI models must generalize well, said Jay Meil, vice president of AI at SAIC, at the conference.

What’s more, models must make inferences on lightweight systems from locations where multiple edge sensors collecting data may not be able to talk to each other.

“We don’t have data centers; we don’t have large high-performance computing and computer processing,” Meil said. “But we need models to be able to do the same sorts of things in those disconnected environments.”

There is also a wide array of operational security threats in DDIL environments: data or model interception, adversarial ML, data poisoning or spoofing, and result tampering. Soldiers must ensure sensor data being fed into a model is the data that’s expected and hasn’t been altered.

Understandably, these challenges seem daunting, but they’re not insurmountable, Meil said.

Mitigating Environmental and Security Threats at the Tactical Edge

Edge optimization of AI models takes those that are “heavy,” in terms of compute and processing, and makes them more “lightweight” by, say, compressing neural networks. Calculations are cut off once a certain confidence level is reached, which still ensures high accuracy but at four times the processing speed.
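As a rough illustration of that confidence cutoff, the sketch below shows an “early exit” classifier that stops running deeper, costlier stages once a prediction is confident enough. The stage and head callables, the 0.9 threshold and the return values are hypothetical, not SAIC’s or any vendor’s implementation.

```python
import numpy as np

def softmax(logits):
    """Convert raw scores into probabilities."""
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

class EarlyExitClassifier:
    """Runs a stack of stages but stops as soon as the prediction is
    confident enough, saving compute on a lightweight edge device."""

    def __init__(self, stages, exit_heads, threshold=0.9):
        self.stages = stages          # feature extractors, cheapest first
        self.exit_heads = exit_heads  # one classifier head per stage
        self.threshold = threshold    # confidence level that triggers an exit

    def predict(self, x):
        features = x
        for depth, (stage, head) in enumerate(zip(self.stages, self.exit_heads), 1):
            features = stage(features)         # run the next (more costly) block
            probs = softmax(head(features))    # classify with this stage's head
            if probs.max() >= self.threshold:  # confident enough: stop early
                return probs.argmax(), probs.max(), depth
        return probs.argmax(), probs.max(), depth  # fell through: used every stage
```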

Distributed federated learning is another mitigation method, where models retrain themselves only after edge nodes reconnect and start sharing data again.
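A minimal sketch of that idea, assuming each edge node trains a local copy of the model while disconnected and shares its weights once links are restored, might look like the following. Weighting each node by its sample count is the common federated-averaging convention, not a detail from the talk.

```python
import numpy as np

def federated_average(node_updates):
    """Combine model weights from edge nodes that have just reconnected.

    node_updates: list of (weights, num_samples) tuples, one per node, where
    `weights` is a NumPy array of parameters trained locally while disconnected.
    """
    total = sum(n for _, n in node_updates)
    # Weight each node's parameters by how much data it trained on.
    return sum(w * (n / total) for w, n in node_updates)

# Example: three sensors come back online after operating disconnected.
node_updates = [
    (np.array([0.20, 0.80]), 500),   # node A: 500 local samples
    (np.array([0.25, 0.75]), 300),   # node B: 300 local samples
    (np.array([0.10, 0.90]), 200),   # node C: 200 local samples
]
global_weights = federated_average(node_updates)
print(global_weights)  # new shared model pushed back out to every node
```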

A robust mesh network can prioritize and transmit the messages AI is inferencing on, using chain-of-thought reasoning with generative AI to choose the optimal route for each transmission based on its payload and the metadata around it, Meil said.
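Setting aside the generative AI routing piece, the prioritization itself could be as simple as the queue sketched below; the message types and priority scores are hypothetical.

```python
import heapq

# Hypothetical priority scores; real metadata fields would come from the sensors.
PRIORITY = {"threat_detection": 0, "position_update": 1, "telemetry": 2}

def enqueue(queue, message):
    """Queue a message so higher-priority inferences transmit first."""
    score = PRIORITY.get(message["type"], 3)   # derive priority from metadata
    heapq.heappush(queue, (score, message["id"], message))

def next_to_send(queue):
    """Pop the most urgent message when a mesh link becomes available."""
    _, _, message = heapq.heappop(queue)
    return message

queue = []
enqueue(queue, {"id": 1, "type": "telemetry", "payload": "..."})
enqueue(queue, {"id": 2, "type": "threat_detection", "payload": "..."})
print(next_to_send(queue)["type"])  # threat_detection goes out first
```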

On the security side of the equation, encryption can be applied to data transmissions and blockchain employed to verify that sensor data is intact.
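The integrity check could resemble the hash chain sketched below, a simplified stand-in for a full blockchain rather than any specific product; the record format is hypothetical. Each reading is linked to the hash of the one before it, so altering any record breaks every hash that follows.

```python
import hashlib
import json

def chain_readings(readings):
    """Link each sensor reading to the hash of the previous record."""
    chained, prev_hash = [], "0" * 64
    for reading in readings:
        digest = hashlib.sha256(
            json.dumps({"data": reading, "prev": prev_hash}, sort_keys=True).encode()
        ).hexdigest()
        chained.append({"data": reading, "prev": prev_hash, "hash": digest})
        prev_hash = digest
    return chained

def verify(chained):
    """Recompute every hash; a single altered reading fails the check."""
    prev_hash = "0" * 64
    for record in chained:
        expected = hashlib.sha256(
            json.dumps({"data": record["data"], "prev": prev_hash}, sort_keys=True).encode()
        ).hexdigest()
        if expected != record["hash"] or record["prev"] != prev_hash:
            return False
        prev_hash = record["hash"]
    return True
```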

Models can be hardened by training them with adversarial learning to recognize when they’re being exposed to harmful influences. SAIC has a computer vision model that examines satellite images of ships, and it has been hardened against distorted images by exposing it to copies whose orientation and size have been altered or that have had noise added.
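That style of hardening amounts to training-time augmentation. The sketch below distorts copies of each training image, assuming images are normalized NumPy arrays; the transforms and noise level are illustrative, not SAIC’s pipeline, and size changes (which would normally use an image library) are omitted to keep it dependency-free.

```python
import numpy as np

def distort(image, rng):
    """Produce a distorted training copy: altered orientation plus additive noise."""
    rotated = np.rot90(image, k=rng.integers(0, 4))        # random 90-degree orientation change
    if rng.random() < 0.5:
        rotated = np.fliplr(rotated)                       # random mirror
    noisy = rotated + rng.normal(0, 0.05, rotated.shape)   # sensor-style noise
    return np.clip(noisy, 0.0, 1.0)

def harden_training_set(images, labels, copies=3, seed=0):
    """Augment the data so the model learns to classify ships despite distortion."""
    rng = np.random.default_rng(seed)
    aug_images, aug_labels = list(images), list(labels)
    for image, label in zip(images, labels):
        for _ in range(copies):
            aug_images.append(distort(image, rng))
            aug_labels.append(label)   # the label (ship class) is unchanged
    return aug_images, aug_labels
```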

Having a process for data quality checks, which verify that sensors are sending the right data in the right format before soldiers are alerted, is a best practice, Meil said.
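Such a quality gate could be as simple as the schema-and-range check below; the field names and limits are hypothetical examples for an RF sensor, not a fielded system’s specification.

```python
# Hypothetical schema for an RF sensor reading; real fields and limits would
# come from the sensor's interface specification.
EXPECTED_FIELDS = {"sensor_id": str, "timestamp": float, "freq_mhz": float, "power_dbm": float}
RANGES = {"freq_mhz": (30.0, 6000.0), "power_dbm": (-150.0, 30.0)}

def passes_quality_check(reading: dict) -> bool:
    """Verify a reading has the right fields, types and plausible values
    before it reaches the model or triggers an alert."""
    for field, expected_type in EXPECTED_FIELDS.items():
        if field not in reading or not isinstance(reading[field], expected_type):
            return False                      # missing or wrong-format field
    for field, (low, high) in RANGES.items():
        if not low <= reading[field] <= high:
            return False                      # physically implausible value
    return True

reading = {"sensor_id": "ew-07", "timestamp": 1724371200.0, "freq_mhz": 425.0, "power_dbm": -62.5}
assert passes_quality_check(reading)
```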

By far the biggest challenge to AI adoption in DDIL environments is training the “human element” to operate and trust models through data science literacy, Meil said.

“Being able to invert and essentially understand, back to raw data, how the system is making decisions increases the trust levels,” he said. “Increasing the trust level increases adoption, and then adopting it is going to help achieve the mission.”

 SAIC developed an interface that allows soldiers to point and click or drag and drop to explore data with built-in models that generate visuals to explain what is happening. The company is working with the Navy, Marine Corps and combat commands to put that capability in the hands of mission owners.

“We’ve actually found that the inference gets better, the models sometimes get better and they’re faster at learning these things because they understand the mission better than the data scientists, the data engineers and the software engineers,” Meil said.

Dave Nyczepir/FedTech Magazine