
Nov 21 2025
Artificial Intelligence

Resilience Is the ‘New Frontier’ of AI Performance

Continuously monitoring model health scores is key.

Agencies can’t implement artificial intelligence without contextualizing the data used to train their models to ensure its quality.

Using a “health score” to baseline a model when it’s initially trained and when it’s deployed — and then continuously monitoring what changes — allows agencies to see drift over time.
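The baseline-and-monitor approach described above can be sketched in code. This is a minimal illustration, not any agency's implementation: it assumes the health score is derived from comparing a model's live prediction distribution against a distribution captured at deployment, using the population stability index (PSI), one common drift metric. The function names, bin count, and thresholds are all illustrative.

```python
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two score distributions."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    l_frac = np.histogram(live, bins=edges)[0] / len(live)
    # Floor the fractions to avoid division by zero and log(0)
    b_frac = np.clip(b_frac, 1e-6, None)
    l_frac = np.clip(l_frac, 1e-6, None)
    return float(np.sum((l_frac - b_frac) * np.log(l_frac / b_frac)))

def health_score(baseline: np.ndarray, live: np.ndarray) -> float:
    """Map drift onto a 0-1 health score: 1.0 means no drift from baseline."""
    return max(0.0, 1.0 - psi(baseline, live))

rng = np.random.default_rng(0)
baseline = rng.normal(0.5, 0.1, 5000)   # scores recorded at deployment
live = rng.normal(0.6, 0.15, 5000)      # scores observed months later

print(health_score(baseline, baseline))       # identical data: exactly 1.0
print(health_score(baseline, live) < 0.9)     # drifted: flag for review
```

Recomputing the score on a schedule, rather than once, is what makes the drift visible over time: a single snapshot cannot distinguish a stable model from one that is slowly degrading.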

A good way to think of model drift is to consider how a smartphone interface changes when it receives an update. The user must expend cognitive energy determining where settings have moved; model drift involves much bigger shifts.

“Resilience is the new frontier of performance, because these things are changing not only faster but to a degree that is difficult to project in terms of trajectory,” said Munish Walther-Puri, head of AI security services for TPO Group, speaking on a panel at Defense TechConnect in National Harbor, Md.


Health Scores Serve as Guideposts When AI Models Stray

Consider an AI system for threat hunting: Over time, observed behavior will deviate from the baseline the model learned as normal, and in those instances, the model will need to be retrained.
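The behavioral-analytics idea above can be illustrated with a toy detector: learn a baseline of "normal" activity, then flag values that deviate from it. The z-score test, the 3-sigma threshold, and the login-rate feature are generic stand-ins, not the tooling any particular agency uses.

```python
import statistics

def train_baseline(logins_per_hour: list) -> tuple:
    """Learn what 'normal' looks like from historical activity."""
    return statistics.mean(logins_per_hour), statistics.stdev(logins_per_hour)

def is_anomalous(value: float, mean: float, stdev: float, k: float = 3.0) -> bool:
    """Flag values more than k standard deviations from the learned baseline."""
    return abs(value - mean) > k * stdev

history = [12, 14, 13, 15, 11, 13, 14, 12]   # historical login rates
mean, stdev = train_baseline(history)

print(is_anomalous(13, mean, stdev))   # typical activity: False
print(is_anomalous(90, mean, stdev))   # deviation worth investigating: True
```

When legitimate behavior shifts, as Yockey notes it will, the baseline itself goes stale, which is exactly the retraining trigger the panelists describe.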

“People behave nondeterministically some of the time,” said Patience Yockey, a data scientist specializing in operational technology cybersecurity at Idaho National Laboratory.

In other cases, agencies may not have the data readily available to train a model and must instead use a close approximation in the form of simulated data.

In still others, a sensor supplying data used to train a model might go bad, and the agency may need to find a way to keep the model running. In such instances, monitoring that model’s health score is important.
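A hedged sketch of that monitoring step: when an input sensor degrades, the fraction of usable readings drops, and checking input health per sensor lets the agency spot the failure before it silently corrupts the model's output. The sensor names and the 0.8 threshold are illustrative assumptions.

```python
import math

def sensor_health(readings: list) -> float:
    """Fraction of readings that are present and finite."""
    usable = [r for r in readings if r is not None and math.isfinite(r)]
    return len(usable) / len(readings) if readings else 0.0

def check_model_inputs(sensors: dict, threshold: float = 0.8) -> list:
    """Return the names of sensors whose health has fallen below the threshold."""
    return [name for name, readings in sensors.items()
            if sensor_health(readings) < threshold]

sensors = {
    "breaker_temp": [71.2, 70.9, 71.5, 71.1],           # healthy feed
    "line_current": [402.0, None, float("nan"), None],  # failing sensor
}

print(check_model_inputs(sensors))  # ['line_current']
```

Flagging the degraded feed gives operators a choice: fall back to a substitute signal, retrain without the feature, or accept a documented drop in the model's health score.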

“That is supercritical to having faith and confidence in the results of your models, to know how healthy or how good they are in the first place,” said Alex Jenkins, senior AI solutions engineer at Aveva.


Prioritizing Safety When Automating OT Systems

Using AI to make operational technology autonomous is more challenging than automating IT systems because OT controls and monitors physical equipment and processes.

“When something goes wrong with OT, you can feel it,” Walther-Puri said. “You will feel your own mortality.”

For this reason, developers need to constantly think about the consequences of automated OT systems failing. With the energy grid, for example, if an automated response were to close a breaker and energize a line when someone was standing next to it, that would be a very bad day.

Cyber-informed engineering helps by focusing on safety, reliability and performance.

“Even where we have used automated systems for years and do trust them for the most part, we still have a human sitting in that seat, waiting to take over at a moment’s notice, because we trust the human to be able to make those decisions more than we trust the machines,” said Jeremy Jones, critical infrastructure engineering analyst at Idaho National Laboratory.
