ModelOps vs. MLOps: What’s the Difference?
Seen as a subset of ModelOps, MLOps is a set of tools focused more on enabling data scientists and the teams they work with to collaborate and communicate when automating or adjusting ML models, Atlas says. It is concerned with testing ML models and ensuring that the algorithms produce accurate results.
MLOps is also more narrowly focused on factors such as the costs of data engineering and model training, and on validating the data that feeds into the ML model.
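As a rough illustration of what that data checking can look like in practice, the minimal sketch below flags missing columns, excessive empty values and out-of-range readings before a batch of data reaches a model. The column names, the 5 percent null threshold and the 0-to-1,000 range are illustrative assumptions, not any particular agency's standards.

    import pandas as pd

    def validate_input(df: pd.DataFrame) -> list:
        """Return a list of data quality problems found in a batch of observations."""
        problems = []
        # The model expects these columns; anything missing points to an upstream issue.
        for col in ("site_id", "observed_at", "measurement"):
            if col not in df.columns:
                problems.append(f"missing column: {col}")
        if problems:
            return problems
        # Too many empty measurements usually means a broken extraction or digitization step.
        null_rate = df["measurement"].isna().mean()
        if null_rate > 0.05:
            problems.append(f"{null_rate:.1%} of measurements are missing")
        # Values outside a plausible range suggest legacy-data errors rather than real readings.
        out_of_range = ~df["measurement"].dropna().between(0, 1000)
        if out_of_range.any():
            problems.append(f"{out_of_range.sum()} measurements fall outside the 0-1000 range")
        return problems

A pipeline would typically run a check like this on every new batch and pause training or scoring whenever it reports problems.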
“It’s the part that makes it worthwhile or not if ModelOps can be successful,” Atlas says.
Federal ModelOps Use Cases
There are many use cases for ModelOps in government, AI experts say.
ModelOps is particularly helpful in the federal context for data management and improving data quality, Halvorsen says. That’s in large part because agencies house reams of legacy data, much of it on paper, that they are digitizing.
That legacy data likely includes errors introduced over the years, and those errors could affect models trained on it. They can be surfaced through DataOps and ITOps tools, Halvorsen says.
Many organizations, including agencies, use ML models to analyze drone footage and other surveillance imagery to detect changes from previous observations, Atlas says. Automating that through ModelOps could be useful to agencies including USDA, the Army Corps of Engineers and others that perform observations in the field and analyze data.
DISCOVER: Data poisoning is evolving alongside AI.
ModelOps also helps agencies check whether the data they are collecting and using for models is current enough for the desired application. “If I’m targeting, it better be current data and not something based on a geographic survey from three years ago,” says Halvorsen, who is a former Department of Defense CIO.
ModelOps can determine data viability as well, Halvorsen says. Data viability refers to the shelf life of data, or how long it can be stored and still be useful.
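A shelf-life check of that kind can be as simple as comparing a record's timestamp against a maximum acceptable age. In the sketch below, the 90-day cutoff and the field names are illustrative assumptions; the right threshold depends on the mission.

    from datetime import datetime, timedelta, timezone

    MAX_AGE = timedelta(days=90)  # illustrative shelf life; targeting use cases may demand days or hours

    def is_viable(observed_at, now=None):
        """Return True if a record is still fresh enough to feed a time-sensitive model."""
        now = now or datetime.now(timezone.utc)
        return now - observed_at <= MAX_AGE

    # A geographic survey from three years ago would fail the check.
    three_years_ago = datetime.now(timezone.utc) - timedelta(days=3 * 365)
    print(is_viable(three_years_ago))  # False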
Additionally, ModelOps tools can account for whether the volume of data being used for a model is large enough to dilute the impact of any errors within it, Halvorsen says. An agency may also take the opposite approach, training a model on a smaller but more accurate data set.
How Can Agencies Realize the Benefits of ModelOps?
ModelOps is not particularly difficult to implement, but it often fails when IT leaders don’t invest enough in cleaning their data.
“People don’t have a good understanding of their data, and they frankly don’t want to pay to restructure and in some cases rearchitect the data to make it more valuable for use in an AI development,” Halvorsen says.
To get the benefits of ModelOps, there must be strong partnerships and communication among data scientists, engineers, IT security teams and other technologists, Atlas says.
MORE FROM FEDTECH: AIOps helps administrators gain an edge over network disruptions.
“The tricky part of ModelOps is that you are usually crossing a couple of different departments that have competing priorities, but it behooves them to work together to get that benefit,” she says.
It helps to have a leader such as a chief data officer to bridge those gaps and make sure the teams are all working together, Atlas says.
The title of the convening figure — whether it’s a CIO, CDO or chief AI officer — matters less than it being someone who “has the money and the authority to follow the fixes” and implement needed changes in the data or models, Halvorsen says. Given the current state of budgeting, that will probably continue to be CIOs, he says.
With that leadership in place, the focus should turn to how much data the agency is actually using, and to getting more value out of data that is lying fallow when it comes to training models.
“You have to increase the value you’re getting from your data and how much data you’re using, but you also have to make sure that data is high quality,” Halvorsen says. “Sometimes this is hard because people want results faster, particularly in government where it can be less about money and more about showing results.”
“You have to resist the urge to say, ‘OK, well, I’ll just throw the AI at my current data,’” he says. “That will give you crappy results.”
UP NEXT: The State Department is improving the state of federal AI.