In these scenarios, the ideal approach is to consider the workload in the context of the agency’s existing tools and processes.
“We want to make the learning curve less steep. We don’t want our customers to have to onboard a lot of new tools if they can get away with using the ones they are already trained on,” Cvetanov says. “For instance, if an AI workflow is tested and proven to work well in a virtual environment without significant performance degradation, then they can use their existing tooling to manage the GPU environment, as we have full integration with platforms like VMware.”
Pretrained AI Models Are the Way to Go
NVIDIA launched NVIDIA AI Enterprise in 2021 with the goal of making the NVIDIA AI stack more accessible to public and private sector entities. Functioning as the software layer of the NVIDIA AI platform, it comes preloaded with frameworks for developing, validating and deploying ML models.
Additionally, there are prebuilt workflows for tasks such as audio transcription, next-best action recommendation and cybersecurity threat detection.
A particular benefit is the ability of NVIDIA AI Enterprise to work with pretrained AI models. This includes models developed by NVIDIA as well as third-party models approved for use within the federal ecosystem.
NVIDIA AI Enterprise also comes with toolsets to help fine-tune a third-party model to run within its environment.
“Training a model from scratch is one of the most capital- and labor-intensive processes in the entire AI development cycle,” Cvetanov says.
The process demands large-scale compute infrastructure and high-quality datasets, along with the expertise to train, optimize and deploy the model in production.
“Having a prebuilt model takes you right to being able to run an AI model in production,” Cvetanov says. “If we can get you 50 percent of the way into the AI project, then it’s more likely to succeed.”