FEDTECH: How can improved processing power in data centers boost performance for agencies?
BALASUBRAMANIAN: Think about power plants. The Department of Energy supports many of these next-generation power plant projects, but even the current generation of power plants carries an enormous amount of maintenance documentation. These plants have also been running for decades, so the historical data they hold is vast, in the millions of pages of documentation.
When it comes to maintenance, AI is starting to be used in places where these data center-class AI accelerators can ingest that large body of data and surface insights from it. That really accelerates work that previously was not possible.
These capabilities also extend to medical research. The National Institutes of Health is finding insights faster by consuming and processing data much more quickly with these data center GPUs.
NIH was historically about two years behind in consuming cancer research data. With these data center accelerators, which bring both AI capabilities and a large amount of performance and memory, NIH is able to cut that two-year lag down to a few weeks or a few months.
FEDTECH: What are some best practices to ensure you’re getting the most bang for your buck with AI investments?
UNANGST: In some cases, frankly, there’s a little bit of confusion around how and where to deploy these solutions. It’s critical, whether you’re in an enterprise environment or a federal environment, to understand what problems you’re trying to solve. And then, depending on the problem, AMD has a host of solutions, ranging from the cloud to the edge to the endpoint, that can be deployed and used for the right applications.
It’s critical to start with understanding the problems you want to solve and the types of data models that you want to use. When you take that approach, this huge world of AI starts to get much clearer.
FEDTECH: How does the AMD Instinct MI300A cater to the evolving demands of supercomputing, and what advantages does it bring to scientific research applications?
BALASUBRAMANIAN: On the federal side, the Department of Defense has a distinctly different focus than enterprise customers do, and a different set of tangible benefits it is looking for.
We believe what the MI300A offers in the supercomputing field, especially for government agencies, is the empowerment of these customers with cutting-edge AI and compute capabilities that lower the barrier for AI adoption and for accelerating research at these organizations. It’s a simplified architecture that packs the best of what AMD brings from our CPU and GPU architecture into a single package.
The architecture plays a pretty big role in how it gets utilized. If you have a lot of data movement between your CPUs and GPUs, then you must balance the workload between these two really powerful technologies to come up with an ideal way to use them with the most efficient power cost and performance for your data center.
When you reduce that complexity, researchers no longer need to be programming ninjas who know how best to orchestrate data transfers and processing between the CPU and GPU. The accelerated processing unit takes care of that in a much more efficient way, routing processing to the CPUs and GPUs according to the way you have it programmed.
With the memory being local, it really boosts the performance quite a bit. It is a lot more efficient from an architecture perspective, given how it’s packaged, and it significantly simplifies the way researchers harness the power of these APUs to accelerate their research.