Jan. 30, 2024

AMD Is Empowering Agencies with AI-Enabled Client Systems

Early use cases are focused on improving employee productivity, with more on the way.

Integrated artificial intelligence is opening the door to exciting possibilities in client computing and data centers as it sparks a wave of innovation across the technology industry.

Semiconductor giant AMD has made significant strides by integrating AI engines across its lineup, from fourth-generation, Windows 11-supported Epyc and Ryzen processors to its Instinct MI300X accelerators.

The company recently unveiled the Instinct MI300A, which combines AMD central processing unit cores and graphics processing units to fuel the convergence of high-performance computing and AI.

AI integration offers federal customers powerful new capabilities and makes for a compelling use case, according to AMD’s Matt Unangst, senior director of commercial client product marketing, and Mahesh Balasubramanian, director of data center GPU product marketing. The pair sat down with FedTech to discuss why AI-enabled client systems are the future and what the company’s newest AI and computing capabilities can do for agencies.


FEDTECH: What does the power of integrated AI mean for agencies?

UNANGST: We’re seeing rapid growth in the number of applications that can take advantage of the rising AI processing power available in our client products.

Microsoft is making its AI-based digital assistant, Copilot, available across the Office environment, among other enhancements. Many of those capabilities will increasingly take advantage of the AI engines incorporated into our products. These features are focused on improving the productivity of employees wherever they may be.

We’re at the very beginning stages of applications that use this hardware. This will rapidly evolve in the next few years, with more use cases harnessing the power of AI.


FEDTECH: What are some of the capabilities that you expect soon?

UNANGST: We anticipate running large language models, similar to ChatGPT, on endpoint devices.

You can imagine a virtual chief of staff or a virtual assistant built into your laptop, where you ask questions about a certain document or a series of PowerPoint slides. This will allow employees to access information very quickly, where that previously took a lot of time and effort.
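
To make that concrete, document Q&A of this kind can run entirely on a laptop with a small local model. The sketch below uses the open-source llama-cpp-python library with a hypothetical GGUF model file, document and prompt; those names are illustrative assumptions, not AMD's software stack.

```python
# Minimal sketch of on-device document Q&A with a local LLM.
# Assumes llama-cpp-python is installed and a GGUF model file is on disk.
from llama_cpp import Llama

# Load a small model entirely on the endpoint; nothing leaves the laptop.
llm = Llama(model_path="models/assistant-7b.gguf", n_ctx=4096)

document = open("briefing_notes.txt").read()  # hypothetical local file

prompt = (
    "You are a virtual chief of staff. Answer using only the document.\n\n"
    f"Document:\n{document}\n\n"
    "Question: What are the top three action items?\nAnswer:"
)

out = llm(prompt, max_tokens=256, stop=["Question:"])
print(out["choices"][0]["text"].strip())
```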

Security applications can also take advantage of our integrated AI technology to deliver enhanced solutions, an area that is obviously a big focus of development for federal customers.


FEDTECH: How can improved processing power in data centers boost performance for agencies?

BALASUBRAMANIAN: Think about power plants. The Department of Energy helps support the building of many of these next-generation power plants, but even the current generation of power plants comes with a ton of maintenance documentation. They also hold a lot of historical data. These plants have been running for decades, so the amount of data they have is vast: millions of pages of documentation.

When it comes to maintenance, AI is starting to be used in locations where these data center-class AI devices can consume a large amount of data and find insights in it. It really helps accelerate work that previously was not possible.
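
As one illustration of how such systems surface insights from millions of pages, the sketch below shows embedding-based retrieval over maintenance text using the open-source sentence-transformers library; the model name, sample passages and query are invented for illustration, not drawn from DOE's or AMD's actual pipelines.

```python
# Sketch: find the maintenance passages most relevant to an engineer's question
# by comparing text embeddings. Model name and sample text are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open embedding model

# In practice these would be chunks drawn from millions of pages of manuals.
chunks = [
    "Turbine bearing temperatures above 95 C require an immediate inspection.",
    "Replace feedwater pump seals every 18 months or 12,000 operating hours.",
    "Annual fire-suppression system tests are documented in form MX-204.",
]

query = "How often should pump seals be replaced?"

chunk_vecs = model.encode(chunks, normalize_embeddings=True)
query_vec = model.encode(query, normalize_embeddings=True)

scores = chunk_vecs @ query_vec        # cosine similarity via dot product
print(chunks[int(np.argmax(scores))])  # best-matching passage
```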

These capabilities also extend to medical research. The National Institutes of Health is finding insights faster by consuming and processing data with these data center GPUs.

NIH was always about two years behind in consuming cancer research data. With these data center accelerators, which bring both AI capabilities and a large amount of performance and memory, it is able to cut that two-year lag down to a few weeks or a few months.


FEDTECH: What are some best practices to ensure you’re getting the most bang for your buck with AI investments?

UNANGST: In some cases, frankly, there’s a little bit of confusion around how and where to deploy these solutions. It’s critical, whether you’re in an enterprise environment or a federal environment, to understand what problems you’re trying to solve. And then, depending on the problem, AMD has a host of solutions, ranging from the cloud to the edge to the endpoint, that can be deployed and used for the right applications.

It’s critical to start with understanding the problems you want to solve and the types of data models that you want to use. When you take that approach, this huge world of AI starts to get much clearer.


FEDTECH: How does the AMD Instinct MI300A cater to the evolving demands of supercomputing, and what advantages does it bring to scientific research applications?

BALASUBRAMANIAN: Federal customers such as the Department of Defense have a distinctly different focus than enterprise customers do in the tangible benefits they’re looking for.

We believe what the MI300A offers in the supercomputing field, especially for government agencies, is the empowerment of these customers with cutting-edge AI and compute capabilities that lower the barrier to AI adoption and accelerate research at these organizations. It’s a simplified architecture that packs the best of what AMD brings from our CPU and GPU architectures into a single package.

The architecture plays a pretty big role in how it gets utilized. If you have a lot of data movement between your CPUs and GPUs, then you must balance the workload between these two really powerful technologies to come up with an ideal way to use them with the most efficient power cost and performance for your data center.

When you reduce that complexity, researchers no longer need to be programming ninjas who know how best to use the architecture for data transfers and processing between the CPU and GPU. The accelerated processing unit takes care of it in a much more efficient way. Processing gets routed to the CPUs and GPUs according to the way you have it programmed.

With the memory being local, it really boosts the performance quite a bit. It is a lot more efficient from an architecture perspective, given how it’s packaged, and it significantly simplifies the way researchers harness the power of these APUs to accelerate their research.
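
To see the data movement that the APU design removes, here is a rough PyTorch sketch of the explicit host-to-device copies a discrete-GPU workflow performs; it assumes a GPU-enabled PyTorch build (ROCm builds also use the "cuda" device name) and is an illustration, not AMD's programming model.

```python
# Sketch: the explicit copies a discrete-GPU workflow performs.
# Assumes a CUDA- or ROCm-enabled PyTorch build (ROCm also uses "cuda" here).
import torch

x = torch.randn(4096, 4096)   # data starts in host (CPU) memory

x = x.to("cuda")              # explicit copy across the bus to GPU memory
y = x @ x                     # compute runs on the GPU
y = y.to("cpu")               # explicit copy back for CPU-side post-processing

# On an APU with one unified memory pool, the CPU cores and GPU see the same
# physical memory, so the two .to() copies above are the overhead that goes away.
```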

Brought to you by AMD
