High-Performance Computing in Government: Aggregating the Impact

New high-performance computing (HPC) solutions offer an aggregate approach to IT initiatives. What does this mean for federal agencies?

There’s strength in numbers. This is the functional premise of high-performance computing (HPC): the aggregation of multiple compute resources into an interconnected whole capable of tackling bigger problems in less time.

While HPC has seen substantive adoption among private enterprises and is now gaining ground in medical research, federal agencies have been slower to make the move. However, thanks to the falling cost of basic compute components, coupled with the increasing affordability and security of cloud computing services, federal organizations now stand to see significant gains from HPC deployments.

What exactly is high-performance computing, and how can it benefit federal agencies? How does HPC compare with supercomputers, and what does it look like in practice?

What Is High-Performance Computing?

“High-performance computing is the aggregation of computing power,” says Frank Downs, a member of the ISACA Emerging Trends Working Group.

“While all computing power is somewhat aggregated,” he says, “HPC is the aggregation of many different computers and systems to tackle one problem.”

Downs highlights the use of HPC frameworks to help create the first-ever picture of a black hole. Aggregating individual compute instances made it possible to sift through massive amounts of telescope data and stitch the results together into a historic image.

Cameron Chehreh, CTO and vice president of presales engineering at Dell EMC Federal, offers a similar assessment. He notes that HPC “is the practice of combining the total computing power of multiple computers to handle larger amounts of data and solve large problems.”

HPC, Chehreh notes, “has origins in the 1960s and has been critical to increasing innovation and supporting discovery across industries.”

In practice, HPC systems typically take the form of large clusters made up of individual computing nodes. According to Chehreh, these may include processing power from CPUs and GPUs on servers; tools such as the NVIDIA and Intel software development kits; frameworks including TensorFlow, MXNet and Caffe; and platforms such as Kubernetes and Pivotal Cloud Foundry.
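
To make the aggregation idea concrete, here is a minimal sketch using MPI, the message-passing standard common on HPC clusters. It is illustrative only: it assumes the mpi4py package and an MPI runtime are available, and that the script is launched across processes with a launcher such as mpiexec.

```python
# Minimal illustration of HPC-style aggregation: many processes, one problem.
# Assumes mpi4py and an MPI runtime; run with: mpiexec -n 4 python sum_demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the job
size = comm.Get_size()   # total number of cooperating processes

N = 10_000_000
# Each process takes an interleaved slice of one large problem...
partial = sum(range(rank, N, size))

# ...and the partial results are stitched back together on process 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} processes computed sum(0..{N-1}) = {total}")
```

The same partition, compute and reduce pattern underlies far larger jobs; only the node count and the per-node work change.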

Cloud solutions play a critical role in new HPC deployments as a way to decouple performance from local compute resources. Robust and reliable public, private or multicloud services now make it possible to create customized computing nodes that deliver a more unified HPC approach.
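
As one hedged illustration of this decoupling, the sketch below uses the AWS boto3 SDK (one option among several; Azure and Google Cloud offer equivalents) to provision a batch of identical worker nodes on demand. The image ID and instance type are placeholders, not recommendations.

```python
# Hedged sketch: provisioning cloud instances as HPC worker nodes with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical cluster node image
    InstanceType="c5.18xlarge",        # a compute-optimized node type
    MinCount=4,                        # launch the cluster as a unit...
    MaxCount=4,                        # ...or fail rather than run partial
)
node_ids = [i["InstanceId"] for i in response["Instances"]]
print(f"Provisioned {len(node_ids)} worker nodes: {node_ids}")
```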

DIVE DEEPER: How are agencies making use of edge computing in the field?

How Can HPC Benefit Government Agencies?

For federal government agencies, HPC solutions offer multiple benefits including:

Increased speed. “One of the biggest benefits of HPC is speed,” Chehreh says. This is especially critical as the volume of data processed by government agencies increases exponentially — “having HPC solutions store and analyze data at increased speeds allows decisions to be made quicker and with more accuracy.”

Reduced waste. Federal agencies can also reduce IT waste by adopting HPC models. “Agencies can find use for older systems and technologies by bringing them into HPC clusters,” Downs says. This piece-by-piece approach also offers resilience benefits, according to Downs: “If one part breaks, you don’t lose your computing power.” (The sketch after this list illustrates the idea.)

Improved agility. While legacy tools and technologies remain commonplace for many federal agencies, they’re not up to the challenge of today’s data-driven IT environments. “1,000 times the data created by 1,000 times more users will break traditional IT infrastructure,” Chehreh says. The ability to handle these data volumes in real time is now critical to deliver relevant, actionable insight.

Long-term cost control. Cost is a key concern for federal organizations. Chehreh notes that while “HPC can have a high upfront cost, it can ultimately save organizations money by delivering results faster as well.” Combined with the ability to integrate existing IT tools where applicable, HPC helps pave the way for better cost management across federal agencies.
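
Downs’ resilience point lends itself to a small illustration. The sketch below is hypothetical: crunch stands in for any per-chunk workload, and a local process pool stands in for the cluster’s nodes. The point is the retry loop: losing one worker costs one chunk of work, not the whole job.

```python
# Hedged sketch of fault tolerance in a cluster of interchangeable workers.
# `crunch` is a hypothetical stand-in for real per-chunk computation.
from concurrent.futures import ProcessPoolExecutor, as_completed

def crunch(chunk_id: int) -> int:
    # A real cluster job might read a data shard and return a partial result.
    return chunk_id * chunk_id

def run(chunks, workers=4, max_retries=2):
    results = {}
    retries = {c: 0 for c in chunks}
    outstanding = list(chunks)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        while outstanding:
            futures = {pool.submit(crunch, c): c for c in outstanding}
            outstanding = []
            for fut in as_completed(futures):
                c = futures[fut]
                try:
                    results[c] = fut.result()
                except Exception:
                    if retries[c] < max_retries:
                        retries[c] += 1
                        outstanding.append(c)  # one failure = one retried chunk
    return results  # chunks that exhausted their retries are simply absent

if __name__ == "__main__":
    print(sum(run(range(16)).values()))  # sum of squares 0..15 -> 1240
```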

Downs does highlight two potential challenges for HPC deployments: management and security.

“Management can be difficult,” he says. “If all compute systems aren’t visible or available, there can be an issue.”

He also notes that HPC is “more of a headache for security, whether it’s open or closed. More connected computing nodes means more things to protect.”

It makes sense: With connections that may run the gamut from secure onsite servers to publicly managed cloud instances and private cloud deployments, ensuring security across the HPC continuum is inherently challenging.

LEARN MORE: How does high-performance computing power medical research?

HPC vs. Supercomputers

Although the terms are often used interchangeably, there are distinct differences between HPC deployments and their supercomputer counterparts.

According to Downs, “a supercomputer is one big computer, while high-performance computing is many computers working toward the same goal.” He also notes that “supercomputers are customized to perform a specific task, but HPCs can be adjusted to meet other requirements.”

For Chehreh, the separation between the two is smaller: “Supercomputing generally refers to large supercomputers that equal the combined resources of multiple computers, while HPC is a combination of supercomputers and parallel computing techniques.”

[Image: The High-Performance Computing User Facility at the National Renewable Energy Laboratory features state-of-the-art computational modeling and predictive simulation capabilities that help researchers and industry reduce the risks and uncertainty of adopting new energy technologies. Source: NREL]

In effect, this means that supercomputers and HPC deployments leverage the same resources; they simply do so in different ways. Supercomputers are also typically much more expensive than their HPC counterparts, in part because components can’t be easily added or removed.

And while time-based, per-use supercomputer options exist, they often have long waiting lists and may exceed the budgets of smaller federal agencies.

Ultimately, supercomputing and HPC are computational cousins. Both offer increased processing speed, but supercomputers are purpose-built to tackle specific problems, while HPC solutions, like a polymath, can be adapted to many different ones.

Exploring Practical HPC Potential

The practical applications of HPC in the federal government are seemingly endless. “Anywhere there are large quantities of data, there’s a use case for HPC,” Chehreh says. “Federal use cases of HPC range from R&D in driverless vehicles to high-tech hospitals that can help in the battle against opioids and improve drug effectiveness, and identifying patterns that can help the Department of Homeland Security reduce the trafficking of people and drugs.”

Additionally, Chehreh says, HPCs have the potential to save federal employees as much as 30 percent of working time — the equivalent of 1.1 billion working hours over the next five to seven years.

These are big performance shoes to fill, but government agencies are already stepping up to the challenge. Work at the Argonne Leadership Computing Facility and the Los Alamos National Laboratory Parallel Reconfigurable Observation Environment is developing HPC frameworks designed to speed the pace of discovery and innovation across federal agencies.

The Energy Department is also leveraging HPC to help ramp up research initiatives. “When solving for the next generation of energy efficiency or nuclear advancement,” Chehreh says, “that research is completed on HPC clusters. HPC allows scientists to solve for advanced complex calculations in reasonable amounts of time to advance science.”

Other HPC applications include use in healthcare to help sequence the human genome, advance the development of drug treatments and — not surprisingly — respond to the coronavirus pandemic.

Early in the pandemic, says Chehreh, the White House launched the COVID-19 HPC Consortium. Fighting COVID-19, he notes, “has required extensive research in areas like bioinformatics, epidemiology and molecular modeling to understand the threat and to develop strategies to address it.”

For federal agencies, high-performance computing offers a way to combine existing and cutting-edge technologies into a cluster-based, computational whole that significantly increases processing speed, reduces IT waste, enhances data throughput and helps streamline long-term spending.

MORE FROM FEDTECH: Find out how NASA is using a high-performance computer in space.