
Editor’s note: This is the first in a series of virtual visits with NCSA thought leaders on current topics impacting the field of high-performance computing.

What Comes Next: The State of Supercomputing at NCSA
By Bill Gropp, NCSA Director

NCSA has always been focused on using computing to solve the most difficult and challenging problems facing society at any given time. From predicting when a volcano will erupt to collaborating on building a better internet, our organization is frequently called upon to lend the best resources to solve the world’s greatest challenges. Recognizing that the supercomputer of 40 years ago was less powerful than an average cellphone today, answering that call increasingly means looking at software and computing systems working together.

From Blue Waters to Delta: How Supercomputing Continues to Evolve
For several decades up to about 2006, the performance of individual computing elements doubled every 1.5 to 2 years. This meant that application performance, in many cases, also increased exponentially with little or no effort by software developers. Of course, the story is more complicated than that; even during that time, the performance of memory did not increase at the same rate, and some applications struggled to track the increase in processor performance. But around 2006, this changed – while individual computing elements continued to shrink in size (Moore’s Law), their speed stopped increasing (the end of Dennard scaling). This happened just as NSF called for proposals for a Leadership Computing System. Blue Waters was awarded from that call and was designed around advanced yet conventional CPUs. Even at that time, the potential of new approaches, such as GPUs, was clear, and using GPUs was an option for Blue Waters even from the beginning. But most of the performance, and most of the application use, was focused on conventional processors.
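As a rough illustration of how quickly that doubling compounds, consider a short back-of-the-envelope calculation; the 20-year window here is chosen only for the example.

    # Rough growth in single-element performance if it doubles every
    # 1.5 to 2 years, the pre-2006 trend described above.
    years = 20  # illustrative window, roughly the two decades before 2006

    for doubling_period in (1.5, 2.0):
        growth = 2 ** (years / doubling_period)
        print(f"doubling every {doubling_period} years -> about {growth:,.0f}x in {years} years")

Over 20 years that works out to roughly a thousandfold to ten-thousandfold improvement, which is why applications could ride the hardware curve with so little software effort.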

Image: The NCSA Delta system (Delta cabinets in the NPCF machine room).

In the past 10 years, performance has increasingly come from specialization – for example, GPUs, while flexible and computationally powerful, are less general-purpose than “conventional” processors. Even more specialized processors have been developed for application areas such as machine learning. The supercomputers at the top of the Top500 list all have some degree of specialization in their processors; most use separate accelerators, usually GPUs. However, few applications can take advantage of these newer approaches without changes, sometimes significant, to their software – and even their algorithms. The Delta project embraces these changes, both by taking advantage of innovations in how we can compute, with its large number of GPUs, and by working with the applications community to update and evolve their applications. Learning to design and implement applications to take advantage of innovations in processors, memory and data is one of the most important goals in HPC today.
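To make the software side of that shift concrete, here is a minimal sketch of the kind of change involved in moving a dense computation from a conventional processor to a GPU. It assumes the CuPy library and a CUDA-capable GPU are available; the matrix sizes and the workload are only illustrative.

    import numpy as np

    # CPU version: a dense matrix product on a conventional processor.
    a = np.random.rand(4096, 4096)
    b = np.random.rand(4096, 4096)
    c_cpu = a @ b

    # GPU version: the same computation offloaded to an accelerator.
    # Even in this simple case, the code must change to move data
    # between host and device memory.
    import cupy as cp  # assumes CuPy and a CUDA-capable GPU

    a_gpu = cp.asarray(a)        # copy data host -> device
    b_gpu = cp.asarray(b)
    c_gpu = a_gpu @ b_gpu        # executes on the GPU
    c_host = cp.asnumpy(c_gpu)   # copy result device -> host

Even in this toy case the code has to manage data movement explicitly; for real applications, data layouts, communication patterns, and sometimes whole algorithms have to be rethought, which is exactly the work Delta is designed to support.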

The Impact of Delta and GPU Computing
With Delta, NCSA remains a major HPC provider for the National Science Foundation. We will be working with applications from the leading research groups around the country, both helping them succeed but also learning from them about the directions and needs in a broad range of applications.

I’m a big believer in use-inspired research, and having this exposure to the needs of researchers will inform the direction that we take.

Delta also puts us at the forefront of several transitions in HPC. I’ve mentioned GPUs and other innovative processors already. Another is in high-performance I/O. HPC has adopted a particular model (POSIX) that, as it turns out, is not a good match for high performance. HPC centers have worked around this by not implementing some of the features, but that can cause problems for applications that expect the standards to be followed. In what I’ll call “big data” computing, a number of different, and higher-performance, approaches have been developed. HPC can learn from these and take advantage of the development work targeting the big data systems. An important part of Delta is a “non-POSIX” I/O system that will explore these directions in high-performance I/O and help move the HPC community to a better way to manage data.
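As a rough illustration of the difference between the two models (Delta’s actual non-POSIX system isn’t described here), the sketch below contrasts POSIX-style file I/O with the object-style put/get interface common in big data systems, using an S3-compatible client as a stand-in; the endpoint, bucket, key, and path names are placeholders.

    import os
    import boto3  # assumes an S3-compatible object store is reachable

    payload = b"simulation results"  # stand-in for real application output

    # POSIX-style I/O: byte streams addressed by path, with consistency and
    # metadata semantics that are hard to deliver at scale.
    with open("/scratch/project/run_001/output.dat", "wb") as f:
        f.write(payload)
        os.fsync(f.fileno())  # POSIX durability guarantee

    # Object-style I/O: whole objects addressed by key, no in-place updates,
    # the model most "big data" storage systems expose.
    s3 = boto3.client("s3", endpoint_url="https://objects.example.edu")  # placeholder endpoint
    s3.put_object(Bucket="run-001", Key="output.dat", Body=payload)
    obj = s3.get_object(Bucket="run-001", Key="output.dat")
    data = obj["Body"].read()

The object model gives up in-place updates and much of POSIX’s consistency machinery, which is part of what makes it easier to scale; exploring how HPC applications can adopt that trade-off is what Delta’s non-POSIX I/O system is intended to help with.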

HPC in the Healthcare Space
Nightingale gives researchers at Illinois the ability to work with sensitive data, such as electronic protected health information (ePHI), in an environment that provides both advanced computing resources and full compliance with HIPAA regulations. This opens up many possibilities both for NCSA and for researchers across campus. For example, some of the modeling for the COVID-19 strategy for campus was conducted on Nightingale. We’re excited about opportunities to work with researchers in the Cancer Center at Illinois, the Carle Illinois College of Medicine, and the many researchers across campus who need a place to work with ePHI data. Our Health Innovations Project Office is both helping those researchers make use of Nightingale and has its own research projects, building on our partnerships with major healthcare providers. Nightingale brings the power of HPC to research in health sciences.

Software Applications: the All-Important ‘A’ in NCSA
Managing scientific data is challenging, to say the least. Clowder has been a great success in helping application teams deal with heterogeneous sources of data. We have a strong record in helping developers take advantage of GPUs and other advanced architectures, and I’m excited about opportunities to work with research efforts, such as those in the Center for Exascale-Enabled ScramJet Design, to combine human expertise with guided automation in code generation. Because NCSA is a Center focused on applications, I’m also pleased by the success our Industry team has had in working with our partners to make use of commercial cloud resources – we see clouds as a complementary capability to be used when it makes sense. We see tremendous opportunity in AI and machine learning – this helped Illinois win two of the seven inaugural NSF AI Institutes, one (AIFARMS) hosted at NCSA and the other with NCSA participation. Our work in visualization continually amazes me. And tying much of this together is NCSA’s long history and strong commitment to Open Source Software.

A Final Word About the Future
Though they’re mentioned above, I’d like to emphasize two points. First, NCSA is really about the applications – about solving problems through the use of advanced computing. That might mean writing software. That might mean running a supercomputer. It might mean training researchers in using existing tools. It might mean building workflows that combine commercial clouds with on-site computing. The focus is always on solving the most challenging applications. 

Second, key to our success are collaborations and partnerships, both with researchers across campus and with partners across the nation and around the world. It takes an interdisciplinary team to solve many of the most challenging problems. Collaboration and partnership are key to building those teams – and whether NCSA is leading or supporting them, being part of these partnerships is our future.
