
Cyberinfrastructure for transformative science


One of the most exciting research advances in science and engineering in the past decade is the digitization of observational science. Fields as disparate as astronomy, biology, and environmental science are being revolutionized by digital technologies, from massive digital detectors in a new generation of telescopes to sensor arrays that characterize ecological and geological areas to advanced sequencing instruments for genomics research. The knowledge gained from data-driven discovery is already transforming our understanding of many natural phenomena, and the future is full of promise.

Data-driven discovery requires sophisticated, advanced information systems to collect, transport, store, manage, integrate, and analyze increasingly large amounts of data that cross scientific, geographical, and administrative domains. The development of cyberinfrastructure for data-driven discovery is in its infancy. Although these needs could often be met by augmenting the resources available at the National Science Foundation-funded supercomputing centers, most major data-driven discovery projects, which usually have lifetimes measured in decades, are reluctant to use the centers because of the centers' uncertain future: current Track 2 grants run for only four years, and funding for Blue Waters, the Track 1 system, expires in 2016. Yet most projects cannot afford to build the infrastructure themselves. This is a challenge for the projects, and it is a lost opportunity to leverage the expertise and cost efficiency of the supercomputing centers.

In recognition of the increasing importance of research cyberinfrastructure, I was asked to testify Feb. 23 before the U.S. House Subcommittee on Research and Science Education of the Committee on Science and Technology. NSF also recently issued a “Dear Colleague Letter” on a cyberinfrastructure framework for 21st-century science and engineering, citing the imperative for NSF to develop a long-term vision for the nation’s cyberinfrastructure.

A long-term vision poses many unique challenges for NSF, especially in the area of funding. To be useful to researchers, cyberinfrastructure must be sustained over long periods of time; it cannot be sustained through a series of short-term, loosely integrated projects. On the other hand, cyberinfrastructure must also evolve as computing technology advances; otherwise, it will rapidly become outdated. So there must be flexibility in how the funding is used.

The NSF-wide Advisory Committee for Cyberinfrastructure has begun work on developing the new cyberinfrastructure framework outlined in the letter. It has established six task forces, involving distinguished scientists and engineers from across the nation as well as NSF program officers. I am participating in three of these task forces: grand challenges, software and tools, and high-performance computing. Although the task forces are in the early stages of their work, they have already held a number of meetings to explore and discuss new concepts and strategies for developing a comprehensive national cyberinfrastructure.

For many years, NSF has been successful in deploying new computing systems that deliver extraordinary value for the U.S. research community. However, the focus of these acquisitions was on the delivery of raw computing cycles, and the funding available to support the users of these new high-performance computing systems was limited. This is unfortunate, because it favors those scientists and engineers who already use supercomputers and need little assistance, while our experience at NCSA and at many other centers indicates a growing need for high-performance computing resources in almost all fields of science and engineering.

Without adequate user support, it will be difficult for new researchers to make effective use of the available resources. High-quality support staff are among the most valuable resources of NSF’s supercomputing centers, and a fully funded user support program is needed.

Another area of concern is network bandwidth. To participate in or benefit from NSF’s HPC and data-driven discovery projects, universities need adequate network bandwidth linking them to the relevant project sites. Not all universities and colleges have that bandwidth today, an imbalance that will become more acute as data volumes increase. Indeed, the volume of data generated over the next few years in HPC and data-driven discovery will far outstrip the capacity of current networks.
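
To put the gap in perspective, here is a back-of-the-envelope sketch; the one-petabyte dataset, link speeds, and 80 percent link utilization below are illustrative assumptions, not figures from any particular project.

```python
# Rough estimate of how long it takes to move a large dataset over a network link.
# Assumed figures for illustration only: 1 PB dataset, 80% effective link utilization.

def transfer_days(dataset_bytes: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Days needed to move dataset_bytes over a link_gbps link at the given utilization."""
    seconds = (dataset_bytes * 8) / (link_gbps * 1e9 * efficiency)
    return seconds / 86400


if __name__ == "__main__":
    petabyte = 1e15  # bytes (decimal petabyte)
    for gbps in (1, 10, 100):
        print(f"1 PB over {gbps:3d} Gb/s: ~{transfer_days(petabyte, gbps):6.1f} days")
```

Under these assumptions, moving a single petabyte takes months on a 1 Gb/s campus link, about a week and a half at 10 Gb/s, and roughly a day at 100 Gb/s, which is why campus connectivity matters as much as the central resources themselves.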

Advancing our ability to model complex natural systems requires as much investment in software as in hardware, if not more. In the future, increases in the performance of computational modeling and simulation codes will be achieved only through the use of larger and larger numbers of processors.

Although this “scalability” problem has been with us for nearly 20 years, for much of that time its impact was not felt because of the dramatic increases in the performance of single cores. With single-core performance now stalled, computational scientists and engineers must confront the scalability problem head-on.
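
To see why, consider Amdahl’s law, the textbook statement of this limit (the article does not invoke it by name, and the 1 percent serial fraction in the sketch below is an illustrative assumption): whatever portion of a code cannot be parallelized caps the achievable speedup, no matter how many cores are available.

```python
# Amdahl's law: the serial fraction of a code bounds its parallel speedup.
# The 1% serial fraction used below is an assumed, illustrative value.

def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Upper bound on parallel speedup for a code with the given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)


if __name__ == "__main__":
    # Even a code that is 99% parallel tops out near 100x speedup,
    # no matter how many cores are added.
    for cores in (16, 256, 4096, 65536):
        print(f"{cores:6d} cores: speedup <= {amdahl_speedup(0.01, cores):6.1f}x")
```

When single-core speeds were doubling every couple of years, that cap was rarely reached; now that they are not, the only path to higher performance is reworking applications so that the serial fraction shrinks and the code scales across many more cores.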

But the need for ever more scalability has increased the difficulty of developing science and engineering applications for HPC. This problem can only be solved through inspired research. Progress will require the creation of new software development tools or the revision of existing tools and integration of these tools into a robust, easy-to-use application development environment.

It is clear that the current approach to developing an HPC software stack is too fragmented. Recently, a large international group of computer and computational scientists came together to discuss plans for developing software for petascale and exascale computers. The Joint Laboratory for Petascale Computing is exploring how laboratories, universities, and vendors can work together to coordinate the development of a robust, full-featured software stack for computers at the petascale and beyond.

The NSF task forces have noted the need for long-term, multi-level efforts in HPC software that involve all of NSF’s directorates and the Office of Cyberinfrastructure. And increased stability of the supercomputing centers that NSF supports, coupled with a rigorous review process to ensure operational quality, will certainly be one of the major recommendations of the Task Force on High Performance Computing.

I would be remiss if I did not mention education. Although education is not part of the cyberinfrastructure per se, advancing science and engineering with the national cyberinfrastructure requires a new generation of scientists and engineers who understand and can contribute to the basic technologies of cyberinfrastructure and computational science and engineering, and who can collaborate with colleagues in other fields to take full advantage of the extraordinary capabilities this infrastructure provides. We need to define the core competencies important for the next generation of scientists and engineers and then develop implementation plans to effect the needed curriculum and course changes.

The curriculum and course changes required to educate the next generation of research leaders are not obvious. Many schools have established graduate programs in computational science and engineering that supplement study in a discipline with courses in computer science and engineering and applied mathematics. Such programs are invaluable in preparing students for future careers in computing- and data-intensive fields. But are they sufficient? And what about undergraduate education? At the rate that analog science is becoming digital science, what do we need to teach all undergraduates in science and engineering about computing and related technologies to prepare them for life and work in the 21st century? Through its investments in research and education, NSF can serve as a catalyst for this transformation.

The outcome of all of these efforts will be a long-term vision for the nation’s cyberinfrastructure, one that continues the growth of HPC as an integral part of the fabric of experimental, theoretical, and observational science.

Thom Dunning
Director, NCSA

