
Cloud and supercomputing cooperate in molecular dynamics research


by Nicole Gaynor

Blue Waters is helping Vijay Pande’s research group at Stanford tackle serious diseases at the molecular level.

Can cloud computing replace supercomputers like Blue Waters in the future? No, says Vijay Pande, director of the biophysics program at Stanford University. Both are critical to his study of serious diseases like Alzheimer’s and cancer.

Because of the sheer power available in each of its cores, Pande says, Blue Waters is a completely different and more flexible kind of resource than cloud computing.

“If we just had the [cloud computing] Folding@home part without Blue Waters, we would generate a lot of data but we would have a hard time analyzing it,” says Pande.

Pande’s lab uses cloud computing through Folding@home and Google Exacycle to run many detailed molecular dynamics (MD) simulations of protein folding independently of one another. Volunteers donate unused computing power on their home computers to crunch numbers for Folding@home: computers around the world run independent MD simulations and return their results to the project. Google Exacycle works in a similar manner, except that Google’s computing infrastructure supplies all the computing power and researchers apply for time. Both are examples of cloud computing (also called distributed computing), which excels at raw computing power as long as I/O and communication requirements are low.
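A minimal sketch of that embarrassingly parallel pattern, using Python’s multiprocessing as a stand-in for a real distributed client; simulate_work_unit is a hypothetical placeholder, not Folding@home’s actual code:

    # Sketch of the embarrassingly parallel pattern behind Folding@home-style
    # distributed MD: each work unit is an independent simulation, so workers
    # never need to communicate until their results are collected.
    # simulate_work_unit is a hypothetical stand-in for a real MD engine.
    from multiprocessing import Pool
    import random

    def simulate_work_unit(seed: int) -> dict:
        """Pretend to run one short, independent MD trajectory."""
        rng = random.Random(seed)
        # Stand-in for integrating equations of motion over many steps.
        final_energy = sum(rng.gauss(0.0, 1.0) for _ in range(10_000))
        return {"seed": seed, "final_energy": final_energy}

    if __name__ == "__main__":
        with Pool() as pool:
            # Each trajectory runs independently; results come back as they
            # finish, just as volunteer machines return completed work units.
            results = pool.map(simulate_work_unit, range(100))
        print(f"collected {len(results)} independent trajectories")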

“A lot of what we do is run the raw trajectories on Folding@home, or Google Exacycle, analyze it on Blue Waters, and spit it back out to Folding@home,” says Pande.

Blue Waters supplies high-speed networking, fast storage, and tight coupling between nodes, all characteristics missing from cloud computing. The combination of resources and Pande’s methodology allows his experiments to reach time scales a thousand times longer than those of comparable simulations.

“It’s like the difference between traveling to the store versus going to the moon,” says Pande.

Trading continuity for efficiency

Molecular dynamics is difficult to study experimentally because molecules are minuscule and vibrate rapidly. Even computer models that calculate changes every femtosecond face challenges. Researchers must first create an accurate model of molecular change, then run it long enough and at high enough temporal resolution to simulate realistic processes, and finally analyze the mass of data the model runs produce.

“It isn’t very interesting change until it accumulates for millions of steps,” says Robert Brunner, a research programmer at NCSA who helped the Pande team make the best use of the computational and storage resources of Blue Waters to interpret their results.

Pande estimated that at ten nanoseconds of simulated time per day, a single simulation would take one million days, nearly 3,000 years, to complete on a single processor.
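The back-of-the-envelope arithmetic behind that estimate, assuming a target trajectory of roughly ten milliseconds of simulated time (the target length is an illustrative assumption; only the ten-nanoseconds-per-day figure comes from Pande):

    # Back-of-the-envelope check of the 3,000-year estimate. The ~10 ms
    # target trajectory length is an assumption for illustration; the
    # 10 ns/day throughput figure is Pande's.
    NS_PER_DAY = 10                 # simulated nanoseconds per wall-clock day
    TARGET_NS = 10_000_000          # 10 milliseconds of simulated time (assumed)

    days = TARGET_NS / NS_PER_DAY   # 1,000,000 days
    years = days / 365.25
    print(f"{days:,.0f} days, about {years:,.0f} years")   # ~2,700 years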

Proteins don’t start with a single well-defined structure. Rather, they go through a process called folding that determines their three-dimensional structure. Proteins can only function properly if they fold into the correct structure. Most errors are benign—but not all.

Many serious diseases, including Alzheimer’s, Parkinson’s, mad cow disease, and many cancers, result from errors in protein folding, the process that gives a protein its working shape. Pande’s group aims to discern which errors lead to disease, how the errors happen, and what kinds of medicine might prevent the folding errors or mitigate their effects.

“It’s like one of these sci-fi movies where you’re trying to fight a shape shifter,” says Pande.

Traditionally, large-scale MD simulations involve running long simulations on the high-powered, networked processors of a supercomputer. Klaus Schulten, director of the Theoretical and Computational Biophysics Group at the University of Illinois, pioneered the use of graphics processing units (GPUs) to speed up these simulations on supercomputers with his award-winning code NAMD (Nanoscale Molecular Dynamics). The slowest part of such a large-scale model run is transferring information between cores.

Pande traded the continuity of Schulten’s long runs for even more efficient parallelization that avoids the communication bottleneck. A suite of shorter, independent simulations can also run on heterogeneous hardware, like cloud computing, and tolerates hardware failures better: if a single simulation dies, the rest continue. In the same amount of computing time, Pande’s code completes many short model runs that effectively sample the state of a molecule at various points along an equivalent long run, a complementary approach to traditional MD simulation.
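A quick illustration of the trade, with run counts and lengths chosen purely for illustration (the article gives no actual figures): the ensemble accumulates the same total sampling as one long run in a fraction of the wall-clock time, and a failure costs only one run’s share.

    # Why an ensemble of short runs is attractive: aggregate simulated time
    # matches one long run, but wall-clock time divides by the number of runs.
    # These run counts and lengths are illustrative assumptions.
    n_runs = 1_000
    short_run_ns = 10_000                  # each independent run: 10 microseconds
    aggregate_ns = n_runs * short_run_ns   # 10 ms of total sampling

    print(f"aggregate sampling: {aggregate_ns / 1e6:.0f} ms "
          f"in the wall-clock time of one {short_run_ns / 1e3:.0f}-microsecond run")
    # If one run dies, only 1/1,000 of the sampling is lost; a single long
    # run would lose everything after the failure point.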

Results from the first generation of MD runs pass to Blue Waters, where a tool called MSMBuilder clusters them into microstates, groups of conformations that are similar in structure. It then identifies which microstates represent long-lived, or metastable, configurations. Some of these microstates become the starting structures for a second generation of runs on Folding@home, and the process may iterate several times during a single experiment.
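A minimal sketch of one such analysis generation, using scikit-learn’s KMeans in place of MSMBuilder’s own clustering; the synthetic feature vectors, lag time, and choice of the ten most metastable states are illustrative assumptions, not the group’s actual settings:

    # One analysis generation: cluster conformations into microstates, score
    # how long-lived each microstate is, and pick seeds for the next round.
    # scikit-learn's KMeans stands in for MSMBuilder's clustering here.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Stand-in trajectories: 50 runs x 1,000 frames x 10 structural features.
    trajs = [rng.normal(size=(1000, 10)).cumsum(axis=0) for _ in range(50)]

    n_states, lag = 100, 10
    kmeans = KMeans(n_clusters=n_states, n_init=10).fit(np.vstack(trajs))

    # Count microstate-to-microstate transitions at the chosen lag time.
    counts = np.zeros((n_states, n_states))
    for traj in trajs:
        labels = kmeans.predict(traj)
        for a, b in zip(labels[:-lag], labels[lag:]):
            counts[a, b] += 1

    # A metastable microstate mostly transitions back to itself, so rank
    # states by their self-transition probability.
    totals = counts.sum(axis=1, keepdims=True).clip(min=1)
    self_prob = np.diag(counts / totals)
    seeds = np.argsort(self_prob)[-10:]   # most metastable states seed round 2
    print("seed microstates for the next generation:", seeds)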

Exa-MD of the future

Supercomputing and cloud computing power will both increase in the future, says Brunner, and both are a boon for work like Pande’s. Greater cloud capacity will let the method sample a wider variety of shapes in less time, while more powerful supercomputers will guide how that cloud capacity is best applied and will analyze the growing sea of data the cloud computations produce.

Pande does not see his work competing with Schulten’s. As computing power increases, Pande envisions a suite of long runs—in other words, a combination of his and Schulten’s methods—in order to scale to the exascale computers of the future.

“Let’s say NAMD scales to a thousand cores but the machine has a million cores. We could run a thousand simulations each of a thousand cores,” says Pande. “We could run a hundred thousand such systems and that’s how you scale to billions of cores.”

