
Innovation to drive discovery


by Barbara Jewett

NCSA’s Innovative Systems Laboratory explores new ways advances in computing can aid scientific discovery.

“Oh, this will be good!”

That’s the lure of the new, says NCSA Deputy Director Rob Pennington, especially when it comes to computer architectures. But then, he notes, you have to check it out.

It was that “checking it out” that led Pennington to establish the Innovative Systems Laboratory (ISL) at NCSA, bringing together people deeply familiar with the technology and people deeply familiar with the applications and algorithms, building on the experience of both sides of the problem.

Now in its second decade, the laboratory is known as ISL 2.0. The numeral refers to the evolution of the experiments the group conducts and the breadth of its examination of the real problems that come up, rather than the decade of its existence. But regardless of the moniker, one thing hasn’t changed: the commitment to enabling scientific discovery by finding researchers whose science problems need the ISL 2.0 team’s work. That work is really a very innovative, very interesting experiment because they don’t know the answers yet, says Pennington. “As you learn more, you learn to ask more probing questions and you just keep digging down. It’s the basic idea behind science.”

Volodymyr (Vlad) Kindratenko has been with ISL from the beginning and oversees the lab’s operations. He says that one way ISL 2.0 differs from its earlier iteration is that now it is a virtual organization within NCSA involving more people from more groups at the center.

“We still look at some of the technologies and we try to deploy them and see how they fit into what we need. Then we also try to improve those technologies,” says Kindratenko. Currently, ISL 2.0 has one node with 8 GPUs in it; an Intel Xeon Phi processor—a new generation of accelerator architecture from Intel; a Hadoop cluster; and a Dell high-memory node with 3 terabytes (TB) of RAM.

“But with ISL 2.0 we also look at other things, in particular storage and clouds,” he says. “We have a system called the Virtual Lab for Advanced Design running an OpenStack cloud. This system is a prototyping ground for us to put together a cloud-like architecture and then see how this cloud can fit into the requirements of different projects at NCSA and on our University of Illinois campus.”

In the clouds

One of the campus partners is astronomy professor Robert Brunner, who’s been collaborating with ISL almost since its inception. At the time, ISL was exploring IBM Cell processors, FPGAs, and GPUs, and Brunner was interested in whether his team could use these new computational technologies, especially GPUs, to accelerate their cosmological computations.

“This is outside of your normal funding stream,” Brunner says, “so it is very difficult to pursue things like that on your own. I found out about ISL about nine years ago through Rob Pennington and I connected with Vlad, and that was a very fruitful collaboration. It led to follow-on funding for my group because we were able to demonstrate that there was real potential and benefit; we were very early adopters of these technologies. And I think it benefitted Rob and Vlad because there were research papers coming out from ISL talking about us as an example area where these technologies can benefit, as well as conference presentations and publicity.”

For the last two years, says Brunner, he’s been exploring cloud technologies with ISL 2.0. While researchers will always have previous work, especially simulations, that can be scaled to more and more cores on bigger and bigger machines, there is a growing class of users, he notes, that just need more nodes to scale things out regardless of whether the nodes are “the latest and greatest technologies.” A cloud may be perfect for those situations.

“I think there is a concern that there’s this growing concept of cloud computing. And it is sort of nebulous. No pun intended,” he laughs. “Without ISL we wouldn’t have the opportunity to explore it. I think it’s important for NCSA to also have some understanding of what clouds are, what they can do, how researchers might use them. You want to make sure you’re not missing out on strategic opportunities, and I think that is what ISL provides. It’s a great service. And they get the benefit of having partnered with people on campus with real problems that want to try to test these technologies out. It’s not just simple demos, it’s real people actually trying to solve problems. And we learn.”

Brunner’s research team has strong partnerships with professors in the computer science department, so it was natural to involve them in the cloud computing exploration. A handful of computer science undergraduate and graduate students have been deploying codes on the ISL 2.0 OpenStack cloud, testing it and trying to put up real services. Looking to the future, Brunner says a cloud may turn out to be a good fit for the Dark Energy Survey or the Large Synoptic Survey Telescope “where we want to provide improved functionality in terms of accessing, viewing, processing, and mining that rich data set.”

An unexpected outcome of the ISL 2.0 cloud is Brunner’s increased popularity with students. Many students dream of a job at cloud-driven enterprises like Google or Amazon. Though there are a few U of I cloud computing classes, opportunities for students to get hands-on experience with clouds are scarce. Brunner says that when students find out his group is working with a cloud, they seek him out, asking if he can give them a project. He has more student offers than he can use. But the requests did lead to changes in his teaching: he now incorporates basic cloud technology so that if he ever has access to a cloud for educational purposes, students could upload their work without changing anything they do.

Secure, yet speedy

But clouds aren’t Brunner’s only interest in ISL 2.0. He’s still drawn to new hardware technologies.

“We’re talking with Vlad about use of the new Intel Many Integrated Core (MIC) architecture. He has some new chips there and we have a post-doc who is very interested in how you can use some of the new features in these new processors, like the Haswell, to be able to accelerate programs using the built-in cryptographic functionality. That might seem odd, right? Something that is supposed to be helping you be more secure in your computation we can use to do things faster,” he says.
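The article doesn’t spell out the post-doc’s technique, but one plausible reading, offered here only as a hedged sketch under that assumption and not as a description of Brunner’s actual project, is to repurpose the CPU’s hardware AES instructions as a very fast mixing, or hashing, function. A minimal C example (build with gcc -maes; the round constants are arbitrary, chosen for bit diffusion rather than secrecy):

    #include <stdint.h>
    #include <stdio.h>
    #include <emmintrin.h>  /* SSE2 */
    #include <wmmintrin.h>  /* AES-NI intrinsics */

    /* Hypothetical sketch: two hardware AES rounds as a fast 64-bit
       mixing function. The round "keys" are arbitrary constants, not
       secrets; the goal is speed and bit diffusion, not encryption. */
    static uint64_t aes_mix(uint64_t x)
    {
        __m128i v  = _mm_set1_epi64x((long long)x);
        __m128i k1 = _mm_set_epi64x(0x6A09E667F3BCC908LL, 0x3C6EF372FE94F82BLL);
        __m128i k2 = _mm_set_epi64x(0x1F83D9ABFB41BD6BLL, 0x5BE0CD19137E2179LL);
        v = _mm_aesenc_si128(v, k1);  /* one full AES round per instruction */
        v = _mm_aesenc_si128(v, k2);
        return (uint64_t)_mm_cvtsi128_si64(v);
    }

    int main(void)
    {
        for (uint64_t i = 0; i < 4; i++)
            printf("mix(%llu) = %016llx\n",
                   (unsigned long long)i, (unsigned long long)aes_mix(i));
        return 0;
    }

Each _mm_aesenc_si128 call executes a complete AES round in a single instruction, which is why repurposing the hardware can beat an equivalent software hash.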

Brunner says it turns out that a lot of the codes his team runs, even the numerical simulations that run on Blue Waters and other parallel computers, fundamentally organize data using what are called tree structures. Typically those tree structures focus on how to organize data on disk, because you have lots of data and you want to push it around and keep it organized, he explains.

“But you also use tree structures in memory, so my post-doc has focused on that, looking at how you can change how you organize your tree data structure in memory to accelerate performance. It turns out you can gain a factor of 10 to even 100 by changing the way you access memory for these tree structures. And it turns out that these new cryptographic features that are provided in chips like the Haswell chip give you that—for free! So we may be able to partner with ISL 2.0 to explore that, which is one of those synergies that we would never do otherwise,” says Brunner.
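The specifics of the post-doc’s layout aren’t given, but as a rough illustration of the kind of reorganization Brunner describes, compare a conventional pointer-based tree, where each search step can jump anywhere on the heap, with an implicit layout that stores the same complete tree level by level in one contiguous array. The children of slot i sit at slots 2i+1 and 2i+2, so a search walks a compact, predictable region of memory:

    #include <stdio.h>

    /* Pointer-based node: each search step chases a pointer that may
       land anywhere on the heap, so every tree level can cost a cache
       miss. (Shown for contrast; not used below.) */
    struct node { double key; struct node *left, *right; };

    /* Implicit layout: the same complete binary search tree stored
       level by level in one contiguous array. Children of slot i live
       at 2*i+1 and 2*i+2, so no pointers are stored or followed. */
    static int search_implicit(const double *tree, int n, double key)
    {
        int i = 0;
        while (i < n) {
            if (tree[i] == key) return i;
            i = (key < tree[i]) ? 2 * i + 1 : 2 * i + 2;
        }
        return -1;  /* not present */
    }

    int main(void)
    {
        /* The BST with root 4, children 2 and 6, and leaves 1, 3, 5, 7,
           stored level by level: */
        double tree[] = {4, 2, 6, 1, 3, 5, 7};
        printf("5 found at slot %d\n", search_implicit(tree, 7, 5));
        printf("9 found at slot %d\n", search_implicit(tree, 7, 9));
        return 0;
    }

How much this buys depends on tree size and access pattern; the sketch only shows the layout idea behind the kinds of speedups Brunner cites.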

Memory to spare

And then there are opportunities like the ISL 2.0 high-memory machine, the one with 3TB of RAM.

“Again, that’s something that allows you to explore your algorithms in a way you otherwise wouldn’t be able to do. It’s not a production resource, it’s not like you say ‘I’m going to have that resource for a month and I’m going to run on it.’ It’s a shared resource for testing and checking things out. But doing that then gives us confidence that we can then maybe go to a shared memory system on XSEDE and have an expectation of a real benefit because we’ve seen the prototyping work. It is a really valuable service and ISL does great work. I just can’t say enough good stuff,” says Brunner.

Another person who can’t give enough praise is Liudmila (Luda) Mainzer. As a senior research scientist at NCSA and in the University of Illinois’ High-Performance Biological Computing (HPCBio) group, Mainzer works in genomic variant calling. As the name implies, genomic variant calling is a method for figuring out the differences, or variants, between the genome of an individual organism and the reference, or the average expected genome of its species at large.

“The human reference genome, for example, is an assembly of samples from many human individuals, and is representative of an average healthy human genome,” Mainzer explains. “We are all different, we all have variations in our genome, and we need something to compare against to figure out if there is disease. For example, in cancer, the biopsy samples can be sequenced and their genomes compared to the healthy reference, to determine if there are any differences that could point to the cause of the disease. That is one purpose behind human variant calling. Similarly, plant variant calling, such as in corn or soybean, is important for figuring out which genomic variances contribute to higher biomass production or greater parasite resistance.”
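As a toy illustration of the concept (not of any production pipeline HPCBio uses): given a sample sequence already aligned to the reference, variant calling at its simplest reports the positions where the two disagree. Real callers work statistically from millions of short, error-prone sequencing reads, which is what makes the problem computationally heavy.

    #include <stdio.h>
    #include <string.h>

    /* Toy illustration only: report single-base differences between an
       aligned sample sequence and the reference. */
    static void call_variants(const char *ref, const char *sample)
    {
        size_t n = strlen(ref) < strlen(sample) ? strlen(ref) : strlen(sample);
        for (size_t i = 0; i < n; i++)
            if (ref[i] != sample[i])
                printf("variant at position %zu: %c -> %c\n",
                       i + 1, ref[i], sample[i]);
    }

    int main(void)
    {
        call_variants("GATTACAGATTACA",   /* reference genome */
                      "GATCACAGATTGCA");  /* individual's sample */
        return 0;
    }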

Mainzer uses ISL 2.0’s high-memory machine. When asked why, she laughs. “Because it has in it 96 threads and 3TB of RAM!”

For example, she says, her laptop has 4GB of RAM. The NCSA iForge cluster’s nodes have 256GB of RAM, a quarter terabyte. “If I requested 12 of those nodes, aggregately I would get 3TB of RAM. Here, ISL has—in a single machine—3TB of RAM.”

What that means, she says, is that from the perspective of many of the applications she uses, the machine can be assumed to have infinite RAM, which is a tremendous asset when benchmarking variant calling software. If benchmarking shows the software is hitting a limit, she knows the limit is not RAM and can look instead for problems with CPU utilization or disk I/O. On the other side of the coin, genome assembly—the process that creates the reference genome described above—requires many terabytes of RAM to function properly. For that application, the ISL high-memory node can enable work that would not be possible elsewhere.

While she has many ideas for experiments she wants to run on the high-memory node, she has primarily tested the new “ultrafast” variant calling tools that aspire to replace the standard, well-established but slower approaches. Currently, variant calling takes about 24 hours to complete on a single human genome, even with the many tricks that parallelize the computation to shorten runtime.

A number of companies are working to create ultrafast workflows that can accurately do genomic human variant calling in two hours or less. Speed matters, says Mainzer, as we are looking toward a future where we could be genotyping every baby born. In the state of Illinois, for example, that is currently about 500 babies every day.

“Now, if running a variant calling workflow takes 24 hours, that is 500 instances of the workflow running for 24 hours straight. My tests indicate that this level of sustained data analysis would use up half of Blue Waters, every day. That is a lot of computation!”

So Mainzer and her team member Gloria Rendon have been testing the alternative, “ultrafast” genomic human variant calling software packages to see whether they live up to their speed and accuracy billing. Invariably, says Mainzer, these packages require a lot of RAM, often up to 300 gigabytes (GB). Blue Waters nodes have 64GB of RAM. That’s where the ISL 2.0 system comes in. Equally important, she says, is that the ISL high-memory node has 96 threads. For comparison, Blue Waters nodes have 32 threads and iForge nodes range between 20 and 64 threads.

“My desktop has six. In bioinformatics the number of threads matters. For genome assembly having a ton of RAM is more important than having many threads, but for variant calling you want as many threads as possible, so having 96 is fantastic. I am very happy, and cannot recommend that system enough. You can quote me on that!” she says enthusiastically.

The reason for her happiness is simple: results. Testing the variant calling software on other systems ran into problems. Usually the software being tested is still in development, so quirks and difficulties are expected; the developers cannot always foresee the intricacies of the system on which their software will be tested.

“We usually try to install and test on Blue Waters, at IGB, and on iForge. In the case of the ultrafast variant calling packages, amusingly, we had problems on every single system, except the ISL high-memory node. Here, everything just works! The ISL high-memory node allowed us to do things we simply could not have done otherwise,” she says.

Mainzer adds that “this does mean the new software is not quite ready for prime time. Who has a 96-thread, 3TB-RAM system? Not your average hospital. But that is actually a valuable data point. We can go back to the company and tell them that the software needs work, since it has difficulty on any ‘normal’ system we have tried to test it on.

“I feel that testing this software is a valuable service to the community, because someone has to sit down and spend the time to figure out what works, what doesn’t, and why. If it doesn’t work, how can I make it work? It can be extremely time consuming, and most people do not have the computer resources that enable this kind of testing, so we are in a unique situation of being able to really help,” Mainzer says proudly. “As I mentioned, we are heading toward the time when maybe we will be sequencing every single human being in the country. So this work has great societal impact.”

