Changing the world | National Center for Supercomputing Applications at the University of Illinois
11.07.12
Cray President and CEO Peter J. Ungaro recently spoke with Access' Barbara Jewett about why the company Seymour Cray founded continues to be inimitable.
Let's start by talking about Blue Waters. Did you ever doubt you were doing the right thing in taking on the project?
Not for a single second. We're a company that is focused on taking supercomputing technology and applying it at huge scale, so I can't think of a better project that our company is built around than taking on something like Blue Waters and building one of the fastest and most capable systems in the world. When we announced it, a lot of people came up to me and said "You're crazy. This is a crazy project. You are going to do this in such a fast timeline." But this is what Cray is all about. This kind of challenge is what energizes us and why I think we exist in the market today. The other cool thing was, as we started working with the team from NCSA, we quickly discovered we had a great partner. The NCSA thinks very similarly to how Cray thinks as a company. They are focused on their users, and the way the organization views real performance and real usage of the machine is exactly the same way we see things. That made our partnership even stronger and will be a key factor in our combined success.
What do you think about a replacement for the Linpack benchmark as the determiner of placement on the TOP500 list?
I'm a huge fan of the TOP500. The list is a great way, maybe even the best way, to keep track of what's going on in the supercomputer market, what's going on at various centers around the world, and how the market and landscape are changing and evolving over time. We owe a lot to the four guys who compile the list twice a year. That said, I don't think the Linpack benchmark is a good way to tell who has the best or fastest supercomputer, because it measures only one statistic. Linpack isn't bad, but it is only one metric. It's like trying to determine who's the best pitcher in baseball by seeing who can throw the ball the fastest. But are they accurate? Do they have other pitches they can throw? Do they have mental toughness? No intelligent baseball team would draft a pitcher just because he threw the ball fast—just as I don't think a supercomputing center buys a supercomputer looking at only one metric. Would Cray disagree with changing the TOP500 list to look at a number of different metrics? No, we would welcome that. But I know there's a lot of debate, because even running the one Linpack benchmark takes a lot of time. These systems are very expensive, and if something takes a lot of time it becomes difficult for people to invest that time. So that's the other side of it. The more metrics you have, the harder it is from a time-investment standpoint. You have to take machines out of production just to see how they rank on the TOP500 list.
When you spoke at NCSA earlier this year, you mentioned that the business side of the technology now has to be considered. What are some of these business aspects?
Energy costs are going to be a huge driver. The size of the market that wants to buy these kinds of machines is also going to be a significant driver in the research and development (R&D) investment. The other big thing is what I would call productivity costs, which is driving our R&D agenda today. How do we let people take advantage and make good use of these systems? How do you take a system like Blue Waters, which has both CPUs and GPUs in it, and let people take advantage of both of those technologies, let people leverage them and use them, without becoming experts in each of those areas, without having to do things from scratch and rewrite complete applications? We want scientists to be scientists and use supercomputers to make huge breakthroughs, and not have to become expert parallel computer programmers and spend all their time trying to learn how to use each generation of these machines. This is having a huge impact on the R&D agendas of companies like Cray.
Is Cray going to be more involved with software going forward?
Cray is a systems company. We've always been a systems company. And a big part of being a systems company is doing some software, doing some hardware, and then putting all that together. But where we focus, and what I think makes us really unique in the market, is that we spend most of our time in R&D looking at how hardware and software technologies come together, and how we can integrate them in a very tight way so we can give our customers a real supercomputer, not just a bunch of building blocks for them to assemble and figure out for themselves. That tight integration between hardware and software is a big part of what gives us our scalability and performance advantages. As technology changes, we figure out what's the right balance of hardware and software so we can continue to do this very tight integration. Over the past few years, we've put more energy and incremental investments into our software R&D technologies than our hardware, but what we're focused on is how all that integrates together because we are a systems company.
Big data is a hot topic today, whether it's gathered by mega retailers, or it's the data sets, visualizations, and simulations created by researchers. What's Cray's take on big data?
In many ways high-performance computing (HPC) and big data are actually all one thing, although there are different components to it: the simulation of the data; the storage, movement, and management of that data; and the analytics we do on top of that data. As a systems company, our three business units address these components. One is our HPC systems unit, which is where all of our supercomputers are developed. Blue Waters was done in that group. Our second group is the storage and data management unit. The Sonexion™ storage system going into Blue Waters was developed by that team. And we have a third group, YarcData, which is focused on the data analytics side of things, particularly as it relates to graph analytics. As we look out to the future, with simulations being run on systems like Blue Waters, we have to be able to run those simulations, which generate huge data sets that need to be stored and moved around. We then have to be able to do analytics on that data and make sure we are really pulling the knowledge out of it. It is not just finding a needle in a haystack; it is understanding how all this data works together and finding unique, hidden information in the relationships between the data. That is what really gives us the moment where we pull it all together and make a huge step forward.
When you gaze into your HPC crystal ball, what do you see for the future of Cray and the future of supercomputing? And do you think we'll get to exascale any time soon?
The biggest thing I see coming is that today we like to focus on what I call simulation engines, which are our traditional supercomputers like Blue Waters. But we also have storage, like what we're doing with our Sonexion™ product, and then we have the data analytics side that we're doing with a product called uRiKA, and I see these three environments coming together on a single platform. A long time ago, Cray announced a technical vision that we call Adaptive Supercomputing, where we build one very scalable infrastructure, then put in different processing technologies and build an adaptive software environment to allow the user to take advantage of how these systems are going to evolve. But how I see this vision evolving is that these three distinct, different areas will all come together under a single infrastructure, with one integrated system. Some people like to call that a data-intensive architecture, which is kind of an integration of HPC and big data. But whatever you want to call it, I see that point out in the not-so-distant future.
Will we have exascale by 2018 or 2019 or 2020? It's really a matter of whether somebody is going to focus their funding on building one of those machines because they believe that doing so will have a profound impact on their mission. Do we have the capability of building one of those machines? I absolutely believe we do. What will its performance, power usage, and capabilities be? That one is still a research problem that we're working on. I am concerned, though, that we're losing time because a lot of the debates that are happening right now are about the importance of exascale and not how to do it, how to structure it, and such. I don't know if exascale is going to be done first in the U.S. It may be done first in another country or two before it gets to the U.S., but I do think that by the end of the decade exascale machines will be in use somewhere ... and hopefully it will be a Cray supercomputer!
Anything else you'd like our readers to know?
I want to tell everybody that we are very excited about being a part of what we're doing with NCSA and Blue Waters, and what we believe that system will mean for the National Science Foundation's scientific community. It is going to be such a huge step forward for the community overall. I can't wait to get the machine completed, into production, and into the hands of users, because I can't wait to see what is going to come out of it. Blue Waters has a chance to change the world, literally, in many different scientific areas. We get so excited at Cray just thinking about seeing our supercomputers used as tools for scientists to change the world. That's a big deal for us and is a big part of why we exist as a company, all the way back to the early days of Seymour.
Blue Waters is supported by the National Science Foundation through awards ACI-0725070 and ACI-1238993.