Vast digital realm


Brett Bode, head of NCSA’s Advanced Digital Services, chats with Access editor Barbara Jewett about the contributions his directorate makes to the center’s success.

What is ADS?

Advanced Digital Services, or ADS, covers all of the high-performance computing, data, storage, applications, user support, and systems support for our production project members. That includes system, storage, and networking administration, building management for the National Petascale Computing Facility, the 24/7 help desk for XSEDE and other projects, the consulting office, training, advanced application support, and a production visualization team. The ADS directorate includes about 70 people.

So you are behind the scenes of a lot of things at NCSA.

Behind the scenes and also the user support. When a user has a problem and calls in, they are going to get somebody in ADS—help desk, consulting, advanced support.

ADS also has projects, one of which is a storage condo. What is that?

The condo is a mid-tier storage solution providing reasonably large amounts of reasonably fast-access storage that is not tied to a specific computational resource. When we talk about mid-tier storage, think of needs in the multi-terabyte range and beyond. It is not desktop storage; it is project storage. NCSA deputy director Rob Pennington helped kick this off and seed the project. The Dark Energy Survey (DES) and Large Synoptic Survey Telescope (LSST) are the two largest projects making use of it today, along with NCSA's Advanced Visualization Lab (AVL). We are now offering this service to other teams and projects that might need it. Particularly here at NCSA, teams that need moderate to large amounts of storage, say 20 TB or more, would be a good fit. The system is funded via a cost-recovery fee based on the amount of storage allocated to a project. The condo service is offered to NCSA and potentially to others as well, both here on the University of Illinois campus and outside the university.
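As a rough illustration of the cost-recovery model, here is a minimal sketch; the rate and minimum-allocation figures are hypothetical, since the interview does not give actual pricing.

```python
# Hypothetical sketch of a cost-recovery fee: projects pay for the
# storage allocated to them, whether or not it is all in use yet.
# The rate and minimum below are made-up numbers, not NCSA's pricing.
RATE_PER_TB_YEAR = 40.0  # hypothetical dollars per TB per year
MINIMUM_TB = 20          # the interview cites ~20 TB as a good fit

def annual_fee(allocated_tb: float) -> float:
    """Fee is based on allocation, so projects can budget predictably."""
    if allocated_tb < MINIMUM_TB:
        raise ValueError(f"Condo targets allocations of {MINIMUM_TB} TB or more")
    return allocated_tb * RATE_PER_TB_YEAR

print(annual_fee(35))  # e.g., a 35 TB allocation, like the web-export example below
```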

How is the storage condo accessed?

The predominant way people use it today is that we export their portion of the storage as a file system to their systems. For example, the DES systems mount the storage condo and use it directly, but they see only the portion of the storage allocated to them. The same goes for AVL and LSST. Matt Turk, one of NCSA's research scientists, is using the condo to back a web server that publicly shares a 30+ TB dataset: a VM on the NCSA VM farm runs the web server and mounts a 35 TB file system from the condo that holds his dataset. We're open to other means of condo access based on project requirements.
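For a concrete picture of this access pattern, the sketch below mounts an exported condo file system on a project machine, assuming an NFS-style export; the server name, export path, and mount point are hypothetical, and the condo's real export mechanism may differ.

```python
# Minimal sketch of mounting a project's slice of the condo, assuming
# an NFS-style export. Hostnames and paths are hypothetical.
import subprocess
from pathlib import Path

CONDO_EXPORT = "condo.ncsa.illinois.edu:/export/des"  # hypothetical export
MOUNT_POINT = "/mnt/condo"                            # local mount point

def mount_condo() -> None:
    """Mount the export; the project sees only the storage allocated to it."""
    Path(MOUNT_POINT).mkdir(parents=True, exist_ok=True)
    subprocess.run(["mount", "-t", "nfs", CONDO_EXPORT, MOUNT_POINT], check=True)

if __name__ == "__main__":
    mount_condo()  # requires root privileges on the mounting host
```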

What is the condo’s capacity? How quickly will it fill?

Currently the condo has a little over 1 PB of storage. Most of it is already sold, so there isn't a lot of unsold capacity even though not all of it is being used yet. But the design is scalable, so we can add capacity easily without impacting existing projects and bring new customers in quickly. The architecture underneath is fairly scalable and reliable, so we can grow to many petabytes if we have the customers.

What is Ice House?

Ice House is a cold storage service. It will be a slow storage service, designed for long-term data storage, archives, or backups—data that you really need to have preserved but you don’t need to access very often. This service is still in development, but we are exploring using the mass storage system library, tape drives, and tapes that NCSA recently retired. We have 4.5 PB of tape available, but we can certainly grow to substantially more than that. We hope we get to the point that we run out of that tape.

An example of a typical user would be the National Data Service (NDS) or other initiatives looking at data surrounding publications, where that data needs to be archived but may only see a few downloads a year. So it doesn't need to sit on a really fast storage system, but it needs to be available in some fashion. We're thinking this service could provide an entry point to the National Data Service, the Materials Genome Initiative, and other projects that go along with those. We have other customers with interest, for instance the University of Illinois library for long-term data preservation. The library is a customer of the storage condo and is paying us to replicate its data onto the university's campus cluster because it needs a guarantee that the data is not lost. The library is more than happy to pay for a tape copy as well to guarantee that the data is preserved. Ice House is for the long tail of data access: the data needs to be preserved for a substantial amount of time, but it may not be accessed very frequently.
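Preservation of this sort hinges on verifying that copies match the original. The sketch below shows generic checksum-verified replication; the paths are hypothetical, and this is not NCSA's actual replication tooling.

```python
# Minimal sketch of checksum-verified replication for preservation:
# copy each file to a second location (a campus cluster, a tape staging
# area) and confirm each replica's hash matches the original.
# Paths are hypothetical; this is not NCSA's actual pipeline.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def replicate(src_dir: Path, dst_dir: Path) -> None:
    """Copy every file and fail loudly if any replica does not verify."""
    for src in src_dir.rglob("*"):
        if not src.is_file():
            continue
        dst = dst_dir / src.relative_to(src_dir)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
        if sha256(src) != sha256(dst):
            raise RuntimeError(f"Replica verification failed for {src}")

replicate(Path("/condo/library"), Path("/campus-cluster/library-replica"))
```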

It is true there is data that could go into cold storage and never get read again. But organizations just need to know that it is out there. NSF and other agencies are demanding data-retention or long-term data-management plans for data produced as the result of grants, and institutions are struggling with how to handle long-term data storage. We now have a responsibility to hang on to project data so others can access it and reproduce the research, for what could be substantial lengths of time. Frankly, 10 years is not a long time on some research projects' timelines. Cold storage is the new library archive. Previously everything was printed in journals, and after many years the volumes would be taken off the shelves and stored in a warehouse. Essentially our cold storage is the same thing: data that has gotten old enough that people are not looking at it often, but that you can't throw away yet because it could still be valuable.

What is a NOC?

Network Operations Center, which in some ways may be a misnomer because we actually do more than network operations. It provides services to XSEDE and the Illinois Campus Cluster: a 24/7 help desk that somebody can call if they have a problem, plus monitoring of systems and services. NCSA also helps Campus Information Technologies and Educational Services (CITES) after business hours, monitoring some systems and taking calls to the CITES help line.

We are open to expanding the NOC and being the help desk for other projects, businesses, and universities that need a 24/7 help desk service. We can't do a physical building walk-through for others, but anything that can be done virtually we can do in addition to answering the telephone.
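To give a flavor of what round-the-clock monitoring involves, here is a minimal, hypothetical service check; a production NOC would use dedicated monitoring software and ticketing rather than a script like this, and the host names below are illustrative only.

```python
# Minimal sketch of a NOC-style availability check: probe each service
# endpoint and flag anything unreachable for the help desk. Host names
# are illustrative; real NOCs use dedicated monitoring and ticketing.
import socket
import time

SERVICES = [("example-login.xsede.org", 22),                 # hypothetical host
            ("example.campuscluster.illinois.edu", 22)]      # hypothetical host

def is_up(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:  # a NOC check runs continuously, 24/7
    for host, port in SERVICES:
        if not is_up(host, port):
            print(f"ALERT: {host}:{port} unreachable; open a ticket")
    time.sleep(60)  # re-check every minute
```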

All of the storage and the monitoring relies on networking, which is another thing for which ADS is responsible.

That’s right. We have two projects going on right now that are near and dear to anyone who has a home in the NCSA building: the wired connectivity, which we oversee, and the wireless network, run by CITES. We are working with CITES to improve wireless service in our building, doing multiple access point (AP) upgrades. Part of the problem is that wireless technology has changed in the nine years since this building was constructed, and the density of APs put in place then was fine given the technology of the day. We now have more types of wireless service; people probably don't realize that. It used to be that 802.11b was pretty much everything for wireless, and now there are 802.11a, g, n, and now ac. As those standards have progressed, they have gotten faster and faster, but the range has contracted somewhat. So the density of APs needs to be increased in order to have coverage throughout the building. When completed, we'll have better and more uniform coverage without dead spots or drop-offs. We're nearly doubling the number of APs in the building. One thing people will notice is that the new APs are mounted on the ceiling but below the ceiling tiles, so they will be visible, unlike the old ones that were above the ceiling tiles and hidden.

At the same time, we are upgrading the physical network infrastructure in the building that serves the wired ports in each office space. The existing ports were procured with the building, and they have reached the end of their operational life, or at least the end of the operational life for which you can obtain a warranty. We will still provide 1 Gb network speeds to each port. In the interest of cost saving, we have turned off ports that are not in use. It used to be that every single port in the building was turned on; you could plug in to any port anywhere, any time. That meant we had an awful lot of unused switch ports, and to some extent security did not like it either. So we've tried to right-size the network design.

Look for news of a major upgrade to NCSA’s external network connectivity as part of the Blue Waters project in the next issue of Access.

What role do you see ADS playing in the future? Storage and data currently seem to rule the world.

Storage and data. Data is big. For folks that have a terabyte of data, that's small enough to buy an external drive for a laptop or desktop. But once you get into the 10, 20, or 30 TB range, that's not functionality we should be replicating under people's desks. We can more efficiently provide that storage on a large storage system that has some reliability built into it as well as some performance features. ADS can provide both the base storage and the intellectual talent in data analysis and visualization as a service to NCSA, our campus, and others.

