
Speaker Abstracts and Biographies

Petascale Computing Facilities at Argonne

Ray Bair, senior computational scientist, Argonne National Laboratory and the University of Chicago

Abstract: Argonne National Laboratory is the site of the U.S. Department of Energy's newest high-end supercomputer facility for open computational science, the Argonne Leadership Computing Facility. In 2007, Argonne completed a new 6,000-square-foot computer room designed for petascale Blue Gene systems, taking advantage of the unique characteristics of the site. That computer room is now operational. The lessons learned from that design were carried forward to a larger 25,000-square-foot facility for next-generation systems, to be occupied in 2009. A unique aspect of these projects is Argonne's joint R&D effort with IBM and Lawrence Livermore National Laboratory to design the Blue Gene/P and Blue Gene/Q systems, which enables close interaction between the computer engineering and facility engineering efforts.

Ray Bair, director, Argonne Leadership Computing Facility

Raymond Bair is director of the Argonne Leadership Computing Facility, a national computational science resource supported by the Office of Science of the U.S. Department of Energy. He also directs the Argonne Laboratory Computing Resource Center, which provides mid-range supercomputing for Argonne projects. He is a senior fellow in the Computation Institute at the University of Chicago. His R&D interests lie at the confluence of computer science, computational research, and laboratory research, with an emphasis on large-scale applications of high-performance computing and communications. Bair earned his Ph.D. at Caltech. Later, at Pacific Northwest National Laboratory, he was instrumental in establishing the Molecular Science Computing Facility for DOE's Environmental Molecular Sciences Laboratory.


Designing and Building a Multi-use Facility

Dennis Cromwell, Associate Vice President, Enterprise Infrastructure, Indiana University

Abstract: On Oct. 12, 2007, Indiana University broke ground on the construction of a new data center with a construction budget of $38 million. IU expects to complete construction by the end of 2008. The data center is a single-story structure that will encompass 82,700 gross square feet and can provide at least 30,000 square feet of raised floor. An earthen berm guards the exterior of the data center, which is designed to FEMA standards to withstand an F5 tornado and houses its critical infrastructure within the facility. It will also employ state-of-the-art physical security, and the potential still exists for the installation of a designed "green roof," pending additional funding. A notable design feature is that the raised-floor space is divided into three "pods" of approximately 10,000 square feet each. The first pod will house enterprise systems needing critical redundancy and high availability, and the team designed it to meet the Uptime Institute Tier III standard. Research computing will fill the second pod, which maximizes power and cooling but does not provide complete infrastructure redundancy. The third pod will be shell space for later growth in research computing. This presentation covers the unique challenges of designing a multi-use facility and discusses work with the primary architect, Smith Group, and the consulting engineering firm, EYP Mission Critical Facilities.

Dennis Cromwell, Associate Vice President, Enterprise Infrastructure, Indiana University

Cromwell leads the UITS Enterprise Infrastructure Division, which manages the critical technology infrastructure that enables Indiana University services and software applications, including data centers, servers and storage, campus networks, telephone switches, identity management, and cable and wiring. Cromwell has extensive experience at Indiana University delivering production information systems for faculty, staff, and student use. Previously, Cromwell served IU as director of University Information Systems for UITS, responsible for all aspects of enterprise information systems, including project management, applications development, database administration, identity management, data warehousing, production assurance, and operations. He led the infrastructure team at IU in a technology transformation from mainframe-based computing to an open systems environment that supports high-availability systems for all eight campuses. Cromwell also has significant IT industry experience from two major software firms, where he was a nationwide resource for database and application development tools. Cromwell is a frequent presenter and seminar leader at higher education IT events, including EDUCAUSE and CUMREC. He is a graduate of Indiana University with a bachelor's degree in mathematics and has also earned continuing education credits from Indiana University.


What's in Store for Next Generation High-Performance Data Centers

William J. Kosik, PE, CEM, LEED AP, EYP MCF Inc.

Abstract: Ongoing technological advances in high-performance computing continually increase the importance of reliable power and cooling for the data center. As the electrical power consumption of servers continues to grow, the cost of energy and the difficulty of cooling very high-density data centers have driven a need for innovative (yet cost-effective) design solutions for the electrical and cooling systems. At the same time, advances in commercially available data center cooling technology now enable equipment cabinet densities as high as 30 kW and beyond. For facilities that require very high-density equipment cabinets, there is a dual benefit, since some of the new cooling technologies can also be more energy efficient.

This is particularly relevant when planning for new computer technology, since many computer facilities constructed in the last decade have difficulty supporting such extremely high power and cooling loads. When planning an upgrade, an expansion, or a new supercomputing facility, it is extremely important to build a robust, flexible electrical and mechanical support infrastructure.

Attendees will come away from the presentation with a strong understanding of current and next-generation data center technologies primarily relating to power and cooling techniques for supercomputing facilities.

The audience that will gain the most from this tutorial will have an interest in the mechanical and electrical infrastructure required to support world-class high-performance computing environments. Typical audience members will come from facilities, real estate, design, and construction teams involved in supercomputing facilities.
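
As a rough illustration of the cabinet densities mentioned in the abstract above, the short Python sketch below converts a single 30 kW cabinet's heat load into refrigeration tons and the airflow needed at an assumed supply/return temperature difference. The constants are standard rules of thumb, and the 20 F delta-T is an assumption, not a figure from the presentation.

# Back-of-the-envelope cooling math for one high-density cabinet (illustrative only).
# Assumes all electrical input becomes sensible heat and standard sea-level air.

KW_PER_TON = 3.517           # one refrigeration ton removes about 3.517 kW of heat
BTU_PER_HR_PER_KW = 3412     # 1 kW is about 3,412 BTU/hr
SENSIBLE_HEAT_FACTOR = 1.08  # BTU/hr ~= 1.08 * CFM * delta-T (deg F), standard air

def cabinet_cooling(load_kw, delta_t_f=20.0):
    """Return (refrigeration tons, required airflow in CFM) for one cabinet."""
    tons = load_kw / KW_PER_TON
    cfm = load_kw * BTU_PER_HR_PER_KW / (SENSIBLE_HEAT_FACTOR * delta_t_f)
    return tons, cfm

tons, cfm = cabinet_cooling(30.0)  # the 30 kW cabinet density cited in the abstract
print(f"30 kW cabinet: about {tons:.1f} tons of cooling and {cfm:,.0f} CFM at a 20 F delta-T")

At roughly 4,700 CFM per cabinet, a row of such cabinets quickly exceeds what conventional raised-floor air delivery can supply, which is one reason the abstract points to newer cooling technologies.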

William J. Kosik, PE, CEM, LEED AP, EYP MCF Inc.

Bill Kosik is a managing principal at EYP Mission Critical Facilities (EYP MCF) and a member of the firm's Technology Council, EYP MCF's R&D organization. Kosik is a licensed professional mechanical engineer, a LEED Accredited Professional (LEED AP), and a Certified Energy Manager (CEM). He leads "Moving Towards Sustainability," one of eight corporate pillars at EYP MCF, which is focused on the research, development, and implementation of sustainable, energy-efficient, and environmentally responsible design strategies for data centers and other high-performance building types. As one of EYP MCF's resident experts on high-performance computing, Kosik is also collaborating with multiple SC500 clients, developing innovative design strategies for cooling high-density environments and creating scalable cooling and power models for scenarios ranging from tens of teraflops to hundreds of teraflops to tens of petaflops. Kosik has presented on data center optimization and building performance simulation at venues including AFCOM, Data Centre Dynamics, and the Liebert Users Conference. Also among his 50+ articles and speaking engagements are presentations to AFCOM, the American Institute of Architects, ASHRAE, Data Center Dynamics, IFMA, Labs21, NeoCon, ULI, and 7x24, as well as articles in the ASHRAE Journal, Energy & Power Management magazine, Building Operations Management, Engineered Systems, Consulting Specifying Engineer, and R&D Magazine. Kosik worked as a consultant for the U.S. Green Building Council on the launch of the LEED Core & Shell Pilot Program and presented at multiple USGBC conferences, as well as at the World Forum for Building Innovation in the UK and the Sustainable Buildings Conference in Finland. He also worked with the city of Chicago in developing city-specific environmental criteria. Kosik's projects have earned 19 ASHRAE Awards. He holds a degree in engineering mechanics from the University of Illinois at Urbana-Champaign.


Oak Ridge National Laboratory's Experiences with Major Computing Facilities

Jim Rogers, Director of Operations, National Center for Computational Sciences, Oak Ridge National Laboratory

Abstract: Since 2003, Oak Ridge National Laboratory has built two high-performance computing centers totaling 70,000 square feet of raised-floor space, 35 megawatts of power, and over 12,000 tons of chiller capacity. In this talk, I will describe the computing facilities, discuss the features we included specifically for high-performance computing, talk about the trade-offs we made, and relay lessons learned.
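
To put the figures above in context, here is a small, purely illustrative Python check (not part of the talk) that compares the quoted chiller capacity with the quoted electrical power and computes the average design power density.

# Illustrative sanity check of the facility figures quoted above (not from the talk).
KW_PER_TON = 3.517                      # heat removal per refrigeration ton, in kW

power_mw = 35.0                         # total electrical power quoted
chiller_tons = 12_000                   # total chiller capacity quoted
raised_floor_sqft = 70_000              # total raised-floor area quoted

cooling_mw = chiller_tons * KW_PER_TON / 1000.0
avg_density_w_per_sqft = power_mw * 1_000_000 / raised_floor_sqft

print(f"Chiller capacity: about {cooling_mw:.0f} MW of heat removal vs. {power_mw:.0f} MW of power")
print(f"Average design density: about {avg_density_w_per_sqft:.0f} W per square foot of raised floor")

The cooling margin over the electrical load is consistent with the usual practice of sizing chiller plants above the IT load to cover mechanical overhead and redundancy.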

Jim Rogers, Director of Operations, National Center for Computational Sciences, Oak Ridge National Laboratory

Jim Rogers is director of operations for the National Center for Computational Sciences (NCCS) at Oak Ridge National Laboratory. The NCCS was founded in 1992 to advance the state of the art in high-performance computing by putting new generations of supercomputers into the hands of the scientists who can use them most productively. It is a managed activity of the Advanced Scientific Computing Research program of the Department of Energy Office of Science (DOE-SC) and is located at Oak Ridge National Laboratory. The NCCS was designated the National Leadership Computing Facility in 2004. The facility is currently home to a Cray XT4 and a Cray X1E, with near-term plans to introduce a 1 PF Cray XT5 in 2008 to further extend the computational resources available to researchers. ORNL is actively executing facility modifications to accommodate new systems and is designing a new 100,000-square-foot facility expected to be completed in 2010. Rogers has 20 years of experience in high-performance computing (HPC) and has provided strategic planning, technology insertion, and integration support for multiple computing centers, including the U.S. Army Corps of Engineers Engineer Research and Development Center, the Aeronautical Systems Center, NASA, the Defense Intelligence Agency, and the Alabama Supercomputer Center. He currently has primary responsibility for managing the operations of the NCCS systems at Oak Ridge National Laboratory.


Future High Performance Computing Systems and the Supporting Facilities Infrastructure

Ed Seminaro, IBM Fellow

Abstract: As the insatiable demand for computing power to solve the most difficult problems in society continues to grow, so do the complexity and power consumption of the systems deployed in the high-performance computing arena. To build more powerful systems that move single-system capacity from teraflops to petaflops and enable us to approach an exaflop, further integration is required, to the point of eradicating the component-level thinking used to construct these systems today. This talk describes, at a high level, directions for collapsing the processing, memory, networking, and storage structure typically used in supercomputing today into a more integrated, power-efficient structure, along with a considerable portion of the infrastructure usually required in the facility to provide power and cooling to these subsystems. The overall energy efficiency that can be obtained at the data center level is discussed, along with a view of the contribution of each of the key elements of power dissipation.
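
The abstract's closing point about the contribution of each element of power dissipation can be pictured with a simple PUE-style breakdown. The Python sketch below uses entirely hypothetical numbers (not figures from the talk) just to show how the individual overheads roll up into data-center-level efficiency.

# Hypothetical breakdown of data-center power dissipation (numbers are assumptions,
# not figures from the talk). PUE = total facility power / IT equipment power.

it_load_mw = 10.0                       # power delivered to the computing hardware
overheads_mw = {                        # assumed non-IT contributions
    "cooling plant and air movement": 3.0,
    "power conversion and distribution losses": 1.0,
    "lighting and miscellaneous": 0.2,
}

total_mw = it_load_mw + sum(overheads_mw.values())
pue = total_mw / it_load_mw

print(f"Total facility load: {total_mw:.1f} MW, PUE = {pue:.2f}")
print(f"  IT equipment: {it_load_mw:.1f} MW ({it_load_mw / total_mw:.0%} of total)")
for name, mw in overheads_mw.items():
    print(f"  {name}: {mw:.1f} MW ({mw / total_mw:.0%} of total)")

In this framing, the integration the talk describes amounts to shrinking the overhead terms, moving the data center's PUE closer to 1.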

Ed Seminaro, IBM Fellow

Ed Seminaro is the chief hardware system architect for IBM's Power processor-based server and storage product line and an IBM Fellow in IBM's Server and Technology Group. Ed and his team are responsible for establishing and executing the system design of IBM's pSeries and iSeries UNIX product family. Ed has a B.S. in electrical engineering from Rutgers University in New Jersey and has completed graduate work in electrical engineering at Syracuse University in New York. He has been involved in the design, development, and manufacturing of traditional mainframe and UNIX servers for over 26 years, in both technical and management roles. He has deep technical expertise in computer engineering, circuit design, packaging, cooling, and system design, and in the past has been viewed as an industry expert in high-frequency power conversion. He is credited with many industry advances that have enabled IBM to maintain leadership in the server industry, especially in the areas of packaging density and power efficiency. Today he is regarded throughout the IBM Corporation as one of the key architects of future server and storage products.


 

