
NCSA to provide Ember as shared-memory resource for nation’s researchers


The National Center for Supercomputing Applications (NCSA) will soon deploy a new highly parallel shared-memory supercomputer called Ember. With a peak performance of 16 teraflops, Ember doubles the performance of its predecessor, the five-year-old Cobalt system.

Ember will be available to researchers through the National Science Foundation’s TeraGrid until that program concludes in March 2011 and then will be allocated through its successor, the eXtreme Digital program.

“There has been a clear increase in the demand for shared memory resources,” said TeraGrid Forum chairman John Towns, whose persistent infrastructure team at NCSA will deploy and support Ember. “Allocation requests for shared-memory systems have consistently exceeded the available resources—by as much as sevenfold in a recent review of requests—and have been followed by a series of results highlighted in TeraGrid publications. We know Ember will be an essential tool for science and engineering research.”

Ember will support applications that require a large-scale shared-memory architecture, especially in the fields of chemistry and solid and fluid mechanics. In computational chemistry, for example, large shared-memory nodes that can reliably handle long-running ab initio calculations enable material property predictions to be made more efficiently than with equivalent distributed-memory, semi-direct algorithms.

A Georgia Tech team that has been using Cobalt to study the transport of electrons through graphene, a material that shows potential as a replacement for some semiconductors, is pleased that a new, more powerful shared-memory (SMP) resource will soon be available.

“With NCSA’s help, we’ve reduced the memory requirements of the package we use tremendously, but we still need the large memory pool that a shared memory computer offers,” said postdoctoral fellow Salvador Barraza-Lopez. “Ember will let us get things done. It means we can get relevant results without overinvesting our limited time in further programming parts of the code for a distributed memory machine.”

Because shared-memory programming is generally easier than distributed-memory (message-passing) programming, Ember will also provide a valuable entry platform for researchers who are new to high-performance computing.
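To illustrate what that ease looks like in practice, here is a minimal, hypothetical sketch of shared-memory parallelism using OpenMP in C. It is not drawn from NCSA documentation or any Ember application; it simply shows how little code separates a serial loop from a parallel one when all threads share a single address space:

```c
/*
 * Minimal sketch of shared-memory parallelism with OpenMP.
 * Illustrative only, not code from any Ember application: every thread
 * works directly on the same arrays in one shared address space, so no
 * explicit data partitioning or message passing is needed.
 */
#include <stdio.h>
#include <omp.h>

#define N 1000000

static double a[N], b[N];

int main(void) {
    double sum = 0.0;

    /* Fill the input array serially. */
    for (int i = 0; i < N; i++)
        a[i] = (double)i;

    /* A single directive spreads the loop over the available cores;
       the reduction clause safely combines each thread's partial sum. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        b[i] = 2.0 * a[i];
        sum += b[i];
    }

    printf("max threads: %d, sum = %.1f\n", omp_get_max_threads(), sum);
    return 0;
}
```

Compiled with an OpenMP-aware compiler (for example, cc -fopenmp), the same source runs serially or in parallel. A distributed-memory (MPI) version of even this small loop would have to divide the arrays among processes and gather the partial sums explicitly, which is the kind of additional programming effort the paragraphs above allude to.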

Ember will be composed of SGI Altix® UV systems with a total of 1,536 processor cores and 8 TB of memory. The system will have 170 TB of storage with 13.5 GB/s I/O bandwidth. Ember will be configured to run applications with moderate to high levels of parallelism (16-384 processors).
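For a rough sense of scale (our own arithmetic, based only on the figures above): 8 TB of memory spread over 1,536 cores works out to 8,192 GB / 1,536 ≈ 5.3 GB per core, and because the memory within each system is shared, a single application can address well beyond that per-core share.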

For more information about applying for access to Ember and other TeraGrid resources, see https://www.teragrid.org/web/user-support/allocations.

