Nano is now

by Trish Barker

Soon our devices will depend on transistors that are a few atoms wide. A tool developed by the Nanoelectronic Modeling Group will help design them.

If you think nano-devices are the stuff of science fiction or the distant future, think again. “If you have an iPhone or other smartphone, you already have nanotechnology in your pocket,” says Gerhard Klimeck, director of the Network for Computational Nanotechnology and professor of electrical and computer engineering at Purdue University.

That’s because the quest for more and more powerful electronic devices—smarter smartphones, faster computers—has driven the size of transistors smaller and smaller, until their scale can now be measured in atoms. When Intel introduced its 22 nanometer transistor in 2012, the company helpfully explained that “more than 6 million 22 nm tri-gate transistors could fit in the period at the end of this sentence.” The thin silicon “fin” that forms the body of the transistor is 8 nm thick—roughly 64 atoms.

Intel is rumored to be on the verge of a 14 nm transistor, and it won’t be long before transistors break the 10 nm barrier. “That’s when the existence of atoms becomes really important and quantum mechanics becomes really important,” says Klimeck. His Nanoelectronic Modeling Group at Purdue is building a tool to understand and design these nanoscale devices.

Klimeck says his group’s three primary goals are: to build a tool that is useful to the research and development community; to use that tool to explore scientific questions about future transistor technologies; and to be at the forefront of high-performance computing. With a Petascale Computing Resource Allocation award from the National Science Foundation, the group is using Blue Waters to address all three goals.

Scaling and accelerating NEMO5

The group’s NEMO5 software—which is used by major companies including Intel, GLOBALFOUNDRIES, and Samsung—builds on several previous nanoelectronic modeling codes dating to 1993: NEMO1D, NEMO3D, and OMEN. During the Blue Waters “friendly user” period before the supercomputer went into full production operations in March 2013, Klimeck’s team worked with NCSA staff to port the software and its associated libraries to the system.

“We have a lot of external libraries that we leverage,” says Jim Fonseca, an iNEMO research scientist. “There are good things about that and bad things, and one of the bad things is getting them all to work nicely together.”

NCSA senior research programmers Ryan Mokos, Tom Cortese, and Victor Anisimov helped the group troubleshoot their library issues, updating many of the libraries and dealing with development code that was still in flux.

The next challenge: Taking full advantage of the GPUs offered by Blue Waters.

“The main reason we’re interested in Blue Waters is because of the GPU capability,” Fonseca says. The group hopes that by harnessing GPUs they can reduce the amount of time spent on certain key numerical algorithms (such as calculating eigenvalues), thereby accelerating the performance of NEMO5 and making larger problems more manageable.

When the team ran into difficulties with the first GPU option they tried to implement, Mokos helped with an initial build of the MAGMA linear algebra library, which targets NVIDIA GPUs, so the team could try it out.

“We’ve gotten two of three main numerical algorithms ported,” Fonseca says—two types of eigenvalue solvers. Replacing one of these CPU routines with the GPU routine resulted in a 2x speedup.
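NEMO5’s solver internals aren’t shown in the article, so the following is only an illustrative sketch of the kind of dense Hermitian eigensolve being ported: on the CPU the call goes through LAPACK (here via NumPy), and a GPU build would route the same kind of call to a MAGMA- or cuSOLVER-backed routine instead.

```python
import numpy as np

# Illustrative sketch only -- not NEMO5 code. Build a random Hermitian
# "Hamiltonian-like" matrix, the kind of operator an atomistic device
# model produces, and diagonalize it.
rng = np.random.default_rng(0)
n = 500
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
h = (a + a.conj().T) / 2  # force Hermitian symmetry

# CPU path (LAPACK via NumPy). A GPU port swaps this call for a
# MAGMA/cuSOLVER eigensolver with an analogous interface.
eigenvalues, eigenvectors = np.linalg.eigh(h)

# Hermitian matrices have real eigenvalues, returned in ascending order.
assert np.all(np.diff(eigenvalues) >= 0)
```

The appeal of this structure is that the surrounding code does not change: only the eigensolver backend is replaced, which is why a 2x speedup from the GPU routine translates directly into faster end-to-end runs.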

Charting the semiconductor roadmap

As NEMO5 scales to more and more processors on Blue Waters and the GPU acceleration work continues, Klimeck envisions the many research doors opened by these performance improvements.

“What we’re building is an engineering tool that will be used in the understanding and design of devices that are at the end of Moore’s Law,” he says. “There are no computer-aided design tools that can model these devices in an atomistic sense. All the standard semiconductor device design tools that are out there assume that matter is smooth and continuous and ignore the existence of atoms. But you need to know about the atoms when devices are determined by atomic dimensions.”

Things are just fundamentally different at the nanoscale. “Quantum mechanical behaviors become important,” Klimeck says. “The whole system needs to be treated with a wave approach, considering electron waves instead of thinking of electrons like billiard balls or as a continuum gas.”
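The wave treatment Klimeck describes can be illustrated with a toy model far simpler than NEMO5’s atomistic machinery: a one-dimensional nearest-neighbor tight-binding chain. Its electron eigenstates are standing waves along the chain, with a cosine-shaped band of allowed energies. (The chain length, on-site energy, and hopping value below are arbitrary choices for illustration, not parameters from the article.)

```python
import numpy as np

# Toy wave picture (not NEMO5's model): a 1-D chain of N atomic sites
# with on-site energy eps and nearest-neighbor hopping t. The electron
# is a wave spread over the sites, not a billiard ball on one of them.
N, eps, t = 20, 0.0, 1.0  # sites, on-site energy, hopping (arbitrary units)
H = eps * np.eye(N) - t * (np.eye(N, k=1) + np.eye(N, k=-1))

# Allowed electron energies are the eigenvalues of the Hamiltonian.
energies = np.linalg.eigvalsh(H)

# Analytic standing-wave result for an open chain:
#   E_n = eps - 2 t cos(n pi / (N + 1)),  n = 1..N
n = np.arange(1, N + 1)
analytic = eps - 2 * t * np.cos(n * np.pi / (N + 1))
assert np.allclose(energies, np.sort(analytic))
```

Each eigenvector of `H` is a sinusoidal standing wave over the atoms, which is exactly the qualitative shift Klimeck describes: device behavior comes from wave interference and confinement, not from a smooth continuum of charge.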

PhD students Mehdi Salmani and SungGeun Kim are already using OMEN on Blue Waters to model devices for the International Technology Roadmap for Semiconductors (ITRS), testing whether the ever-smaller devices projected for the next 15 years are physically feasible and what impact quantum effects like scattering and confinement may have on performance. They are also exploring how to get the best performance at each “node” on the ITRS in order to keep the ITRS tables as closely aligned as possible with Moore’s Law. Their results will be included in the next ITRS, due in early 2014.

The ITRS simulations have scaled up to 96,000 cores. “Some of the features, like the scattering effect, we can’t capture on smaller computers,” Salmani says. “Without Blue Waters—forget about it!”

Another area of investigation: the performance impact of imperfections in these tiny devices.

“There will be fluctuations due to etch processes and how the devices are made,” says Klimeck. “So the question is: how do fluctuations at that atomic scale translate into the overall performance of these on/off switches?”

“If you have a device that is 10 nm long, and maybe 3 nm wide, a few atoms’ fluctuation is causing a fluctuation of 5 or 10 percent in the device width. As you scale down, the atomic resolution becomes more and more important,” adds Fonseca.
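Fonseca’s percentage is easy to check with back-of-envelope arithmetic. The spacing value below is an assumption of ours, roughly the silicon nearest-neighbor distance; it does not appear in the article:

```python
# Back-of-envelope check of the width-fluctuation estimate.
# Assumption (not from the article): Si nearest-neighbor distance ~0.235 nm.
atomic_spacing_nm = 0.235
device_width_nm = 3.0

# Edge roughness of about one atom changes the width by one spacing.
one_atom_fraction = atomic_spacing_nm / device_width_nm
print(f"one atom ~ {one_atom_fraction:.0%} of the width")  # roughly 8%
```

One atom out of a 3 nm width is on the order of 8 percent, squarely in the 5 to 10 percent range Fonseca quotes, and the fraction only grows as devices shrink further.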

The researchers and the companies that use their software might also want to explore making devices out of different materials, replacing some silicon-based devices with alternatives such as indium arsenide or indium antimonide, or with more exotic options such as graphene, carbon nanotubes, and topological insulators, to see how device performance changes with the material composition.

“The typical problem we need to handle is maybe 100,000 to a million atoms. And we’re dealing with a quantum statistical mechanics problem that is not at equilibrium,” says Klimeck. “Ten years ago people would have told me that isn’t solvable. You can’t get a computer that’s big enough.” Now that the petascale Blue Waters system is available, “We can chew up any CPU time you can throw at us.”
