Researchers report finer lines for microchips: Advance could lead to next-generation computer chips, solar cells

July 8, 2008
The tool -- called a nanoruler -- is used to make finer patterns of lines over larger areas than have been possible with other methods. Credit: Ralf Heilmann, MIT

MIT researchers have achieved a significant advance in nanoscale lithographic technology, used in the manufacture of computer chips and other electronic devices, to make finer patterns of lines over larger areas than have been possible with other methods.

Their new technique could pave the way for next-generation computer memory and integrated-circuit chips, as well as advanced solar cells and other devices.

The team has created lines about 25 nanometers (billionths of a meter) wide separated by 25 nm spaces. For comparison, the most advanced commercially available computer chips today have a minimum feature size of 65 nm. Intel recently announced that it will start manufacturing at the 32 nm minimum line-width scale in 2009, and the industry roadmap calls for 25 nm features in the 2013-2015 time frame.

The MIT technique could also be economically attractive because it works without the chemically amplified resists, immersion lithography techniques and expensive lithography tools that are widely considered essential to work at this scale with optical lithography. Periodic patterns at the nanoscale, while having many important scientific and commercial applications, are notoriously difficult to produce with low cost and high yield. The new method could make possible the commercialization of many new nanotechnology inventions that have languished in laboratories due to the lack of a viable manufacturing method.

The MIT team includes Mark Schattenburg and Ralf Heilmann of the MIT Kavli Institute for Astrophysics and Space Research and graduate students Chih-Hao Chang and Yong Zhao of the Department of Mechanical Engineering. Their results have been accepted for publication in the journal Optics Letters and were recently presented at the 52nd International Conference on Electron, Ion and Photon Beam Technology and Nanofabrication in Portland, Ore.

Schattenburg and colleagues used a technique known as interference lithography (IL) to generate the patterns, but they did so using a tool called the nanoruler—built by MIT graduate students—that is designed to perform a particularly high precision variant of IL called scanning-beam interference lithography, or SBIL. This recently developed technique uses 100 MHz sound waves, controlled by custom high-speed electronics, to diffract and frequency-shift the laser light, resulting in rapid patterning of large areas with unprecedented control over feature geometry.
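The key idea behind scanning-beam operation is that frequency-shifting the two beams by slightly different amounts makes the interference fringes travel at a controlled velocity, which can be locked to the motion of the scanning stage. A minimal sketch of that fringe-velocity relation (the numbers are illustrative assumptions, not the nanoruler's actual parameters):

```python
# Hedged sketch of the SBIL fringe-locking relation (toy numbers, not
# the nanoruler's actual control loop): if the two interfering beams
# are frequency-shifted by acousto-optic modulators whose drive
# frequencies differ by delta_f, the fringe pattern travels at
#     v = delta_f * period
# Matching v to the stage velocity lets the grating be written
# continuously across a large substrate.

period_nm = 200.0      # assumed fringe period of the standing-wave pattern
delta_f_hz = 1.0e4     # assumed 10 kHz offset between the two AOM drives

v_nm_per_s = delta_f_hz * period_nm
print(f"fringe velocity: {v_nm_per_s * 1e-6:.1f} mm/s")  # 2.0 mm/s
```

Even a small frequency offset moves the fringes at millimeters per second, which is why the technique can pattern large areas rapidly while the phase-detection electronics keep the fringes registered to the stage.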

While IL has been around for a long time, the SBIL technique has enabled, for the first time, precise and repeatable pattern registration and overlay over large areas, thanks to a new high-precision phase detection algorithm developed by Zhao and a novel image-reversal process developed by Chang.

According to Schattenburg, "What we're finding is that control of the lithographic imaging process is no longer the limiting step. Material issues such as line sidewall roughness are now a major barrier to still-finer length scales. However, there are several new technologies on the horizon that have the potential for alleviating these problems. These results demonstrate that there's still a lot of room left for scale shrinkage in optical lithography. We don't see any insurmountable roadblocks just yet."

Source: MIT, by David Chandler


1.5 / 5 (2) Jul 08, 2008
Moore's Law still alive
1.3 / 5 (3) Jul 09, 2008
I love Moore's Law but this is stupid since there are 45nm chips already in production and widespread use. Was this written in 2006?!
5 / 5 (2) Jul 09, 2008

It's a quadruple exposure process using 351 nm wavelength but the overlay is nanometer-scale.
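A back-of-the-envelope check of how a 351 nm source can reach a 50 nm period: two-beam interference writes a grating of period p = λ/(2 sin θ), and spatial-frequency multiplication via multiple exposures divides that period further. The base period and crossing angle below are assumptions for illustration, not figures from the paper:

```python
import math

wavelength_nm = 351.0     # UV laser line named in the comment above
base_period_nm = 200.0    # assumed single-exposure grating period

# Half-angle each beam makes with the surface normal for that period,
# from p = lambda / (2 * sin(theta)):
theta_deg = math.degrees(math.asin(wavelength_nm / (2 * base_period_nm)))
print(f"required half-angle: {theta_deg:.1f} deg")   # ~61.3 deg

# Quadruple exposure (with image reversal) multiplies the spatial
# frequency by 4, i.e. divides the period by 4:
final_period_nm = base_period_nm / 4
print(f"final period: {final_period_nm:.0f} nm")     # 50 nm period = 25 nm lines + 25 nm spaces
```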
5 / 5 (2) Jul 09, 2008
It's not the patterning resolution that affects ultimate device sizes. Sure, Intel and others are doing neat things getting down to 32nm with High-K dielectrics and their PR tricks regarding them. Problems really start arising below the 25nm node due to atomic scaling.

Over 10 years ago (15 actually), it wasn't unusual to get 25nm resolution patterns. This was done using a tool created at Bell Labs called SCALPEL. It stands for Scattering with Angular Limitation Projection Electron-beam Lithography. These guys at MIT are using a different technique but it's not like 25nm processing is all that new and certainly not likely to lead to the next great chip development. The equipment was abandoned after Ma Bell was broken into the Baby Bells and Lucent took over Bell Labs. Lucent dropped the project because they "aren't in the semiconductor fabrication business." Well, neither was Bell Labs but their research is responsible for much of what we use today.

http://people.vir...h8t/SEAS ECE People Lloyd R_ Harriott_files/JVB scalpel suboptical.pdf

The real problems with scaling occur due to atomic and stochastic issues. There are certainly short channel effects and random telegraph signals to think about. One of the biggest problems is that of doping a channel to make it N or P type. This is typically done with high-energy ion implantation which is a random process. Random processes work fine at the macro scale but when we 'zoom' down to the nano scale, the disorder becomes apparent. For example, let's say you need 1 in 10^3 dopant atoms to make a channel appear N-type. What happens when the channel only has 1000 atoms? Where is that atom and how does it affect the channel? What happened to the threshold voltage and where does current flow now? These are some of the real issues facing semiconductor designers. I'm not saying their research isn't valid; it's just that the resolution has been around for over a decade and isn't suddenly going to enable a new tier of technology.
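The 1-in-10^3 example above can be made concrete with a toy Monte Carlo sketch (an illustration of the counting statistics only, not a device simulation; all numbers are the hypothetical ones from the comment):

```python
import random

# Toy model of random dopant fluctuation: each of 1000 lattice sites in
# a tiny channel independently hosts a dopant with probability 1e-3.
# The *average* channel then holds one dopant, but the actual count is
# approximately Poisson-distributed, so a large fraction of channels
# hold none at all -- and their threshold voltages scatter accordingly.

random.seed(42)
n_atoms, p_dopant, trials = 1000, 1e-3, 5000

counts = [sum(random.random() < p_dopant for _ in range(n_atoms))
          for _ in range(trials)]

mean = sum(counts) / trials
zero_frac = counts.count(0) / trials
print(f"mean dopants per channel: {mean:.2f}")       # ~1.0
print(f"channels with NO dopant:  {zero_frac:.0%}")  # ~37%, close to e^-1
```

The ~37% figure is just (1 - 10^-3)^1000 ≈ e^-1: at this scale the discreteness of matter, not lithographic resolution, sets the device-to-device variability.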
