Researchers report finer lines for microchips: Advance could lead to next-generation computer chips, solar cells

Jul 08, 2008
The tool, called a nanoruler, is used to make finer patterns of lines over larger areas than have been possible with other methods. Credit: Ralf Heilmann, MIT

MIT researchers have achieved a significant advance in nanoscale lithography, the technology used to manufacture computer chips and other electronic devices, allowing them to make finer patterns of lines over larger areas than have been possible with other methods.

Their new technique could pave the way for next-generation computer memory and integrated-circuit chips, as well as advanced solar cells and other devices.

The team has created lines about 25 nanometers (billionths of a meter) wide separated by 25 nm spaces. For comparison, the most advanced commercially available computer chips today have a minimum feature size of 65 nm. Intel recently announced that it will start manufacturing at the 32 nm minimum line-width scale in 2009, and the industry roadmap calls for 25 nm features in the 2013-2015 time frame.

The MIT technique could also be economically attractive because it works without the chemically amplified resists, immersion lithography techniques and expensive lithography tools that are widely considered essential to work at this scale with optical lithography. Periodic patterns at the nanoscale, while having many important scientific and commercial applications, are notoriously difficult to produce with low cost and high yield. The new method could make possible the commercialization of many new nanotechnology inventions that have languished in laboratories due to the lack of a viable manufacturing method.

The MIT team includes Mark Schattenburg and Ralf Heilmann of the MIT Kavli Institute of Astrophysics and Space Research and graduate students Chih-Hao Chang and Yong Zhao of the Department of Mechanical Engineering. Their results have been accepted for publication in the journal Optics Letters and were recently presented at the 52nd International Conference on Electron, Ion and Photon Beam Technology and Nanofabrication in Portland, Ore.

Schattenburg and colleagues used a technique known as interference lithography (IL) to generate the patterns, but they did so using a tool called the nanoruler—built by MIT graduate students—that is designed to perform a particularly high-precision variant of IL called scanning-beam interference lithography, or SBIL. This recently developed technique uses 100 MHz sound waves, controlled by custom high-speed electronics, to diffract and frequency-shift the laser light, resulting in rapid patterning of large areas with unprecedented control over feature geometry.
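In interference lithography, the grating pattern comes from the standing wave formed where two coherent beams overlap: the fringe period is Λ = λ/(2 sin θ), where θ is each beam's half-angle from the surface normal. A minimal sketch of that relationship, using illustrative numbers (the 351 nm Ar-ion UV line is a common IL source; the specific angles are not from the MIT paper):

```python
import math

def fringe_period(wavelength_nm, half_angle_deg):
    """Period of the standing wave formed by two plane waves
    interfering at +/- half_angle from the surface normal:
    Lambda = lambda / (2 sin(theta))."""
    return wavelength_nm / (2 * math.sin(math.radians(half_angle_deg)))

wl = 351.1  # nm, Ar-ion UV line commonly used in IL (illustrative)
for theta in (30, 60, 90):
    print(f"half-angle {theta:2d} deg -> period {fringe_period(wl, theta):6.1f} nm")
```

Even at the grazing limit (θ = 90°) the period bottoms out at λ/2 ≈ 175.6 nm, which is why reaching a 50 nm period (25 nm lines and spaces) with 351 nm light requires multiple-exposure tricks rather than a single interference pattern.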

While IL has been around for a long time, the SBIL technique has enabled, for the first time, precise and repeatable pattern registration and overlay over large areas, thanks to a new high-precision phase detection algorithm developed by Zhao and a novel image reversal process developed by Chang.

According to Schattenburg, "What we're finding is that control of the lithographic imaging process is no longer the limiting step. Material issues such as line sidewall roughness are now a major barrier to still-finer length scales. However, there are several new technologies on the horizon that have the potential for alleviating these problems. These results demonstrate that there's still a lot of room left for scale shrinkage in optical lithography. We don't see any insurmountable roadblocks just yet."

Source: MIT, by David Chandler

User comments (4)

ShadowRam
1.5 / 5 (2) Jul 08, 2008
Moore's Law still alive
jburchel
1.3 / 5 (3) Jul 09, 2008
I love Moore's Law but this is stupid since there are 45nm chips already in production and widespread use. Was this written in 2006?!
guiding_light
5 / 5 (2) Jul 09, 2008
http://ol.osa.org...pdf?da=1&id=95394&seq=0&CFID=565415&CFTOKEN=56045582

It's a quadruple exposure process using 351 nm wavelength but the overlay is nanometer-scale.
Agisman
5 / 5 (2) Jul 09, 2008
It's not the patterning resolution that affects ultimate device sizes. Sure, Intel and others are doing neat things getting down to 32nm with High-K dielectrics and their PR tricks regarding them. Problems really start arising below the 25nm node due to atomic scaling.

Over 10 years ago (15 actually), it wasn't unusual to get 25nm resolution patterns. This was done using a tool created at Bell Labs called SCALPEL. It stands for Scattering with Angular Limitation Projection Electron-beam Lithography. These guys at MIT are using a different technique but it's not like 25nm processing is all that new and certainly not likely to lead to the next great chip development. The equipment was abandoned after Ma Bell was broken into the Baby Bells and Lucent took over Bell Labs. Lucent dropped the project because they "aren't in the semiconductor fabrication business." Well, neither was Bell Labs but their research is responsible for much of what we use today.

http://people.vir...h8t/SEAS ECE People Lloyd R_ Harriott_files/JVB scalpel suboptical.pdf

The real problems with scaling occur due to atomic and stochastic issues. There are certainly short channel effects and random telegraph signals to think about. One of the biggest problems is that of doping a channel to make it N or P type. This is typically done with high-energy ion implantation which is a random process. Random processes work fine at the macro scale but when we 'zoom' down to the nano scale, the disorder becomes apparent. For example, let's say you need 1 in 10^3 dopant atoms to make a channel appear N-type. What happens when the channel only has 1000 atoms? Where is that atom and how does it affect the channel? What happened to the threshold voltage and where does current flow now? These are some of the real issues facing semiconductor designers. I'm not saying their research isn't valid; it's just that the resolution has been around for over a decade and isn't suddenly going to enable a new tier of technology.
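The dopant-counting argument above can be made concrete with a quick Monte Carlo sketch (purely illustrative numbers, not from any cited device): model each channel atom as independently becoming a dopant with probability 1 in 10^3, and compare the spread for a large channel versus a 1000-atom one.

```python
import random

random.seed(0)

def dopant_count(n_atoms, dopant_fraction):
    """Number of dopant atoms in a channel of n_atoms when each atom
    is independently a dopant with probability dopant_fraction
    (a simple binomial model of random ion implantation)."""
    return sum(random.random() < dopant_fraction for _ in range(n_atoms))

def spread(n_atoms, p=1e-3, trials=200):
    """Mean and standard deviation of the dopant count over many
    simulated channels of the same nominal size and doping."""
    counts = [dopant_count(n_atoms, p) for _ in range(trials)]
    mean = sum(counts) / trials
    std = (sum((c - mean) ** 2 for c in counts) / trials) ** 0.5
    return mean, std

# Large channel: ~50 dopants on average, modest relative spread.
# 1000-atom channel: ~1 dopant on average, so the spread is as large
# as the mean -- many devices end up with 0 dopants or 2 or more.
for n in (50_000, 1000):
    mean, std = spread(n)
    print(f"{n:>6}-atom channel: mean {mean:6.2f} dopants, std {std:5.2f}")
```

The counts are approximately Poisson, so the relative fluctuation scales as 1/sqrt(mean): harmless at thousands of dopants, dominant when the expected count is near one, which is exactly the threshold-voltage scatter the comment describes.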