Smallest hard disk to date writes information atom by atom

July 18, 2016, Delft University of Technology
STM scan (96 nm wide, 126 nm tall) of the 1 kB memory, encoding a section of 'On the Origin of Species' by Charles Darwin (without text markup). Credit: Ottelab/TUDelft

Every day, modern society creates more than a billion gigabytes of new data. To store all this data, it is increasingly important that each single bit occupies as little space as possible. A team of scientists at the Kavli Institute of Nanoscience at Delft University of Technology has now reduced storage to the ultimate limit: they stored one kilobyte (8,000 bits), representing each bit by the position of a single chlorine atom. "In theory, this storage density would allow all books ever created by humans to be written on a single postage stamp," says lead scientist Sander Otte. The team reached a storage density of 500 terabits per square inch (Tbpsi), 500 times better than the best commercial hard disk currently available.
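A back-of-envelope check shows how the quoted density follows from the scan dimensions in the caption. (The figure comes out somewhat below 500 Tbpsi because the full scan area also includes the markers and spacing between blocks, not just the bit lattice.)

```python
# Back-of-envelope storage density from the caption's scan dimensions
# (96 nm x 126 nm holding 1 kB, i.e. the article's "8,000 bits").
NM_PER_INCH = 2.54e7              # 1 inch = 2.54 cm = 2.54e7 nm

bits = 8000
area_nm2 = 96 * 126               # STM scan area in nm^2

density_bits_per_in2 = bits / area_nm2 * NM_PER_INCH**2
density_tbpsi = density_bits_per_in2 / 1e12
print(f"{density_tbpsi:.0f} Tb per square inch")  # ~427 Tbpsi
```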

His team reports on this development in Nature Nanotechnology on Monday July 18.

Feynman

In 1959, physicist Richard Feynman challenged his colleagues to engineer the world at the smallest possible scale. In his famous lecture There's Plenty of Room at the Bottom, he speculated that if we had a platform allowing us to arrange individual atoms in an exact orderly pattern, it would be possible to store one piece of information per atom. To honor the visionary Feynman, Otte and his team coded a section of Feynman's lecture on an area 100 nanometers wide.

Sliding puzzle

The team used a scanning tunneling microscope (STM), which uses a sharp needle to probe the atoms of a surface one by one. Scientists can also use these probes to push the atoms around. "You could compare it to a sliding puzzle," Otte explains. "Every bit consists of two positions on a copper surface, and one chlorine atom that we can slide back and forth between these two positions. If the chlorine atom is in the top position, there is a hole beneath it—we call this a one. If the hole is in the top position and the chlorine atom is on the bottom, then the bit is a zero." Because the chlorine atoms are surrounded by other chlorine atoms, except near the holes, they keep each other in place. That is why this method with holes is much more stable than methods with loose atoms, and more suitable for data storage.
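The sliding-puzzle encoding can be captured in a toy model (an illustration only, not the actual STM control code): each bit is a pair of lattice sites shared by one chlorine atom and one vacancy, with the atom on top meaning one and the atom on the bottom meaning zero, following the article's convention.

```python
# Toy model of the sliding-puzzle bit: one chlorine atom ('Cl') and
# one vacancy ('_') share a pair of sites. Atom on top = 1, atom on
# bottom = 0, per the convention described in the article.

def write_bit(bit):
    """Return (top_site, bottom_site) representing the given bit."""
    return ("Cl", "_") if bit else ("_", "Cl")

def read_bit(pair):
    """Recover the bit from the atom/vacancy pair."""
    top, _bottom = pair
    return 1 if top == "Cl" else 0

def flip(pair):
    """'Slide' the atom to the other site, toggling the bit."""
    top, bottom = pair
    return (bottom, top)

byte = [write_bit(b) for b in (0, 1, 1, 0, 0, 1, 0, 1)]
print([read_bit(p) for p in byte])   # [0, 1, 1, 0, 0, 1, 0, 1]
```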

STM scan (96 nm wide, 126 nm tall) of the 1 kB memory, encoding a section of 'There's Plenty of Room at the Bottom' by Richard Feynman (with text markup). Credit: Ottelab/TUDelft

Codes

The researchers from Delft organized their memory in blocks of eight bytes (64 bits). Each block carries a marker, made of the same type of holes as the raster of chlorine atoms. These markers work like miniature versions of the pixelated square barcodes (QR codes) often used on airplane and concert tickets: they carry information about the precise location of the block on the copper layer. A marker also indicates whether a block is damaged, for instance by a local contaminant or a flaw in the surface. This allows the memory to be scaled up easily to very large sizes, even if the copper surface is not entirely perfect.

Explanation of the bit logic and the atomic markers. Credit: Ottelab/TUDelft

Datacenters

The new approach offers excellent prospects in terms of stability and scalability. Still, this type of memory should not be expected in datacenters soon. Otte: "In its current form, the memory can operate only in very clean vacuum conditions and at liquid nitrogen temperature (77 K), so the actual storage of data on an atomic scale is still some way off. But through this achievement we have certainly come a big step closer."

An animated video explaining the mechanism of atomic data storage. Credit: Delft University of Technology


More information: A kilobyte rewritable atomic memory, Nature Nanotechnology, dx.doi.org/10.1038/nnano.2016.131



9 comments


Eikka
5 / 5 (1) Jul 19, 2016
What sort of compounds would be stable enough at room temperature not to diffuse around from the thermal motion and ruin the data?

It's very hard to keep anything perfectly still at the atomic scale, which is apparent in metal alloys: they "age" meaning the distribution of atoms in the crystal grains changes over time. A small and persistent external force on the lattice of atoms will cause "dislocations" to spread around and travel through the medium, so the atomic scale storage medium will look like a map criss-crossed with earthquake fault lines.

JDnHuntsvilleAL
5 / 5 (1) Jul 23, 2016
[Every day, modern society creates more than a billion gigabytes of new data.]
~
I have to disagree on that. Think about what makes up a huge portion of that "new" data -- news reports. Each news report is being counted, but frankly most news reports are duplicates of other news reports, so they are counting the same "new data" multiple times. Same with pictures, especially of celebrities. One picture gets counted countless times.
Da Schneib
1 / 5 (1) Jul 23, 2016
Think about what makes up a huge portion of that "new" data -- news reports. Each news report is being counted, but frankly most news reports are duplicates of other news reports, so they are counting the same "new data" multiple times. Same with pictures, especially of celebrities. One picture gets counted countless times.
From a data storage standpoint this is immaterial. The context of the data is more important than the content, from a storage standpoint.
Da Schneib
1 / 5 (1) Jul 23, 2016
Let me give a reductio ad absurdum.

The data is composed entirely of zeroes and ones. The zeroes and ones are repeated and none of them is "new." There is, therefore, no "new" data.

Absurd, right?

Precisely.
Eikka
5 / 5 (1) Jul 24, 2016
Absurd, right?

Precisely.


On the contrary. Repeating patterns of zeroes and ones can be compressed. If the data is already known to repeat, it can be simply indexed rather than stored as duplicate.

It's like saying "ditto"

Of course duplication does happen, because everyone chooses to copy instead of referring to the original data.
Da Schneib
1 / 5 (1) Jul 24, 2016
On the contrary. Repeating patterns of zeroes and ones can be compressed.
I fail to see how that has any bearing. You've made an intuitive leap you haven't shared with the rest of us.
Eikka
not rated yet Jul 27, 2016
On the contrary. Repeating patterns of zeroes and ones can be compressed.
I fail to see how that has any bearing. You've made an intuitive leap you haven't shared with the rest of us.


Well, in your words: "The zeroes and ones are repeated and none of them is "new." There is, therefore, no "new" data."

One bit out of context doesn't mean anything. Data is a plurality. When you say "no new data", you're saying there's a repeating string of bits.

If you have repeating zeroes or ones, you don't actually have to write each down. Just write down how many of them there are. Likewise, if there's a collection of zeroes and ones that are repeated and nothing new is introduced to the pattern, you simply note the pattern and write down "ditto n-times over".

All data compression is just about figuring out what the pattern is and then saying "and so-on..."
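The "ditto n-times" idea in the comment above is ordinary run-length encoding, which can be sketched in a few lines (a generic illustration, not something from the article):

```python
# Minimal run-length encoder: a repeating bit pattern is stored as
# (symbol, count) pairs instead of writing every repeat down.
from itertools import groupby

def rle(bits):
    """Compress a bit string into (bit, run_length) pairs."""
    return [(b, len(list(g))) for b, g in groupby(bits)]

def unrle(pairs):
    """Expand (bit, run_length) pairs back into the bit string."""
    return "".join(b * n for b, n in pairs)

s = "0000111101"
print(rle(s))   # [('0', 4), ('1', 4), ('0', 1), ('1', 1)]
```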
Eikka
not rated yet Jul 27, 2016
Compare, "datum" - "a single piece of information".

"Data" is the plural of "datum".

A plain one or zero is not even a "datum", because it lacks context. Only by assigning it an information value does it turn into a datum, such as "the lights are on", which has a binary truth value.

So the original reductio ad absurdum falls flat. You can't look at the bits individually in isolation in any meaningful way as "data" because you haven't assigned them any information value. Assuming you have, then saying they repeat and no new data is added simply means you've got a repeating string of bits, which is compressible.

From a data storage standpoint this is immaterial. The context of the data is more important than the content, from a storage standpoint.


From a storage standpoint, content is paramount, because you don't have to store any information that isn't actually there. You just say "it repeats".
