Stanford site advances science of turning 2-D images into 3-D models

Jan 23, 2008
A three-dimensional "fly around" image created from a two-dimensional image using an algorithm developed by Stanford computer scientists. Credit: Ashutosh Saxena

An artist might spend weeks fretting over questions of depth, scale and perspective in a landscape painting, but once it is done, what's left is a two-dimensional image with a fixed point of view. But the Make3d algorithm, developed by Stanford computer scientists, can take any two-dimensional image and create a three-dimensional "fly around" model of its content, giving viewers access to the scene's depth and a range of points of view.

"The algorithm uses a variety of visual cues that humans use for estimating the 3-D aspects of a scene," said Ashutosh Saxena, a doctoral student in computer science who developed the Make3d website with Andrew Ng, an assistant professor of computer science. "If we look at a grass field, we can see that the texture changes in a particular way as it becomes more distant."

The algorithm runs at make3d.stanford.edu.

The applications of extracting 3-D models from 2-D images, the researchers say, could range from enhancing pictures on online real estate sites to quickly creating environments for video games and improving the vision and dexterity of mobile robots as they navigate the spatial world.

Extracting 3-D information from still images is an emerging class of technology. In the past, some researchers have synthesized 3-D models by analyzing multiple images of a scene. Others, including Ng and Saxena in 2005, have developed algorithms that infer depth from single images by combining assumptions about what must be ground or sky with simple cues such as vertical lines in the image that represent walls or trees. But Make3d creates accurate and smooth models about twice as often as competing approaches, Ng said, by abandoning limiting assumptions in favor of a new, deeper analysis of each image and the powerful artificial intelligence technique "machine learning."

Restoring the third dimension

To "teach" the algorithm about depth, orientation and position in 2-D images, the researchers fed it still images of campus scenes along with 3-D data of the same scenes gathered with laser scanners. The algorithm correlated the two sets together, eventually gaining a good idea of the trends and patterns associated with being near or far. For example, it learned that abrupt changes along edges correlate well with one object occluding another, and it saw that things that are far away can be just a little hazier and more bluish than things that are close.
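The supervised setup the article describes, pairing image cues with laser-measured depths so a model can learn which cues predict distance, can be sketched in miniature. The sketch below is purely illustrative and uses synthetic data and a plain least-squares fit; Make3d's real model (a Markov random field over superpixels) is far richer, and the two features here (texture sharpness and blue "haze") are just the example cues the article mentions.

```python
import numpy as np

# Hypothetical training sketch: learn a linear map from two simple
# per-patch cues (texture gradient, blueness) to depth, mimicking the
# image-plus-laser-scan supervision described in the article.
rng = np.random.default_rng(0)

n = 500
texture_grad = rng.uniform(0.0, 1.0, n)   # sharper texture -> nearer
blueness = rng.uniform(0.0, 1.0, n)       # hazier/bluer -> farther
X = np.column_stack([texture_grad, blueness, np.ones(n)])  # bias column

# Synthetic "laser" depths consistent with those cues, plus sensor noise
true_w = np.array([-8.0, 12.0, 10.0])     # metres; chosen for the toy example
depth = X @ true_w + rng.normal(0.0, 0.5, n)

# Least-squares fit: the learned weights recover the depth trends,
# i.e. negative weight on texture (near) and positive on haze (far)
w, *_ = np.linalg.lstsq(X, depth, rcond=None)
print(w)  # approximately the true weights [-8, 12, 10]
```

The sign pattern of the recovered weights is the point: once trained, the model has internalized that sharp texture means "close" and bluish haze means "far", without either rule being hand-coded.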

To make these judgments, the algorithm breaks the image up into tiny planes called "superpixels": small regions of the image with very uniform color, brightness and other attributes. By looking at a superpixel in concert with its neighbors, analyzing changes such as gradations of texture, the algorithm makes a judgment about how far it is from the viewer and what its orientation in space is. Unlike some previous algorithms, the Stanford one can account for planes at any angle, not just horizontal or vertical. This allows it to create models for scenes that have planes at many orientations, such as the curved branches of trees or the slopes of mountains.
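The per-region reasoning described above can be illustrated with a deliberately crude stand-in: split a grayscale image into a grid of small patches (real superpixels are irregular, color-uniform regions, not grid cells) and compare each patch's texture variance, the kind of local statistic from which Make3d would infer depth and plane orientation. Everything below is a toy assumption, not the actual Make3d pipeline.

```python
import numpy as np

def patch_stats(img, patch=4):
    """Mean brightness and brightness variance for each patch-sized cell
    of a grayscale image; cells stand in for superpixels."""
    h, w = img.shape
    gh, gw = h // patch, w // patch
    blocks = img[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(gh, gw, -1)
    return blocks.mean(axis=2), blocks.var(axis=2)

# Toy 16x16 "image": a smooth gradient (distant sky) over noisy
# texture (nearby grass)
rng = np.random.default_rng(1)
img = np.vstack([
    np.tile(np.linspace(0.7, 0.9, 16), (8, 1)),   # smooth upper half
    rng.uniform(0.0, 1.0, (8, 16)),               # textured lower half
])

means, variances = patch_stats(img)
# Textured (near) patches show far higher variance than smooth (far) ones
print(variances[:2].mean(), variances[2:].mean())
```

Comparing a patch's statistics with its neighbors' is what lets the real algorithm fit a tilted 3-D plane to each superpixel, rather than forcing every surface to be a wall or a floor.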

A paper on the algorithm by Ng, Saxena and a fellow student, Min Sun, won the best paper award at the 3-D recognition and reconstruction workshop at the International Conference on Computer Vision in Rio de Janeiro in October 2007.

On the Make3d website, the algorithm puts images uploaded by users into a processing queue and will send an e-mail when the model has been rendered. Users can then vote on whether the model looks good, and can see an alternative rendering and even tinker with the model to fix what might not have been rendered right the first time.

Photos can be uploaded directly or pulled into the site from the popular photo-sharing site Flickr.

Although the technology works better than any other has so far, Ng said, it is not perfect. The software is at its best with landscapes and scenery rather than close-ups of individual objects. Also, he and Saxena hope to improve it by introducing object recognition. The idea is that if the software can recognize a human form in a photo it can make more accurate distance judgments based on the size of the person in the photo.

For many panoramic scenes, there is still no substitute for being there. But when flat photos become 3-D, viewers can feel a little closer—or farther.

Source: By David Orenstein, Stanford University



User comments: 4


freemind
5 / 5 (1) Jan 23, 2008
I've been amazed by youtube videos showing how it works. Follow the link in the article.
nilbud
5 / 5 (2) Jan 23, 2008
Must resist urge to submit the last supper.
gopher65
not rated yet Jan 23, 2008
I uploaded a few and tested it out. Pretty cool, but, as they say on their site, it *is* still in development, and they have a long way to go. I think the best one I submitted was a picture of an Egyptian statue. That one worked ok:). This software freaks with photoshopped stuff though hehe.

This is an awesome idea and I'm glad someone is working on this technology:).
Ashibayai
not rated yet Jan 24, 2008
If they take this and add the ability to compile data from multiple photos, we'll have an extremely impressive system for generating 3D models in everyday photos.