Stanford site advances science of turning 2-D images into 3-D models

Jan 23, 2008
A three-dimensional 'fly around' image, above, was created from a two-dimensional image using an algorithm developed by Stanford computer scientists. Credit: Ashutosh Saxena

An artist might spend weeks fretting over questions of depth, scale and perspective in a landscape painting, but once it is done, what's left is a two-dimensional image with a fixed point of view. But the Make3d algorithm, developed by Stanford computer scientists, can take any two-dimensional image and create a three-dimensional "fly around" model of its content, giving viewers access to the scene's depth and a range of points of view.

"The algorithm uses a variety of visual cues that humans use for estimating the 3-D aspects of a scene," said Ashutosh Saxena, a doctoral student in computer science who developed the Make3d website with Andrew Ng, an assistant professor of computer science. "If we look at a grass field, we can see that the texture changes in a particular way as it becomes more distant."

The algorithm runs at make3d.stanford.edu.

The applications of extracting 3-D models from 2-D images, the researchers say, could range from enhanced pictures for online real estate sites to quickly creating environments for video games and improving the vision and dexterity of mobile robots as they navigate through the spatial world.

Extracting 3-D information from still images is an emerging class of technology. In the past, some researchers have synthesized 3-D models by analyzing multiple images of a scene. Others, including Ng and Saxena in 2005, have developed algorithms that infer depth from single images by combining assumptions about what must be ground or sky with simple cues such as vertical lines in the image that represent walls or trees. But Make3d creates accurate and smooth models about twice as often as competing approaches, Ng said, by abandoning those limiting assumptions in favor of a new, deeper analysis of each image, powered by the artificial intelligence technique known as machine learning.

Restoring the third dimension

To "teach" the algorithm about depth, orientation and position in 2-D images, the researchers fed it still images of campus scenes along with 3-D data of the same scenes gathered with laser scanners. The algorithm correlated the two data sets, gradually learning the trends and patterns associated with being near or far. For example, it learned that abrupt changes along edges correlate well with one object occluding another, and that things far away tend to be slightly hazier and more bluish than things that are close.
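The training setup described above amounts to supervised learning: image-derived cues are paired with laser-measured depths, and a model is fitted to predict depth from the cues. The toy sketch below is not the actual Make3d code; the single "haziness" cue and all of the numbers are invented for illustration, and a plain least-squares line stands in for the far richer model the researchers trained:

```python
# Toy illustration of the supervised setup: pair an image-derived cue
# (here, a hypothetical "haziness" value) with laser-measured depths,
# then fit a model that predicts depth from the cue.

def fit_line(xs, ys):
    """Ordinary least-squares fit ys ~ a * xs + b; returns (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    return a, mean_y - a * mean_x

# Invented training pairs: (haziness cue, laser-scanned depth in meters).
haziness = [0.1, 0.2, 0.4, 0.5, 0.8]
depth_m = [2.0, 4.0, 8.0, 10.0, 16.0]

a, b = fit_line(haziness, depth_m)
print(round(a, 2), round(b, 2))  # learned slope and intercept
```

Once fitted, the model can guess a depth for any new image region from its cue value alone, which is the essence of what the trained algorithm does for every patch of a fresh photo.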

To make these judgments, the algorithm breaks the image into tiny planes called "superpixels": small regions of nearly uniform color, brightness and other attributes. By examining each superpixel in concert with its neighbors, analyzing changes such as gradations of texture, the algorithm judges how far the plane is from the viewer and how it is oriented in space. Unlike some previous algorithms, the Stanford one can account for planes at any angle, not just horizontal or vertical. This allows it to create models for scenes that have planes at many orientations, such as the curved branches of trees or the slopes of mountains.

A paper on the algorithm by Ng, Saxena and a fellow student, Min Sun, won the best paper award at the 3-D recognition and reconstruction workshop at the International Conference on Computer Vision in Rio de Janeiro in October 2007.

On the Make3d website, the algorithm puts images uploaded by users into a processing queue and will send an e-mail when the model has been rendered. Users can then vote on whether the model looks good, and can see an alternative rendering and even tinker with the model to fix what might not have been rendered right the first time.

Photos can be uploaded directly or pulled into the site from the popular photo-sharing site Flickr.

Although the technology works better than any other has so far, Ng said, it is not perfect. The software is at its best with landscapes and scenery rather than close-ups of individual objects. Also, he and Saxena hope to improve it by introducing object recognition. The idea is that if the software can recognize a human form in a photo it can make more accurate distance judgments based on the size of the person in the photo.

For many panoramic scenes, there is still no substitute for being there. But when flat photos become 3-D, viewers can feel a little closer—or farther.

Source: David Orenstein, Stanford University


User comments: 4

freemind
5 / 5 (1) Jan 23, 2008
I've been amazed by youtube videos showing how it works. Follow the link in the article.
nilbud
5 / 5 (2) Jan 23, 2008
Must resist urge to submit the last supper.
gopher65
not rated yet Jan 23, 2008
I uploaded a few and tested it out. Pretty cool, but, as they say on their site, it *is* still in development, and they have a long way to go. I think the best one I submitted was a picture of an Egyptian statue. That one worked ok:). This software freaks with photoshopped stuff though hehe.

This is an awesome idea and I'm glad someone is working on this technology:).
Ashibayai
not rated yet Jan 24, 2008
If they take this and add the ability to compile data from multiple photos, we'll have an extremely impressive system for generating 3D models in everyday photos.
