Stanford site advances science of turning 2-D images into 3-D models

Jan 23, 2008
A three-dimensional "fly around" image created from a two-dimensional image using an algorithm developed by Stanford computer scientists. Credit: Ashutosh Saxena

An artist might spend weeks fretting over questions of depth, scale and perspective in a landscape painting, but once it is done, what's left is a two-dimensional image with a fixed point of view. But the Make3d algorithm, developed by Stanford computer scientists, can take any two-dimensional image and create a three-dimensional "fly around" model of its content, giving viewers access to the scene's depth and a range of points of view.

"The algorithm uses a variety of visual cues that humans use for estimating the 3-D aspects of a scene," said Ashutosh Saxena, a doctoral student in computer science who developed the Make3d website with Andrew Ng, an assistant professor of computer science. "If we look at a grass field, we can see that the texture changes in a particular way as it becomes more distant."

The algorithm runs at make3d.stanford.edu.

The applications of extracting 3-D models from 2-D images, the researchers say, could range from enhanced pictures for online real estate sites to quickly built environments for video games to improved vision and dexterity for mobile robots navigating the spatial world.

Extracting 3-D information from still images is an emerging class of technology. In the past, some researchers have synthesized 3-D models by analyzing multiple images of a scene. Others, including Ng and Saxena in 2005, have developed algorithms that infer depth from single images by combining assumptions about what must be ground or sky with simple cues such as vertical lines in the image that represent walls or trees. But Make3d creates accurate and smooth models about twice as often as competing approaches, Ng said, by abandoning those limiting assumptions in favor of a deeper analysis of each image and the artificial intelligence technique known as machine learning.

Restoring the third dimension

To "teach" the algorithm about depth, orientation and position in 2-D images, the researchers fed it still images of campus scenes along with 3-D data of the same scenes gathered with laser scanners. The algorithm correlated the two sets together, eventually gaining a good idea of the trends and patterns associated with being near or far. For example, it learned that abrupt changes along edges correlate well with one object occluding another, and it saw that things that are far away can be just a little hazier and more bluish than things that are close.

To make these judgments, the algorithm breaks the image into tiny planes called "superpixels," small regions of the image with very uniform color, brightness and other attributes. By looking at a superpixel in concert with its neighbors, analyzing changes such as gradations of texture, the algorithm judges how far the plane is from the viewer and how it is oriented in space. Unlike some previous algorithms, the Stanford one can account for planes at any angle, not just horizontal or vertical. That allows it to model scenes containing planes at many orientations, such as the curved branches of trees or the slopes of mountains.
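A rough sketch of that first segmentation step, using scikit-image's SLIC as a stand-in (the article does not name the researchers' segmentation method): each superpixel is reduced to a mean color and a centroid, the kind of per-region attributes a model can then compare against its neighbors.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_stats(image):
    """image: (H, W, 3) float array in [0, 1].
    Returns the label map and per-superpixel mean color and centroid.
    n_segments and compactness below are illustrative defaults.
    """
    labels = slic(image, n_segments=400, compactness=10)
    stats = []
    for lab in np.unique(labels):
        mask = labels == lab               # pixels in this superpixel
        rows, cols = np.nonzero(mask)
        stats.append({
            "mean_color": image[mask].mean(axis=0),
            "centroid": (rows.mean(), cols.mean()),
        })
    return labels, stats
```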

A paper on the algorithm by Ng, Saxena and a fellow student, Min Sun, won the best paper award at the 3-D recognition and reconstruction workshop at the International Conference on Computer Vision in Rio de Janeiro in October 2007.

On the Make3d website, the algorithm places images uploaded by users into a processing queue and sends an e-mail when the model has been rendered. Users can then vote on whether the model looks good, view an alternative rendering, and even tinker with the model to fix anything that was not rendered right the first time.

Photos can be uploaded directly or pulled into the site from the popular photo-sharing site Flickr.

Although the technology works better than any other has so far, Ng said, it is not perfect. The software is at its best with landscapes and scenery rather than close-ups of individual objects. He and Saxena hope to improve it by introducing object recognition: if the software can recognize a human form in a photo, it can make more accurate distance judgments based on the person's apparent size.
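That size-based idea reduces, under a pinhole camera model, to similar triangles: distance = focal length × true height / apparent pixel height. The numbers in the toy example below are made up for illustration, not taken from the article.

```python
def distance_from_height(true_height_m, pixel_height, focal_length_px):
    """Pinhole-camera estimate: distance = f * H / h, with f in pixels."""
    return focal_length_px * true_height_m / pixel_height

# e.g. a 1.7 m person spanning 85 px in an image with an 800 px focal
# length would be roughly 16 m from the camera.
print(distance_from_height(1.7, 85, 800))  # 16.0
```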

For many panoramic scenes, there is still no substitute for being there. But when flat photos become 3-D, viewers can feel a little closer—or farther.

Source: David Orenstein, Stanford University



User comments


freemind
5 / 5 (1) Jan 23, 2008
I've been amazed by the YouTube videos showing how it works. Follow the link in the article.
nilbud
5 / 5 (2) Jan 23, 2008
Must resist urge to submit the Last Supper.
gopher65
not rated yet Jan 23, 2008
I uploaded a few and tested it out. Pretty cool, but, as they say on their site, it *is* still in development, and they have a long way to go. I think the best one I submitted was a picture of an Egyptian statue. That one worked ok :). This software freaks out with photoshopped stuff though, hehe.

This is an awesome idea and I'm glad someone is working on this technology:).
Ashibayai
not rated yet Jan 24, 2008
If they take this and add the ability to compile data from multiple photos, we'll have an extremely impressive system for generating 3D models from everyday photos.