From 2-D pictures to 3 dimensions
March 3rd, 2008 in Technology / Computer Sciences
Credit: Manmohan Chandraker / UC San Diego
Your pictures of the Grand Canyon, Times Square or other destinations may be pretty good, but wouldn’t it be nice to show them off in three dimensions?
An award-winning 3D reconstruction algorithm designed by a team of computer science researchers from UC San Diego brings this dream within the grasp of reality.
This research gets at the heart of “autocalibration,” a well-studied, fundamental problem in computer vision. Autocalibration aims to recover the three-dimensional structure of a scene using only its images, acquired from cameras whose internal settings and spatial orientations are unknown.
Autocalibration is part of a larger 3D image reconstruction challenge that has caught the attention of Google, Microsoft and others.
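To make the problem concrete, here is a minimal pinhole-camera sketch (illustrative only, not the UCSD team’s method). The camera values below are hypothetical; the point is that each image measurement depends on the camera’s internal settings (the matrix K) and its orientation and position (R and t), all of which autocalibration must recover from the images alone.

```python
import numpy as np

def project(X, K, R, t):
    """Project a 3-D scene point X into the image: x ~ K (R X + t)."""
    p = K @ (R @ X + t)
    return p[:2] / p[2]  # homogeneous -> pixel coordinates

# Hypothetical internal settings: focal length 1000 px, principal point (320, 240).
K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                   # camera looking down the z-axis
t = np.array([0.0, 0.0, 5.0])   # scene point ends up 5 units in front

x = project(np.array([0.1, -0.2, 0.0]), K, R, t)
# x is the pixel (340.0, 200.0); autocalibration inverts this map:
# given many such x's across images, recover K, the poses and the
# 3-D points, with all of them initially unknown.
```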
Manmohan Chandraker, a fifth-year Ph.D. student in the Department of Computer Science and Engineering at UCSD’s Jacobs School of Engineering, led the work. He presented the research, together with Sameer Agarwal, a UCSD computer science alumnus now at the University of Washington, and their respective Ph.D. advisors, David Kriegman and Serge Belongie, at the International Conference on Computer Vision (ICCV), held in Rio de Janeiro, Brazil, in October 2007. ICCV is the premier conference in the field of computer vision. For this work, Chandraker took home one of three honorable mentions for ICCV’s prestigious David Marr Prize.
This technology could be put to use in a wide variety of applications. For example, someone selling shoes online could take pictures of their shoes and create 3D reconstructions of their inventory. Such reconstructions would provide more information about what the shoes actually look like than images or video footage can.
The algorithm could also be used to automatically align security camera networks used in casinos and airports. Coupled with existing technology for immersive media, the algorithm could be used to create augmented-reality walkthroughs of cities, supermarkets or any other places of interest.
In the ICCV paper, the UCSD computer scientists propose the first practically scalable algorithm for 3D reconstruction that provides “a theoretical certificate of optimality.” In other words, the technique computes the best possible 3D reconstruction obtainable from the input data and does not slow down drastically for a large number of photographs.
“Our algorithm is guaranteed to provide the best 3D reconstruction,” said Chandraker. “It is very much a practical algorithm. In fact, the significance of the paper lies in our approaches for designing a theoretically correct algorithm that also works well in practice. Our approach utilizes modern convex optimization techniques to globally minimize the involved cost functions in a branch and bound framework,” explained Chandraker.
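The branch-and-bound idea Chandraker describes can be sketched in miniature. The toy below is not the paper’s algorithm; it globally minimizes a simple nonconvex 1-D cost (a made-up example) by recursively splitting the search interval, bounding the cost from below on each piece, and pruning pieces that provably cannot beat the best value found, which is what yields the certificate of optimality.

```python
import heapq

def cost(x):
    """A nonconvex toy cost with two local minima (illustrative only)."""
    return x**4 - x**2 + 0.1 * x

L = 12.0  # a valid bound on |cost'(x)| over [-1.5, 1.5]

def lower_bound(a, b):
    # Lipschitz lower bound on cost over [a, b]: the true minimum on the
    # interval cannot lie more than L*(b-a)/2 below either endpoint value.
    return min(cost(a), cost(b)) - L * (b - a) / 2

def branch_and_bound(a, b, tol=1e-4):
    best_x, best_f = a, cost(a)
    heap = [(lower_bound(a, b), a, b)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best_f - tol:
            continue  # prune: this interval provably cannot beat the incumbent
        m = 0.5 * (a + b)
        if cost(m) < best_f:
            best_x, best_f = m, cost(m)
        for lo, hi in ((a, m), (m, b)):
            l = lower_bound(lo, hi)
            if l < best_f - tol:
                heapq.heappush(heap, (l, lo, hi))
    # Certificate: every discarded interval had a lower bound above
    # best_f - tol, so best_f is within tol of the global minimum.
    return best_x, best_f
```

Running `branch_and_bound(-1.5, 1.5)` returns the deeper of the two minima, near x ≈ -0.73, and ignores the shallower local minimum that a purely local method could get stuck in. The paper’s contribution replaces these toy Lipschitz bounds with convex relaxations of the autocalibration cost functions.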
The paper, titled “Globally Optimal Affine and Metric Upgrades in Stratified Autocalibration,” is available at http://vision.ucsd.edu/kriegman-grp/papers/iccv07a.pdf. MATLAB prototype code for the implementation will be available online when it is ready.
Source: University of California - San Diego
"From 2-D pictures to 3 dimensions." March 3rd, 2008. http://phys.org/news123763523.html