Algorithm combines videos from unstructured camera arrays into panoramas


Even non-professionals may someday be able to create high-quality video panoramas using multiple cameras with the help of an algorithm developed by a team of Disney researchers.

Their method smooths out the blurring, ghosting and other distortions that routinely occur when video feeds from unstructured camera arrays are combined into a single panoramic video. The algorithm corrects for parallax - the apparent shift in an object's position when it is viewed from different camera angles - and for image warping caused by slight timing differences between cameras, both of which lead to visible discontinuities, ghosting and other imperfections in existing approaches.
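To get a feel for why parallax matters, the standard stereo relation says a point at depth Z seen by two cameras a baseline B apart appears shifted by d = f·B/Z pixels, where f is the focal length in pixels. The short Python sketch below, using hypothetical numbers not taken from the paper, shows how strongly that shift depends on depth:

```python
# Illustrative only: magnitude of parallax between two cameras.
# The values below are hypothetical, not from the Disney paper.

def parallax_px(focal_px: float, baseline_m: float, depth_m: float) -> float:
    """Pixel disparity of a point at depth_m seen by two cameras baseline_m apart."""
    return focal_px * baseline_m / depth_m

# A nearby subject shifts far more than a distant background, which is
# why a single global alignment cannot remove ghosting everywhere.
print(parallax_px(1500, 0.10, 2.0))    # ~75 px for a subject 2 m away
print(parallax_px(1500, 0.10, 50.0))   # ~3 px for background 50 m away
```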

The researchers have demonstrated their technique using as many as 14 cameras of various types, generating panoramic video on the order of tens to more than 100 megapixels.

"We can foresee a day when just about anyone could create a high-quality video panorama by setting up a few or even linking several smartphones, just as many people today can easily create a still photo panorama with their smartphones," said Alexander Sorkine-Hornung, a senior research scientist at Disney Research Zürich, who collaborated with colleagues at ETH Zürich and Disney Imagineering on the study.

Their findings will be presented at EUROGRAPHICS 2015, the Annual Conference of the European Association for Computer Graphics, May 4-8, in Zürich, Switzerland.

Combining video feeds enables the creation of video panoramas beyond the resolution of any single camera. Though combining, or stitching, separate still images into one is a technique as old as photography, stitching together video feeds remains a difficult challenge because parallax changes over time as objects move through the scene.
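For still images, off-the-shelf tools already handle stitching well. A minimal sketch using OpenCV's built-in Stitcher (a generic stitching pipeline, not the Disney method, with hypothetical input files) illustrates the baseline that breaks down for video, where seams and parallax shift from frame to frame:

```python
# Minimal still-image panorama with OpenCV's generic Stitcher.
# This per-image approach is exactly what fails on video streams.
import cv2

images = [cv2.imread(p) for p in ["cam0.jpg", "cam1.jpg", "cam2.jpg"]]  # hypothetical files
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", pano)
else:
    print(f"stitching failed with status {status}")
```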

Though some professional methods using pre-calibrated camera arrays exist for creating panoramas, the Disney team focused on combining videos from multiple cameras that have overlapping visual fields, but are not precisely positioned and are not perfectly synchronized.

Their technique automatically analyzes the images from the cameras to estimate the position and alignment of each one, which eliminates the need for special hardware or manual calibration and allows flexible positioning of the cameras.
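A rough sketch of what such automatic alignment can look like, using standard feature matching and RANSAC homography estimation in OpenCV, is shown below. This is a simplified stand-in for the paper's calibration step, not its actual algorithm:

```python
# Sketch: align two overlapping views by matching local features,
# then robustly fitting a homography with RANSAC.
import cv2
import numpy as np

def estimate_alignment(img_a, img_b):
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects mismatches; H maps img_a's pixels into img_b's frame.
    H, inliers = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 5.0)
    return H
```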

The algorithm corrects for differences in parallax that create ghosting and other disturbing effects in the areas of the panorama where images from separate cameras are stitched together. It also detects and corrects for image warping - wavy lane markings on roads, or buildings that appear to bend over - that occurs when images are stitched together. Finally, the technique also compensates for slight differences in the timing of frames between cameras, which otherwise causes jitter and other artifacts in the image.
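As a toy illustration of the timing problem alone, one could estimate a whole-frame offset between two unsynchronized streams by correlating their global brightness changes. The paper handles sub-frame offsets and local warps; this assumed sketch (frames are grayscale NumPy arrays loaded elsewhere) only recovers an integer frame shift:

```python
# Toy synchronization: find the frame offset that best aligns two
# streams' global brightness-change signals. Not the paper's method.
import numpy as np

def brightness_signal(frames):
    """Per-frame change in mean intensity: a crude motion fingerprint."""
    means = np.array([f.mean() for f in frames])
    return np.diff(means)

def estimate_frame_offset(frames_a, frames_b, max_shift=30):
    sig_a = brightness_signal(frames_a)
    sig_b = brightness_signal(frames_b)
    n = min(len(sig_a), len(sig_b)) - 2 * max_shift  # overlap tested at every shift
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        score = np.dot(sig_a[max_shift:max_shift + n],
                       sig_b[max_shift + s:max_shift + s + n])
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift
```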

More information: "Panoramic Video from Unstructured Camera Arrays" [PDF, 54.33 MB]

Provided by Disney Research

