Extreme makeover: computer science edition

Nov 12, 2008
Siddharth Batra, graduate student in computer science, places the Stanford logo onto a wall of the Hewlett Teaching Center using software that can embed graphics into a video—images, photos or even another video.

(PhysOrg.com) -- Suppose you have a cherished home video, taken at your birthday party. You're fond of the video, but your viewing experience is marred by one small, troubling detail. There in the video, framed and hanging on the living room wall amidst the celebration, is a color photograph of your former significant other.

Bummer.

But what if you could somehow reach inside the video and swap the offending photo for a snapshot of your current love? How perfect would that be?

A group of Stanford University researchers specializing in artificial intelligence has developed software that makes such a switch relatively simple. The researchers, computer science graduate students Ashutosh Saxena and Siddharth Batra, and Assistant Professor Andrew Ng, see interesting potential for the technology they call ZunaVision.

They say a user of the software can easily plunk an image on almost any planar surface in a video, whether wall, floor or ceiling. And the embedded images don't have to be still photos—you can insert a video inside a video.

Here's the opportunity to sing karaoke side-by-side with your favorite American Idol celebrity and post the video to YouTube. Or preview a virtual copy of a painting on your wall before you buy. Or liven up those dull vacation videos.

There is also a potential financial aspect to the technology. The researchers suggest that anyone with a video camera might earn some spending money by agreeing to have unobtrusive corporate logos placed inside their videos before they are posted online. The person who shot the video, and the company handling the business arrangements, would be paid per view, in a fashion analogous to Google AdSense, which pays websites to run small ads.

The embedding technology is driven by an algorithm that first analyzes the video, with special attention paid to the section of the scene where the new image will be placed. The color, texture and lighting of the new image are subtly altered to blend in with the surroundings. Shadows seen in the original video will be seen in the added image as well. The result is a photo or video that appears to be an integral part of the original scene, rather than a sticker pasted artificially on the video.
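The article doesn't disclose ZunaVision's exact blending model, but the general idea of nudging an inserted image toward the color and lighting statistics of its surroundings can be sketched with simple per-channel mean-and-contrast matching. Everything below (function names, the `strength` parameter) is illustrative, not the researchers' actual code:

```python
import numpy as np

def blend_to_region(patch, surround, strength=0.7):
    """Shift the inserted patch's per-channel brightness and contrast
    toward the statistics of the surrounding wall region, so the embed
    picks up the scene's color cast and lighting.
    Illustrative sketch only -- not ZunaVision's actual algorithm."""
    patch = patch.astype(np.float64)
    surround = surround.astype(np.float64)
    out = patch.copy()
    for c in range(patch.shape[2]):  # process each color channel
        p_mean, p_std = patch[..., c].mean(), patch[..., c].std() + 1e-8
        s_mean, s_std = surround[..., c].mean(), surround[..., c].std() + 1e-8
        # re-center and re-scale the patch to match the surround's statistics
        matched = (patch[..., c] - p_mean) / p_std * s_std + s_mean
        # blend between the original patch and the fully matched version
        out[..., c] = (1 - strength) * patch[..., c] + strength * matched
    return np.clip(out, 0, 255).astype(np.uint8)
```

A real system would also transfer local shadows and texture, which this global statistic match does not attempt.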

For the algorithm ("3D Surface Tracker Technology") to produce these realistic results, it also must deal with what researchers call "occluding objects" in the video. In our birthday video, an "occluding object" might be a partygoer walking in front of the newly hung photo. The algorithm can handle most such objects by keeping track of which pixels belong to the photo and which belong to the person walking in the foreground; the photo disappears behind the person walking by and then reappears, just as in the original video.
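That per-pixel bookkeeping can be caricatured in a few lines: wherever the current frame disagrees with the model's prediction of the un-occluded background, treat the pixel as foreground and leave it alone. The threshold test below is a deliberately simplified stand-in for the tracker's occlusion reasoning, and the names are assumptions:

```python
import numpy as np

def composite_with_occlusion(frame_region, warped_embed, predicted_bg, thresh=30.0):
    """Paste the embedded image, but keep original frame pixels wherever
    they deviate from the predicted background -- those are assumed to
    belong to an occluding object, like a partygoer walking past.
    Simplified illustration, not the researchers' actual method."""
    # per-pixel deviation from the predicted (un-occluded) background
    diff = np.abs(frame_region.astype(np.float64) -
                  predicted_bg.astype(np.float64)).max(axis=2)
    occluded = diff > thresh              # True -> foreground occluder
    out = warped_embed.copy()
    out[occluded] = frame_region[occluded]  # occluder stays in front
    return out, occluded
```

When the occluder moves on, `diff` drops below the threshold and the embedded photo "reappears" behind it, as the article describes.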

Camera motion gives the algorithm another item to digest. As the camera pans and zooms, the portion of the wall containing the embedded object moves and changes shape. The embedded image must keep up with this shape-shifting geometry, or the video may go one direction while the embedded image goes another.

To prevent such mishaps, the algorithm builds a model, pixel by pixel, of the area of interest in the video. "If the lighting begins to change with the motion of the video or the sun or the shadows, we keep a belief of what it will look like in the next frame. This is how we track with very high sub-pixel accuracy," Batra said. It's as if the embedded image makes an educated guess of where the wall is going next, and hurries to keep up.
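Keeping an embed glued to a wall region that moves and changes shape under camera motion is classically handled with a per-frame planar homography: re-estimate the 3x3 warp that maps the embed's corners to their tracked positions in each new frame. Here is a minimal corner-based estimator using the standard Direct Linear Transform; this is textbook computer vision, not ZunaVision's actual code:

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 homography H mapping src corners to their
    tracked dst positions (Direct Linear Transform). Re-estimating H
    every frame keeps a planar embed locked to the wall as the camera
    pans and zooms."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # two linear constraints per point correspondence
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the null vector of A: last row of V^T from the SVD
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize scale and sign

def project(H, pt):
    """Apply homography H to a 2D point (homogeneous divide)."""
    w = H @ np.array([pt[0], pt[1], 1.0])
    return w[0] / w[2], w[1] / w[2]
```

The "belief" Batra describes would sit on top of this: a per-pixel appearance model predicting what the tracked region should look like next frame, which is what lets the tracker refine the corner positions to sub-pixel accuracy.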

Other technologies can perform these tricks—witness the spectacular special effects in movies and the virtual first-down lines on televised football games—but the Stanford researchers say the existing systems are expensive and time-consuming, and require considerable expertise.

Some of the recent Stanford work grew out of an earlier project, Make3D, a website that converts a single still photograph into a brief 3D video. It works by finding planes in the photo and computing their distance from the camera, relative to each other.

"That means, given a single image, our algorithm can figure out which parts are in the front and which parts are in the background," said Saxena. "Now we have extended this technology to videos."

The researchers realize that their technology will be used in unpredictable ways, but they have some guesses. "Suppose you're a student living in a dorm and suppose you want to show it to your parents [in a video]. You can put a nice poster there of Albert Einstein," Batra said. "But if you want to show it to your friends, you can have a Playboy poster there."

A hands-on demonstration of the technology can be seen at zunavision.stanford.edu.

Provided by Stanford University
