Focus images instantly with Adobe’s computational photography

Oct 09, 2007, by Lisa Zyga (weblog)
Adobe 3D Lens: Dave Story demonstrates the only prototype of Adobe's 3D camera lens, part of the company's newest computational photography technique. (Image credit: Audioblog.fr)

Adobe has recently unveiled some novel photo-editing capabilities with a new technology known as computational photography. With a combination of a special lens and computer software, the technique can divide a camera image into different views and reassemble them on a computer.

The method uses a lens embedded with 19 smaller lenses and prisms, like an insect’s compound eye, to capture a scene from different angles at the same time. As Dave Story, Vice President of Digital Imaging Product Development at Adobe, explained, this lens can determine the depth of every pixel in the scene.
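
As a rough illustration of how per-pixel depth could be recovered from such multi-view data, the sketch below estimates depth from just two of the sub-images by block matching along the horizontal axis. The function name, parameters, and the two-view simplification are assumptions for illustration; the article does not describe Adobe's actual algorithm, which works from all 19 views.

import numpy as np

def depth_from_two_views(left, right, max_disp=32, block=7):
    # Toy per-pixel depth from two grayscale sub-images (H x W float arrays).
    # For each pixel, find the horizontal shift (disparity) that best aligns a
    # small block between the views; depth is inversely proportional to disparity.
    h, w = left.shape
    half = block // 2
    disparity = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disparity[y, x] = np.argmin(costs)
    # Arbitrary scale: real depth would need the lens geometry, which isn't public.
    return 1.0 / np.maximum(disparity, 1.0)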

This means that, after the photo is taken and transferred to a computer, people can edit individual depth layers of the photo within seconds. If a user wants to eliminate the background, the new software can simply erase everything in the image that appears at or beyond a chosen distance.
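
A minimal sketch of that kind of depth-keyed erasure, assuming an RGB image array and a matching per-pixel depth map are already available (the function and its threshold parameter are hypothetical, not part of Adobe's software):

import numpy as np

def remove_background(image, depth_map, max_depth):
    # image: H x W x 3 RGB array; depth_map: H x W estimated distance per pixel.
    # Keep only pixels closer than max_depth; everything at or beyond it is erased.
    keep = depth_map < max_depth
    result = image.copy()
    result[~keep] = 0   # could instead set an alpha channel to make it transparent
    return result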

Further, people can use a 3D focus brush to “reach into the scene and adjust the focus,” Story explained during a news conference, in a video posted by Audioblog.fr. At the conference, he used the focus brush to bring a blurry statue in the foreground of an image into focus simply by dragging the tool over that area of the image. He then switched to a de-focus brush to take a second statue, located further back in the image, out of focus.
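
One plausible way such a focus and de-focus brush could work, assuming a floating-point RGB image, a depth map, and a mask of the brushed area: blend pixels toward a blurred copy in proportion to how far their depth sits from the chosen focal plane. This is a guess at the general idea, not Adobe's implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

def refocus_region(image, depth_map, brush_mask, focus_depth, strength=3.0):
    # image: H x W x 3 float array; depth_map: H x W distances;
    # brush_mask: H x W boolean array marking where the user dragged the brush.
    blurred = gaussian_filter(image, sigma=(strength, strength, 0))
    # Blur weight grows with distance from the focal plane (0 = stays sharp).
    weight = np.clip(np.abs(depth_map - focus_depth) / (depth_map.max() + 1e-6), 0.0, 1.0)
    blended = (1.0 - weight[..., None]) * image + weight[..., None] * blurred
    result = image.copy()
    result[brush_mask] = blended[brush_mask]
    return result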

“This is something you cannot do with a physical camera,” he said. “There’s no way to take a picture with just this section in focus and everything else out of focus. It’s not physically possible to make a camera that does that. But with a combination of that lens and your digital darkroom, you have what we call computational photography. Computational photography is the future of photography.”

Knowing the 3D position of every pixel also lets people view photos from different angles after they are taken, which Story demonstrated. Months after a photo is snapped, people can “move the camera” as if traveling through a scene in Google Earth. Story suggested this would be useful when background objects are accidentally aligned in undesirable positions, such as a lamp post appearing to stick straight out of a person’s head: in that case, the viewpoint could be shifted slightly to one side to see the scene from a different angle.
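
The viewpoint shift Story demonstrated can be approximated with simple depth-image-based rendering: move each pixel sideways by a disparity inversely proportional to its depth, so near objects slide more than distant ones. The sketch below makes that idea concrete under assumed inputs (disoccluded areas are simply left black); the article does not say how Adobe actually re-renders the scene.

import numpy as np

def shift_view(image, depth_map, baseline_px=10.0):
    # image: H x W x 3; depth_map: H x W. Re-render from a viewpoint shifted
    # sideways by splatting pixels with a depth-dependent horizontal offset.
    h, w = depth_map.shape
    out = np.zeros_like(image)
    disparity = (baseline_px / np.maximum(depth_map, 1e-6)).astype(int)
    # Splat far pixels first so nearer pixels overwrite them (crude z-ordering).
    order = np.argsort(depth_map, axis=None)[::-1]
    ys, xs = np.unravel_index(order, depth_map.shape)
    for y, x in zip(ys, xs):
        nx = min(max(x + disparity[y, x], 0), w - 1)
        out[y, nx] = image[y, x]
    return out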

“We can do things that people now have to do manually, much more easily,” Story said. “But we can also use computational photography to allow you to accomplish physically impossible results.”

Audioblog.fr via CNet


User comments: 3

KB6
3.7 / 5 (3) Oct 09, 2007
With all that extra data I'm wondering how much bigger those files would be in your camera.
Would they be 19x bigger (an image for each lens) making your SD card, memory stick, etc. effectively 19x smaller?

SLam_to
4.5 / 5 (2) Oct 09, 2007
The files will probably be the same size as your camera normally produces, but lower resolution.

The concept sounds similar to a plenoptic camera.
http://graphics.s...lfcamera

Ragtime
3 / 5 (2) Oct 09, 2007
The concept sounds similar to a mushfly eye.
