Focus images instantly with Adobe’s computational photography

Oct 09, 2007 by Lisa Zyga
Adobe 3D Lens
Dave Story demonstrates the only prototype of Adobe's 3D camera lens, part of the company's newest computational photography technique. (Image credit: Audioblog.fr)

Adobe has recently unveiled some novel photo editing capabilities built on a new technique it calls computational photography. Using a combination of a special lens and computer software, the approach divides a camera image into multiple views and reassembles them on a computer.

The method uses a lens embedded with 19 smaller lenses and prisms, like an insect’s compound eye, to capture a scene from different angles at the same time. As Dave Story, Vice President of Digital Imaging Product Development at Adobe, explained, this lens can determine the depth of every pixel in the scene.
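
The article does not spell out how depth is recovered from the 19 sub-images, but a standard approach for any multi-view setup like this is stereo triangulation: the same scene point appears shifted between neighbouring sub-lenses, and the shift shrinks with distance. The sketch below shows the textbook relationship; the function name, parameters, and units are illustrative assumptions, not Adobe's implementation.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Textbook stereo triangulation, not Adobe's actual algorithm.

    A scene point seen through two neighbouring sub-lenses appears shifted
    (the disparity); the shift shrinks as the point gets farther away.

    disparity_px:    apparent shift between two sub-images, in pixels
    focal_length_px: focal length expressed in pixels
    baseline_m:      distance between the two sub-lens centres, in metres
    Returns depth in metres.
    """
    return focal_length_px * baseline_m / disparity_px


# Example: a 10-pixel shift with a 1000-pixel focal length and a 2 cm baseline
# corresponds to a point roughly 2 metres away.
print(depth_from_disparity(10, 1000, 0.02))  # -> 2.0
```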

This means that, after the photo is taken and transferred to a computer, people can edit certain layers of the photo within seconds. If a user wants to eliminate the background, the new software can simply erase everything in the image that appears at or beyond a certain distance.
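
As a rough illustration of how such a depth-based cutout could work, the sketch below assumes the camera delivers an ordinary RGB image plus a per-pixel depth map, and simply makes every pixel at or beyond a chosen distance transparent. The array layout and NumPy approach are assumptions for illustration, not Adobe's software.

```python
import numpy as np

def remove_background(image, depth, max_depth):
    """Illustrative depth cutout, not Adobe's software.

    image:     (H, W, 3) uint8 RGB array
    depth:     (H, W) float array of per-pixel distances (same units as max_depth)
    max_depth: everything at or beyond this distance counts as background
    Returns an (H, W, 4) RGBA array with the background made transparent.
    """
    h, w, _ = image.shape
    alpha = np.full((h, w), 255, dtype=np.uint8)
    alpha[depth >= max_depth] = 0          # hide pixels at or beyond the cutoff
    return np.dstack([image, alpha])
```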

Further, people can use a 3D focus brush to “reach into the scene and adjust the focus,” Story explained during a news conference, in a video posted by Audioblog.fr. At the conference, he used the focus brush to bring a blurry statue in the foreground of an image into focus simply by dragging the tool over that area of the image. He then switched to a de-focus brush to throw a second statue, located further back in the image, out of focus.
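
A toy version of such a brush could blend between an all-in-focus reconstruction and a synthetically blurred copy wherever the user paints. The sketch below is a simplification under that assumption; a real light-field refocus would instead shift and sum the sub-lens images, which is not shown here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def apply_focus_brush(sharp, current, brush_mask, defocus=False, blur_sigma=3.0):
    """Toy focus/de-focus brush, purely illustrative.

    sharp:      (H, W, 3) float array, an all-in-focus reconstruction
    current:    (H, W, 3) float array, the image being edited
    brush_mask: (H, W) boolean array marking where the user dragged the brush
    defocus:    if True, paint a blurred copy instead of the sharp one
    """
    if defocus:
        # Blur each colour channel to simulate throwing the region out of focus.
        source = np.stack(
            [gaussian_filter(sharp[..., c], blur_sigma) for c in range(3)],
            axis=-1,
        )
    else:
        source = sharp
    result = current.copy()
    result[brush_mask] = source[brush_mask]   # only the brushed pixels change
    return result
```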

“This is something you cannot do with a physical camera,” he said. “There’s no way to take a picture with just this section in focus and everything else out of focus. It’s not physically possible to make a camera that does that. But with a combination of that lens and your digital dark room, you have what we call computational photography. Computational photography is the future of photography.”

Knowing the 3D nature of every pixel also enables people to view photos from different angles after they are taken, which Story demonstrated. Months after a photo is snapped, people can “move the camera” as if traveling through a scene in Google Earth. Story suggested that this ability would be useful if background objects were accidentally aligned in undesirable positions, such as a lamp post appearing to stick straight out of a person’s head. In that case, you could rotate the image slightly to one side, in order to view the scene from a different angle.
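
One very simplified way to picture the “move the camera” effect: given a per-pixel depth map, each pixel can be shifted sideways by a parallax inversely proportional to its depth, so nearby objects (the lamp post) slide past distant ones (the person's head). The forward-warping sketch below is purely illustrative and ignores the hole-filling a real implementation would need.

```python
import numpy as np

def shift_viewpoint(image, depth, baseline=5.0):
    """Toy virtual camera shift, purely illustrative.

    Each pixel is moved horizontally by a parallax proportional to its inverse
    depth, so nearby objects slide farther than distant ones.

    image:    (H, W, 3) array
    depth:    (H, W) array of positive distances
    baseline: horizontal shift of the virtual camera, in arbitrary units
    """
    h, w, _ = image.shape
    out = np.zeros_like(image)                 # disoccluded gaps stay black here
    disparity = np.round(baseline / depth).astype(int)
    xs = np.arange(w)
    for y in range(h):
        new_x = np.clip(xs + disparity[y], 0, w - 1)
        out[y, new_x] = image[y]               # forward-splat each row
    return out
```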

“We can do things that people now have to do manually, much more easily,” Story said. “But we can also use computational photography to allow you to accomplish physically impossible results.”

Audioblog.fr via CNet

User comments: 3

KB6
3.7 / 5 (3) Oct 09, 2007
With all that extra data I'm wondering how much bigger those files would be in your camera.
Would they be 19x bigger (an image for each lens) making your SD card, memory stick, etc. effectively 19x smaller?
SLam_to
4.5 / 5 (2) Oct 09, 2007
The files will probably be the same size as your camera normally produces, but lower resolution.

The concept sounds similar to a plenoptic camera.
http://graphics.s...lfcamera
Ragtime
3 / 5 (2) Oct 09, 2007
The concept sounds similar to a fly's compound eye.
