Researchers develop optimal algorithm for determining focus error in eyes and cameras

September 26, 2011

University of Texas at Austin researchers have discovered how to extract and use information in an individual image to determine how far objects are from the focus distance, a feat accomplished only by human and animal visual systems until now.

Like a camera, the human eye has an auto-focusing system, but human auto-focusing rarely makes mistakes. And unlike a camera, humans do not need trial and error to bring an object into focus.
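
For comparison, a typical camera's contrast-detection autofocus is exactly such a trial-and-error loop: it captures frames at a series of lens positions and keeps whichever looks sharpest. The Python sketch below illustrates the idea; the `capture_at` callback and the choice of focus measure are illustrative assumptions, not details from the study.

```python
import numpy as np

def sharpness(frame):
    """Focus measure: variance of a discrete Laplacian (a common contrast metric)."""
    lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0) +
           np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4.0 * frame)
    return lap.var()

def contrast_detect_autofocus(capture_at, lens_min, lens_max, steps=20):
    """Trial-and-error autofocus: sweep lens positions, keep the sharpest.

    `capture_at(pos)` is a hypothetical camera callback returning a 2-D
    grayscale frame taken with the lens at position `pos`.
    """
    positions = np.linspace(lens_min, lens_max, steps)
    scores = [sharpness(capture_at(pos).astype(float)) for pos in positions]
    return positions[int(np.argmax(scores))]
```

Each candidate position costs a capture, which is why this kind of autofocus hunts back and forth before locking on; the researchers' estimator needs only the one image already in hand.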

Johannes Burge, a postdoctoral fellow in the College of Liberal Arts' Center for Perceptual Systems and co-author of the study, says it is significant that a statistical algorithm can now determine focus error, the amount by which a lens must be refocused to make the image sharp, from a single image and without trial and error.

"Our research on defocus estimation could deepen our understanding of human ," Burge says. "Our results could also improve auto-focusing in digital cameras. We used basic optical modeling and well-understood statistics to show that there is information lurking in images that cameras have yet to tap."

The researchers' algorithm can be applied to any blurry image to determine focus error. An estimate of focus error also makes it possible to determine how far objects are from the focus distance.
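
The connection between focus error and distance follows from standard thin-lens geometry rather than anything specific to the study: focus error is naturally expressed in diopters (the difference between the reciprocal focus distance and the reciprocal object distance), and the angular diameter of the resulting blur circle is approximately the aperture diameter times that dioptric error. A minimal sketch under that textbook approximation:

```python
def defocus_diopters(focus_dist_m, object_dist_m):
    """Dioptric focus error: difference of the inverse distances (in meters)."""
    return 1.0 / focus_dist_m - 1.0 / object_dist_m

def blur_circle_rad(aperture_m, focus_dist_m, object_dist_m):
    """Approximate angular blur-circle diameter (radians):
    aperture diameter times the magnitude of the dioptric error."""
    return abs(aperture_m * defocus_diopters(focus_dist_m, object_dist_m))

# Example: a 4 mm pupil focused at 0.5 m viewing an object at 1.0 m
# gives 1 diopter of defocus and about 0.004 rad (~14 arcmin) of blur.
print(blur_circle_rad(0.004, 0.5, 1.0))
```

Running the relationship in reverse is what turns a defocus estimate into a depth estimate: given the blur and the aperture, the dioptric error, and hence the object's distance relative to the focus distance, can be recovered (up to the ambiguity of whether the object is nearer or farther than the focus point).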

In the eye, inevitable defects in the lens, such as chromatic aberration, can help the visual system (via the retina and brain) compute focus error; the defects enrich the pattern of "defocus blur," the blur that is caused when a lens is focused at the wrong distance. Humans use defocus blur both to estimate depth and to refocus their eyes. Many small animals use defocus as their primary depth cue.
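
To build intuition for defocus blur, one can blur a sharp image with a defocus point-spread function; giving each color channel a slightly different blur radius crudely mimics how chromatic aberration makes blur wavelength-dependent. This is a toy forward model, not the optics used in the study; the disk PSF and the per-channel radii below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def disk_psf(radius_px):
    """Pillbox (disk) point-spread function, the classic model of pure defocus."""
    r = int(np.ceil(radius_px))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    psf = (x**2 + y**2 <= radius_px**2).astype(float)
    return psf / psf.sum()

def defocus_rgb(image, radii=(3.0, 4.0, 5.0)):
    """Blur each color channel with a different radius: a crude stand-in for
    chromatic aberration, where the amount of blur varies with wavelength."""
    out = np.empty(image.shape, dtype=float)
    for c, radius in enumerate(radii):
        out[..., c] = convolve(image[..., c].astype(float), disk_psf(radius))
    return out
```

Because the per-channel blur differences flip with the sign of the focus error, such a defect carries information a perfect lens would not: not just how large the error is, but in which direction to refocus.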

"We are now one step closer to understanding how these feats are accomplished," says Wilson Geisler, director of the Center for Perceptual Systems and coauthor of the study. "The pattern of blur introduced by focus errors, along with the statistical regularities of natural images, makes this possible."

Burge and Geisler considered what happens to images as focus error increases: more and more detail is lost as the error grows. They then noted that even though the content of images varies considerably (e.g., faces, mountains, flowers), the pattern and amount of detail in natural images is remarkably constant. This constancy makes it possible to determine the amount of defocus and, in turn, to refocus appropriately.
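
One way to make this concrete: natural images have a famously regular amplitude spectrum (falling roughly as 1/f with spatial frequency), while defocus attenuates high frequencies according to the optics' transfer function. Comparing an observed spectrum against defocus-dependent predictions therefore yields an estimate from a single image. The sketch below does this for a disk-blur model; it captures the spirit of the approach but is a much-simplified stand-in for the authors' optimal estimator.

```python
import numpy as np
from scipy.special import j1

def radial_amplitude(patch, n_bins=32):
    """Radially averaged amplitude spectrum of a 2-D grayscale patch, with
    bin-center frequencies in cycles per pixel. Empty bins come out NaN."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    fy = np.fft.fftshift(np.fft.fftfreq(patch.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(patch.shape[1]))
    r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    edges = np.linspace(0.0, 0.5, n_bins + 1)
    idx = np.digitize(r.ravel(), edges) - 1
    amp = np.array([f.ravel()[idx == i].mean() if np.any(idx == i) else np.nan
                    for i in range(n_bins)])
    return 0.5 * (edges[:-1] + edges[1:]), amp

def disk_mtf(freqs, radius_px):
    """Modulation transfer function of a disk (pillbox) defocus PSF."""
    x = np.maximum(2.0 * np.pi * radius_px * freqs, 1e-9)
    return np.abs(2.0 * j1(x) / x)

def estimate_defocus(patch, candidate_radii):
    """Pick the blur radius whose predicted spectrum (1/f prior times the
    defocus MTF) best matches the observed spectrum, after fitting an
    overall gain in the log domain."""
    freqs, amp = radial_amplitude(patch)
    keep = np.isfinite(amp) & (amp > 0) & (freqs > 0)
    obs = np.log(amp[keep])
    best_radius, best_err = None, np.inf
    for radius in candidate_radii:
        pred = -np.log(freqs[keep]) + np.log(disk_mtf(freqs[keep], radius) + 1e-12)
        pred += (obs - pred).mean()  # unknown image contrast: fit the gain
        err = float(np.sum((obs - pred) ** 2))
        if err < best_err:
            best_radius, best_err = radius, err
    return best_radius
```

Hypothetical usage: `estimate_defocus(patch, np.linspace(0.5, 8.0, 16))` on a 64x64 grayscale patch returns the candidate blur radius that best explains the patch's spectral falloff.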

More information: The article, titled "Optimal defocus estimation in individual natural images," will be published in the Proceedings of the National Academy of Sciences.

1 comment

that_guy
Sep 26, 2011
I have to disagree with part of the premise of the article. Humans don't seem prone to focus error not because we don't make errors, but because of practice. We also react more quickly, and if we do misfocus, we can correct almost immediately, unless our eyes are really tired.

People who spend all their time in front of a computer do tend to have a harder time focusing on things in the distance.

I believe we may use something like this algorithm as part of our focusing routine, but by itself, I bet the new algorithm is a better single-purpose tool than any one piece of our biological focusing routine (although, taken as a whole system, everything we do to focus is probably better overall).
