What the doctor prescribes: Customized medical-image databases

Aug 02, 2010

Digital archives of biomedical images could someday put critical information at doctors' fingertips within seconds, illustrating how computers can improve the way medicine is practiced. The current reality, however, isn't quite up to speed, with databases virtually overwhelmed by the explosion of medical imaging.

Rochester Institute of Technology professor Anne Haake recently won grants from the National Science Foundation and the National Institutes of Health to address this problem. Haake envisions an image database built on input from the intended end-users and designed from the beginning with flexible user interfaces. Haake and her interdisciplinary team will develop a prototype using input from dermatologists to refine the search mechanism for images of various skin conditions.

"We need to involve users from the very beginning," says Haake, professor of information sciences and technologies at the B. Thomas Golisano College of Computing and Information Sciences. "This is especially true in the biomedical area where there is so much domain knowledge that it will be specific to each particular specialty."

Haake understands the real need to make biomedical images useful. She began her career as a developmental biologist before pursuing computing and biomedical informatics. This project combines her two strengths and was inspired by research she conducted while on sabbatical at the NIH National Library of Medicine.

Dr. Cara Calvelli, a dermatologist and a professor in the Physician Assistant program in RIT's College of Science, has recruited dermatologists, residents and PA students for the project. She is also helping to properly describe the sample images, some of which come from her own collection. "The best way to learn is to see patients again and again with various disorders," Calvelli says. "When you can't get the patients themselves, getting good pictures and learning how to describe them is second best."

Funding Haake won from the NSF will support research on, and the design of, a content-based image retrieval system accessible through touch, gaze, voice and gesture; the NIH portion of the project will be used to fuse image understanding and medical knowledge.

Bridging the "semantic gap" is the challenge facing researchers working in content-based image retrieval, Haake says. Search functions can go awry when computer-engineered algorithms trip on nuances and fail to distinguish between disparate objects, such as a whale and a ship. Building a system on the end-user's knowledge can prevent such semantic hiccups.

Pengcheng Shi, director for Graduate Studies and Research in the Golisano College, is providing his expertise in image understanding. "For many years computing/technical people have said we can write algorithms such that it will work," he says. "But people start to realize that machines are not all that powerful. At the end of the day we need to put the human back into it. What are the physicians looking at and how are they looking at it in order to make their decisions?"

A novel aspect of the project explores the use of eye tracking to find out what an expert thinks is important. Watching where physicians look when making a diagnosis from a picture reveals the key regions in an image more reliably than asking the same physicians to recall where they concentrated to reach their conclusions.

"Where people look is not really where people say they look because we're just not aware of our visual strategies," Haake says. "Eye tracking is a way to identify the perceptually important areas, what people pay attention to and where they are looking."

The eye tracking effort is taking place in RIT's Multidisciplinary Vision Research Laboratory in the Chester F. Carlson Center for Imaging Science under the supervision of co-director Jeff Pelz. "People tend not to pay attention to where they look. People move their eyes 150,000 times a day, but you don't spend time thinking about where you will move your eyes next and you don't waste any memory remembering where your eyes have been," says Pelz, whose lab is part of the College of Science. "You just move your eyes to the next place you need information and a fraction of a second later you move them again."

The study asks 16 pairs of dermatologists and PA students to view skin conditions in 50 different images displayed on a monitor. The pairing creates a master-apprentice dynamic.

"If you record the interaction between the master and apprentice while the master is explaining to the apprentice how to do something, it is an excellent way to learn domain knowledge from an expert," Pelz says. "You get something different and better than if you just listen to two doctors talking to each other or a doctor talking to a layperson."

A tracking device attached to the monitor records the physicians' eye movements as they linger on the critical regions in each image. Meanwhile, vocabulary mined from audio recordings of the physicians' explanations will form the common search words in the database.

Identifying the relevant features in the images provided by Calvelli and Logical Images Inc., a Rochester, N.Y.-based company, will help Haake's team improve the accuracy and efficiency of retrieving images from the database. Based on the eye-tracking data, the algorithms will compare similarities and differences in subject matter, color, contrast, size and shape—what the dermatologists focused on during the eye-tracking observations.
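The kind of feature comparison described above can be sketched in a few lines, with simple color histograms and Euclidean distance standing in for whatever features and metrics the team actually uses. Everything here (function names, bin counts) is illustrative only.

```python
import numpy as np

def color_histogram(image, region=None, bins=8):
    """Coarse per-channel color histogram, optionally restricted to a region.

    image: (H, W, 3) uint8 array.
    region: optional (row0, row1, col0, col1) bounds, e.g. an area the
    eye-tracking data flagged as important.
    """
    if region is not None:
        r0, r1, c0, c1 = region
        image = image[r0:r1, c0:c1]
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    h = np.concatenate(hist).astype(float)
    return h / h.sum()  # normalize so images of different sizes compare fairly

def rank_by_similarity(query, database):
    """Return database indices ordered by feature distance to the query."""
    dists = [np.linalg.norm(query - feat) for feat in database]
    return np.argsort(dists)

# Toy data: the query image is reddish; image 1 is reddish too, image 0 bluish.
red = np.zeros((32, 32, 3), dtype=np.uint8); red[..., 0] = 200
blue = np.zeros((32, 32, 3), dtype=np.uint8); blue[..., 2] = 200
query = color_histogram(red)
db = [color_histogram(blue), color_histogram(red)]
print(rank_by_similarity(query, db))  # nearest image first
```

Restricting the histogram to eye-tracked regions is one plausible way the fixation data could sharpen retrieval: distances are then computed over the areas experts actually examined rather than over the whole image.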

The efforts of three graduate students are instrumental to the project. Rui Li, a doctoral student in computing and information science, writes algorithms to search for the important features identified in the eye-tracking data. Sai Mulpura and Preethi Vaidyanathan, who are pursuing a master's and a doctorate, respectively, in imaging science, work in the Multidisciplinary Vision Research Laboratory meshing the eye-tracking data and mining the audio files.

"We will fuse all these data and find a way with one single image to find a number of images that look alike based on these descriptions," says Vaidyanathan.

Haake envisions the database as a model for similar applications in fields struggling to make use of vast amounts of digital imagery.

"This is very specialized for dermatology but the one thing we want to establish is that this is maybe a better paradigm for developing systems in terms of involving the end-user in the development of these systems and some of the methodologies," Haake says. "Hopefully, some of the approaches where we use the domain expert will lead to more automated systems. When you have tens of thousands of images, you can't sit down and eye track every situation."


