Technology could help detect terrorists before they strike

Oct 05, 2007

Are you a terrorist? Airport screeners, customs agents, police officers and members of the military silently pose that question to people every day, and they may soon have much more than intuition to depend on to determine the answer.

Computer and behavioral scientists at the University at Buffalo are developing automated systems that track faces, voices, bodies and other biometrics against scientifically tested behavioral indicators to provide a numerical score of the likelihood that an individual may be about to commit a terrorist act.

“The goal is to identify the perpetrator in a security setting before he or she has the chance to carry out the attack,” said Venu Govindaraju, Ph.D., professor of computer science and engineering in the UB School of Engineering and Applied Sciences. Govindaraju is co-principal investigator on the project with Mark G. Frank, Ph.D., associate professor of communication in the UB College of Arts and Sciences.

The project, recently awarded an $800,000 grant by the National Science Foundation, will focus on developing, in real time, an accurate baseline of indicators specific to an individual during extensive interrogations, while also providing real-time clues during faster, routine security screenings.

“We are developing a prototype that examines video in a number of different security settings, automatically producing a single, integrated score of malfeasance likelihood,” he said.

A key advantage of the UB system is that it will incorporate machine learning capabilities, which will allow it to “learn” from its subjects during the course of a 20-minute interview.

That’s critical, Govindaraju said, because behavioral science research has repeatedly demonstrated that many behavioral clues to deceit are person-specific.

“As soon as a new person comes in for an interrogation, our program will start tracking his or her behaviors, and start computing a baseline for that individual ‘on the fly’,” he said.
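The article does not say how that baseline is computed, but one standard way to build a per-person baseline "on the fly" is to keep a running mean and variance of each behavioral signal and score later measurements against it. The Python sketch below illustrates that idea using Welford's online algorithm; the class name, the blink-rate example and the sample values are illustrative assumptions, not details of the UB prototype.

```python
import math


class PersonBaseline:
    """Running baseline for one scalar behavioral signal (e.g., blink rate),
    updated incrementally with Welford's online mean/variance algorithm."""

    def __init__(self) -> None:
        self.n = 0          # number of measurements seen so far
        self.mean = 0.0     # running mean
        self.m2 = 0.0       # running sum of squared deviations

    def update(self, x: float) -> None:
        """Fold a new measurement into this person's baseline."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def deviation(self, x: float) -> float:
        """Z-score of a new measurement against the person's own baseline."""
        if self.n < 2:
            return 0.0      # not enough history to judge yet
        std = math.sqrt(self.m2 / (self.n - 1))
        return (x - self.mean) / std if std > 0 else 0.0


# Hypothetical usage: learn from the first minutes of an interview...
baseline = PersonBaseline()
for blink_rate in [14.0, 15.5, 13.8, 14.6, 15.1]:
    baseline.update(blink_rate)

# ...then flag a later observation that is unusual for this person.
print(baseline.deviation(22.0))  # large positive z-score, roughly 10
```

Because the baseline is person-specific, the same absolute blink rate could be unremarkable for one subject and anomalous for another, which is exactly the point Govindaraju raises about person-specific clues to deceit.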

The researchers caution that no technology, no matter how precise, is a substitute for human judgment.

“No behavior always guarantees that someone is lying, but behaviors do predict emotions or thinking and that can help the security officer decide who to watch more carefully,” said Frank.

He noted that individuals often are randomly screened at security checkpoints in airports or at border crossings.

“Random screening is fair, but is it effective?” asked Frank. “The question is, what do you base your decision on -- a random selection, your gut reaction or science? We believe science is a better basis and we hope our system will provide that edge to security personnel.”

Govindaraju added that the UB system also would avoid some of the pitfalls that hamper a human screener’s effectiveness.

“Human screeners have fatigue and bias, but the machine does not blink,” he said.

The UB project is designed to solve one of the more challenging problems in developing accurate security systems -- fusing information from several biometrics, such as faces, voices and bodies.

“No single biometric is suited for all applications,” said Govindaraju, who also is founder and director of UB’s Center for Unified Biometrics and Sensors. “Here at CUBS, we take a unique approach to developing technologies that combine and ‘tune’ different biometrics to fit specific needs. In this project, we are focusing on how to analyze different behaviors and come up with a single malfeasance indicator.”
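The article gives no formula for how the single score is produced. One common score-level fusion scheme, sketched below purely as an illustration, is a weighted average of normalized per-modality scores; the function name, the scores and the weights here are assumptions, not the UB system's actual method.

```python
def fuse_scores(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-modality scores in [0, 1] (e.g., face, voice, body)
    into a single malfeasance-likelihood score via a weighted average.

    The weights let the fusion be "tuned": modalities that are more
    reliable in a given security setting get more influence.
    """
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight


# Illustrative only: these modality scores and weights are made up.
scores = {"face": 0.72, "voice": 0.40, "body": 0.55}
weights = {"face": 0.5, "voice": 0.2, "body": 0.3}
print(f"fused score: {fuse_scores(scores, weights):.2f}")  # approximately 0.60
```

In practice the weights would have to be learned or tuned per setting, which is consistent with the "combine and tune" approach Govindaraju describes.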

The UB project is among the first to involve computer scientists and behavioral scientists working together to develop more accurate detection systems based on research from each field.

Both researchers have spent their careers studying complementary areas. Since completing his doctoral dissertation on using computational tools for facial recognition, Govindaraju has focused on problems in pattern recognition and artificial intelligence. Since founding CUBS in 2003, he has worked on a broad range of biometric technologies and devices.

Frank, a social psychologist, has spent his career conducting research on human nonverbal communication, identifying cues that strongly suggest whether an individual is feeling particular emotions or telling the truth. He founded the Communication Science Center at UB in 2005, and his work, recognized and used by security officials around the world, now provides important information for UB computer scientists.

Frank and Govindaraju began working together partly as a result of UB 2020, the university’s strategic plan, which emphasizes strengthening interdisciplinary research.

“What I like about working with Venu and his team at CUBS is that they are creating new algorithms that hold the exciting possibility of revealing information and patterns that will help us spot potential bad guys,” said Frank. “We expect that there will be an advantage to combining the behavioral understanding of people with algorithm development to make better predictions.”

They expect to have a working prototype of the full system within a few years.

Source: University at Buffalo

User comments (1)

HarryStottle, Oct 15, 2007:
Approaches like this are almost always doomed to fail because, ultimately, they rely on a statistical approach and try to answer the question (in this case): "What is the probability that this subject is concealing malicious intent?"

Let's imagine they can eventually achieve 99% accuracy. This implies a false negative rate of 1%, which we can live with. But a false positive rate, also of 1%, is utterly unacceptable. It implies thousands of passengers a day being pulled out of the queues for hostile interrogation, probably causing them to miss their flights or the flights to be delayed. The level of disruption and hostility makes measures like this untenable.
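A quick back-of-the-envelope calculation makes the scale of the problem concrete (the daily passenger volume and attacker count below are assumptions chosen purely for illustration):

```python
# Base-rate arithmetic; the volume and attacker figures are assumptions.
passengers_per_day = 2_000_000   # hypothetical daily screening volume
true_positive_rate = 0.99        # the assumed 99% accuracy
false_positive_rate = 0.01
attackers_per_day = 1            # generously assume one real attacker a day

false_alarms = (passengers_per_day - attackers_per_day) * false_positive_rate
caught = attackers_per_day * true_positive_rate

# Probability that a flagged person is actually a threat
# (the positive predictive value).
ppv = caught / (caught + false_alarms)

print(f"false alarms per day: {false_alarms:,.0f}")       # about 20,000
print(f"chance a flagged person is a threat: {ppv:.5%}")  # about 0.005%
```

In other words, with so few real attackers in the stream, virtually everyone the system flags would be innocent.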

Only when you have 100% accurate brainscanners capable of 100% accurate lie detection will we have a technological filter for malice. And if we ever get to that stage, the first people we will need to apply it to will be the politicians and police before we let it loose on the people...
