Technology would help detect terrorists before they strike

Oct 05, 2007

Are you a terrorist? Airport screeners, customs agents, police officers and members of the military silently pose that question to people every day, and they may soon have much more than intuition to rely on in determining the answer.

Computer and behavioral scientists at the University at Buffalo are developing automated systems that track faces, voices, bodies and other biometrics against scientifically tested behavioral indicators to provide a numerical score of the likelihood that an individual may be about to commit a terrorist act.

“The goal is to identify the perpetrator in a security setting before he or she has the chance to carry out the attack,” said Venu Govindaraju, Ph.D., professor of computer science and engineering in the UB School of Engineering and Applied Sciences. Govindaraju is co-principal investigator on the project with Mark G. Frank, Ph.D., associate professor of communication in the UB College of Arts and Sciences.

The project, recently awarded an $800,000 grant by the National Science Foundation, will focus on developing, in real time, an accurate baseline of indicators specific to an individual during extensive interrogations, while also providing real-time clues during faster, routine security screenings.

“We are developing a prototype that examines a video in a number of different security settings, automatically producing a single, integrated score of malfeasance likelihood,” he said.

A key advantage of the UB system is that it will incorporate machine learning capabilities, which will allow it to “learn” from its subjects during the course of a 20-minute interview.

That’s critical, Govindaraju said, because behavioral science research has repeatedly demonstrated that many behavioral clues to deceit are person-specific.

“As soon as a new person comes in for an interrogation, our program will start tracking his or her behaviors, and start computing a baseline for that individual ‘on the fly’,” he said.
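The article does not describe the algorithm behind this "on the fly" baseline, but one standard way to build a running, per-person norm from streaming measurements is Welford's online mean-and-variance update. The sketch below is purely illustrative: the class name, the idea of scoring later readings as z-scores against the accumulated baseline, and the sample numbers are all assumptions, not details from the UB system.

```python
import math

class OnlineBaseline:
    """Running per-person baseline via Welford's online mean/variance.

    Hypothetical sketch: numeric behavioral measurements (e.g. blink
    rate, voice pitch) are fed in as they arrive, and later readings
    are scored as z-scores against the baseline built so far.
    """
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def zscore(self, x):
        if self.n < 2:
            return 0.0  # not enough data for a baseline yet
        std = math.sqrt(self.m2 / (self.n - 1))
        return (x - self.mean) / std if std > 0 else 0.0

baseline = OnlineBaseline()
for reading in [12.0, 11.5, 12.3, 11.8, 12.1]:  # calm-phase readings
    baseline.update(reading)
print(baseline.zscore(18.0))  # a reading far above this person's own norm
```

Because the baseline is rebuilt from scratch for each subject, a reading that is unremarkable for one person can register as a large deviation for another, which matches the person-specific behavioral clues the researchers describe.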

The researchers caution that no technology, no matter how precise, is a substitute for human judgment.

“No behavior always guarantees that someone is lying, but behaviors do predict emotions or thinking and that can help the security officer decide who to watch more carefully,” said Frank.

He noted that individuals often are randomly screened at security checkpoints in airports or at border crossings.

“Random screening is fair, but is it effective?” asked Frank. “The question is, what do you base your decision on -- a random selection, your gut reaction or science? We believe science is a better basis and we hope our system will provide that edge to security personnel.”

Govindaraju added that the UB system also would avoid some of the pitfalls that hamper a human screener’s effectiveness.

“Human screeners have fatigue and bias, but the machine does not blink,” he said.

The UB project is designed to solve one of the more challenging problems in developing accurate security systems -- fusing information from several biometrics, such as faces, voices and bodies.

“No single biometric is suited for all applications,” said Govindaraju, who also is founder and director of UB’s Center for Unified Biometrics and Sensors. “Here at CUBS, we take a unique approach to developing technologies that combine and ‘tune’ different biometrics to fit specific needs. In this project, we are focusing on how to analyze different behaviors and come up with a single malfeasance indicator.”
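The article does not specify how the channels are combined, but the simplest form of score-level fusion is a weighted average of per-channel scores, each normalized to [0, 1]. The channel names and weights below are invented for illustration; a real system would learn or tune them.

```python
def fuse_scores(channel_scores, weights):
    """Weighted average of per-channel scores.

    Both arguments are dicts keyed by channel name; only channels
    present in channel_scores contribute, so a missing sensor
    (e.g. no audio) degrades gracefully.
    """
    total_w = sum(weights[c] for c in channel_scores)
    return sum(channel_scores[c] * weights[c] for c in channel_scores) / total_w

# Hypothetical weights and readings for one screening.
weights = {"face": 0.4, "voice": 0.35, "body": 0.25}
scores = {"face": 0.2, "voice": 0.7, "body": 0.5}
print(fuse_scores(scores, weights))  # single malfeasance-likelihood number
```

Normalizing by the sum of the weights actually used means the fused score stays in [0, 1] even when a channel is unavailable, one plausible way to "tune" different biometrics to fit a specific setting.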

The UB project is among the first to involve computer scientists and behavioral scientists working together to develop more accurate detection systems based on research from each field.

Both researchers have spent their careers studying complementary areas. Since completing his doctoral dissertation on computational tools for facial recognition, Govindaraju has focused on problems in pattern recognition and artificial intelligence. Since founding CUBS in 2003, he has worked on a broad range of biometric technologies and devices.

Frank, a social psychologist, has spent his career studying the nonverbal cues that indicate whether an individual is feeling an emotion or telling the truth. He founded the Communication Science Center at UB in 2005, and his work, recognized and used by security officials around the world, now provides important input for the UB computer scientists.

Frank and Govindaraju began working together partly as a result of UB 2020, the university’s strategic plan, which emphasizes strengthening interdisciplinary research.

“What I like about working with Venu and his team at CUBS is that they are creating new algorithms that hold the exciting possibility of revealing information and patterns that will help us spot potential bad guys,” said Frank. “We expect that there will be an advantage to combining the behavioral understanding of people with algorithm development to make better predictions.”

They expect to have a working prototype of the full system within a few years.

Source: University at Buffalo


User comments

Oct 15, 2007
Approaches like this are almost always doomed to fail because, ultimately, they rely on a statistical approach and try to answer the question (in this case): "What is the probability that this subject is concealing malicious intent?"

Let's imagine they can eventually achieve 99% accuracy. This implies a false negative rate of 1%, which we can live with. But a false positive rate, also of 1%, is utterly unacceptable: it implies thousands of passengers a day being pulled out of queues for hostile interrogation, probably causing them to miss their flights or the flights to be delayed. That level of disruption and hostility makes measures like this untenable.

Only when you have 100% accurate brainscanners capable of 100% accurate lie detection will we have a technological filter for malice. And if we ever get to that stage, the first people we will need to apply it to will be the politicians and police before we let it loose on the people...
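The commenter's objection is the classic base-rate problem, and it can be made concrete with a short calculation. The passenger volume and number of actual threats below are invented assumptions chosen only to illustrate the arithmetic; they are not figures from the article.

```python
# Illustrative base-rate arithmetic for the 99%-accuracy scenario above.
# Volume and prevalence figures are assumptions, not data from the article.

passengers_per_day = 2_000_000   # assumed daily screenings nationwide
sensitivity = 0.99               # true positive rate (99% "accuracy")
false_positive_rate = 0.01       # 1% of innocent travelers flagged
actual_threats = 10              # assumed truly malicious travelers per day

false_alarms = (passengers_per_day - actual_threats) * false_positive_rate
caught = actual_threats * sensitivity

# Probability that a flagged person is actually a threat:
precision = caught / (caught + false_alarms)

print(int(false_alarms))  # roughly 20,000 innocent travelers flagged per day
print(precision)          # a flagged person is almost certainly innocent
```

Because genuine attackers are vastly outnumbered by ordinary travelers, even a 1% false positive rate buries the handful of true detections under tens of thousands of false alarms, which is exactly the disruption the commenter describes.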
