Web sites that can take a punch

Mar 17, 2010 by Larry Hardesty

The recent, well-publicized cyberattack on Google was just the latest skirmish in a long war. And like most long wars, this one features an arms race: hackers seek out new security holes, and web site administrators try to close them.

Systems for detecting attacks against networked computers are commercially available, and academic and industrial researchers are constantly improving them. But when a web site is under attack, its only viable defense may be to take its servers offline, which in the short term can cost it money in lost revenue and productivity and in the long term could hurt its credibility. Indeed, knocking a site offline may be an attacker’s sole intention.

MIT researchers have developed a system to keep web servers — or, for that matter, any Internet-connected computers — running even when they’re under attack. The work was funded largely by the U.S. Defense Department’s Defense Advanced Research Projects Agency (DARPA), and in a pair of tests whose thoroughness is unusual in academia, DARPA hired a group of professionals outside MIT to try to bring down a test network protected by the new system. In both tests, says Martin Rinard, the professor of electrical engineering and computer science who led the research, the system exceeded all the performance criteria that DARPA set for it.

The MIT system was developed by a host of researchers, including not only Rinard but also Jeff Perkins, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL); postdoctoral fellow Stelios Sidiroglou-Douskos; and Professor Michael Ernst, who has since moved to the University of Washington. During normal operation, it monitors the programs running on an Internet-connected computer to determine their normal range of behavior, and during an attack, it simply refuses to let them wander outside that range.

To take a simple example, suppose that a program running on a web server routinely stores data in one of two memory locations — call them A and B. During an attack, malicious code tries to trick the program into storing data at location C instead. The MIT system won’t let it: instead, it sends the data to either location A or location B.
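
In code, the idea might look something like the following Python sketch. It is not the researchers’ implementation, which instruments running programs rather than toy dictionaries, and all of the names here are invented for illustration:

```python
# A purely illustrative sketch of the learn-then-enforce idea above;
# every name is invented, and memory is modeled as a dictionary.

class WriteMonitor:
    def __init__(self):
        self.allowed = set()    # locations observed during normal runs
        self.enforcing = False  # False = learning mode, True = under attack

    def write(self, memory, address, value):
        if not self.enforcing:
            # Learning phase: remember every location the program
            # legitimately stores data to (locations A and B).
            self.allowed.add(address)
            memory[address] = value
        elif address in self.allowed:
            # The write matches learned behavior; let it through.
            memory[address] = value
        else:
            # An out-of-range write (the attacker's location C) is
            # redirected to a previously observed location instead.
            memory[min(self.allowed)] = value

mem = {}
monitor = WriteMonitor()
monitor.write(mem, 0xA, "normal data")    # learning: A becomes allowed
monitor.enforcing = True
monitor.write(mem, 0xC, "injected data")  # lands at A, never at C
```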

Of course, the data may not be of a type that belongs at either of those locations, and some of the other behaviors the system modifies could be even more disruptive than misdirected data storage. But in sites with large banks of servers, the MIT system gets several chances to find the best response to an attack. If storing at location A causes one server in the bank to crash, the MIT system will tell the other servers to store the data at location B instead.

“The idea is that you’ve got hundreds of machines out there,” Rinard says. “We’re saying, ‘Okay, fine, you can take out six or 10 of my 200 machines.’” But, he adds, “by observing what happens with the executions of those six or 10 machines, we’ll be able to deploy patches out to protect the rest of the machines.” The entire process of recognizing an attack, testing a number of countermeasures and deploying the most effective ones can take a matter of seconds.
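
That trial-and-deploy loop might be sketched roughly as follows; `apply_patch`, `survives_attack`, and `deploy` are hypothetical stand-ins for machinery the article does not describe:

```python
# Hypothetical sketch of the recognize-test-deploy loop: sacrifice a
# few machines to test candidate countermeasures, then roll the first
# one that works out to the rest of the fleet.

def find_and_deploy_patch(candidates, trial_machines, fleet,
                          apply_patch, survives_attack, deploy):
    """Test each countermeasure on one sacrificial machine; deploy the
    first one that keeps the application up to the whole fleet."""
    for patch, machine in zip(candidates, trial_machines):
        apply_patch(machine, patch)      # e.g., store at B instead of A
        if survives_attack(machine):     # did the application stay up?
            for m in fleet:
                deploy(m, patch)         # protect the remaining machines
            return patch
    return None                          # trial budget exhausted
```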

Baptism by fire

In the first of DARPA’s two field tests, engineers at a computer security firm — the so-called red team — were given the code for the MIT defense system. (In the real world, a company that marketed such a system would make every effort to keep its code secret, but Rinard says that it’s standard practice in the security field to consider the worst-case scenario.) The red team had several months in which to devise attacks against a hypothetical network protected by the system. During the test itself, the system prevented any malicious code from executing on the protected computers, and in 70 percent of cases it kept the applications running on those computers from going down. DARPA also set performance goals for the system, such as limits on the amount of extra processing power it could require and on the extent to which it could alter the applications’ normal operation. In all cases, the system was well within DARPA’s prescribed limits.

The first red-team exercise considered cases in which hackers tried to infect computers with malicious code, and the MIT researchers presented the results of the test at the Association for Computing Machinery’s Symposium on Operating Systems Principles last fall. A second red-team exercise, testing an updated version of the defense system that the MIT researchers developed together with defense contractor BAE Systems, concluded at the end of January. That test evaluated the system’s ability to handle a different kind of attack, which seeks to circumvent security checks that web applications typically perform to ensure that users have permission to access protected information. Although the researchers are still sorting through the data from that test, Sidiroglou-Douskos says that the system’s success rate in keeping applications up and running rose from 70 percent to 90 percent.
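
For readers unfamiliar with that attack class, the following toy fragment, invented entirely for illustration, shows the kind of permission check such attacks try to sidestep:

```python
# Toy illustration (invented names throughout) of an access-control
# check of the kind the second exercise's attacks targeted.

PERMISSIONS = {"alice": {"report.pdf"}}

def read_protected(user, resource, store):
    # This is the check an attacker wants to circumvent: skip it, and
    # any user can fetch any resource in the store.
    if resource not in PERMISSIONS.get(user, set()):
        raise PermissionError(f"{user} may not access {resource}")
    return store[resource]
```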

Angelos Keromytis, an associate professor of computer science at Columbia University, who works on related techniques for combating cyberattacks, says that the MIT approach is “very original,” but cautions that Web developers may be reluctant to adopt it anytime soon. “They’re wary of a system that changes another system automatically,” Keromytis says. “When they manually make changes to their systems, they break them, so they think that automatically doing it is going to be worse.” Keromytis points out, however, that while DARPA has run a number of red-team exercises evaluating new technologies in a range of areas, “This is probably one of the most successful exercises that I have seen.” The mere fact that DARPA was willing to spend so much money testing the system, Keromytis says, indicates that “they think it’s close enough to a rough prototype that works, which is more than one can say for most academic research.”

User comments (5)


NeilFarbstein
Mar 18, 2010
No. Sergey Brin's harebrained idea of cloud computing will be hacked to death by anyone who wants easy pickings. Whole PC processors' worth of info on every account in the cloud. And after they get quantum computers to crack the security codes, they will crack Brin's dream wide open.
designmemetic
Mar 18, 2010
It would probably cost less if the government just gave away free virus protection software with no strings attached. That would solve the zombie computer problem at its source and prevent DDOS, spam and a lot of other problems at the same time. If it's really in our national security interest, then the many companies selling virus protection software will just have to accept the loss of their victims, oops, I meant customers.
KBK
Mar 18, 2010
They said the magic word. DARPA.

It's like politicians. If a politician's mouth is moving, they are lying. If the word DARPA is involved, well.....figure it out.
Husky
Mar 18, 2010
So, instead of one multithreaded server that could be brought down by hacking one of the threads, they split the server up into a bunch of virtualized PCs that you perhaps could hack, but the VPC window could be closed easily without affecting the host, and all the other VPCs could still handle web requests. Virtualized memory resources and IP stack passthroughs would be cleaned up when a bad VPC window closes, making the system more resistant to even massive DDOS nuking.
dutchman
Mar 18, 2010
So, KBK, this whole internet thing, which started as DARPANET so scientists could freely exchange information (until/before the lawyers found out about it), really does not exist? Is that what you are trying to tell us?