Can control theory make software better?

March 19, 2013 by Larry Hardesty
The oscillation of a pendulum offers the simplest example of a Lyapunov function, a central concept in control theory. The pendulum's loss of energy with each swing guarantees that it won't lurch into a less stable state.

"Formal verification" is a set of methods for mathematically proving that a computer program does what it's supposed to do. It's universal in hardware design and in the development of critical control software that can't tolerate bugs; it's common in academic research; and it's beginning to make inroads in commercial software.

In the latest issue of the journal IEEE Transactions on Automatic Control, researchers from MIT's Laboratory for Information and Decision Systems (LIDS) and a colleague at Georgia Tech show how to apply principles from control theory, the branch of mathematics that analyzes dynamical systems such as robots, to formal verification. The result could help expand computer scientists' repertoire of formal-verification techniques, and it could be particularly useful in the area of approximate computation, in which designers of computer systems trade a little bit of computational accuracy for large gains in speed or energy efficiency.

In particular, the researchers adapted something called a Lyapunov function, which is a mainstay of control theory. The graph of a standard Lyapunov function slopes everywhere toward its minimum value: It can be thought of as looking kind of like a bowl. If the function characterizes the dynamics of a physical system, and the minimum value represents a stable state of the system, then the curve of the graph guarantees that the system will move toward greater stability.

"The most basic example of a Lyapunov function is a pendulum swinging and its energy decaying," says Mardavij Roozbehani, a principal research scientist in LIDS and lead author on the new paper. "This decay of energy becomes a certificate of stability, or ',' of the pendulum system."

Of course, most dynamical systems are more complex than pendulums, and finding Lyapunov functions that characterize them can be difficult. But there's a large literature on Lyapunov functions in control theory, and Roozbehani and his colleagues are hopeful that much of it will prove applicable to software verification.

Skirting dangers

In their new paper, Roozbehani and his coauthors—MIT professor of electrical engineering Alexandre Megretski and Eric Feron, a professor of aerospace software engineering at Georgia Tech—envision a computer program as a set of rules for navigating a space defined by the variables in the program and the memory locations of the program instructions. Any state of the program—any values for the variables during execution of a particular instruction—constitutes a point in that space. Problems with a program's execution, such as dividing by zero or overloading the memory, can be thought of as regions in the space.
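As a toy illustration of this state-space view (a hypothetical three-line program, not an example from the paper), the sketch below treats each (program counter, variable value) pair as a point in the program's state space and explores every reachable point, checking that none falls in the danger region where a division by zero would occur:

```python
from collections import deque

# A hypothetical toy program, viewed as a transition system over states (pc, x):
#   L0: if x > 1: x = x - 1; goto L0
#   L1: y = 100 // x          # danger zone: pc == 1 and x == 0
#   L2: halt

def step(state):
    pc, x = state
    if pc == 0:
        return (0, x - 1) if x > 1 else (1, x)
    if pc == 1:
        return (2, x)          # the division executes here
    return state               # halted

def danger(state):
    pc, x = state
    return pc == 1 and x == 0  # dividing by zero

# Explore every state reachable from the initial set x in {2, ..., 5}
# and check that none of them lies in the danger region.
frontier = deque((0, x) for x in range(2, 6))
seen = set(frontier)
while frontier:
    s = frontier.popleft()
    assert not danger(s), f"bug reachable at {s}"
    t = step(s)
    if t not in seen:
        seen.add(t)
        frontier.append(t)

print(sorted(seen))
```

Exhaustive enumeration like this only works for tiny, finite state spaces; the appeal of a Lyapunov-style certificate is that it rules out the danger zones without visiting every state.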

In this context, formal verification is a matter of demonstrating that the program will never steer its variables into any of these danger zones. To do that, the researchers introduce an analogue of Lyapunov functions that they call Lyapunov invariants. If the graph of a Lyapunov invariant is in some sense bowl shaped, then the task is to find a Lyapunov invariant such that the initial values of the program's variables lie in the basin of the bowl, and all of the danger zones lie farther up the bowl's walls. Veering toward the danger zones would then be analogous to a pendulum's suddenly swinging out farther than it did on its previous swing.
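A minimal sketch of that idea, using a hypothetical one-variable loop rather than anything from the paper: the function V below never increases along any transition, and every danger state sits at a strictly higher level of V than any initial state, so execution can never climb the bowl's wall into the danger zone.

```python
# Hypothetical one-variable program: x starts in [0, 0.9] and each loop
# iteration updates x <- 0.5*x + 0.1. The "danger zone" is x >= 1.
# V(x) = max(x, 0.2) acts as a Lyapunov invariant if:
#   (1) V never increases along any transition, and
#   (2) V is strictly larger on the danger zone than on any initial state.

def update(x):
    return 0.5 * x + 0.1      # loop body; fixed point at x = 0.2

def V(x):
    return max(x, 0.2)        # candidate Lyapunov invariant

# Check the two certificate conditions on a fine grid of states in [0, 1).
xs = [i / 1000 for i in range(1000)]
assert all(V(update(x)) <= V(x) for x in xs)      # (1) non-increasing
initial_max = max(V(x) for x in xs if x <= 0.9)   # highest initial level
danger_min = V(1.0)                               # lowest danger level
assert initial_max < danger_min                   # (2) danger lies higher up
print(initial_max, danger_min)
```

The grid check here is only a sanity test; for this particular update rule the two conditions can also be verified by hand, since 0.5x + 0.1 <= x whenever x >= 0.2.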

In practice, finding a Lyapunov invariant with the desired properties means systematically investigating different classes of functions. There's no general way to predict in advance what type of function it will be—or even that it exists. But Roozbehani imagines that, if his and his colleagues' approach catches on, researchers will begin to identify algorithms that lend themselves to particular types of Lyapunov invariants, as has happened with control problems and Lyapunov functions.

Fuzzy thinking

Moreover, many of the critical software systems that require formal verification implement control systems designed using Lyapunov functions. "So there are intuitive reasons to believe that, at least for control-system software, these methods will work well," Roozbehani says.

Roozbehani is also enthusiastic about possible applications in approximate computation. As he explains, many control systems are based on mathematical models that can't capture all of the complexity of real dynamical systems. So control theorists have developed analytic methods that can account for model inaccuracies and provide guarantees of stability even in the presence of uncertainty. Those techniques, Roozbehani argues, could be perfectly suited for verifying code that exploits approximate computation.

"Computer scientists are not used to thinking about robustness of software," says George Pappas, chair of the Department of Electrical and Systems Engineering at the University of Pennsylvania. "This is the first work that is formalizing the notion of robustness for software. It's a paradigm shift, from the more exhaustive, combinatorial view of checking for bugs in software, to a view where you try to see how robust your software is to changes in the input or the internal state of computation and so on."

"The idea may not apply in all possible kinds of software," Pappas cautions. "But if you're thinking about that implements, say, controller or sensor functionality, I think there's no question that these types of ideas will have a lot of impact."

More information: Paper: "Optimization of Lyapunov Invariants in Verification of Software Systems"

4 / 5 (4) Mar 19, 2013
I would like to see a formal verification scheme for systems of codified law. Then we could finally get rid of all those laws that are in conflict with each other, and there are a ton of those.
1.5 / 5 (2) Mar 19, 2013
I would like to see a formal verification scheme for systems of codified law.
By "verification" do you mean validation that physical laws usually go through, or do you mean falsification? Verification is pretty useless. Falsifiability is the demarcation between science and nonsense non-science.
Then we could finally get rid of all those laws that are in conflict with each other and there are ton of those.
Mere "conflict" isn't enough. When one is formally contradicted then the issue must be resolved.

Read and understand, first, Karl Popper's Logic of Scientific Discovery and then E. T. Jaynes' Probability Theory: The Logic of Science.
5 / 5 (2) Mar 19, 2013
By "verification" do you mean validation that physical laws usually go through, or do you mean falsification?

No, he means civil laws, not physical laws. You sir, have been trolled.
3.4 / 5 (5) Mar 19, 2013
it's common in academic research

It is? The way I remember it most academic software is held together by spit and wishful thinking.
Stability isn't the point of academic software. You're always ripping out and replacing algorithms. It is in a constant state of "let's try THIS out".
Much software is also handed on from grad student to grad student (with sometimes wildly different programming styles and abilities). And don't even get me started on the commenting styles (or lack thereof).

I have yet to see academic software that stands up to ANY kind of test that is the norm in commercial production software. (But as noted: it isn't supposed to.)
1 / 5 (1) Mar 19, 2013
At least to me, a "pendulum swing" is more a reflex or reaction to other events. Though it "does the same thing, swing," its action reflects more the nature, however vaguely, of what made it "do that" in the first place. As for "predicting things and events," I only know the one (98%?) way to do that: Invent it. Create it out of whole cloth if you have to, but then that makes you one of many driving the bus, and no longer just a passenger.
5 / 5 (1) Mar 19, 2013
however vaguely

I hate it when people use that phrase. Hiding behind rhetoric.

You have managed to completely miss the point, go on an entirely unrelated tangent, poorly apply an already overused metaphor, and couch it all in defensive, self-deprecating rhetoric.

God I love the internet.
not rated yet Mar 20, 2013
I took a course in software verification and correctness theory. What I took away from it was that the effort/cost to prove relatively small programs correct would greatly exceed the effort/cost of writing the program in the first place.

With large programs, I just don't see it being possible.
not rated yet Mar 24, 2013
What I took away from it was the effort/cost to prove relatively small programs as correct would greatly exceed the effort/cost to make the program to begin with.

That's already the reality in software engineering. Programming the software isn't the biggest effort/highest cost factor. Testing, support, IT infrastructure, and most of all: product owners who don't know what they want* all are major cost factors.

* to be fair: software engineers aren't all that good at communicating what is possible and with how much effort. But all too often product owners will take the easy route and just say "make some software"...and then start changing fundamental architecture requirements halfway through the project.

These are some things I've heard (more than a year) after the last project started:

"Oh, it should run onn Linux, too"
"Oh, it should run on an Ipad, too"
"Oh, it should also be able to run on a server farm and the client should just have a dumb terminal frontend"
... *sigh*
