Computer scientists are constantly searching for ways to squeeze ever more bandwidth from communications networks.

Now a new approach to understanding a basic concept in graph theory, known as "vertex connectivity," could ultimately lead to communications protocols—the rules that govern how digital messages are exchanged—that coax as much bandwidth as possible from networks.

Graph theory plays a central role in mathematics and computer science, where it is used to describe relationships between objects. A graph consists of a number of nodes, or vertices, which represent the objects, and connecting lines between them, known as edges, which represent the relationships. A communications network, for example, can be modeled as a graph in which each node of the network is a vertex, and each connection between two nodes is an edge.

One of the fundamental concepts within graph theory is connectivity, which has two variants: edge connectivity and vertex connectivity. Edge connectivity is the minimum number of edges that must be removed from a given graph to disconnect it; vertex connectivity is the minimum number of vertices. The lower either number, therefore, the easier the graph is to disconnect, or break apart.
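On small graphs, both numbers can be computed by brute force, trying ever-larger sets of removals until the graph falls apart. The sketch below is illustrative only (not from the paper); `vertex_connectivity` and `is_connected` are hypothetical helpers written for this example:

```python
from itertools import combinations

def is_connected(nodes, edges):
    """BFS check that the graph on `nodes` with `edges` is connected."""
    if not nodes:
        return True
    nodes = set(nodes)
    adj = {v: set() for v in nodes}
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == nodes

def vertex_connectivity(nodes, edges):
    """Smallest number of vertices whose removal disconnects the graph."""
    n = len(nodes)
    for k in range(n - 1):  # try cut sizes 0 .. n-2
        for cut in combinations(nodes, k):
            rest = [v for v in nodes if v not in cut]
            if not is_connected(rest, edges):
                return k
    return n - 1  # complete graphs can't be disconnected; convention is n - 1

# A 4-cycle: removing any one node leaves a path (still connected),
# but removing two opposite nodes disconnects it.
print(vertex_connectivity([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)]))  # 2
```

Edge connectivity can be computed the same way by deleting edges instead of vertices; real systems use far faster max-flow-based algorithms rather than this exponential search.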

In this way both concepts show how robust a network is against failure, and how much flow can pass through it—whether the flow of information in a communications network, traffic flow in a transportation system, or fluid flow in hydraulics.

**Reducing edge connectivity's edge**

However, while a great deal of research has been carried out in mathematics to solve problems associated with edge connectivity, there has been relatively little success in answering questions about vertex connectivity.

But at the ACM-SIAM Symposium on Discrete Algorithms in Portland, Ore., in January, Mohsen Ghaffari, a graduate student in the Computer Science and Artificial Intelligence Laboratory at MIT, will present a new technique for addressing vertex-connectivity problems.

"This could ultimately help us understand how to build more robust and faster networks," says Ghaffari, who developed the new approach alongside Keren Censor-Hillel at the Technion and Fabian Kuhn at the University of Freiburg.

In the 1960s, mathematicians William Tutte and Crispin Nash-Williams separately developed theories about structures called edge-disjoint spanning trees, which now serve as one of the key technical tools in many problems about edge connectivity.

A spanning tree is a subgraph—or a graph-within-a-graph—that connects all of the graph's nodes using the smallest possible number of edges: one fewer than the number of nodes. A set of spanning trees within a graph is called "edge-disjoint" if no two of them share any of these connecting lines.

If a network contains three edge-disjoint spanning trees, for example, information can flow in parallel along each of these trees at the same time, giving three times the bandwidth of a graph containing just one tree. The higher the number of edge-disjoint spanning trees, the larger the information flow, Ghaffari says. "The results of Tutte and Nash-Williams show that each graph contains almost as many spanning trees as its edge connectivity," he says.
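The idea can be checked on a small example. The sketch below (illustrative only; `is_spanning_tree` and `reaches_all` are hypothetical helpers) verifies that the complete graph on four nodes splits into two edge-disjoint spanning trees:

```python
def reaches_all(nodes, edges):
    """BFS check that `edges` connect every node in `nodes`."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == set(nodes)

def is_spanning_tree(nodes, tree):
    # A spanning tree has exactly n - 1 edges and connects every node.
    return len(tree) == len(nodes) - 1 and reaches_all(nodes, tree)

# The complete graph K4 (every pair of nodes connected, 6 edges total)
# splits into two edge-disjoint spanning trees of 3 edges each.
nodes = [1, 2, 3, 4]
t1 = [(1, 2), (2, 3), (3, 4)]
t2 = [(1, 3), (1, 4), (2, 4)]
print(is_spanning_tree(nodes, t1))  # True
print(is_spanning_tree(nodes, t2))  # True
print(set(t1).isdisjoint(t2))       # True
```

Since K4 has edge connectivity 3, finding two disjoint trees matches the Tutte/Nash-Williams bound of roughly half the edge connectivity.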

Now the team has created an analogous theory about vertex connectivity. They did this by breaking down the graph into vertex-disjoint groups of nodes, known as connected dominating sets. In graph theory, a group of nodes is called a connected dominating set if the nodes within it form a connected subgraph, and every other node in the graph is adjacent to at least one node inside the group.

In this way, information can be disseminated among the nodes of the set, and then passed to any other node in the network.
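The definition has two conditions to check: the set induces a connected subgraph, and every outside node has a neighbor inside the set. The hypothetical sketch below (`is_connected_dominating_set` is written for this example, not from the paper) tests both on a six-node cycle:

```python
def is_connected_dominating_set(nodes, edges, cds):
    """Check both conditions of a connected dominating set `cds` (a set)."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Dominating: every node outside the set has a neighbor inside it.
    dominating = all(v in cds or adj[v] & cds for v in nodes)
    # Connected: the subgraph induced on the set is itself connected.
    start = next(iter(cds))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()] & cds:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return dominating and seen == cds

# A 6-cycle: nodes {1, 2, 3, 4} form a connected path, and the two
# remaining nodes (5 and 6) each touch it — a connected dominating set.
nodes = [1, 2, 3, 4, 5, 6]
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1)]
print(is_connected_dominating_set(nodes, edges, {1, 2, 3, 4}))  # True
print(is_connected_dominating_set(nodes, edges, {1, 4}))        # False
```

The second call fails because nodes 1 and 4 dominate the cycle but are not adjacent to each other, so the set is dominating yet not connected.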

So, in a similar way to Tutte and Nash-Williams' results for edge connectivity, "each graph contains almost as many vertex-disjoint connected dominating sets as its vertex connectivity," Ghaffari says.

"So if you think of an application like broadcasting information through a network, we can now decompose the network into many groups, each being one connected dominating set," he says. "Each of these groups is then going to be responsible for broadcasting some set of the messages, and all groups work in parallel to broadcast all the messages fast—almost as fast as possible."
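As a toy model of this scheduling idea (not the team's actual algorithm), messages can simply be dealt out round-robin across the vertex-disjoint sets; `assign_messages` below is a hypothetical helper:

```python
def assign_messages(messages, num_sets):
    """Toy round-robin split: message j goes to dominating set j mod k,
    so k vertex-disjoint sets can broadcast k messages per round in parallel."""
    schedule = [[] for _ in range(num_sets)]
    for j, msg in enumerate(messages):
        schedule[j % num_sets].append(msg)
    return schedule

# With 3 vertex-disjoint connected dominating sets, each set carries
# only a third of the messages, and all three broadcast simultaneously.
print(assign_messages(["m1", "m2", "m3", "m4", "m5", "m6"], 3))
# [['m1', 'm4'], ['m2', 'm5'], ['m3', 'm6']]
```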

The team has now developed an algorithm that can carefully decompose a network into many connected dominating sets. In this way, it can structure so-called wireless ad hoc networks, in which individual nodes route data by passing it from one to the next to ensure the best possible speed of information flow. "We want to be able to spread as much information as possible per unit of time, to create faster and faster networks," Ghaffari says. "And when a graph has a better vertex connectivity, it allows a larger flow [of information]," he adds.

**Applications in assessing robustness**

The researchers can also use their new approach to analyze the robustness of a network against random failures. "These new techniques also allow us to analyze whether a network is likely to remain connected when its nodes fail randomly with some given probability," Ghaffari says. "Reliability against random edge failures is well understood, but we knew much less about that against node failures," he adds.
