Explained: Matrices

December 6, 2013 by Larry Hardesty
A matrix multiplication diagram.

Among the most common tools in electrical engineering and computer science are rectangular grids of numbers known as matrices. The numbers in a matrix can represent data, and they can also represent mathematical equations. In many time-sensitive engineering applications, multiplying matrices can give quick but good approximations of much more complicated calculations.

Matrices arose originally as a way to describe systems of linear equations, a type of problem familiar to anyone who took grade-school algebra. "Linear" just means that the variables in the equations aren't raised to any power other than 1 (no squares, cubes, or higher powers), so their graphs will always be straight lines.

The equation x - 2y = 0, for instance, has an infinite number of solutions for both x and y, which can be depicted as a straight line that passes through the points (0,0), (2,1), (4,2), and so on. But if you combine it with the equation x - y = 1, then there's only one solution: x = 2 and y = 1. The point (2,1) is also where the graphs of the two equations intersect.

The matrix that depicts those two equations would be a two-by-two grid of numbers: The top row would be [1 -2], and the bottom row would be [1 -1], to correspond to the coefficients of the variables in the two equations.
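To make that concrete, here is a minimal sketch (in Python with NumPy, which the article itself doesn't use) of how a computer would solve that two-equation system once it's written in matrix form:

```python
import numpy as np

A = np.array([[1, -2],   # coefficients of x - 2y = 0
              [1, -1]])  # coefficients of x - y  = 1
b = np.array([0, 1])     # right-hand sides of the two equations

solution = np.linalg.solve(A, b)
print(solution)          # [2. 1.] -> x = 2, y = 1, the intersection point (2, 1)
```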

In a range of applications from image processing to genetic analysis, computers are often called upon to solve systems of linear equations—usually with many more than two variables. Even more frequently, they're called upon to multiply matrices.

Matrix multiplication can be thought of as evaluating sets of linear expressions for particular values of their variables. Suppose, for instance, that the expressions t + 2p + 3h; 4t + 5p + 6h; and 7t + 8p + 9h describe three different mathematical operations involving temperature, pressure, and humidity measurements. They could be represented as a matrix with three rows: [1 2 3], [4 5 6], and [7 8 9].

Now suppose that, at two different times, you take temperature, pressure, and humidity readings outside your home. Those readings could be represented as a matrix as well, with the first set of readings in one column and the second in the other. Multiplying these matrices together means matching up rows from the first matrix—the one describing the equations—and columns from the second—the one representing the measurements—multiplying the corresponding terms, adding them all up, and entering the results in a new matrix. The numbers in the final matrix might, for instance, predict the trajectory of a low-pressure system.
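Here is a hedged sketch of that multiplication in Python with NumPy; the readings are invented numbers, chosen only to illustrate the row-times-column bookkeeping:

```python
import numpy as np

equations = np.array([[1, 2, 3],    # t + 2p + 3h
                      [4, 5, 6],    # 4t + 5p + 6h
                      [7, 8, 9]])   # 7t + 8p + 9h

# Two sets of readings, one per column; the numbers are made up for illustration.
readings = np.array([[20.0, 22.0],   # temperature at time 1, time 2
                     [ 1.0,  1.1],   # pressure
                     [ 0.6,  0.5]])  # humidity

result = equations @ readings   # match rows with columns, multiply, add up
print(result)                   # 3x2 matrix: each expression evaluated at each time
```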

Of course, reducing the complex dynamics of weather-system models to a system of linear equations is itself a difficult task. But that points to one of the reasons that matrices are so common in computer science: They allow computers to, in effect, do a lot of the computational heavy lifting in advance. Creating a matrix that yields useful computational results may be difficult, but performing matrix multiplication generally isn't.

One of the areas of computer science in which matrix multiplication is particularly useful is graphics, since a digital image is basically a matrix to begin with: The rows and columns of the matrix correspond to rows and columns of pixels, and the numerical entries correspond to the pixels' color values. Decoding digital video, for instance, requires matrix multiplication; earlier this year, MIT researchers were able to build one of the first chips to implement the new high-efficiency video-coding standard for ultrahigh-definition TVs, in part because of patterns they discerned in the matrices it employs.
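As an illustration of that point (a generic discrete cosine transform, not the chip's actual integer transform), here is a sketch of the kind of matrix multiplication video codecs perform: an 8-by-8 block of pixel values is transformed by multiplying it by a transform matrix on both sides, and multiplying by the transposed matrix undoes it.

```python
import numpy as np

N = 8
n = np.arange(N)
# Orthonormal DCT-II basis matrix: entry [k, j] = sqrt(2/N) * cos(pi*(2j+1)*k / (2N)).
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C[0, :] /= np.sqrt(2.0)

block = np.random.randint(0, 256, size=(N, N)).astype(float)  # stand-in pixel values
coeffs = C @ block @ C.T       # forward 2-D transform: two matrix multiplications
restored = C.T @ coeffs @ C    # inverse transform recovers the original block
print(np.allclose(restored, block))  # True
```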

In the same way that matrix multiplication can help process digital video, it can help process digital sound. A digital audio signal is basically a sequence of numbers, representing the variation over time of the air pressure of an acoustic audio signal. Many techniques for filtering or compressing digital audio signals, such as the Fourier transform, rely on matrix multiplication.
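For example, the discrete Fourier transform of a short stretch of audio can be written as a single matrix-vector multiplication; in this sketch the 440 Hz tone is a made-up stand-in signal, and the result is checked against NumPy's FFT:

```python
import numpy as np

N = 64
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)   # DFT matrix: F[k, j] = e^(-2*pi*i*k*j/N)

signal = np.sin(2 * np.pi * 440 * n / 8000)    # made-up 440 Hz tone sampled at 8 kHz
spectrum = F @ signal                          # the whole transform is one multiplication
print(np.allclose(spectrum, np.fft.fft(signal)))  # True: matches NumPy's FFT
```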

Another reason that matrices are so useful in computer science is that graphs are. In this context, a graph is a mathematical construct consisting of nodes, usually depicted as circles, and edges, usually depicted as lines between them. Network diagrams and family trees are familiar examples of graphs, but in computer science they're used to represent everything from operations performed during the execution of a computer program to the relationships characteristic of logistics problems.

Every graph can be represented as a matrix, however, where each column and each row represents a node, and the value at their intersection represents the strength of the connection between them (which might frequently be zero). Often, the most efficient way to analyze graphs is to convert them to matrices first, and the solutions to problems involving graphs are frequently solutions to systems of linear equations.
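A small sketch of that correspondence, using a made-up four-node graph: the adjacency matrix records edge weights, and multiplying the matrix by itself counts two-step walks between nodes, one of the basic graph questions matrix arithmetic can answer.

```python
import numpy as np

#               A  B  C  D
adj = np.array([[0, 1, 0, 2],   # A
                [1, 0, 3, 0],   # B
                [0, 3, 0, 1],   # C
                [2, 0, 1, 0]])  # D: entry (i, j) is the weight of edge i-j, 0 if none

# For an unweighted version of the graph, powers of the adjacency matrix count walks:
# entry (i, j) of the squared matrix is the number of two-step walks from node i to node j.
unweighted = (adj > 0).astype(int)
two_step_walks = unweighted @ unweighted
print(two_step_walks)
```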


