Explained: Regression analysis

Mar 16, 2010 by Peter Dizikes

(PhysOrg.com) -- Regression analysis. It sounds like a part of Freudian psychology. In reality, a regression is a seemingly ubiquitous statistical tool appearing in legions of scientific papers, and regression analysis is a method of measuring the link between two or more phenomena.

Imagine you want to know the connection between the square footage of houses and their sale prices. A regression charts such a link, in so doing pinpointing “an average causal effect,” as MIT economist Josh Angrist and his co-author Jorn-Steffen Pischke of the London School of Economics put it in their 2009 book, “Mostly Harmless Econometrics.”

To grasp the basic concept, take the simplest form of a regression: a linear, bivariate regression, which describes an unchanging relationship between two (and not more) phenomena. Now suppose you are wondering if there is a connection between the time high school students spend doing French homework, and the grades they receive. These types of data can be plotted as points on a graph, where the x-axis is the average number of hours per week a student studies, and the y-axis represents exam scores out of 100. Together, the data points will typically scatter a bit on the graph. The regression analysis creates the single line that best summarizes the distribution of points.
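To picture this, here is a minimal Python sketch that plots a handful of invented (hours, score) pairs as a scatter of points; the data, like everything in this sketch, are hypothetical and chosen only for illustration.

    import matplotlib.pyplot as plt

    # Hypothetical (hours of French homework per week, exam score) pairs.
    data = [(1.0, 58), (2.0, 65), (3.0, 70), (4.0, 71), (5.0, 80), (6.0, 84)]
    hours, scores = zip(*data)

    plt.scatter(hours, scores)
    plt.xlabel("Hours of French homework per week")
    plt.ylabel("Exam score (out of 100)")
    plt.title("Scattered data points that a regression line will summarize")
    plt.show()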

Mathematically, the line representing a simple linear regression is expressed through a basic equation: Y = a₀ + a₁X. Here X is hours spent studying per week, the “independent variable.” Y is the exam scores, the “dependent variable,” since — we believe — those scores depend on time spent studying. Additionally, a₀ is the y-intercept (the value of Y when X is zero) and a₁ is the slope of the line, characterizing the relationship between the two variables.
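Expressed as code, the model is just a straight-line function; the intercept and slope values below are made up for illustration, not fitted from any real data.

    def predicted_score(hours, a0=55.0, a1=4.0):
        """Simple linear model Y = a0 + a1 * X.

        a0 is the predicted score when X is zero (the y-intercept);
        a1 is the change in score per extra hour of study (the slope).
        Both values here are invented, purely for illustration.
        """
        return a0 + a1 * hours

    print(predicted_score(0))  # 55.0, the intercept
    print(predicted_score(5))  # 75.0, i.e. 55 + 4 * 5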

Using two slightly more complex equations, the “normal equations” for the basic linear regression line, we can plug in all the numbers for X and Y, solve for a₀ and a₁, and actually draw the line. That line minimizes the sum of the squares of the distances between all the data points and the line itself; this is the “Ordinary Least Squares” (OLS) method mentioned in mountains of academic papers.
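As a sketch of what solving those normal equations amounts to in practice, the short Python example below fits a₀ and a₁ by ordinary least squares using the standard closed-form solution; the study-hours and exam-score data are invented for illustration.

    # Hypothetical data: weekly hours of French homework (X) and exam scores (Y).
    hours  = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
    scores = [58.0, 65.0, 70.0, 71.0, 80.0, 84.0]

    n = len(hours)
    mean_x = sum(hours) / n
    mean_y = sum(scores) / n

    # Solving the normal equations for a simple linear regression gives:
    #   a1 = sum((x - mean_x) * (y - mean_y)) / sum((x - mean_x)^2)
    #   a0 = mean_y - a1 * mean_x
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, scores))
    sxx = sum((x - mean_x) ** 2 for x in hours)

    a1 = sxy / sxx             # slope
    a0 = mean_y - a1 * mean_x  # y-intercept

    print(f"fitted line: Y = {a0:.2f} + {a1:.2f} * X")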

To see why OLS is logical, imagine a regression line running 6 units below one data point and 6 units above another point; it is 6 units away from the two points, on average. Now suppose a second line runs 10 units below one data point and 2 units above another point; it is also 6 units away from the two points, on average. But if we square the distances involved, we get different results: 6² + 6² = 72 in the first case, and 10² + 2² = 104 in the second case. So the first line yields the lower figure — the “least squares” — and is a more consistent reduction of the distance from the data points. (Additional methods, besides OLS, can find the best line for more complex forms of regression analysis.)
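To make that comparison concrete, a few lines of Python reproduce the arithmetic for the two hypothetical lines.

    # Signed vertical distances (residuals) for the two hypothetical lines.
    line_a = [-6, 6]    # 6 units below one point, 6 units above the other
    line_b = [-10, 2]   # 10 units below one point, 2 units above the other

    def mean_abs_distance(residuals):
        return sum(abs(r) for r in residuals) / len(residuals)

    def sum_of_squares(residuals):
        return sum(r ** 2 for r in residuals)

    print(mean_abs_distance(line_a), mean_abs_distance(line_b))  # 6.0 6.0: same average distance
    print(sum_of_squares(line_a), sum_of_squares(line_b))        # 72 104: least squares prefers the first line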

In turn, the typical distance between the line and all the points (sometimes called the “standard error”) indicates whether the regression analysis has captured a relationship that is strong or weak. The closer a line is to the data points, overall, the stronger the relationship.
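One common way to quantify that typical distance is the residual standard error. Here is a minimal sketch, reusing the hypothetical data above and assuming the line has already been fitted (the coefficients below are assumed values, not taken from the article):

    import math

    # Hypothetical observations and an assumed fitted line Y = a0 + a1 * X.
    hours  = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
    scores = [58.0, 65.0, 70.0, 71.0, 80.0, 84.0]
    a0, a1 = 53.7, 5.0  # assumed coefficients, for illustration only

    residuals = [y - (a0 + a1 * x) for x, y in zip(hours, scores)]

    # Residual standard error: square root of the sum of squared residuals
    # divided by the degrees of freedom (n - 2 for a line with two fitted coefficients).
    n = len(scores)
    rse = math.sqrt(sum(r ** 2 for r in residuals) / (n - 2))
    print(f"residual standard error: {rse:.2f} points")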

Regression analysis, again, establishes a correlation between phenomena. But as the saying goes, correlation is not causation. Even a line that fits the data points closely may not say something definitive about causality. Perhaps some students do succeed in French class because they study hard. Or perhaps those students benefit from better natural linguistic abilities, and they merely enjoy studying more, but do not especially benefit from it. Perhaps there would be a stronger correlation between test scores and the total time students had spent hearing French spoken before they ever entered this particular class. The tale that emerges from good data may not be the whole story.

So it still takes critical thinking and careful studies to locate meaningful cause-and-effect relationships in the world. But at a minimum, regression analysis helps establish the existence of connections that call for closer investigation.
