(Phys.org) —A team of scientists, led by ecologist Lucas Joppa of Microsoft Research, has published a commentary piece in the journal Science highlighting what they say is a growing problem in research. They suggest that an overreliance on source code that has not been properly vetted is increasingly leading to incorrect research results.

The problem, Joppa et al. say, is that researchers increasingly rely on existing software to perform their research even though no one has peer reviewed the software itself. The issue is particularly troubling when large applications are used, they say, because small coding errors can compound. A rounding error in a spreadsheet generally won't cause much of a problem, they note, but when that error is repeated over and over, perhaps millions of times, it can lead to completely inaccurate results.
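
As a rough illustration of how that compounding works (a minimal sketch of our own, not an example from the commentary), consider adding the value 0.1, which has no exact binary floating-point representation, ten million times:

    total = 0.0
    step = 0.1            # 0.1 cannot be represented exactly in binary floating point
    n = 10_000_000

    for _ in range(n):
        total += step     # each addition carries a minuscule rounding error

    print(total)                    # on a typical system, roughly 999999.9998 rather than 1000000.0
    print(abs(total - 1_000_000))   # the accumulated error, on the order of 1e-4

Each individual error is far too small to matter on its own; it is the millions of repetitions that turn it into a discrepancy large enough to affect a result.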

In a podcast interview with Science, Joppa explains that the problems with software use in research have come about mainly because the software is written by researchers themselves rather than by trained software developers. Software written by one research group can easily become the standard for many other groups, despite never having been thoroughly tested to ensure it gives accurate results.

He says another problem is that there is sometimes a mismatch between the equations researchers have worked out and the way they are implemented in software. It can become truly problematic, he points out, when a catch-22 arises: researchers use a program to find answers to questions they have no other way to answer or verify. If the software is the only way to get the answer, how do they know it's correct?
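
A hypothetical example of such a mismatch (our own illustration, not one cited by Joppa): a paper might specify the sample variance with Bessel's correction, dividing by n - 1, while the code quietly divides by n, silently biasing every result built on it.

    def variance_as_published(xs):
        # the equation in the paper: divide by n - 1 (Bessel's correction)
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    def variance_as_coded(xs):
        # the implementation: divides by n, a subtle mismatch with the paper
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
    print(variance_as_published(data))   # about 4.57
    print(variance_as_coded(data))       # 4.0, biased low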

The researchers behind the commentary pulled data from a survey conducted among fellow ecologists, a field they note relies very heavily on big number-crunching applications. Among other findings, the team reports that just 8 percent of the 400 scientists who responded said they validated results from a black-box computer system against more than one system.
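
One way to read that kind of validation in practice (a sketch under our own assumptions, not a protocol from the study) is to re-run the same calculation through an independent, widely reviewed implementation and check that the two agree within a tolerance:

    import math
    import statistics

    def stdev_homegrown(xs):
        # a research group's own implementation of the sample standard deviation
        mean = sum(xs) / len(xs)
        return (sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

    data = [1.2, 3.4, 2.2, 5.6, 4.4, 3.1]
    a = stdev_homegrown(data)
    b = statistics.stdev(data)    # an independent, widely reviewed implementation

    # if the two disagree beyond rounding, something is wrong in one of them
    assert math.isclose(a, b, rel_tol=1e-9), f"implementations disagree: {a} vs {b}"
    print("cross-check passed:", a, b)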

The researchers don't just point out problems with the way software is used in current research efforts; they also offer ways to improve the situation. The first suggestions are the most obvious: make research software open source and require it to be peer reviewed before journals accept research articles based on its use. They also suggest journals could help by publishing more articles educating researchers about the problem and how to deal with it. Encouraging colleges and universities to educate students on the issue (and perhaps require more computer science courses) would help too, they add.

More information: www.sciencemag.org/content/340/6134/814

Journal information: Science