Researchers announce master plan for better science

January 10, 2017

An international team of experts has produced a "manifesto" setting forth steps to improve the quality of scientific research.

"There is a way to perform good, reliable, credible, reproducible, trustworthy, useful science," said John Ioannidis, MD, DSc, professor of medicine and of health research and policy at the Stanford University School of Medicine.

"We have ways to improve compared with what we're doing currently, and there are lots of scientists and other stakeholders who are interested in doing this," said Ioannidis, who is senior author of the article, which will be published Jan. 10 in the inaugural issue of Nature Human Behavior. The lead author is Marcus Munafò, PhD, professor of biological psychology at the University of Bristol.

What's holding science back?

Each year, the U.S. government spends nearly $70 billion on nondefense research and development, including a budget of more than $30 billion for the National Institutes of Health. Yet research on how science is conducted—so-called meta-research—has made clear that a substantial number of published scientific papers fail to move science forward. One analysis, wrote the authors, estimated that as much as 85 percent of the biomedical research effort is wasted.

One reason for this is that scientists often find patterns in noisy data, the way we see whales or faces in the shapes of clouds. This effect is more likely when researchers apply hundreds or even thousands of different analyses to the same data set until statistically significant effects appear.
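
A rough sense of the effect can be had from a simulation. The following sketch (hypothetical, in Python with NumPy and SciPy; the sample sizes and number of analyses are invented) tests hundreds of outcome measures that are pure noise against an arbitrary grouping and counts how many come out "statistically significant" anyway:

```python
# Hypothetical sketch: one data set, many outcome measures, and no true effects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 60
n_outcomes = 500                      # number of different analyses tried

group = np.repeat([0, 1], n_subjects // 2)             # arbitrary two-group split
outcomes = rng.normal(size=(n_subjects, n_outcomes))   # pure noise, no real signal

p_values = np.array([
    stats.ttest_ind(outcomes[group == 0, i], outcomes[group == 1, i]).pvalue
    for i in range(n_outcomes)
])

n_sig = int((p_values < 0.05).sum())
print(f"'Significant' findings at p < 0.05: {n_sig} of {n_outcomes}")
# Roughly 5% of the outcomes look 'significant' by chance alone.
```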

The manifesto suggests it's not just scientists themselves who are responsible for improving the quality of science, but also other stakeholders, including research institutions, scientific journals, funders and regulatory agencies. All, said Ioannidis, have important roles to play.

"It's a multiplicative effect," he said, "so you have all of these players working together in the same direction." If any one of the stakeholders doesn't participate in creating incentives for transparency and reproducibility, he said, it makes it harder for everyone else to improve.

"Most of the changes that we propose in the manifesto are interrelated, and the stakeholders are connected as if by rubber bands. If you have one of them move, he or she may pull the others. At the same time, he or she may be restricted because others don't move," said Ioannidis, who is also co-director of the Meta-Research Innovation Center at Stanford.

Manifesto

The eight-page paper describing ways to improve science includes four major categories: methods, reporting and dissemination, reproducibility, and evaluation and incentives.

Methods could be improved, the authors reported, by designing studies to minimize bias—by blinding patients, doctors and other participants, and by registering the study design, outcome measures and analysis plan before the research begins—to prevent subsequent deviations from the study design, regardless of intriguing, serendipitous results.

The authors also state that reporting and dissemination might be improved by eliminating "the file drawer problem," the tendency of researchers to publish results that are novel, statistically significant or supportive of a particular hypothesis, while not publishing other valid but less interesting results. "The consequence," wrote the authors, "is that the published literature indicates stronger evidence for findings than exists in reality."
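
A small simulation (again hypothetical, assuming standard NumPy/SciPy, with made-up study sizes and a made-up true effect) illustrates how selective publication inflates the apparent strength of evidence: if only studies that reach p < 0.05 are published, the published effect sizes overstate the true one.

```python
# Hypothetical sketch: many small studies of a weak true effect, but only the
# 'significant' ones (p < 0.05) make it into print.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.2        # assumed small true effect, in standard-deviation units
n_per_group = 25
n_studies = 2000

all_effects, published_effects = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    observed = treated.mean() - control.mean()
    _, p = stats.ttest_ind(treated, control)
    all_effects.append(observed)
    if p < 0.05:                     # the file drawer swallows the rest
        published_effects.append(observed)

print(f"true effect:                 {true_effect:.2f}")
print(f"mean effect, all studies:    {np.mean(all_effects):.2f}")
print(f"mean effect, published only: {np.mean(published_effects):.2f}")
```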

The file drawer effect is fueled not only by the behavior of individual scientists but also by that of universities, journals, reviewers and funding agencies, the authors write. One way funders and journals can help is by requiring all researchers to meet certain standards. For example, the Cure Huntington Disease Initiative has created an independent standing committee to evaluate proposals and provide disinterested advice to grantees on experimental design and statistical analysis. This committee doesn't just set standards; it actually helps researchers meet them.

The ultimate goal is to get to the truth, Ioannidis said. "When we are doing science, we are trying to arrive at the truth. In many disciplines, we want that truth to translate into something that works. But if it's not true, it's not going to speed up computer software, it's not going to save lives and it's not going to improve quality of life."

He said the goal of the manifesto is to increase the speed at which researchers get closer to the truth. "All these measures are intended to expedite the process of validation—the circle of generating, testing and validating or refuting hypotheses in the scientific machine."

37 comments

syndicate_51
2 / 5 (8) Jan 10, 2017
Wha!?

You say money has been having an effect on the narrative of the scientific community in places!

Who knew!

Scoff....

Where money flows corruption rolls.... ALWAYS!
antialias_physorg
4.3 / 5 (11) Jan 10, 2017
Where money flows corruption rolls.... ALWAYS!

Did you even read the article? None of the effects cited are those of corruption.

And what good would 'corruption' do you as a researcher? No matter whether you get zero dollars of grants or a billion dollars of grants: your salary doesn't change by one cent. Scientists aren't paid like people in other jobs - i.e. where corruption makes 'sense'.
rhugh1066
5 / 5 (1) Jan 10, 2017
Just for the record, corruption is also defined as containing alterations or errors. It's not necessarily just the result of depravity or bribe-taking.
antialias_physorg
4.1 / 5 (9) Jan 10, 2017
I don't think you understand how science works. Most especially in the medical/pharmaceutical sector the threshold for a finding is 95% confidence. That means that up to 5% of papers will report false positives - despite having followed the scientific method to the letter. That's not 'wrongdoing'.

It's also not wrongdoing when you don't publish stuff that doesn't pan out - because you usually just abort the project midway through (i.e. you don't even get to the stage where you do the full in-post analyses - which would be essential for any publication). Reason: Continuing with the work at that point would waste time and money. Money (and especially time) is limited in research. Not to mention that journals pick the stuff that shows best results (And why wouldn't they? Would you leave out great results to put in some humdrum "didn't work" stuff instead? Certainly not.)

Don't get me wrong. The negative results are just as important. But I can see why they don't get published.
Chris_Reeve
2.6 / 5 (5) Jan 10, 2017
The article states ...

"The manifesto suggests it's not just scientists themselves who are responsible for improving the quality of science, but also other stakeholders, including research institutions, scientific journals, funders and regulatory agencies."

The scientists seem to not realize that Dr. Gerald Pollack already went through this process with the top funding agencies. He made his recommendations, and then it was time for them to implement them ...

Nothing changed.

What these guys seem to not get is that you have to alter the rewards system, and you have to focus on controversial science. That seems to be what Pollack got out of the process, because that failure was then the inspiration for his Institute for Venture Science -- which, if it can get funded, will radically alter science forever.
Chris_Reeve
1.6 / 5 (7) Jan 10, 2017
This problem cannot be solved without adequately dealing with challenges to textbook theory. From a science perspective, scientists find signal in noise -- and engage in all of these other bad practices -- for the simple reason that they are working with bad models. A more thoughtful, deeper approach would ask why this happens so often. At this point, we are in the territory of how worldviews inform models. That subject, in turn, brings us to culture, science education, science journalism and funding practices. But none of this stuff matters at all if the scientists have good models to begin with.

It is only when our worldviews are pushing us towards bad models that scientific methodology tries to make up the difference with questionable practices.

This is why controversial science matters so much. If all we did was teach ABOUT controversies, we'd be a step further in the right direction.
TMDurand
5 / 5 (1) Jan 10, 2017
Researchers need to publish all their results, good or bad. Some day we may figure out why a given result was good or went bad (perhaps it was a mistake in procedure - Edison had to do a 1000 steps to build a light bulb, and a failure in any one step ruined the bulb). We may then need to review all the bad cases again (and then some bad cases may turn into good cases). Edison was quoted as saying, "I have not failed 700 times. I have not failed once. I have succeeded in proving that those 700 ways will not work. When I have eliminated the ways that will not work, I will find the way that will work."
radicalbehaviorist2
5 / 5 (1) Jan 11, 2017
The first step to improvement is to ban null hypothesis significance testing. The main issue here is not that it is often misused and misunderstood (it is) but, rather, it does not tell us what we want to know. A p-value gives, by itself, not one iota of quantitative information about the truth or falsity of the null hypothesis. Second, we should use so-called single-subject designs everywhere they are relevant and they are relevant quite often. Such designs provide data relevant to individual subjects (and not just those in the study) because they involve replication within- and between-subject.
antialias_physorg
3.7 / 5 (6) Jan 11, 2017
The first step to improvement is to ban null hypothesis significance testing.

And the alternative is ...?

It's all fine and dandy to claim that X isn't a perfect approach. But if you don't have an approach Y up your sleeve that is better then that's not really helpful.
radicalbehaviorist2
Jan 11, 2017
This comment has been removed by a moderator.
bschott
3.7 / 5 (3) Jan 11, 2017
The first step to improvement is to ban null hypothesis significance testing.

And the alternative is ...?

It's all fine and dandy to claim that X isn't a perfect approach. But if you don't have an approach Y up your sleeve that is better then that's not really helpful.

This whole article is written about avoiding X when it yields nothing helpful instead of proceeding with X knowing the results are unlikely to produce useful results. Better to not use X again and work on figuring Y out.
radicalbehaviorist2
5 / 5 (2) Jan 11, 2017
OK...I'll try again. In my original comments I said: "we should use so-called single-subject designs everywhere they are relevant and they are relevant quite often. Such designs provide data relevant to individual subjects (and not just those in the study) because they involve replication within- and between-subject." As to what should replace the almighty p-value when group-designs are necessary, the answer is this: nothing. "Nothing" because the notion that a p-value is an arbiter of what is or is not "an effect" is a myth. I would say that, in general, one should present the data in graphical form in a variety of ways and make your case for an effect. That's the responsibility of the author, just as it is the reader's responsibility to decide for her or himself if there was an effect. No magic bullet to ease the burden I'm afraid. It's time to take responsibility instead of depending on the magic p-value.
antialias_physorg
3.7 / 5 (6) Jan 13, 2017
You still have to define a measure of significance for single-subject design. Visual inspection of graphs alone is open to individual interpretation.

Single-subject design is also not particularly effective in pharmaceutical trials (because you cannot do intra-subject replication). They also only work if you have a time-stable variable to work on. This is not the case in most pharmaceutical issues. Also if you manage to show an effect (i.e. you cure the patient) then removing the treatment will not tell you anything.

For chemistry or particle physics single-study doesn't work at all, because you never have the same 'subject' for two test runs available.
radicalbehaviorist2
5 / 5 (1) Jan 13, 2017
The "gold standard" for judging an effect would be that the ranges do not overlap in different phases of the experiment. Short of that (e.g., ranges occasionally overlap) effects are generally quite clear. Be aware that behavior analysis has few problems with failures to replicate. And while visual inspection is open to interpretation, the reader sees the same data as the researcher.

Why can you not "do intra-subject replication" in "pharmaceutical trials"?

It is generally true that you need a stable-state (although it is possible to do research with a dependent-variable that is systematically increasing or decreasing). But why would you say that obtaining stable baselines is not possible (that is the implication) "in most pharmaceutical issues"? And, yes, irreversible effects limit the power of SSDs since you cannot do a reversal, but you have, at least, an indication that there was an effect.

I'll have to think about your claims concerning chemistry and physics.

Glen
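
For illustration only, here is a minimal sketch of the "ranges do not overlap" criterion applied to an A-B-A-B reversal design; the response-rate numbers are invented, and the overlap check is just one simple way to express the visual-inspection rule described above.

```python
# Hypothetical sketch of the 'non-overlapping ranges' criterion in an
# A-B-A-B reversal design, using made-up response-rate data.
baseline_1     = [12, 15, 14, 13, 16]   # phase A: baseline
intervention_1 = [28, 31, 27, 30, 29]   # phase B: treatment introduced
baseline_2     = [14, 13, 15, 12, 16]   # phase A: treatment withdrawn
intervention_2 = [30, 27, 29, 32, 28]   # phase B: treatment reintroduced

def ranges_overlap(a, b):
    """True if the observed ranges of two phases overlap at all."""
    return max(a) >= min(b) and max(b) >= min(a)

phases = [baseline_1, intervention_1, baseline_2, intervention_2]
for i in range(len(phases) - 1):
    print(f"Phases {i + 1} and {i + 2} overlap: {ranges_overlap(phases[i], phases[i + 1])}")
# If adjacent baseline and intervention ranges never overlap, and the pattern
# replicates within the subject, the effect is judged clear by inspection.
```
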
Benni
3 / 5 (4) Jan 13, 2017
One analysis, wrote the authors, estimated that as much as 85 percent of the biomedical research effort is wasted.


......and not just in biomedical research, but everything else where block grants are given for every tinfoil hat idea that big Academia (like MIT) conjures up.

These same Academics will parade their beggarly attitudes before Congress under the guise that they are about to come up with major breakthroughs if only they just had another billion to spend on studying the sex lives of fruit flies.

Yeah, we see them in this chatroom on a daily basis, Axemaster, Shavera, RNP, etc, all whining for the privilege to confiscate a portion of our paychecks & diverted to their paychecks, it's the reason they're all bent out of shape that Trump won the election, and now they have a real fear the gravy train ride they had in the recent past is about to come to an end, we can only hope. Academics need to start looking for real jobs where they must produce REAL results.
antialias_physorg
3.9 / 5 (7) Jan 13, 2017
Be aware that behavior analysis has few problems with failures to replicate.

Sure. In psychology experiments it's a good way to set up experiments. Psychology is, in any case, very soft on numbers as the results of diagnosis are always a bit subjective. I was reading your comment as if you were advocating single-subject studies for all areas of science - and (apart from psychology/sociology) - I see no area of science where they make much sense.

Why can you not "do intra-subject replication" in "pharmaceutical trials"?

When you remove the drug - and the drug was effective - then the patient will not revert to being sick. You also cannot do multiple runs on the same subject because of possible lingering effects of the earlier treatment (adaptation to the drug by the patient or the pathogen). You just don't have a stable state.
Benni
2.6 / 5 (5) Jan 13, 2017
Sure. In psychology experiments it's a good way to set up experiments. Psychology is, in any case, very soft on numbers as the results of diagnosis are always a bit subjective. I was reading your comment as if you were advocating single-subject studies for all areas of science - and (apart from psychology/sociology) - I see no area of science where they make much sense.


..........pure unadulterated psychobabble, even you don't know what you just wrote. To you these words you managed to cobble together had such a fluent tone to them that you thought, "What the heck, it sounds so damn good, I guess I'll just run with it".

Nothing you post rises above the level of a gossip columnist.

radicalbehaviorist2
5 / 5 (1) Jan 13, 2017
To A:

Part I.
The methods were actually developed by Claude Bernard - often called the Father of Modern Experimental Medicine. "Soft on numbers"? Psychology is a big field. That phrase does not characterize, for example, behavior analysis, and much of what would be called "neuroscience" takes place in psychology departments, the researchers often having Ph.D.s in psychology. And, for the record, if you treat observers as any transduction device, they can be calibrated. In all of what I did, though, the behavior of nonhumans was measured automatically when they operated some manipulandum. As far as I can see, the method (SSD) is good any time the subject matter can be measured frequently, the dependent-variable becomes stable, and independent-variables can alter the stable-state. Tell me where that is lacking in sense.

End Part I
radicalbehaviorist2
5 / 5 (1) Jan 13, 2017
Part II.

The effects of drugs are not always permanent and, thus, the baseline is often recoverable. There is a lot of very high quality data published regarding behavioral pharmacology using SSDs. So..."you just don't have a stable-state" simply is not always true. You'd have a better case talking about "order-effects" where the baseline is, nonetheless, recoverable. But if some independent-variables are subject to order-effects, then that is a fact of Nature and cannot be by-passed. Randomizing, say, order of doses across subjects doesn't eliminate order-effects...it obscures them. If there are order-effects, they constitute limits on generality that will "come out in the wash" upon systematic replication. Finally, none of your "criticisms" of SSD can make NHST any better. A p-value will still not tell you the probability that Ho is true or false. Where group designs must be used, there is no magic calculation that will absolve one of the responsibility of judging the data.
antialias_physorg
3.4 / 5 (5) Jan 14, 2017
Well, I can tell you a bit about my own PhD (quantifying osteoarthritic changes from CT images - mainly developing algorithms for segmentation and analysis). I used both approaches in parts of my work.

We had patients from several different sites in a longitudinal study to quantify change in bone in the knee. While osteoarthritis is a change of cartilage, the changing loads (due to asymmetric cartilage degradation and sometimes changed walking habits due to pain) lead to remodeling of the bone. CT is cheaper than MR (which optimally would be used to image the cartilage directly), so we wanted to see if we could get good quantification using CT.
In this work highly accurate segmentation is paramount because the changes are very small and subtle (mineral density, fractal dimensions of bone trabeculae, trabeculae anisotropy, bone surface morphology.... )

For verifying the semiautomatic algorithm I did intra (and inter) operator tests, because there it is repeatable.
antialias_physorg
3 / 5 (4) Jan 14, 2017
There it is fine to have one operator do the same datasets over and over because you can revert to a predefined state. But you still want to compare this to inter-operator tests (several operators segmenting the same dataset). It's important because that is what will happen in reality. Many operators will use this algorithm and they all must come to the same conclusion.

However, for the patients, removing the drug being tested isn't going to be sensible for an intra-subject/repeat test, because the prevalence of osteoarthritic change differs with age (and patients do age during clinical trials). Also, osteoarthritis is a non-reversible/progressive disease, so you can't revert to the pre-trial state in any case. Now one might argue that this is particular to osteoarthritis, but even in something like a flu trial the patient isn't the same after the first test. His immune system has adapted - making comparisons to 'another run' iffy (even if that were ethical, BTW - which it's not).
antialias_physorg
3.4 / 5 (5) Jan 14, 2017
As for the "soft on numbers": In this work I came across the various ways to quantify states of osteoarthritis (e.g. the Kellgren-Lawrence scale) or pain/stiffness (the WOMAC scale). While these scales give you numbers, the assignment of these numbers is very subjective (what is a KL 2 for one can be a KL 3 for another; what is a WOMAC 7 can be a WOMAC 13 for someone else).
It is very easy to cheat yourself into taking these numbers as absolute and plugging them into statistics programs.
You really need to get a lot of data before the statistical power is adequate to account for the variability in patient perception (WOMAC) or even physician perception (K-L). A single-subject setup would leave you incredibly at the mercy of single subject bias.
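
As a hypothetical illustration of that trade-off (assuming statsmodels is available; the raw difference and rater variabilities below are invented), a standard two-sample power calculation shows how the required sample size grows as the measurement gets noisier:

```python
# Hypothetical sketch: the noisier the score (larger rater variability), the
# smaller the standardized effect, and the more subjects are needed for the
# same statistical power. Uses statsmodels' two-sample t-test power solver.
from statsmodels.stats.power import TTestIndPower

raw_difference = 1.0            # assumed true difference on the rating scale
power_solver = TTestIndPower()

for rater_sd in (1.0, 2.0, 4.0):               # increasing measurement noise
    effect_size = raw_difference / rater_sd    # standardized effect (Cohen's d)
    n_per_group = power_solver.solve_power(effect_size=effect_size,
                                           alpha=0.05, power=0.8)
    print(f"SD = {rater_sd:.0f}:  d = {effect_size:.2f}, "
          f"~{n_per_group:.0f} subjects per group needed")
```
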
Benni
2.3 / 5 (3) Jan 14, 2017
You really need to get a lot of data before the statistical power is adequate to account for the variability in patient perception (WOMAC) or even physician perception (K-L). A single-subject setup would leave you incredibly at the mercy of single subject bias.
...............and this translates to what? More psychobabble on your part.

Is it possible for you to write ANYTHING that has even the smallest fragment of comprehensibility to it? The fact that you continuously cobble together insensible words & sentences with zero discernible meaning only further demonstrates how desperate you are in looking for attention. You have spent years in this chatroom claiming to have a degree of whatever field of the subject matter is under discussion.

After you claimed to have a Masters degree in Electrical Engineering, I caught you making so much screwed up Commentary here about EE topics, that finally you gave up trying to convince the Chatroom you had such a degree.

antialias_physorg
3.4 / 5 (5) Jan 14, 2017
and this translates to what?

Exactly what it says: The more unsure your measurement (in this case the score) the more data you need to get a significant result.

Is it possible for you to write ANYTHING that has even the smallest fragment of comprehensibility to it?

Why doesn't it surprise me that you can't even grasp a dumbed down synopsis of a thesis? I didn't even use any math or any difficult words.

You have spent years in this chatroom claiming to have a degree of whatever field of the subject matter is under discussion.

No. I have always said I have a degree in human biology. If you can back your lie up then do so. Please. Let's see you try.

After you claimed to have a Masters degree in Electrical Engineering,

I do. So what? I specialized in biomedical electrical engineering at uni. The avenues for a grad student aren't just to PhD of EE. I could have gone for one in CS or physics or math or ...
antialias_physorg
5 / 5 (1) Jan 16, 2017
So what's it gonna be Benni? Run-and-hide as usual? Or are you going to cough up some proof for your allegations?
radicalbehaviorist2
5 / 5 (1) Jan 16, 2017
A.,

Your long post was interesting...it seemed, though, somewhat unresponsive to the original issue. I have a good idea what your endeavor is, but not precisely by any means. Is it the case that you are trying to design some "instrument" (you are calling it an "algorithm") that results in people who read CT scans being able to spot problems with the system (cartilage degeneration etc.) of interest? There you wouldn't want to average across subjects, though that is a sin that is part-and-parcel of NHST. I should have noted, come to think of it, that even where group designs are necessary, one needn't average data across subjects. If N is very large, one should, I think, show the distributions of each type of data. Anyway...where did you use NHST in that project? As to changes in the dependent-variable over time, if the changes are systematic one can still use SSD often. Ditto for times when the baseline is not recoverable. Finally, you have not addressed my general criticism of NHST.

Glen
Da Schneib
not rated yet Jan 16, 2017
This conversation between @anti and @radical is very interesting and informative, regarding these types of studies. The types of tests reliably appropriate to data from large studies of behavior of complex systems like animals or humans are, however, I think, somewhat different than those appropriate to simple systems in physics and chemistry, and I think the article rather glosses over these differences.

The number of degrees of freedom of such complex systems is enormously greater than that of the simple systems physical scientists work with. I think the article reaches beyond its field of competence. It would be better if the author didn't use sweeping generalizations across systems of different levels of complexity.
antialias_physorg
not rated yet Jan 17, 2017
Is it the case that you are trying to design some "instrument" (you are calling it an "algorithm") that results in people who read CT scans able to spot problems with the system (cartilage degeneration etc.) of interest?

It's an algorithm that quantifies change in the bone. The user interaction is minimal (user must set one point in the femur and the tibia close to the center of the growth plate in the CT dataset). However minimal user input is still user input, so I had to provide evidence that the algorithm is robust. That is why I had to do inter and intra operator testing.
Intra operator testing would conform to the 'single subject' experimental setup we discussed. But it isn't enough to JUST do intra operator testing - even in such a situation where you can reliably go back to the initial conditions.

(In this study no datasets are averaged)
antialias_physorg
not rated yet Jan 17, 2017
if the changes are systematic one can still use SSD often.

That's an issue, because you almost never know if it is systematic. Come to think of it: there are no variables in medicine that I can think of which are fully systematic.

Anyway...where did you use NHST in that project?

I didn't. I was responding to your criticism that p values don't tell you anything and that single subject setups are superior. I just happen to disagree on these two points, that's all. As DaSchneib notes: The systems in medicine are complex. You can't just take the human out and only look at the pathology. Every human has some special characteristics that might influence the result, so you have to go for multivariate studies (which in turn necessitate large sample sizes).
radicalbehaviorist2
not rated yet Jan 17, 2017
"But it isn't enough to JUST do intra operator testing - even in such a situation where you can reliably go back to the initial conditions."

GS: That might be pertinent...if I had said, or anyone else (with legitimate training) had said, that SSDs rely on only intra-subject data. Any experiment with the goal of producing data relevant to more than a single subject uses more than one subject in an experiment (despite the name "single-subject designs"), and the reliability is judged by both intra- and inter-subject data. I pointed this out when I said that a superiority of SSDs lies in the fact that both intra- and inter-subject reliability is shown before the data are published. This is why there is little failure to replicate in behavior analysis.

Cordially,
Glen
antialias_physorg
not rated yet Jan 17, 2017
A weird thing I noticed when doing these intra subject tests: Even they have their problems. Operators sometimes remember datasets (e.g. those with 'weird' pathologies) and will adjust to what they know worked/didn't work before. I think this would only really work if one could mind-wipe the subjects. Alas, that will have to be part of a future project.
radicalbehaviorist2
not rated yet Jan 17, 2017
"Operators sometimes remember datasets (e.g. those with 'weird' pathologies) and will adjust to what they know worked/didn't work before. I think this would only really work if one could mind-wipe the subjects."

GS: I would worry less about "mind wipes" and more about understanding the science behind what you are talking about (which has not much to do with general methods or NHST). What you want is to produce, speaking technically, a broadly generalized response class (I added "broadly" because all response classes are...well...classes). Any time an organism acquires behavior said to require "possession of a concept" it will first, speaking colloquially, "remember particulars." Only after training with multiple positive and negative exemplars of the concept does the broadly generalized response emerge.

antialias_physorg
not rated yet Jan 17, 2017
Only after training with multiple positive and negative exemplars of the concept does the broadly generalized response emerge.

I don't agree on this. A single exposure to a stimulus is enough. Especially when we're dealing with medical studies where a single exposure will change the immune system response on further exposures. You can't roll back to a state where the host has never encountered the pathogen. Just stopping the drug and letting an illness resurface doesn't reset your immune system.
(and as noted: that's an approach that would not get past any ethics board)

On top of that you'll get placebo/nocebo effects on repeat trials with a single subject. The other way seems a lot less prone to bias to me.
radicalbehaviorist2
not rated yet Jan 17, 2017
I don't agree on this. a single exposure to stiumulus is enough. Especially when we're dealing with medical studies where a single exposure will change the immune system response on further exposures."

GS: But that doesn't have anything to do with the current issue. You want to get your human "operators" to respond one way when a stimulus is a member of a class, and another way when it is not. This is a matter of establishing a broadly generalized response class (i.e., like getting a person to call trees "tree"). Training a child how to "use the term 'tree'" correctly won't fly if you expose the child only to one positive exemplar (or only one negative exemplar). This is true, even with humans, if there is no simple checklist that can be applied. If there were, one could just teach the humans a set of rules. For most stuff in the world there is no "checklist" that can be applied. I am beginning to see that you are far more interested in saving face than learning anything.
radicalbehaviorist2
not rated yet Jan 17, 2017
"I was responding to your criticism that p values don't tell you anything and that single subject setups are superior. I just happen to disagree on these two points, that's all."

GS: First of all, I didn't say that "p values don't tell you anything"; I said (perhaps not in so many words) that they don't tell you the probability that the null is true or false. They tell you the probability of obtaining data as extreme or more extreme than what you collected given that the null is true, that is p(data|null true), not p(null true|data). Second, to be blunt, who cares about your opinion in a forum like this? What are important are your arguments...not a mere statement of your opinion. Obviously, your "opinion" could be the product of misinformation.

Cordially,
Glen
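
A back-of-the-envelope calculation (with assumed, illustrative numbers for the base rate of true effects and for power) makes that distinction concrete: the probability that an effect is real given a "significant" result is not 1 - p, and cannot be read off the p-value alone.

```python
# Hypothetical back-of-the-envelope calculation: even with p < 0.05, the
# probability that the null is actually false depends on the base rate of
# true effects and on statistical power -- which the p-value alone cannot give.
prior_true_effect = 0.10   # assumed: 1 in 10 tested hypotheses is actually true
alpha = 0.05               # false-positive rate when the null is true
power = 0.80               # assumed probability of detecting a real effect

p_sig_and_true  = prior_true_effect * power
p_sig_and_false = (1 - prior_true_effect) * alpha

posterior_true = p_sig_and_true / (p_sig_and_true + p_sig_and_false)
print(f"P(effect is real | 'significant' result) = {posterior_true:.2f}")
# ~0.64 under these assumptions: roughly a third of 'significant' findings
# would still be false positives, despite each having p < 0.05.
```
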
antialias_physorg
not rated yet Jan 17, 2017
Second, to be blunt, who cares about your opinion in a forum like this?

No one. So what? We're not discussing in any kind of official capacity, here. If you have a gripe with how science is done then you need to publish it in a journal - not discuss this on some comment section on a site about journalist pieces about scientific papers.
I can only tell you how I have seen it done in real-life, and why I think SSD doesn't have many areas where it can be applied the way it's intended (i.e. why I think the way it's usually done is superior).
radicalbehaviorist2
not rated yet Jan 17, 2017
GS: Second, to be blunt, who cares about your opinion in a forum like this?

A: No one.

GS (new): Well...thank goodness for small favors - no one cares about your opinion.
