When 'exciting' trumps 'honest', traditional academic journals encourage bad science

Aug 04, 2014 by Robert De Vries
One more corner, then I’ll answer your questions. Credit: campuspartymexico, CC BY

Imagine you're a scientist. You're interested in testing the hypothesis that playing violent video games makes people more likely to be violent in real life. This is a straightforward theory, but there are still many, many different ways you could test it. First you have to decide which games count as "violent". Does Super Mario Brothers count because you kill Goombas? Or do you only count "realistic" games like Call of Duty? Next you have to decide how to measure violent behaviour. Real violence is rare and difficult to measure, so you'll probably need to look at lower-level "aggressive" acts – but which ones?

Any scientific study in any domain from astronomy to biology to social science contains countless decisions like this, large and small. On a given project a scientist will probably end up trying many different permutations, generating masses and masses of data.

The problem is that in the final published paper – the only thing you or I ever get to read – you are likely to see only one result: the one the researchers were looking for. This is because, in my experience, scientists often leave complicating information out of published papers, especially if it conflicts with the overall message they are trying to get across.

In a large recent study around a third of scientists (33.7%) admitted to things like dropping data points based on a "gut feeling" or selectively reporting results that "worked" (that showed what their theories predicted). About 70% said they had seen their colleagues doing this. If this is what they are prepared to admit to a stranger researching the issue, the real numbers are probably much, much higher.
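The statistical consequence of this is easy to demonstrate. Below is a minimal Python sketch (the study size, number of studies, and significance cut-off are arbitrary illustrative choices, not figures from the survey): if only "significant" positive results get written up, the reported literature shows a solid effect even when the true effect is exactly zero.

```python
import random
import statistics

random.seed(0)  # reproducible illustration

def run_study(n=30):
    """One hypothetical study under a true null: two groups, no real effect."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(control) / n + statistics.variance(treated) / n) ** 0.5
    return diff, diff / se  # effect estimate and a crude z-like score

results = [run_study() for _ in range(1000)]
published = [diff for diff, z in results if z > 1.96]  # only "positive" findings survive

print(f"mean effect across all studies: {statistics.mean(d for d, _ in results):+.3f}")
print(f"mean effect in the 'published' studies: {statistics.mean(published):+.3f}")
print(f"share of studies 'published': {len(published) / len(results):.1%}")
```

Across all 1,000 simulated studies the average effect is near zero, but the handful that clear the significance bar average a substantially positive effect: selective reporting alone manufactures a "finding".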

It is almost impossible to overstate how big a problem this is for science. It means that, looking at a given paper, you have almost no idea of how much the results genuinely reflect reality (hint: probably not much).

Pressure to be interesting

At this point, scientists probably sound pretty untrustworthy. But the scientists aren't really the problem. The problem is the way science research is published. Specifically, the pressure all scientists are under to be interesting.

This problem comes about because science, though mostly funded by taxpayers, is published in academic journals you have to pay to read. Like newspapers, these journals are run by private, for-profit companies. And, like newspapers, they want to publish the most interesting, attention-grabbing articles.

This is particularly true of the most prestigious journals like Science and Nature. What this means in practice is that journals don't like to publish negative or mixed results – studies where you predicted you would find something but actually didn't, or studies where you found a mix of conflicting results.

Let's go back to our video game study. You have spent months conducting a rigorous investigation but, alas, the results didn't quite turn out as you predicted. Ideally, this shouldn't be a problem. If your methods were sound, your results are your results. Publish and be damned, right? But here's the rub. The top journals won't be interested in your boring negative results, and being published in these journals has a huge impact on your future career. What do you do?

If your results are unambiguously negative, there is not much you can do. Foreseeing long months of re-submissions to increasingly obscure journals, you consign your study to the file-drawer for a rainy day that will likely never come.

But if your results are less clear-cut? What if some of them suggest your theory was right, but some don't? Again, you could struggle for months or years, scraping the bottom of the journal barrel to find someone to publish the whole lot.

Or you could "simplify". After all, most of your results are in line with your predictions, so your theory is probably right. Why not leave those "aberrant" results out of the paper? There is probably a good reason why they turned out like that. Some anomaly. Nothing to do with your theory really.

Nowhere in this process do you feel like you are being deceptive. You just know what type of papers are easiest to publish, so you chip off the "boring" complications to achieve a clearer, more interesting picture. Sadly, the complications are probably closer to messy reality. The picture you publish, while clearer, is much more likely to be wrong.
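What this "simplifying" does to an estimate can be shown in a few lines. A sketch under assumed numbers (forty measurements, five dropped, true effect zero): quietly discarding the observations that conflict with the prediction turns a null result into an apparently positive one.

```python
import random
import statistics

random.seed(1)  # reproducible illustration

# Forty hypothetical measurements of an effect whose true size is zero.
data = [random.gauss(0, 1) for _ in range(40)]
honest_estimate = statistics.mean(data)

# "Simplify": drop the five scores that most conflict with the predicted
# positive effect, then report the rest as the whole story.
simplified = sorted(data)[5:]
simplified_estimate = statistics.mean(simplified)

print(f"honest estimate:       {honest_estimate:+.3f}")
print(f"'simplified' estimate: {simplified_estimate:+.3f}")
```

The trimmed estimate is always higher than the honest one, and nothing in the final write-up would reveal that the trimming happened.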

Science is supposed to have a mechanism for correcting these sorts of errors. It is called replication, and it is one of the cornerstones of the scientific method. Someone else replicates what you did to see if they get the same results. Unfortunately, replication is another thing the science journals consider "boring" – so no one is doing it anymore. You can publish your tweaked and nudged and simplified results, safe in the knowledge that no one will ever try exactly the same thing again and find something different.

This has enormous consequences for the state of science as a whole. When we ask "Is drug A effective for disease B?" or "Is policy X a good idea?", we are looking at a body of evidence that is drastically incomplete. Crucially, it is missing a lot of studies that said "No, it isn't", and includes a lot of studies which should have said "Maybe yes, maybe no", but actually just say "Yes".

We are making huge, life-altering decisions on the basis of bad information. All because we have created a system which treats scientists like journalists; which tells them to give us what is interesting instead of what is true.

Publish more papers, even boring ones

This seems like a big, abstract, hard-to-fix problem. But we actually have a solution right in front of us. All we have to do is continue changing the scientific publishing model so it no longer has anything to do with "interest" and is more open to publishing everything, as long as the methodology is sound. Open Access journals like PLOS ONE already do this. They publish everything they receive that is methodologically sound, whether it is straightforward or messy, headline-grabbing or mind-numbingly boring.

Extending this model to every academic journal would, at a stroke, remove the single biggest incentive for scientists to hide inconvenient results. The main objection to this is that the resulting morass of published articles would be tough to sort through. But this is the internet age. We have become past masters at sorting through masses of crap to get to the good stuff – the internet itself would be unusable if we weren't.


User comments : 31


antialias_physorg
5 / 5 (1) Aug 04, 2014
This is because, in my experience, scientists often leave complicating information out of published papers

If you think that then you don't know how to read scientific papers. It's not like reading newspapers or magazines. Scientific papers are very precise. What's in there was done, what's not in there was not done.
As soon as a reader starts to interpret based on things that aren't expressly stated (i.e. just inferred as a reader) you can be sure of one thing: you're wrong.

you have almost no idea of how much the results genuinely reflect reality

Scientists don't try to cheat. But yes: sometimes you can't use a dataset because it's corrupted. You don't drop datasets because you don't like them but BECAUSE they would skew the results in a way that's not in line with reality.

E.g. you do research based on photographs. One photograph is overexposed and so your algorithm doesn't work. Do you include it? No, because it isn't part of the problem space.
LariAnn
not rated yet Aug 04, 2014
I've seen this for myself when working in a cancer research lab. The research was funded, in part, by tobacco companies and, predictably, when one of the investigators tried to present the results as obtained, the chief investigator told him that his work was "weak" and that he had to rewrite the results. The rewritten results dialed down the detrimental effects of tobacco smoke. Part of the problem may be pressure from the funding source (i.e. if they don't like your results, they won't fund any more work) and the bias inherent in the research (i.e. if the scientist wants to find proof that their hypothesis is valid). If the scientist is either married or engaged to their hypothesis, or if their career progress hinges upon their results agreeing with the consensus, then there is the very human possibility that the results reported are not going to reflect what was really observed. The moral here is that scientists are human, not automatons.
Sigh
not rated yet Aug 05, 2014
scientists often leave complicating information out of published papers

What's in there was done, what's not in there was not done.

You include what you believe to be relevant. There was recent work finding that the smell of male experimenters made mice anxious, which changed their responses to pain. I have not seen the sex of the experimenter being mentioned before. People didn't know it mattered.

You don't drop datasets because you don't like them but BECAUSE they would skew the results in a way that's not in line with reality.

Which depends on what you believe reality to be, and may not agree with others' perceptions, e.g. there are people who reject any data that show global warming. They think those data can't be in line with reality.

One photograph is overexposed and so your algorithm doesn't work. Do you include it? No, because it isn't part of the problem space.

It's rarely that simple.
Pexeso
1 / 5 (1) Aug 05, 2014
scientists often leave complicating information out of published papers, especially if it conflicts with the overall message they are trying to get across

The problem here is that scientists aren't supposed to promote messages, only facts.
It is almost impossible to overstate how big a problem this is for science (hint: probably not much)
It depends. It may lead to a huge conceptual bias when the bias becomes systematic. That is to say, each scientist biases his conclusions just a bit, but a huge number of such systematic biases adds up to a huge overall bias. The result, for example, is that whole generations of physicists ignore the cold fusion findings and are all waiting for the Chinese to build a megawatt-scale factory based on phenomena which officially cannot work according to mainstream science, which usually bothers with picowatt-scale phenomena. This is indeed a huge bias of many orders of magnitude, not just an infinitesimal one.

Pexeso
1 / 5 (1) Aug 05, 2014
Open Access journals like PLOS ONE already do this. They publish everything they receive that is methodologically sound, whether it is straightforward or messy, headline-grabbing or mind-numbingly boring.
Open Access publishing was essentially my proposal here too. This system would also ruin the labeling based on impact factor.
Science is supposed to have a mechanism for correcting these sorts of errors. It is called replication, and it is one of the cornerstones of the scientific method.
The problem is that science has a method, but no enforcement of it. Nobody can convince scientists to replicate findings they don't believe in (like cold fusion). At that moment the supposedly robust scientific method fails. Not to mention that replicating others' findings is a sort of charity for most scientists involved - you can never earn fame with it; you'll always remain "just a replicator".
Pexeso
1 / 5 (1) Aug 05, 2014
Especially in times of financial crisis it becomes more advantageous for most scientists to continue their own research and not to bother with replicating others' findings. As a result, during a financial crisis science becomes even more boring than before. Scientists forget their natural human inquisitiveness and everyone struggles to find a reliable and intellectually unrewarding topic for research - just to survive. The overemployment from the previous wealthy period now becomes a huge brake on further progress.
Publish more papers, even boring ones
Is that really good advice in times of an informational explosion driven by incremental if not trivial "duh science"? The whole citation-based ranking rests on the number of citations, so publishing articles which no one will cite is not a solution here. We need to attract scientists to really important topics.
antialias_physorg
3 / 5 (2) Aug 05, 2014
It's rarely that simple.

When the quality of the data is borderline you include it. Anything else is fudging (and as a scientist you're well aware of the distinction).
Since science is a vocation and not so much a career choice for "the big bucks, fame and hot chicks", fudging would be anathema to what you're doing as a scientist.

It may lead into huge conceptual bias, when this bias gets systematical.

Translation: You don't know what the f**k you're talking about. Systemic bias doesn't increase bias. And since every scientist is doing their own work, bias doesn't become 'systematic' in any case.
That's the entire point of independent groups doing research: bias shows up pretty quickly (unless you're into that whole 'global scientist cabal conspiracy' shtick).
Pexeso
1 / 5 (1) Aug 05, 2014
That's the entire point of independent groups doing research: bias shows up pretty quickly (unless you're into that whole 'global scientist cabal conspiracy' shtick).
It doesn't explain why, for example, not a single finding of cold fusion in nickel has even been attempted to be replicated in the mainstream press during the last twenty years. Just "attempted to replicate". Groupthink phenomena like pluralistic ignorance and the spiral of silence are apparently at work there.

The problem is that this bias can show up "pretty quickly" only when some replication attempts exist. When no replications are present, the whole feedback mechanism freezes - in both the positive and the negative direction. Contemporary scientists have many mechanisms against mutual bias, but nothing can save them from collective ignorance.
Pexeso
1 / 5 (1) Aug 05, 2014
Ironically, there are many opportunities for breakthrough research. For example, in this study the injection of deuterons into molten lithium led to a huge excess of fusion events. These experiments are simple, cheap and easy to replicate. Did anyone in mainstream science bother with it? No way. These findings didn't even find their way into the mainstream press.

Recently the EMDrive device was validated by NASA. Why is PhysOrg quiet about it? Which mainstream research group is willing to replicate it by now? Scientists have already realized that this device violates their beloved theories, and now they're all waiting as one man for someone finally to disprove it. They don't actually care about it, despite spending more and more money on experimental proofs of extra dimensions and various quantum gravity phenomena. They just want to find them in "their own way" - that's all.
Pexeso
1 / 5 (1) Aug 05, 2014
and since every scientist is doing their own work bias doesn't become 'systematic' in any case
If we look at which findings get ignored most obstinately, we easily realize that they are exactly the findings which would already threaten the jobs of other scientists in many alternative areas. Nobody suppresses the graphene findings, for example, because nobody does research that competes with graphene. But nickel-based fusion, for example, is another story, as it is energy research. And the search for alternative energy sources is a common denominator for many research centres (from batteries through solar and nuclear energy to fossil fuel research, etc.)

This common denominator means a common, additive bias exists here, which cannot be compensated randomly between various research peers. Are you saying I don't know what I'm talking about? Well, I'm already an expert in bias in mainstream science. I know perfectly well where to look for it, how - and why.
Sigh
not rated yet Aug 06, 2014
It's rarely that simple.

When the quality of the data is borderline you include it.

The reason why I don't think it's that simple is that often you don't have that independent information on data quality. You have data collected in the same way, as far as you know, some of it looks weird, but you don't know why.

Even when you can judge data quality, how do you decide what is borderline? Intuition? Or some more principled criterion?
antialias_physorg
5 / 5 (2) Aug 06, 2014
some of it looks weird, but you don't know why.

And what do you think happens when you have weird data? The first thing you do is look at WHY it is weird.
You go check with the instruments. You recalibrate and try to get reference data from the instrument if no obvious error is found. And when you cannot find any failure with the instrument you include the data. No researcher goes "that's weird - let's not include it" and then lays it aside. Not one.

'Weird' data can be the gateway to the most astonishing discoveries - or to introducing error from a hitherto undetected source.

how do you decide what is borderline?

That depends on the problem you're trying to solve. Often you do research against a gold standard (e.g. diagnosis of medical images by a human; if the images are so bad that no human could use them for a diagnosis, then it makes no sense to include them in your trial).
antialias_physorg
5 / 5 (2) Aug 06, 2014
cont.
Often you define data quality standards in advance (and you most certainly define, in great detail, how the instruments are to be set up).

The reason why I don't think it's that simple is that often you don't have that independent information on data quality.

Two cases:
1) When you do research on something that has been done before (with the aim of improving some aspect of the process, e.g. getting better sensitivity), then you have data quality metrics. This is the overwhelming majority of what happens in science.

2) When you do research based on a kind of data that has never been collected before then you have to rely on that data (barring obvious instrument failure) and you can't throw anything out.
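The "standards defined in advance" point can be made concrete. A minimal sketch, with an invented exposure metric and thresholds standing in for whatever quality criteria a given study pre-registers: the inclusion rule looks only at the inputs, never at the outcomes, so it cannot be bent to favour a result.

```python
# A pre-registered exclusion rule: fixed before any results are seen and
# applied uniformly to every record. The field names and the acceptable
# exposure range below are hypothetical.
MIN_EXPOSURE, MAX_EXPOSURE = 0.2, 0.9

def usable(record):
    """Decide inclusion from the inputs alone, never from the outcome."""
    return MIN_EXPOSURE <= record["exposure"] <= MAX_EXPOSURE

records = [
    {"id": 1, "exposure": 0.55, "result": 1.2},
    {"id": 2, "exposure": 0.97, "result": -0.4},  # overexposed: excluded by rule
    {"id": 3, "exposure": 0.60, "result": -0.1},
]

analysed = [r["id"] for r in records if usable(r)]
excluded = [r["id"] for r in records if not usable(r)]
print("analysed:", analysed, "excluded:", excluded)
```

Because the rule never reads the `result` field, dropping record 2 is a documented quality decision rather than a "gut feeling" exclusion.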
Pexeso
1 / 5 (1) Aug 06, 2014
The discussions about data quality have no merit when scientists simply aren't interested in the data in any form. For example, the EMDrive is a simple and cheap device and it has been known for twelve years already. If physicists were really interested in better data, they would already be collecting them, in the same way they're doing with WIMPs and in many other areas. Not all the problems here are problems of the journals or of the quality of the data collected. Yes, the journalistic style of the journals reinforces many negative traits of the scientific community - but journal editors cannot really be held responsible for studies which were never submitted for publication.
antialias_physorg
5 / 5 (1) Aug 06, 2014
the EMDrive is simple and cheap device and it's known already for twelve years.

If it exists then build it and sell it. What do you need scientists for at that point?

As for your general gripe about "scientists not being interested in data": Since you wouldn't recognize science if it hit you in the face - how would you know?
You have neither experience of science nor scientists apart from your self-created fantasies. So yeah: in your fantasy world all you say may be true. But don't be surprised if we, who live in the real world, don't really give a damn about that.

..realizing this would save you endless frustrations (and probably a pretty penny in blood pressure medicine)
Pexeso
1 / 5 (1) Aug 06, 2014
The recent discussions about the EMDrive replication by NASA demonstrate well the opposite attitudes of mainstream physics theorists and the layman community. On reddit alone the single article got over 30 duplicates, i.e. laymen are extremely interested in this subject. But conversely, the reactions of experts were quite sparse and generally dismissive (1, 2). Their opinions on the matter didn't even get the space of dedicated posts on their blogs.

The message is, you cannot fake the interest of experts where no interest actually exists.
Pexeso
1 / 5 (1) Aug 06, 2014
If it exists then build it and sell it. What do you need scientists for at that point?
This is just an example of double standards. We as taxpayers can say just as easily: "I see, you need some money for WIMP or gravitational wave research? Well, if it exists, then build some device with it and sell it. You don't need our grant support for it."

Why do some topics of scientific research require commercialization first to attract the interest of scientists, whereas others do not? What makes the dividing line between them? As I see it, it's their support for existing theories and the number of theorists engaged in competing research.
antialias_physorg
5 / 5 (2) Aug 06, 2014
Why some topic of scientific research require commercialization first for to attract the interest of scientists, whereas some others not?

You misunderstand (as usual. Are you willfully obtuse or just stupid I sometimes wonder)
If the stuff works and exists as you claim then there's no need to have it researched.
Research is what you do when you don't know.
What you want is called 'development' of an already existing technology. And that is what private enterprise is for - not physicists.
Pexeso
1 / 5 (1) Aug 06, 2014
Personally I don't see any difference between research into the EMDrive effect and other basic research, like the research into gravitational waves or WIMPs. Actually, in the case of the latter the physicists at least have some idea of how these subjects should behave. In the case of the EMDrive everything is new to them. No research could be more "basic" in such a situation. The EMDrive case just illustrates that contemporary scientists actually have no interest in genuinely new phenomena until they are covered by some theory, i.e. their scientific inquisitiveness equals zero.

But as Wernher Von Braun already said: "Research is what I'm doing when I don't know what I'm doing. Everything else is just a collecting of stamps." And actual research is just what the scientists are paid for. We aren't paying them for circlejerking, don't you think?
Pexeso
1 / 5 (1) Aug 06, 2014
If the stuff works and exists as you claim then there's no need to have it researched.
This is another application of a double standard. With such an attitude we could tell the scientists: "Well, the existence of the electron has finally been proven. Now back off and don't waste our money on its further research." At that moment at least 99% of research subjects would simply disappear.
What you want is called 'development' of an already existing technology
Gravity is an already existing and practically exploited force as well. We don't need to research it anymore; private enterprise will handle it better.

It just seems that the bunch of parasites whose opinions you're representing here already has a good excuse for not researching anything that could threaten the status quo they have established. In the same way the theologians of the Holy Church warned people against research into electricity and other phenomena, which were "the intentions of God".
Sigh
not rated yet Aug 07, 2014
some of it looks weird, but you don't know why.

And what do you think happens when you have weird data? The first thing you do is look at WHY it is weird.
You go check with the instruments.

In the example I mentioned, instruments are not the problem. Imagine the research on how the sex of the experimenter influences the pain response of mice had not been done. You work on pain, you find a sudden change in the data. You may even notice it happened when someone else started handling the mice, but you instructed that person, and you know the procedures are the same. You do check your instruments, and find no explanation there, either. What do you do?

Checking instruments covers only some cases, like the apparently faster-than-light neutrinos some time ago.
RealityCheck
1 / 5 (4) Aug 09, 2014
Hi antialias, Forum.

Antialias, you still have your rose-colored glasses (or should that be blinkers) on?

Even after my cautionary tale re BICEP2 'work/paper' fiasco?

Have you learned nothing from that?

Even after I pointed out that the literature is long infested by false assumptions/interpretative/methodological 'results/conclusions' which 'cascade' up and affect subsequent 'work/papers' treatments etc; and become 'in-built' confirmation-biased FLAWED bases which make subsequent 'work/paper' FLAWED from the get-go?

If you want a perfect example of the BIASED NATURE of the 'modern scientist' and the 'modern treatments/interpretations', then you have no further to look than YOURSELF right now. :)

Even after I pointed out SERIOUS problems/biases which make current 'work/papers' exercise like BICEP2 an inherently confirmation-biased FLAWED-base 'work/paper' (which mainstream has since confirmed), you still pretend 'it can't happen'?

How many realitychecks does it take? Rethink. :)
Whydening Gyre
5 / 5 (2) Aug 09, 2014
Okay. After reading the comments here, I have a solution.
Remove the ROI aspect from the research. It seems that is more of a driver than the actual knowledge and experience gained.
antialias_physorg
5 / 5 (4) Aug 09, 2014
Even after my cautionary tale re BICEP2 'work/paper' fiasco?

Yeah, yeah, yeah. You pointed all this out. However you never were a scientist, and you just talk about things you know nothing about. So I tell you something from having been-and-done: You are 100% wrong. About everything.

It's like this: Science is hard. It's stuff no one has done before. Of course you will get the occasional mistake (it would be weird if that DIDN'T happen). However, the number of retractions is still in the sub-percent region. So don't get your panties in a bunch over nothing.

That's just how knowledge gain happens. Occasionally we get a "rule of 48" - but that doesn't mean that science is fundamentally flawed or biased.

How many RealityChecks does it take?

None. They're a waste of oxygen. Your rants about non-existent problems won't change anything (not the world, and not the opinion of a single other human).
RealityCheck
1 / 5 (4) Aug 10, 2014
Hi antialias. :) If you recall your own faux pas, and my correction...
http://phys.org/n...ple.html

...and my caution re BICEP2 'publish or perish' work/paper, then you must admit you are not acting/observing like a REAL scientist. I always do, as a 'first response' in any situation (hence my name "RealityCheck").

Your 'schoolgirl excitement' and 'ooh ah' swallowing of everything if it comes from your 'approved mainstream sources' may be 'cute groupie' approach, but not REALLY objective and scientific, is it? :)

And because science IS hard, it takes more than 'cute groupie' approach (and swallowing everything from a 'source' rather than thinking for yourself) to make a real scientist.

The retractions (in the cosmology theory field) will come thick and fast once I publish my REALITY based (not obviously flawed 'mainstream assumptions/fantasies' based) complete ToE 'from scratch', founded in reality 'from go to whoa'.

Take care! :)

antialias_physorg
5 / 5 (4) Aug 11, 2014
then you must admit you are not acting/observing like a REAL scientist.

I'm posting on a comment section.

But the article you link to: You still didn't get what the article was about so I stopped posting. I tried to explain it to you a number of times, but it seems you're missing some very basic knowledge of quantum physics - so I stopped. I'm not here to hold your hand and lead you into physics-land.

Your 'schoolgirl excitement' and 'ooh ah' swallowing of everything

Of course I'm excited by new discoveries. Or I wouldn't be here. That occasionally one of them turns out to be based on a mistake isn't a problem. Since I'm not directly involved in the research (as my expertise isn't in those particular fields) it doesn't matter how I react to them.

once I publish my REALITY based

Here we go again. Forgive me for being thoroughly underwhelmed by your non-theory without quantitative power. That's not science. It's brain-farts.
Captain Stumpy
5 / 5 (4) Aug 11, 2014
Even after my cautionary tale re BICEP2 'work/paper' fiasco?
you mean your denigration of science without evidence or even the ability to comprehend what was going on, right?

you made a claim of 8 fatal flaws
YOU HAVE STILL NEVER POSTED, ON ANY COMMENT THREAD HERE, WHAT THOSE 8 FLAWS WERE
Therefore you lied (and continue to do so)

there is NO PROOF supporting your assertion that there were 8 fatal flaws, per your post

NO PROOF means you don't know
never DID know
STILL DON'T KNOW

RealityCheck
1 / 5 (5) Aug 11, 2014
Hi antialias. :)
I'm posting on a comment section.
Purporting to make objective observations as to the science.

I'm not here to hold your hand and lead you into physics-land.
Sure, mate, sure. So it was from your "physics-land" that you made your facile inappropriate cross-assumptions/comparisons?

Of course I'm excited by new discoveries. ... That occasionally one of them turns out to be based on a mistake isn't a problem....it doesn't matter how I react to them.
And so are we all. But the "new discoveries" in question were no such thing when looked at objectively, were they?

Hence it matters when you let your 'giddy schoolgirl' uncritical acceptance of flawed work/claims lead you to improperly attack 'cranks' objectively rejecting flawed work/claims.

And long-inbuilt 'systemic flaws' are far from 'occasional' in nature and effect.

...your non-theory without quantitative power.
You haven't seen it, so your opinion is hardly objective or scientific.

Rethink.
RealityCheck
1 / 5 (4) Aug 11, 2014
Hi CapS. :) How's your pet Toad and your bot-sock-operating Uncle Ira douche 'friends', going? Some 'scientific ideals' your/their 'tactics' represent, hey?

you mean your denigration of science without evidence or even the ability to comprehend what was going on, right?
What science? There wasn't any in there, only confirmation-biased 'publish-or-perish' crap even 'cranks' would be ashamed to try and pass off as 'science'.

you made a claim of 8 fatal flaws
No. I cautioned and suggested you should check it out for yourselves to see if YOU could find the obvious flaws I saw. Period.

YOU HAVE STILL NEVER POSTED, ON ANY COMMENT THREAD HERE, WHAT THOSE 8 FLAWS WERE
Therefore you lied (and continue to do so)
Still hysterically shrieking, CapS?

What about "I've withdrawn from detailed physics discourse so not to risk plagiarism before ToE published" do you still not comprehend?

never DID know
STILL DON'T KNOW
Obviously I did. Everyone does too, now. Peace.
antialias_physorg
5 / 5 (3) Aug 12, 2014
And so are we all. But the "new discoveries" in question were no such thing when looked at objectively, were they?

The criticism leveled at it is still at the "could have caused" stage. Work is ongoing on eliminating any effects (if they are present). So you're jumping the gun in saying that the results aren't signs of gravity waves. Currently we just don't know.

Obviously I did. Everyone does too, now.

Obviously not. Since you seem to be the only one to 'know' and you're not telling (the point of which being exactly ...what?)
I call 'liar' on that one.

You haven't seen it, so your opinion is hardly objective or scientific.

Neither has anyone else. Stuff based on mere claims without substantiation (i.e. evidence...read: "that which can be seen") is one thing: not scientific.

Go back to reading the Bible. That's more your style of 'science' (and THAT one is at least published)
RealityCheck
1 / 5 (2) Aug 12, 2014
Hi antialias. :)
The criticism leveled at it is still at the "could have caused" stage. Work is ongoing on eliminating any effects (if they are present). So you're jumping the gun in saying that the results aren't signs of gravity waves. Currently we just don't know.
Yet the claim of a 'significant' signal was made. But now you agree that can't be claimed based on the 'work' presented at the time? You 'want it both ways'!

Obviously not. Since you seem to be the only one to 'know' and you're not telling (the point of which being exactly ...what?)
I call 'liar' on that one.
You calling mainstreamers (who also confirmed flawed basis for claims made) 'liars'? Foolish.

Neither has anyone else.
So, YOU admit to not having seen my complete ToE, but call ME 'unscientific' because YOU made opinions on it based on only YOUR OWN false impressions? Convoluted 'projection' there!

Go back to reading the Bible.
You have me mixed up with someone else. I'm atheist since age nine. :)