What is the best way to measure a researcher's scientific impact?

Jun 13, 2013 by Lisa Zyga
This table shows how credit is divided among up to six coauthors depending on their relative contributions. If all coauthors contributed equally (“Equal A”), the credit is divided equally. Otherwise, each coauthor is assigned to a group and the credit is divided according to the A-index. Credit: Stallings, et al. ©2013 PNAS

(Phys.org) — From a qualitative perspective, it's relatively easy to define a good researcher as one who publishes many good papers. But quantifying this output is more complicated, since publications can be measured in several different ways. In the past few years, several metrics have been proposed that estimate an individual's scientific caliber based on the quantity and quality of the individual's peer-reviewed publications. However, most of these metrics assume that all authors contribute equally when a paper has multiple authors. In a new study, researchers argue that this assumption biases these metrics, and they propose a new metric that accounts for the relative contributions of all coauthors, resulting in a rational way to capture a researcher's scientific impact.

The researchers, Jonathan Stallings, et al., have published their paper "Determining scientific impact using a collaboration index" in a recent issue of PNAS.

"Since we all have credit cards, it goes without saying that measuring credit is important in daily life," corresponding author Ge Wang, the Clark & Crossan Endowed Chair Professor in the Department of Biomedical Engineering at Rensselaer Polytechnic Institute in Troy, New York, told Phys.org, "How to measure intellectual credit is a hot topic, but a way has been missing to individualize scientific impact rigorously for teamwork such as a joint peer-reviewed publication. Our recent PNAS paper provides an axiomatic answer to this fundamental question."

Currently, one of the most common measures of an individual's scientific impact is the H-index, which reflects both a researcher's number of publications and the number of citations per publication (a measure of the publication's quality). Specifically, a scientist has index h if h of their papers have at least h citations each, and their remaining papers have no more than h citations each. The H-index does not account for the possibility that some collaborators may have contributed more than others to a paper. There are also many situations where the H-index falls short. For example, when a researcher has only a few publications but they are highly cited, the researcher's h value is limited by the small number of publications regardless of their high quality.
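As a concrete illustration (not from the paper), the H-index can be computed from a researcher's citation counts in a few lines; a minimal sketch in Python:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    # Rank papers by citation count, descending; h is the number of ranks
    # at which the citation count still meets or exceeds the rank.
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Example: five papers with these citation counts give h = 3,
# because three papers have at least 3 citations each.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```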

The scientist who originally proposed the H-index, Jorge E. Hirsch, noted that the index is best used when comparing researchers of similar scientific age, and that highly collaborative researchers may have inflated values. He suggested normalizing the H-index based on the average number of coauthors. However, the researchers in the new study want to account for the coauthors' relative contributions axiomatically in order to minimize bias.

This table shows how credit is divided among the nine coauthors of the current paper using different methods. Credit: Stallings, et al. ©2013 PNAS

"Any quantitative measure of scientific productivity and impact is necessarily biased because intellect is the most complicated wonder that should not be absolutely measurable," Wang said. "Any measurement will miss something, which makes research interesting. When we have to measure a paper for multiple reasons, our axiomatic bibliometric approach is the best choice one can hope for."

The new measure of scientific impact is based on a set of axioms that define the space of possible coauthor credits and the most likely probability distribution of what the researchers call a credit vector, which specifies the relative credit of each coauthor of a given paper. Because the method is derived from these axioms, it is called the A-index.

In the A-index, each coauthor is assigned to a group. For a publication with just one author, that author always has an A-index of 1. Multiple coauthors who contribute equally to a publication would all be in the same group and split the credit equally. For example, four coauthors who contribute equally to a publication would each have an A-index of 0.25. But if each coauthor contributes a different amount, then they would not be in the same group, and the credit would be distributed in a weighted fashion. For example, four coauthors with decreasing credits would have A-indexes of 0.521, 0.271, 0.146, and 0.063, respectively.
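The paper derives these shares axiomatically; the values quoted above for strictly ranked coauthors are consistent with giving the i-th of n coauthors the share c_i = (1/n)(1/i + 1/(i+1) + ... + 1/n), the centroid of the ordered credit region. A minimal Python sketch under that reading:

```python
from fractions import Fraction

def a_index_credits(n):
    """Credit shares for n strictly ranked coauthors, assuming the centroid
    formula c_i = (1/n) * sum(1/j for j in i..n), which reproduces the
    example values quoted in the article."""
    return [sum(Fraction(1, j) for j in range(i, n + 1)) / n
            for i in range(1, n + 1)]

# Four coauthors with strictly decreasing contributions:
print([float(c) for c in a_index_credits(4)])
# -> [0.5208..., 0.2708..., 0.1458..., 0.0625],
#    i.e. the article's rounded values 0.521, 0.271, 0.146, 0.063
```

Note that a single author gets a_index_credits(1) == [1], matching the article, while coauthors who contribute equally would instead split the credit evenly.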

The sum of a researcher's A-indexes, called the C-index, gives a weighted count of publications based on that researcher's relative contributions. The A-index (a single-paper metric) can also be used to weight an individual's share of the quality of a publication, whether quality is defined in terms of the journal's impact factor or the number of citations of the publication. The sum of these values is the productivity index, or P-index.
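Following that description, both aggregate indices are simple weighted sums; a sketch with hypothetical numbers (the researcher, papers, and counts are invented for illustration):

```python
def c_index(a_indexes):
    """C-index: the sum of a researcher's per-paper A-index shares,
    i.e. a contribution-weighted publication count."""
    return sum(a_indexes)

def p_index(a_indexes, qualities):
    """P-index: each paper's quality (e.g. citation count or journal
    impact factor) weighted by the researcher's A-index share, summed."""
    return sum(a * q for a, q in zip(a_indexes, qualities))

# Hypothetical researcher: sole author of one paper (share 1.0) and
# second-ranked of four coauthors on another (share 0.271, from above).
shares = [1.0, 0.271]
citations = [40, 120]
print(c_index(shares))             # -> 1.271 weighted publications
print(p_index(shares, citations))  # -> 40*1.0 + 120*0.271 = 72.52
```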

When testing the C-index and P-index on 186 biomedical engineering researchers and in simulation tests, the researchers found that these indices provide a fairer and more balanced way of measuring scientific impact compared with the N-index and H-index, the former of which is simply the number of a researcher's publications.

One important point of comparison is that, while a high H-index requires a large number of publications, a researcher can achieve a high P-index with just a few publications if they are published in journals with high impact factors or receive many citations. A researcher can also achieve a high P-index by publishing many moderately important papers. In this way, the P-index balances quantity and quality by accounting for relative contributions rather than relying only on a researcher's total number of publications. This advantage makes the P-index useful for young researchers and for comparing researchers with different collaborative tendencies.

"Our axiomatic framework is a fair and sensitive playground," Wang said. "It should encourage smoother and greater collaboration instead of discouraging it, because it is well known that 1+1>2 in many cases and especially so for increasingly important interdisciplinary projects."

The researchers point out that a main criticism of the new metrics is the lack of a well-defined system of coauthorship ranking, although this is a problem shared by all such metrics. They emphasize that developing such a ranking system is necessary for realizing the full potential of these metrics.

The researchers also add that the A-index can be used to weight other metrics of scientific impact, such as the H-index. They hope to further investigate these possibilities in the future.

More information: Jonathan Stallings, et al. "Determining scientific impact using a collaboration index." PNAS Early Edition. DOI: 10.1073/pnas.1220184110

User comments (13)

antialias_physorg
3.7 / 5 (6) Jun 13, 2013
However, most of these metrics assume that all authors contribute equally when a paper has multiple authors.

Oh boy - that is seriously flawed. There are big differences in what type of people will wind up on the author lists (in some disciplines it even differs who is author and who is coauthor; in some disciplines the head of the department is always in the first author spot - whether he contributed or not; and the customs on author/coauthorship even differ from nation to nation).
It's also nearly impossible to infer how much a coauthor contributed to a paper. In some cases it's substantive work, in others it's 'merely' testing or data collection.

The number of possible bias factors is huge. (not that I have a better idea. But such impact factor algorithms should always be taken with a grain - or better a lump - of salt)
Jeddy_Mctedder
2 / 5 (8) Jun 13, 2013
Getting ones name on a publication is part of the 'game' of advancing oneself in the sciences. Quantifying this game for the purpose of generating conclusions about the underlying substance is a grave error.

You cannot evaluate contributions without understanding the context and background science. Only scientists in the same fields of discovery are qualified to give opinions of substance.

This ISN'T moneyball or sabermetrics. The ART of doing science research is subjective. You don't judge a painting's quality by its price.....its price is derived from the judgements of quality experts and tastemakers.
Q-Star
2.3 / 5 (9) Jun 13, 2013
"What is the best way to measure a researcher's scientific impact?"


Best way? There is only one way,,,,,, time. What science in the future will be constructed on it? There is no other metric for scientific impact.

This sounds like another "no more geniuses or Einsteins" sort of article.
antialias_physorg
3.7 / 5 (6) Jun 13, 2013
There is only one way,,,,,, time.

While I agree that only time tells - that's probably not a good metric for filling positions now.
Waiting until someone is dead until you hire them tends not to do the job.
Q-Star
2.3 / 5 (9) Jun 13, 2013
There is only one way,,,,,, time.

While I agree that only time tells - that's probably not a good metric for filling positions now.
Waiting until someone is dead until you hire them tends not to do the job.


He would give the administrators less problems, eh?

Naaa, my only point was that a "brilliant" up and coming person, oft times might become a dud. While every now and again, some truly new science comes from an unexpected quarter.

"Impacts" are by definition "after" the fact. Instead of "measure" maybe they should have used the word "predict"?
antialias_physorg
3.4 / 5 (5) Jun 13, 2013
He would give the administrators less problems, eh?

I know a couple of administrators who would go for that line of reasoning ;)

Naaa, my only point was that a "brilliant" up and coming person, oft times might become a dud. While every now and again, some truly new science comes from an unexpected quarter.

Agreed. But I'd rather hire 10 brilliant up-and-coming scientists and live with the one dud instead of hiring 10 from an unexpected quarter on the off chance one will turn out to be brilliant.

It's not perfect - but the numbers game favors the one with the track record (quite heavily in my experience). But mostly it's decided based on how they present themselves, their work and their planned work. Impact factor just gets you the invitation to the interview - not the job.

beleg
1 / 5 (4) Jun 13, 2013
How much credit does the inventor of the wheel have?
Where did Perelman publish? Was it peer reviewed?
Why is Perelman not your role model?
antialias_physorg
3.9 / 5 (7) Jun 13, 2013
He actually is. I think his stance that science shouldn't be rewarded with prizes is pretty cool.

But then again we live in a real world - and not all sciences can be done by pen and pencil (like his math). And with real world issues come real world problems. How do you choose the head of an institute? Would Perelman be the right man for the job? Despite his genius I'd say: No way.

Impact factors are important for the interface between scientists and the institutions where they have their jobs (be it in the industry or at universities).

AMONGST scientists (e.g. when they discuss their science at conferences) you will find that impact factors don't matter one bit (most scientists don't even KNOW their own impact factor).

I found that the most well known guy at a conference will happily discuss theories with the 'lowliest grad student' as readily as with one of his 'career peers'.
KBK
1 / 5 (4) Jun 13, 2013
And then some people simply seed ideas into people.... and let others grab the fame, fortune, whatever.

There is at least one more layer than this article alludes to. The world of the little seen and the little noticed. The world of the movers and shakers, who are not always seen, or known.

The center of the herd is a follower and it knows nothing new, nothing that has not been discovered before.

This website and everything on it, is essentially the middle of the herd. No matter what it looks like, that is the case, the true reality, the bigger reality.

in reality, everything happens at the edge of the herd, and this website and anything on it would be aware of none of that.
antialias_physorg
3.4 / 5 (5) Jun 13, 2013
in reality, everything happens at the edge of the herd, and this website and anything on it would be aware of none of that.

You'd be surprised.
Some of us here are actually directly in touch with people on the very edge of our respective specialty.

Science isn't so mysterious. People in science are also just people. You can go and talk to them like you can go and talk to most anyone if you get up the nerve to actually do it.

Heck, I even got PMs from authors of papers I commented on, here, twice, asking for review in one case (which I couldn't because the paper was over my head) and discussing my comment on another occasion because it actually seemed relevant as a qualifier for the statement the paper seemed to make. On other occasions I asked the authors directly for the paper or discussed an idea based on their work with them.

And I'm sure others have had similar experiences here.

This isn't the edge, I agree, but the way to the very edge is just an email away.
ValeriaT
1.4 / 5 (10) Jun 13, 2013
What is scientific impact supposed to mean? For example, the cold fusion finding is quite fundamental from a human-society perspective, but it never appeared in high-impact mainstream journals. Most of mainstream physics still denies it. The contemporary impact system is valued by how much it contributes to the subsequent careers of scientists, not by its contribution to the rest of human civilization. Such criteria are not just harmful; they act as a brake on further progress. For example, scientists avoid researching new areas because they would have no one to cite there (no citations, no grants and salary). Aren't we paying scientists for original research in the first place?
Q-Star
2.5 / 5 (10) Jun 13, 2013
What is scientific impact supposed to mean? For example, the cold fusion finding is quite fundamental from a human-society perspective, but it never appeared in high-impact mainstream journals. Most of mainstream physics still denies it. The contemporary impact system is valued by how much it contributes to the subsequent careers of scientists, not by its contribution to the rest of human civilization.


Damn my eyes Zephyr, I agree with ya, the cold fusion guys haven't made an impact,,,, it seems the mainstream physics is the culprit, no one is writing papers on it,,,

We must pass a law: "Every 2nd Paper On The Nuclear Physics Must Be On The Cold Fusion For Ninety Years Now",,,,,,,, or for short we could just refer to it as the "Mainstream Fairness to Bunk Science Act"
VendicarE
5 / 5 (2) Jun 14, 2013
I question the need for such a metric.