All This And More

I was looking at the latest issue of Cell and marvelling at how many authors there are on each paper. It’s no secret that the raison d’être of Cell is to publish the “last word” on a topic (although whether it fulfils that objective is debatable). Definitive work needs to be comprehensive, which means lots of techniques and, ergo, lots of authors. It is all the more impressive, then, when a dual-author paper turns up in the table of contents for Cell. Anyway, I got to thinking: has it always been the case that Cell papers have lots of authors and, if not, when did that change?

I downloaded the data for all articles published by Cell (and, for comparison, J Cell Biol) from Scopus. The records required a bit of cleaning. For example, SnapShot papers needed to be removed, as did the odd obituary etc. that had been misclassified as an article; these could be quickly removed. I then went back through and filtered out ‘articles’ that were fewer than three pages long, as I don’t think it is possible for a genuine paper to be two pages or fewer in length. The data could then be loaded into IgorPro and boxplots generated per year to show how author number varied over time. Reviews that are misclassified as Articles will still be in the dataset, but I figured these would be minimal.
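I did all of this in IgorPro, but for anyone who wants to reproduce the cleaning and plotting, here is a minimal sketch in Python/pandas. The filename and column names are placeholders for whatever the actual Scopus export contains:

```python
import pandas as pd
import matplotlib.pyplot as plt

# 'scopus_records.csv' and the column names below are placeholders;
# map them onto whatever the real Scopus export uses.
df = pd.read_csv("scopus_records.csv")

# Drop SnapShots and other front matter misclassified as articles.
df = df[~df["title"].str.startswith("SnapShot", na=False)]

# Keep only 'articles' that are three or more pages long.
pages = (
    pd.to_numeric(df["page_end"], errors="coerce")
    - pd.to_numeric(df["page_start"], errors="coerce")
    + 1
)
df = df[pages >= 3]

# Boxplot of author number per publication year.
df.boxplot(column="n_authors", by="year", rot=90)
plt.ylabel("Authors per paper")
plt.show()
```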

First off: yes, there are more authors on average for a Cell paper versus a J Cell Biol paper. What is interesting is that both journals had similar numbers of authors when Cell was born (1974) and they crept up together until the early 2000s, when the number of Cell authors kept increasing, or J Cell Biol flattened off, whichever way you look at it.

I think the overall trend to more authors is because understanding biology has increasingly required multiple approaches, and the bar for evidence seems to be getting higher over time. The initial creep to more authors (1974-2000) might be due to a cultural change whereby people (technicians/students/women) began to get proper credit for their contributions. However, this doesn’t explain the divergence between J Cell Biol and Cell in recent years. One possibility is that Cell takes more non-cell-biology papers and that these papers necessarily have more authors. For example, the polar bear genome was published in Cell (29 authors), and this sort of paper would not appear in J Cell Biol. Another possibility is that J Cell Biol has a shorter and stricter revision procedure, which limits the scope for multiple rounds of revision to pull in new techniques (and new authors) in a way that Cell’s does not. Any other ideas?

I also quickly checked whether more authors means more citations, but found no evidence for such a relationship. For papers published in the years 2000-2004, the median citation number for papers with 1-10 authors was pretty constant for J Cell Biol. For Cell, these data were noisier: three-author papers tended to be cited a bit more than those with two authors, but four-author papers were cited less again.
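The grouping behind this check is straightforward. A minimal sketch, again with placeholder file and column names:

```python
import pandas as pd

# Placeholder filename and column names, as before.
df = pd.read_csv("scopus_records.csv")
subset = df[df["year"].between(2000, 2004)]

# Median citation count per author number, split by journal.
medians = (
    subset.groupby(["journal", "n_authors"])["citations"]
    .median()
    .unstack("journal")
)
print(medians.loc[1:10])  # papers with 1-10 authors
```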

The number of authors on papers from our lab ranges from 2 to 9, with a median of 3.5. This would put an average paper from our lab in the bottom quartile for JCB and in the lower 10% for Cell in 2013. Ironically, our 9-author paper (an outlier) was published in J Cell Biol. Maybe we need to get more authors on our papers before we can start troubling Cell with our manuscripts…


The Post title is taken from ‘All This and More’ by The Wedding Present from their LP George Best.

Blast Off!

This post is about metrics and specifically the H-index. It will probably be the first of several on this topic.

I was re-reading a blog post by Alex Bateman on his affection for the H-index as a tool for evaluating up-and-coming scientists. He describes Jorge Hirsch’s H-index, its limitations and its utility quite nicely, so I won’t reiterate this (although I’ll probably do so in another post). What is under-appreciated is that Hirsch also introduced the m quotient, which is the H-index divided by years since the first publication. It’s the m quotient that I’ll concentrate on here. The TL;DR is: I think that the H-index does have some uses, but evaluating early career scientists is not one of them.
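To be explicit about the definition, m is just the H-index divided by the number of years since the first publication. A minimal sketch (the function name and the year-clamping are mine, not Hirsch’s):

```python
def m_quotient(h_index, first_pub_year, current_year):
    """Hirsch's m quotient: H-index divided by years since first paper.
    Clamped to at least one year to avoid dividing by zero."""
    career_years = max(current_year - first_pub_year, 1)
    return h_index / career_years

# An H-index of 6 two years after a first paper gives m = 3.0
print(m_quotient(6, 2012, 2014))
```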

Anyone of an anti-metrics disposition should look away now.

Alex proposes that scientists can be judged (and hired) using m as follows:

  • <1.0 = average scientist
  • 1.0-2.0 = above average
  • 2.0-3.0 = excellent
  • >3.0 = stellar

He says “So post-docs with an m-value of greater than three are future science superstars and highly likely to have a stratospheric rise. If you can find one, hire them immediately!”.

From what I have seen, the H-index (and therefore m) is too noisy for early-career scientists to be of any use for evaluation. Let’s leave that aside for the moment. What he is saying is that you should definitely hire a post-doc who has published ≥3 papers with ≥3 citations each in their first year, ≥6 papers with ≥6 citations each in their second year, ≥9 papers with ≥9 citations each in their third year…

Do these people even exist? A candidate with a 3-year PhD and a 3-year postdoc (6 years since their first paper) would need ≥18 papers with ≥18 citations each! In my field (molecular cell biology), it is unusual for somebody to publish that many papers, let alone accrue citations at that rate*.

This got me thinking: using Alex’s criteria, how many stellar scientists would we miss out on, and would we be more likely to hire the next Jan Hendrik Schön? To check this out I needed to write a quick program to calculate H-index by year (I’ll describe this in a future post, but there is a minimal sketch below). Off the top of my head I thought of a few scientists that I know of, who are successful by many other measures, and plotted their H-index by year. The dotted line shows a constant m of 1, “average” by Alex’s criteria. I’ve taken a guess at when they became a PI. I have anonymised the scholars; the information is public and anyone can calculate this, but it’s not fair to identify people without asking (hopefully they can’t recognise themselves – if they read this!).
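I’ll save the details of that program for the future post, but the core of it is short. A minimal sketch, assuming each paper record carries its publication year and a per-year citation history (this data structure is my assumption; the real script works from database records):

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def h_index_by_year(papers, years):
    """papers: list of dicts with 'year' (publication year) and
    'citations' (dict mapping year -> citations received that year).
    Returns the H-index as it stood at the end of each year."""
    result = {}
    for y in years:
        counts = [
            sum(c for cite_year, c in p["citations"].items() if cite_year <= y)
            for p in papers
            if p["year"] <= y
        ]
        result[y] = h_index(counts)
    return result
```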

This is a small sample taken from people in my field. You can see that it is rare for scientists to have a big m at an early stage in their careers. With the exception of Scholar C, who was just awesome from the get-go, panels appointing any of these scholars would have had trouble divining their future success on the basis of H-index and m alone. Scholar D and Scholar E really saw their careers take off after making big discoveries, which happened at different stages of their careers. Both of these scholars were “below average” when they were appointed as PI. The panels would certainly not have used metrics in their evaluation (the databases were not in wide use back then), probably just letters of recommendation and reading the work. Clearly, they could identify the potential in these scientists… or maybe they just got lucky. Who knows?!

There may be other fields where publication at higher rates can lead to a large m, but I would still question the contribution of the scientist to the papers that led to the H-index. Are they first or last author? One problem with the H-index is that the 20th scientist in a list of 40 authors gets the same credit as the first author. Filtering what counts in the list of articles seems sensible (see the sketch below), but this would make the values even more noisy for early-stage scientists.
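As a rough illustration of such a filter, reusing the h_index function from the sketch above (the 'position' field is an assumption for illustration only; extracting it from a real export would mean parsing the author lists):

```python
def filtered_h_index(papers, allowed=("first", "last")):
    """H-index counting only papers where the scientist was first or
    last author. Each paper is a dict with a total 'citations' count
    and a 'position' label -- both names are hypothetical."""
    counts = [p["citations"] for p in papers if p["position"] in allowed]
    return h_index(counts)
```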

 

* In the comments section, somebody points out that if you publish a paper very early, this affects your m value. This is something I sympathise with. My first paper was in 1999, when I was an undergrad. This dents my m value, as it was a full three years until my next paper.

The post title is taken from ‘Blast Off!’ by Rivers Cuomo from ‘Songs from the Black Hole’ the unreleased follow-up to Pinkerton.

Give, Give, Give Me More, More, More

A recent opinion piece published in eLife bemoaned the way that citations are used to judge academics, because we are not even certain of the veracity of this information. The main complaint was that Google Scholar – a service that aggregates citations to articles using a computer program – may be less than reliable.

There are three main sources of citation statistics: Scopus, Web of Knowledge/Science and Google Scholar, although other sources are out there. These three are the ones in common use, and I checked out how comparable they are for articles from our lab.

The ratio of citations is approximately 1:1:1.2 for Scopus:WoK:GS. So Google Scholar is a bit like a footballer, it gives 120%.

I first did this comparison in 2012 and again in 2013. The ratio has remained constant, although these are the same articles and it is a very limited dataset. In the eLife opinion piece, Eve Marder noted an extra ~30% citations for GS (although I calculated it as ~40%: 894/636 = 1.41). Colleagues I have talked to have noticed this too. It’s clear that there is some inflation with GS, although the degree of inflation may vary by field. So where do these extra citations come from?

  1. Future citations: GS is faster than Scopus and WoK. Articles appear there a few days after they are published, whereas it takes several weeks or months for the same articles to appear in Scopus and WoK.
  2. Other papers: some journals are not in Scopus and WoK. Again, these might be new journals that aren’t yet indexed by the others, but GS doesn’t discriminate and includes all papers it finds. One of our own papers (an invited review at a nascent OA journal) is not covered by Scopus and WoK*. GS also picks up preprints, whereas the others do not.
  3. Other stuff: GS picks up patents and PhD theses. While these are not traditional papers published in traditional journals, they are clearly useful and should be aggregated.
  4. Garbage: GS does pick up some stuff that is not a real publication. One example is a product insert for an antibody, which has a reference section. Another is duplicate publications; GS is quite good at spotting these and folding them into a single record, but some slip through.

OK, Number 4 is worrying, but the extra citations that GS detects compared with Scopus and WoK are surely a good thing. I agree with the sentiment expressed in the eLife paper that we should be careful about what these numbers mean, but I don’t think we should just disregard citation statistics, as suggested.

GS is free, while the others are subscription-based services. It did look for a while like Google was going to ditch Scholar, but a recent interview with the GS team (sorry, I can’t find the link) suggests that they are going to keep it active and possibly develop it further. Checking out your citations is not just an ego trip; it’s a good way to find out about articles that are related to your own work. GS has a nice feature that sends you an email whenever it detects a citation for your profile. The downside of GS is that its terms of service do not permit scraping and reuse, whereas downloading of subsets of the other databases is allowed.

In summary, I am a fan of Google Scholar. My page is here.

 

* = I looked into this a bit more and the paper is actually in WoK: it has no title and it has 7 citations (versus 12 in GS), although it doesn’t come up in a search for Fiona or for me.


 

However, I know from GS that this paper was also cited in a paper by the Cancer Genome Atlas Network in Nature. WoK listed that paper as having 0 references and 0 citations(!). Does any of this matter? Well, yes. WoK is a Thomson Reuters product and is used as the basis for their dreaded Impact Factor – which (like it or not) is still widely used for decision making. Also, many universities use WoK information in their hiring and promotion processes.

The post title comes from ‘Give, Give, Give Me More, More, More’ by The Wonder Stuff from the LP ‘Eight Legged Groove Machine’. Finding a post title was difficult this time. I passed on: Pigs (Three Different Ones) and Juxtapozed with U. My iTunes library is lacking songs about citations…