Voice Your Opinion: Editors shopping for preprints is the future

Today I saw a tweet from Manuel Théry (an Associate Editor at Mol Biol Cell), saying he had heard that the Editor-in-Chief of MBoC, David Drubin, shops for interesting preprints on bioRxiv to encourage the authors to submit to MBoC. This is not a surprise to me. I’ve read that authors of preprints on bioRxiv have been approached by journal Editors before (here and here; there are many more examples). I’m pleased that David is forward-thinking and that MBoC are doing this actively.

I think this is the future.

Why? If we ignore for a moment the “far future” which may involve the destruction of most journals, leaving a preprint server and a handful of subject-specific websites which hunt down and feature content from the server and co-ordinate discussions and overviews of current trends… I actually think this is a good idea for the “immediate future” of science and science publishing. Two reasons spring to mind.

  1. Journals would be crazy to miss out: The manuscripts that I am seeing on bioRxiv are not stuff that’s been dumped there with no chance of “real publication”. This stuff is high profile. I mean that in the following sense: the work in my field that has been posted is generally interesting, it is from labs that do great science, and it is as good as work in any journal (obviously). For some reason I have found myself following what is being deposited here more closely than at any real journal. Journals would be crazy to miss out on this stuff.
  2. Levelling the playing field: For better or worse, papers are judged on where they are published. The thing that bothers me most about this is that a manuscript is typically submitted to only a handful of journals, one at a time, before “finding its home”. This process is highly noisy, and it means that, even if we accept that there is a journal hierarchy, your paper may or may not deserve the kudos it receives from its resting place. If all journals actively scoured the preprint server(s), the authors could then pick the “highest bidder”. This would make things fairer in the sense that every journal in the hierarchy would have had a chance to consider the paper, so its resting place may actually reflect its true quality.

I don’t often post opinions here, but I thought this would take more than 140 characters to explain. If you agree or disagree, feel free to leave a comment!

Edit @ 11:46 16-05-26 Pedro Beltrao pointed out that this idea is not new and linked to a post of his from 2007.

Edit 16-05-26 Misattributed the track to Extreme Noise Terror (corrected). Also added some links thanks to Alexis Verger.

The post title comes from “Voice Your Opinion” by Unseen Terror. The version I have is from a Peel sessions compilation “Hardcore Holocaust”.

If I Can’t Change Your Mind

I have written previously about Journal Impact Factors (here and here). The response to these articles has been great and earlier this year I was asked to write something about JIFs and citation distributions for one of my favourite journals. I agreed and set to work.

Things started off so well. A title came straight to mind. In the style of quantixed, I thought The Number of The Beast would be amusing. I asked for opinions on Twitter and got an even better one (from Scott Silverman @sksilverman): Too Many Significant Figures, Not Enough Significance. Next, I found an absolute gem of a quote to kick off the piece. It was from the eminently quotable Sydney Brenner.

Before we develop a pseudoscience of citation analysis, we should remind ourselves that what matters absolutely is the scientific content of a paper and that nothing will substitute for either knowing it or reading it.

That quote was from a Loose Ends piece that Uncle Syd penned for Current Biology in 1995. Wow, 1995… that is quite a few years ago, I thought to myself. Never mind. I pressed on.

There’s a lot of literature on JIFs and research assessment; in fact, there are whole fields of scholarly activity (bibliometrics) devoted to this kind of analysis. I thought I’d better look back at what has been written previously. The “go to” paper for criticism of JIFs is Per Seglen’s analysis in the BMJ, published in 1997. I re-read this and I can recommend it if you haven’t already seen it. However, I started to feel uneasy. There was not much that I could add that hadn’t already been said, and what’s more, it had been said 20 years ago.

Around about this time I was asked to review some fellowship applications for another EU country. The applicants had to list their publications, along with the JIF. I found this annoying. It was as if SF-DORA never happened.

There have been so many articles, blog posts and more written on JIFs. Why has nothing changed? It was then that I realised that it doesn’t matter how many things are written – however coherently argued – people like JIFs and they like to use them for research assessment. I was wasting my time writing something else. Sorry if this sounds pessimistic. I’m sure new trainees can be reached by new articles on this topic, but acceptance of JIF as a research assessment tool runs deep. It is like religious thought. No amount of atheist writing, no matter how forceful, cogent, whatever, will change people’s minds. That way of thinking is too deeply ingrained.

As the song says, “If I can’t change your mind, then no-one will”.

So I declared defeat and told the journal that I felt I had already said all that I could say on my blog and that I was unable to write something for them. Apologies to all like-minded individuals for not continuing to fight the good fight.

But allow me one parting shot. I had a discussion on Twitter with a few people, one of whom said they disliked the “JIF witch hunt”. This caused me to think about why the JIF has hung around for so long and why it continues to have support. It can’t be that so many people are statistically illiterate or that they are unscientific in choosing to ignore the evidence. What I think is going on is a misunderstanding. Criticism of a journal metric as being unsuitable to judge individual papers is perceived as an attack on journals with a high JIF. Now, for good or bad, science is elitist and we are all striving to do the best science we can. Striving for the best, for many scientists, means aiming to publish in journals which happen to have a high JIF. So an attack on JIFs as a research assessment tool feels like an attack on what scientists are trying to do every day.

Because of this intense focus on high-JIF journals… what people don’t appreciate is that the reality is quite different. The distribution of JIFs across journals is just as skewed as the citation distributions that underlie the metric itself. What this means is that focussing on the minuscule fraction of papers appearing in high-JIF journals is missing the point. Most papers are in journals with low JIFs. As I’ve written previously, papers in journals with a JIF of 4 get similar citations to those in a journal with a JIF of 6. So the JIF tells us nothing about citations to the majority of papers, and it certainly can’t predict the impact of these papers, which make up the majority of our scientific output.

So what about those fellowship applicants? All of them had papers in journals with low JIFs (<8). The applicants’ papers were indistinguishable in that respect. What advice would I give to people applying to such a scheme? Well, I wouldn’t advise withholding the information asked for. To be fair to the funding body, they also asked for the number of citations for each paper, but for papers that are only a few months old this number is nearly always zero. My advice would be to try to make sure that your paper is freely available for anyone to read. Many of the applicants’ papers were outside my expertise, and so the title and abstract didn’t tell me much about the significance of the work. So I went to some of these papers to assess the quality of the data in there… if I had access. Applicants who had published in closed access journals were at a disadvantage here, because if I couldn’t download the paper it was difficult to assess what they had been doing.

I was thinking that this post would be a meta-meta-blogpost. Writing about an article which was written about something I wrote on my blog. I suppose it still is, except the article was never finished. I might post again about JIFs, but for now I doubt I will have anything new to say that hasn’t already been said.

The post title is taken from “If I Can’t Change Your Mind” by Sugar from their LP Copper Blue. Bob Mould was once asked about song-writing and he said that the perfect song was like a maths puzzle (I can’t find a link to support this, so this is from memory). If you are familiar with this song, songwriting and/or mathematics, then you will understand what he means.

Edit @ 08:22 16-05-20 I found an interview with Bob Mould where he says song-writing is like city-planning. Maybe he just compares song-writing to lots of different things in interviews. Nonetheless I like the maths analogy.

Throes of Rejection: No link between rejection rates and impact?

I was interested in the analysis by Frontiers on the lack of a correlation between the rejection rate of a journal and its “impact” (as measured by the JIF). There’s a nice follow-up here at Science Open. The Times Higher Education Supplement also reported on this, with the line that “mass rejection of research papers by selective journals in a bid to achieve a high impact factor is an enormous waste of academics’ time”.

First off, the JIF is a flawed metric in a number of ways, but even taking it at face value, what does this analysis really tell us?

[Figure: impact factor vs rejection rate]

This plot is taken from the post by Jon Tennant at Science Open.

As others have pointed out:

  1. The rejection rate is dominated by desk rejects, which although very annoying, don’t take that much time.
  2. Without knowing the journal name it is difficult to know what to make of the plot.

The data are available from Figshare and – thanks to Thomson-Reuters’ habit of reporting JIF to 3 d.p. – we can easily pull the journal titles from a list using JIF as a key. The list is here. Note that there may be errors due to this quick-and-dirty method.
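For anyone wanting to repeat this, here is a minimal sketch of that quick-and-dirty matching in Python/pandas. The file names and column headers are assumptions – adjust them to whatever the Figshare and JCR files actually contain.

import pandas as pd

# rejection_rates.csv : one row per journal, with columns rejection_rate and jif
# jcr_2014.csv        : journal titles with their 2014 JIF, columns journal and jif
rates = pd.read_csv("rejection_rates.csv")
titles = pd.read_csv("jcr_2014.csv")

# Round both JIF columns to 3 d.p. and use the result as a join key.
# Reporting to 3 d.p. makes most values unique, but collisions and rounding
# mismatches will introduce errors, as noted above.
rates["key"] = rates["jif"].round(3)
titles["key"] = titles["jif"].round(3)

merged = rates.merge(titles[["key", "journal"]], on="key", how="left")
merged.to_csv("rejection_rates_with_titles.csv", index=False)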

The list takes on a different meaning when you can see the Journal titles alongside the numbers for rejection rate and JIF.

[Table: rejection rate and JIF with journal titles added]

 

Looking for familiar journals – whichever field you are in – you will be disappointed. There’s an awful lot of noise in there. By this, I mean journals that are outside of your field.

This is the problem with this analysis as I see it. It is difficult to compare Nature Neuroscience with Mineralium Deposita…

My plan with this dataset was to replot rejection rate versus JIF2014 for a few different journal categories, but I don’t think there’s enough data to do this and make a convincing case one way or the other. So, I think the jury is still out on this question.

It would be interesting to do this analysis on a bigger dataset. Journals releasing their numbers on rejection rates would be a step forward to doing this.

One final note:

The Orthopedic Clinics of North America is a tough journal. Accepts only 2 papers in every 100 for an impact factor of 1!

 

The post title is from “Throes of Rejection” by Pantera from their Far Beyond Driven LP. I rejected the title “Satan Has Rejected my Soul” by Morrissey for obvious reasons.

The Great Curve II: Citation distributions and reverse engineering the JIF

There have been calls for journals to publish the distribution of citations to the papers they publish (1 2 3). The idea is to turn the focus away from just one number – the Journal Impact Factor (JIF) – and to look at all the data. Some journals have responded by publishing the data that underlie the JIF (EMBO J, PeerJ, Royal Soc, Nature Chem). It would be great if more journals did this. Recently, Stuart Cantrill from Nature Chemistry actually went one step further and compared the distribution of cites at his journal with other chemistry journals. I really liked this post and it made me think that I should just go ahead and harvest the data for cell biology journals and post it.

This post is in two parts. First, I’ll show the data for 22 journals. They’re broadly cell biology, but there’s something for everyone with Cell, Nature and Science all included. Second, I’ll describe how I “reverse engineered” the JIF to get to these numbers. The second part is a bit technical but it describes how difficult it is to reproduce the JIF and highlights some major inconsistencies for some journals. Hopefully it will also be of interest to anyone wanting to do a similar analysis.

Citation distributions for 22 cell biology journals

The JIF for 2014 (published in the summer of 2015) is worked out by counting the total number of 2014 cites to articles in that journal that were published in 2012 and 2013. This number is divided by the number of “citable items” in that journal in 2012 and 2013. There are other ways to look at citation data, and different windows to analyse, but this method is used here because it underlies the impact factor. I plotted out histograms to show the citation distributions at these journals from 0-50 citations; the insets show the frequency of papers with 50-1000 cites.
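As an aside, once you have the per-paper citation counts, the calculation itself is trivial. Here is a minimal sketch in Python with made-up numbers (the real counts come from the Web of Science export described below):

# 2014 citations to each article/review published in the journal in 2012-2013
# (one entry per citable item; toy numbers for illustration)
cites_2014 = [0, 0, 1, 2, 2, 3, 5, 8, 12, 40]

# JIF-style calculation: total cites divided by the number of citable items
jif_style_mean = sum(cites_2014) / len(cites_2014)
print(f"JIF-style mean: {jif_style_mean:.3f}")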

[Figures: citation distributions (0-50 citations) for the 22 journals; insets show papers with 50-1000 cites]

As you can see, the distributions are highly skewed and so reporting the mean is very misleading. Typically ~70% of papers pick up fewer than the mean number of citations. Reporting the median is safer and is shown below. It shows how similar most of the journals in this field are in terms of citations to the average paper in that journal. Another metric, which I like, is the H-index for journals. Google Scholar uses this as a journal metric (using citation data from a 5-year window). For a journal, this is the largest number, h, such that the journal published h papers that each received at least h citations. A plot of h-indices for these journals is shown below.
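Both numbers are straightforward to compute from the per-paper citation counts. A minimal sketch in Python, with toy numbers standing in for the real data:

import statistics

cites = [0, 0, 1, 1, 2, 3, 4, 6, 9, 15, 30, 120]   # toy per-paper citation counts

median_cites = statistics.median(cites)

# h-index: the largest h such that h papers received at least h citations each
ranked = sorted(cites, reverse=True)
h_index = sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

print(f"median: {median_cites}, h-index: {h_index}")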

[Figure: median citations and h-index for each journal]

Here’s a summary table of all of this information together with the “official JIF” data, which is discussed below.

| Journal | Median | H | Citations | Items | Mean | JIF Cites | JIF Items | JIF |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Autophagy | 3 | 18 | 2996 | 539 | 5.6 | 2903 | 247 | 11.753 |
| Cancer Cell | 14 | 37 | 5241 | 274 | 19.1 | 5222 | 222 | 23.523 |
| Cell | 19 | 72 | 28147 | 1012 | 27.8 | 27309 | 847 | 32.242 |
| Cell Rep | 6 | 26 | 6141 | 743 | 8.3 | 5993 | 717 | 8.358 |
| Cell Res | 3 | 19 | 1854 | 287 | 6.5 | 2222 | 179 | 12.413 |
| Cell Stem Cell | 14 | 37 | 5192 | 302 | 17.2 | 5233 | 235 | 22.268 |
| Cell Mol Life Sci | 4 | 19 | 3364 | 596 | 5.6 | 3427 | 590 | 5.808 |
| Curr Biol | 4 | 24 | 6751 | 1106 | 6.1 | 7293 | 762 | 9.571 |
| Development | 5 | 25 | 6069 | 930 | 6.5 | 5861 | 907 | 6.462 |
| Dev Cell | 7 | 23 | 3986 | 438 | 9.1 | 3922 | 404 | 9.708 |
| eLife | 5 | 20 | 2271 | 306 | 7.4 | 2378 | 255 | 9.322 |
| EMBO J | 8 | 27 | 5828 | 557 | 10.5 | 5822 | 558 | 10.434 |
| J Cell Biol | 6 | 25 | 5586 | 720 | 7.8 | 5438 | 553 | 9.834 |
| J Cell Sci | 3 | 23 | 5995 | 1157 | 5.2 | 5894 | 1085 | 5.432 |
| Mol Biol Cell | 3 | 16 | 3415 | 796 | 4.3 | 3354 | 751 | 4.466 |
| Mol Cell | 11 | 37 | 8669 | 629 | 13.8 | 8481 | 605 | 14.018 |
| Nature | 12 | 105 | 69885 | 2758 | 25.3 | 71677 | 1729 | 41.296 |
| Nat Cell Biol | 13 | 35 | 5381 | 340 | 15.8 | 5333 | 271 | 19.679 |
| Nat Rev Mol Cell Biol | 8.5 | 43 | 5037 | 218 | 23.1 | 4877 | 129 | 37.806 |
| Oncogene | 5 | 26 | 6973 | 1038 | 6.7 | 8654 | 1023 | 8.459 |
| Science | 14 | 83 | 54603 | 2430 | 22.5 | 56231 | 1673 | 33.611 |
| Traffic | 3 | 11 | 1020 | 252 | 4.0 | 1018 | 234 | 4.350 |

 

Reverse engineering the JIF

The analysis shown above was straightforward. However, getting the data to match Thomson-Reuters’ calculations for the JIF was far from easy.

I downloaded the citation data from Web of Science for the 22 journals. I limited the search to “articles” and “reviews” published in 2012 and 2013. I then took the citations that these papers received in 2014, with the aim of plotting out the distributions. As a first step I calculated the mean citation count for each journal (a.k.a. the impact factor) to see how it compared with the official Journal Impact Factor (JIF). As you can see below, some were correct and others were off by some margin.

| Journal | Calculated IF | JIF |
| --- | --- | --- |
| Autophagy | 5.4 | 11.753 |
| Cancer Cell | 14.8 | 23.523 |
| Cell | 23.9 | 32.242 |
| Cell Rep | 8.2 | 8.358 |
| Cell Res | 5.7 | 12.413 |
| Cell Stem Cell | 13.4 | 22.268 |
| Cell Mol Life Sci | 5.6 | 5.808 |
| Curr Biol | 5.0 | 9.571 |
| Development | 6.5 | 6.462 |
| Dev Cell | 7.5 | 9.708 |
| eLife | 6.0 | 9.322 |
| EMBO J | 10.5 | 10.434 |
| J Cell Biol | 7.6 | 9.834 |
| J Cell Sci | 5.2 | 5.432 |
| Mol Biol Cell | 4.1 | 4.466 |
| Mol Cell | 11.8 | 14.018 |
| Nature | 25.1 | 41.296 |
| Nat Cell Biol | 15.1 | 19.679 |
| Nat Rev Mol Cell Biol | 15.3 | 37.806 |
| Oncogene | 6.7 | 8.459 |
| Science | 18.6 | 33.611 |
| Traffic | 4.0 | 4.35 |

For most journals there was a large difference between this number and the official JIF (see below, left). This was not a huge surprise; I’d found previously that the JIF was very hard to reproduce (see also here). To try and understand the difference, I looked at the total citations in my dataset vs those from the official JIF. As you can see from the plot (right), my numbers are pretty much in agreement with those used for the JIF calculation, which meant that the difference comes from the denominator – the number of citable items.

[Figure: calculated IF vs official JIF (left); total citations in my dataset vs citations used for the JIF (right)]

What the plots show is that, for most journals in my dataset, there are fewer papers considered as citable items by Thomson-Reuters. This is strange. I had filtered the data to leave only journal articles and reviews (which are citable items), so non-citable items should have been removed.

Now, it’s no secret that the papers cited in the sum on the top of the impact factor calculation are not necessarily the same as the papers counted on the bottom (see here, here and here). This inconsistency actually makes plotting a distribution impossible. However, I thought that using the same dataset, filtering and getting to the correct total citation number meant that I had the correct list of citable items. So, what could explain this difference?

I looked first at how big the difference in the number of citable items is. Journals like Nature and Science are missing >1000 items(!), others are missing fewer, and some such as Traffic, EMBO J and Development have the correct number. Remember that journals carry different numbers of papers. So, as a proportion of total papers, the biggest fraction of missing papers was actually from Autophagy and Cell Research, which were missing ~50% of the papers classified in WoS as “articles” or “reviews”!

My best guess at this stage was that items were incorrectly tagged in Web of Science. Journals like Nature, Science and Current Biology carry a lot of obituaries, letters and other stuff that can fairly be removed from the citable items count. But these should be classified as such in Web of Science and therefore filtered out in my original search. Also, these types of item don’t explain the big disparity in journals like Autophagy that only carry papers and reviews, with a tiny bit of front matter.

I figured a good way forward would be to verify the numbers with another database – PubMed. Details of how I did this are at the foot of this post. This brought me much closer to the JIF “citable items” number for most journals. However, Autophagy, Current Biology and Science are still missing large numbers of papers. As a proportion of the size of the journal, Autophagy, Cell Research and Current Biology are missing the most. Meanwhile, Nature Cell Biology and Nature Reviews Molecular Cell Biology now have more citable items in the JIF calculation than are found in PubMed!

This collection of data was used for the citation distributions shown above, but it highlights some major discrepancies at least for some journals.

How does Thomson Reuters decide what is a citable item?

Some of the reasons for deciding what is a citable item are outlined in this paper. Of the six reasons that are revealed, all seem reasonable, but they suggest that they do not simply look at the classification of papers in the Web of Science database. Without wanting to pick on Autophagy – it’s simply the first one alphabetically – I looked at which was right: the PubMed number of 539 or the JIF number of 247 citable items published in 2012 and 2013. For the JIF number to be correct this journal must only publish ~10 papers per issue, which doesn’t seem to be right at least from a quick glance at the first few issues in 2012.

Why Thomson-Reuters removes some of these papers as non-citable items is a mystery… you can see from the histogram above that for Autophagy only 90 or so papers are uncited in 2014, so clearly the removed items are capable of picking up citations. If anyone has any ideas why the items were removed, please leave a comment.

Summary

Trying to understand what data go into the Journal Impact Factor calculation (for some, but not all journals) is very difficult. This makes JIFs very hard to reproduce. As a general rule in science, we don’t trust things that can’t be reproduced, so why has the JIF persisted? I think most people realise by now that using this single number to draw conclusions about the excellence (or not) of a paper, because it was published in a certain journal, is madness. Looking at the citation distributions, it’s clear that the majority of papers could be reshuffled between any of these journals and nobody would notice (see here for further analysis). We would all do better to read the paper and not worry about where it was published.

The post title is taken from “The Great Curve” by Talking Heads from their classic LP Remain in Light.

In PubMed, a research paper will have the publication type “journal article”; however, other items can also have this publication type. These items also have additional types, which can therefore be used to filter them out. I retrieved all PubMed records from the journals published in 2012 and 2013 with publication type = “journal article”. This worked for 21 of the journals; eLife is online only, so the ppdat field code had to be changed to pdat.


("Autophagy"[ta] OR "Cancer Cell"[ta] OR "Cell"[ta] OR "Cell Mol Life Sci"[ta] OR "Cell Rep"[ta] OR "Cell Res"[ta] OR "Cell Stem Cell"[ta] OR "Curr Biol"[ta] OR "Dev Cell"[ta] OR "Development"[ta] OR "Elife"[ta] OR "Embo J"[ta] OR "J Cell Biol"[ta] OR "J Cell Sci"[ta] OR "Mol Biol Cell"[ta] OR "Mol Cell"[ta] OR "Nat Cell Biol"[ta] OR "Nat Rev Mol Cell Biol"[ta] OR "Nature"[ta] OR "Oncogene"[ta] OR "Science"[ta] OR "Traffic"[ta]) AND (("2012/01/01"[PPDat] : "2013/12/31"[PPDat])) AND journal article[pt:noexp]

I saved this as an XML file and then pulled the values from the “publication type” key using Nokogiri/ruby (script). I then had a list of all the publication type combinations for each record. As a first step I simply counted the number of journal articles for each journal and then subtracted anything that was tagged as “biography”, “comment”, “portraits” etc. This could be done in IgorPro by making a wave indicating whether an item should be excluded (0 or 1), using the DOI as a lookup. This wave could then be used to exclude papers from the distribution.
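If you prefer Python to Nokogiri/ruby, a rough equivalent of the counting step is sketched below. The filename and the exclusion list are assumptions; the element names (PubmedArticle, MedlineTA, PublicationType) are the standard PubMed XML ones.

import xml.etree.ElementTree as ET
from collections import Counter

EXCLUDE = {"Biography", "Comment", "Portraits", "Editorial", "News"}

tree = ET.parse("pubmed_2012_2013.xml")   # the saved PubMed XML export
counts = Counter()

for rec in tree.getroot().iter("PubmedArticle"):
    journal = rec.findtext(".//MedlineTA") or "unknown"
    ptypes = {pt.text for pt in rec.findall(".//PublicationType")}
    # keep items tagged "Journal Article" that carry no excluded additional type
    if "Journal Article" in ptypes and not (ptypes & EXCLUDE):
        counts[journal] += 1

for journal, n in counts.most_common():
    print(journal, n)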

For calculation of the number of missing papers as a proportion of size of journal, I used the number of items from WoS for the WoS calculation, and the JIF number for the PubMed comparison.

Related to this, this IgorPro procedure will read in csv files from WoS/WoK. As mentioned in the main text, data were downloaded 500 records at a time as csv from WoS, using journal titles as a search term and limiting to “article” or “review” and to 2012 and 2013. Note that limiting the search at the outset by year limits the citation data you get back. You need to search first to get citations from all years and then refine afterwards. The files can be stitched together with the cat command.


cat *.txt > merge.txt

Edit 8/1/16 @ 07:41 Jon Lane told me via Twitter that Autophagy publishes short commentaries of papers in other journals called “Autophagic puncta” (you need to be a cell biologist to get this gag). He suggests these could be removed by Thomson Reuters for their calculation. This might explain the discrepancy for this journal. However, these items 1) cite other papers (so they contribute to JIF calculations), 2) get cited themselves (Jon says his own piece has been cited 18 times), so they are not non-citable items, and 3) are tagged as though they are papers or reviews in WoS and PubMed.

What Difference Does It Make?

A few days ago, Retraction Watch published the top ten most-cited retracted papers. The post had a bar chart to visualise these citations, but it didn’t quite capture what effect (if any) a retraction has on citations. I thought I’d quickly plot this out for the number one article on the list.

[Figure: citations per year to the most-cited retracted paper]

The plot is pretty depressing. The retraction has no effect on citations. Note that the retraction notice has racked up 125 citations, which could mean that at least some of the ~1000 citations to the original article that came after the retraction acknowledge the fact that the article has been pulled.

The post title is taken from “What Difference Does it Make?” by The Smiths from ‘The Smiths’ and ‘Hatful of Hollow’

White label: the growth of bioRxiv

bioRxiv, the preprint server for biology, recently turned 2 years old. This seems a good point to take a look at how bioRxiv has developed over this time and to discuss any concerns sceptical people may have about using the service.

Firstly, thanks to Richard Sever (@cshperspectives) for posting the data below. The first plot shows the number of new preprints deposited and the number that were revised, per month since bioRxiv opened in Nov 2013. There are now about 200 preprints being deposited per month and this number will continue to increase. The cumulative article count (of new preprints) shows that, as of the end of last month, there are >2500 preprints deposited at bioRxiv.

[Figure: new and revised preprints per month, and the cumulative count of new preprints]

[Figure: preprints deposited per subject category]

What is uptake like across biology? To look at this, the number of articles in different subject categories can be totted up. Evolutionary Biology, Bioinformatics and Genomics/Genetics are the front-running disciplines. Obviously counting articles should be corrected for the size of these fields, but it’s clear that some large disciplines have not adopted preprinting in the same way. Cell biology, my own field, has some catching up to do. It’s likely that this reflects cultures within different fields. For example, genomics has a rich history of data deposition, sharing and openness. Other fields, less so…

So what are we waiting for?

I’d recommend that people wondering about preprinting go and read Stephen Curry’s post “just do it“. Any people who remain sceptical should keep reading…

Do I really want to deposit my best work on bioRxiv?

I’ve picked six preprints that were deposited in 2015. This selection demonstrates how important work is appearing first at bioRxiv and is being downloaded thousands of times before the papers appear in the pages of scientific journals.

  1. Accelerating scientific publishing in biology. A preprint about preprinting from Ron Vale, subsequently published in PNAS.
  2. Analysis of protein-coding genetic variation in 60,706 humans. A preprint summarising a huge effort from the ExAC (Exome Aggregation Consortium). 12,366 views, 4,534 downloads.
  3. TP53 copy number expansion correlates with the evolution of increased body size and an enhanced DNA damage response in elephants. This preprint was all over the news, e.g. Science.
  4. Sampling the conformational space of the catalytic subunit of human γ-secretase. CryoEM is the hottest technique in biology right now. Sjors Scheres’ group have been at the forefront of this revolution. This paper is now out in eLife.
  5. The genome of the tardigrade Hypsibius dujardini. The recent controversy over horizontal gene transfer in tardigrades played out at rapid-fire pace thanks to preprinting.
  6. CRISPR with independent transgenes is a safe and robust alternative to autonomous gene drives in basic research. This preprint concerning biosafety of CRISPR/Cas technology could be accessed immediately thanks to preprinting.

But many journals consider preprints to be previous publications!

Wrong. It is true that some journals have yet to change their policy, but the majority – including Nature, Cell and Science – are happy to consider manuscripts that have been preprinted. There are many examples of biology preprints that went on to be published in Nature (ancient genomes) and Science (hotspots in birds). If you are worried about whether the journal you want to submit your work to will allow preprinting, check this page first or the SHERPA/RoMEO resource. The journal “information to authors” page should have a statement about this, but you can always ask the Editor.

I’m going to get scooped

Preprints establish priority. It isn’t possible to be scooped if you deposit a preprint that is time-stamped, showing that you were first. The alternative is to send it to a journal, where no record will exist that you submitted it if the paper is rejected, or sometimes even if they end up publishing it (see discussion here). Personally, I feel that the fear of scooping in science is overblown. In fields so hot that papers are coming out really fast, the fear of scooping is high; but everyone sees the work if it’s on bioRxiv or elsewhere, so who was first is clear to all. Think of it this way: depositing a preprint at bioRxiv is just the same as giving a talk at a meeting. Preprints mean that there is a verifiable record available to everyone.

Preprints look ugly, I don’t want people to see my paper like that.

The depositor can format their preprint however they like! Check out Christophe Leterrier’s beautifully formatted preprint, or this one from Dennis Eckmeier. Both authors made their templates available so you can follow their example (1 and 2).

Yes but does -insert name of famous scientist- deposit preprints?

Lots of high profile scientists have already used bioRxiv. David Bartel, Ewan Birney, George Church, Ray Deshaies, Jennifer Doudna, Steve Henikoff, Rudy Jaenisch, Sophien Kamoun, Eric Karsenti, Maria Leptin, Rong Li, Andrew Murray, Pam Silver, Bruce Stillman, Leslie Vosshall and many more. Some sceptical people may find this argument compelling.

I know how publishing works now and I don’t want to disrupt the status quo

It’s paradoxical how science is all about pushing the frontiers, yet when it comes to publishing, scientists are incredibly conservative. Physics and Mathematics have been using preprinting as part of the standard route to publication for decades, so adoption by biology is nothing unusual; actually, we will simply be catching up. One vision for the future of scientific publishing is that we will deposit preprints and then journals will search out the best work from the server to highlight in their pages. The journals that will do this are called “overlay journals”. Sounds crazy? It’s already happening in Mathematics. Terry Tao, a Fields medal-winning mathematician, recently deposited a solution to the Erdős discrepancy problem on arXiv (he actually put it on his blog first). This was then “published” in Discrete Analysis, an overlay journal. Read about this here.

Disclaimer: other preprint services are available. F1000 Research, PeerJ Preprints and of course arXiv itself, which has a quantitative biology section. My lab have deposited work at bioRxiv (1, 2 and 3) and I am an affiliate for the service, which means I check preprints before they go online.

Edit 14/12/15 07:13 put the scientists in alphabetical order. Added a part about scooping.

The post title comes from the term “white label” which is used for promotional vinyl copies of records ahead of their official release.

The Great Curve: Citation distributions

This post follows on from a previous post on citation distributions and the wrongness of Impact Factor.

Stephen Curry had previously made the call that journals should “show us the data” that underlie the much-maligned Journal Impact Factor (JIF). However, this call made me wonder what “showing us the data” would look like and how journals might do it.

What citation distribution should we look at? The JIF looks at citations in a year to articles published in the preceding 2 years. This captures a period in a paper’s life, but it misses “slow burner” papers and also underestimates the impact of papers that just keep generating citations long after publication. I wrote a quick bit of code that would look at a decade’s worth of papers at one journal to see what happened to them as yearly cohorts over that decade. I picked EMBO J to look at since they have actually published their own citation distribution, and also they appear willing to engage with more transparency around scientific publication. Note that, when they published their distribution, it considered citations to papers via a JIF-style window over 5 years.

I pulled 4082 papers with a publication date of 2004-2014 from Web of Science (the search was limited to Articles) along with data on citations that occurred per year. I generated histograms to look at the distribution of citations for each year. Papers published in 2004 are in the top row, papers from 2014 are in the bottom row. The first histogram shows citations in the same year as publication; the next column shows the following year, and so on. The number of papers is on the y-axis and the number of citations on the x-axis. Sorry for the lack of labelling! My excuse is that my code made a plot with “subwindows”, which I’m not too familiar with.
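For anyone wanting to try something similar, here is a rough sketch of the layout in Python/matplotlib. It assumes the Web of Science export has been wrangled into a CSV with one row per paper, a pub_year column and per-year citation columns named "2004" to "2014" (these names are assumptions; the original analysis was done in IgorPro).

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("embo_j_2004_2014.csv")
years = list(range(2004, 2015))

fig, axes = plt.subplots(len(years), len(years), figsize=(20, 20),
                         sharex=True, sharey=True)

for i, pub_year in enumerate(years):          # rows: publication-year cohorts
    cohort = df[df["pub_year"] == pub_year]
    for j in range(len(years)):               # columns: years since publication
        ax = axes[i, j]
        cite_year = pub_year + j
        if cite_year > 2014:
            ax.axis("off")                    # no data beyond 2014
            continue
        ax.hist(cohort[str(cite_year)], bins=range(0, 41))

fig.savefig("cohort_histograms.png")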

[Figure: grid of citation histograms by publication year (rows) and years since publication (columns)]

What is interesting is that the distribution changes over time:

  • In the year of publication, most papers are not cited at all. This is expected, since there is a lag before papers that could cite the work appear, and some papers do not come out until late in the year, so the likelihood of a citing paper appearing decreases as the year progresses.
  • The following year most papers are picking up citations: the distribution moves rightwards.
  • Over the next few years the distribution relaxes back leftwards as the citations die away.
  • The distributions are always skewed. Few papers get loads of citations, most get very few.

Although I truncated the x-axis at 40 citations, there are a handful of papers that are picking up >40 cites per year up to 10 years after publication – clearly these are very useful papers!

To summarise these distributions I generated the median (and the mean – I know, I know) number of citations for each publication year-citation year combination and made plots.

[Figure: mean (left) and median (right) citations for each publication year-citation year combination]

The mean is shown on the left and median on the right. The layout is the same as in the multi-histogram plot above.

Follow along a row and you can again see how the cohort of papers attracts citations, peaks and then dies away. You can also see that some years were better than others in terms of citations: 2004 and 2005 were good years, 2007 was not so good. It is very difficult, if not impossible, to judge how 2013 and 2014 papers will fare into the future.

What was the point of all this? Well, I think showing the citation data that underlie the JIF is a good start. However, citation data are more nuanced than the JIF allows for. So being able to choose how we look at the citations is important to understand how a journal performs. Having some kind of widget that allows one to select the year(s) of papers to look at and the year(s) that the citations came from would be perfect, but this is beyond me. Otherwise, journals would probably elect to show us a distribution for a golden year (like 2004 in this case), or pick a window for comparison that looked highly favourable.

Finally, I think journals are unlikely to provide this kind of analysis. They should, if only because it is a chance for a journal to show how it publishes many papers that are really useful to the community. Anyway, maybe they don’t have to… What this quick analysis shows is that it can be (fairly) easily harvested and displayed. We could crowdsource this analysis using standardised code.

Below is the code that I used – it’s a bit rough and would need some work before it could be used generally. It also uses a 2D filtering method that was posted on IgorExchange by John Weeks.
[Image: the IgorPro code used for this analysis]

The post title is taken from “The Great Curve” by Talking Heads from their classic LP Remain in Light.

Creep Diets: Fewer papers published at JCB

A couple of years ago, a colleague sent me this picture* to say “who put J Cell Biol on a diet?”. I joked that maybe they publish too many autophagy papers and didn’t think much more of it.

Recently, Ron Vale put up this very interesting piece on bioRxiv discussing what it takes to publish a paper in the field of cell biology these days. In the main, he questions whether this is now out of reach of many trainees in our labs. It raises some great points and I recommend reading it.

One of many interesting stats in the article is that J Cell Biol now publishes fewer papers than it used to, which made me think back to the photo and wonder why there has been a decline. Elsewhere, Vale notes that a cell biology paper now contains more than twice the amount of data of papers of yesteryear. I’ve also written before about the creeping increase in the number of authors per paper at J Cell Biol and (more so) at Cell. Publication in science is something of an arms race, and his point is really that the amount of data, the time taken, and the effort/people involved have got to an untenable level.

The data in the preprint are a bit limited as he only looks at two snapshots in time – two cohorts of students at UCSF. So I thought I’d look at the decrease in JCB papers over time – did it really fall off? By how much? When did it start?

[Figure: papers published per six months at J Cell Biol and Nat Cell Biol]

Getting the data is straightforward. In fact, PubMed will give you a csv of the frequency of papers for a given search term (it even shows you a snapshot in the main search window). I wanted a bit more control, so I exported the records for JCB and NCB. I filtered out interviews and commentary as best I could and plotted out the records as two histograms using a bin width of 6 months. It’s pretty clear that J Cell Biol is indeed publishing fewer papers now than it used to. It looks like the trend started around 2002, possibly accelerating in the last 5 years (the photo agrees with this). The six-month output at JCB in 2015 is similar to what it was in 1975!
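A minimal sketch of the binning step in Python/pandas is below, assuming the exported records have been saved as CSV files with a publication date column called pub_date (the file and column names are assumptions; any filtering of interviews and commentary would happen before this step).

import pandas as pd
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 1, sharex=True)

for ax, fname, title in zip(axes,
                            ["jcb_records.csv", "ncb_records.csv"],
                            ["J Cell Biol", "Nat Cell Biol"]):
    df = pd.read_csv(fname, parse_dates=["pub_date"])
    # count papers in 6-month bins (two bins per calendar year)
    per_half_year = df.set_index("pub_date").resample("6MS").size()
    ax.bar(per_half_year.index, per_half_year.values, width=150)
    ax.set_title(title)

axes[-1].set_xlabel("Publication date")
fig.savefig("jcb_ncb_output.png")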

In the comments section of the preprint, there is a bit of discussion of why this may be. Overall, there are more and more papers being published every year. There’s no reason to think that the number of cell biology papers has remained static or fallen. So if J Cell Biol have not taken a decision to limit the number of papers, why is there a decline? One commenter suggests that Nature Cell Biology has “taken” some of these papers. So I plotted those numbers out too. The number of papers at NCB is capped and has been constant since the launch of the journal. It does look like NCB could be responsible, but it’s a complex question. Personally, I think it’s unlikely. The launch of NCB marked a period of expansion in the number of scientific journals, and it’s likely that the increase in the number of venues that a paper can go to (rather than the creation of NCB per se) has affected publication at JCB. One simple cause could be financial, i.e. the page count being limited by RUP. If this is true, why not move the journal online? There are so many datasets and movies in papers these days that it barely makes sense to print JCB any more.

I love reading papers in JCB. They are sufficiently detailed that you know what’s going on. They’re definitely cell biology, not some tangential area of molecular biology. The Editors are active cell biologists and the journal has a long history of publishing some truly landmark discoveries in our field. For these reasons, I’m sad that there are fewer JCB papers these days. If it’s an editorial decision to try to make the journal more exclusive, this is even more regrettable. I wonder if the Editors feel that they just don’t get enough high quality papers. If this is the case, then maybe the expectations for what a paper “should be” need to be brought back in line with reality. Which is one of the points that Ron Vale is making in his article.

* I cropped the picture to remove some identifying things on the bookshelf.

Update @ 07:07 17/7/15: Rebecca Alvinia from JCB had left a comment on Ron Vale’s piece on bioRxiv to say that JCB are not purposely limiting the number of papers. Fillip Port then asked why JCB does not take preprints. Rebecca has now replied saying that following a change of policy, J Cell Biol and the other RUP journals will take preprinted papers. This is great news!

Creep Diets is the title track from the second album by the oddly named Fudge Tunnel, released on Earache Records in 1993

Pull Together: our new paper on “The Mesh”

We have a new paper out! You can access it here.

Title of the paper: The mesh is a network of microtubule connectors that stabilizes individual kinetochore fibers of the mitotic spindle

What’s it about? When a cell divides, the two new cells need to get the right number of chromosomes. If this process goes wrong, it is a disaster which may lead to disease e.g. cancer. The cell shares the chromosomes using a “mitotic spindle”. This is a tiny machine made of microtubules and other proteins. We have found that the microtubules are held together by something called “the mesh”. This is a weblike structure which connects the microtubules and gives them structural support.

Does this have anything to do with cancer? Some human cancer cells have high levels of  proteins called TACC3 and Aurora A kinase. We know that TACC3 is changed by Aurora A kinase. This changed form of TACC3 is part of the mesh. In our paper we mimic the cancer condition by increasing TACC3 levels. The mesh changes and the microtubules become wonky. This causes problems for dividing cells. It might be possible to target TACC3 using drugs to treat certain types of cancer, but this is a long way in the future.

Who did the work? Faye Nixon, a PhD student in the lab, did most of the work. She used a method to look at mitotic spindles in 3D to study the mesh. My lab actually discovered the mesh by accident. A previous student, Dan Booth – back in 2011 – was looking at mitotic spindles to try and get 3D electron microscopy (tomography) working in the lab. Tomography works just like a CAT scan in a hospital, but on a much smaller scale. The mesh is found in the gaps between microtubules, which are 25 nanometres wide (1 nanometre is 1 billionth of a metre) – about 3,000 times smaller than a human hair, so it is very small! It was Dan who found the mesh and gave it the name. Other people in the lab did some really nice work which helped us to understand how the mesh works in dividing cells. Cristina Gutiérrez-Caballero did some experiments using a different type of microscope and Fiona Hood contributed some test tube experiments. Ian Prior, at the University of Liverpool, co-supervises Faye and helped with electron microscopy.

Have you discovered a new structure in cells? Yes and No. All cell biologists dream of finding a new structure in cells. It’s so unlikely though. Scientists have been looking at cells since the 17th Century and so the chances of seeing something that no-one has seen before are very small. In the 1970s, “inter-microtubule bridges” in the mitotic spindle were described using 2D electron microscopy. What we have done is to look at these structures in 3D for the first time and find that they are a network rather than individual connectors.

The work was funded by Cancer Research UK and North West Cancer Research Fund.

References

Nixon, F.M., Gutiérrez-Caballero, C., Hood, F.E., Booth, D.G., Prior, I.A. & Royle, S.J. (2015) The mesh is a network of microtubule connectors that stabilizes individual kinetochore fibers of the mitotic spindle eLife, doi: 10.7554/eLife.07635

This post is written in plain English to try to describe what is in the paper. I’m planning on writing a more technical post on some of the spatial statistics we developed as part of this paper.

The post title is from “Pull Together” a track from Shack’s H.M.S. Fable album.

Wrong Number: A closer look at Impact Factors

This is a long post about Journal Impact Factors. Thanks to Stephen Curry for encouraging me to post this.

tl;dr

  • the JIF is based on highly skewed data
  • it is difficult to reproduce the JIFs from Thomson-Reuters
  • JIF is a very poor indicator of the number of citations a random paper in the journal received
  • reporting a JIF to 3 d.p. is ridiculous, it would be better to round to the nearest 5 or 10.

I really liked this recent tweet from Stat Fact

It’s a great illustration of why reporting means for skewed distributions is a bad idea. And this brings us quickly to Thomson-Reuters’ Journal Impact Factor (JIF).

I can actually remember the first time I realised that the JIF was a spurious metric. This was in 2003, after reading a letter to Nature from David Colquhoun who plotted out the distribution of citations to a sample of papers in Nature. Up until that point, I hadn’t appreciated how skewed these data are. We put it up on the lab wall.

[Figure: distribution of citations to a sample of Nature papers, from David Colquhoun’s letter]

Now, the JIF for a given year is calculated as follows:

A JIF for 2013 is worked out by counting the total number of 2013 cites to articles in that journal that were published in 2011 and 2012. This number is divided by the number of “citable items” in that journal in 2011 and 2012.
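Written out as a formula, that description is simply:

\[
\mathrm{JIF}_{2013} = \frac{\text{citations in 2013 to items published in 2011 and 2012}}{\text{number of citable items published in 2011 and 2012}}
\]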

There are numerous problems with this calculation that I don’t have time to go into here. If we just set these aside for the moment, the JIF is still used widely today, and not for the purpose for which it was originally intended. Eugene Garfield created the metric to provide librarians with a simple way to prioritise subscriptions to the journals that carried the most-cited scientific papers. The JIF is used (wrongly) in some institutions in the criteria for hiring, promotion and firing. This is because of the common misconception that the JIF is a proxy for the quality of a paper in that journal. Use of metrics in this manner is opposed by SF-DORA and I would encourage anyone that hasn’t already done so to pledge their support for this excellent initiative.

Why not report the median rather than the mean?

With the citation distribution in mind, why do Thomson-Reuters calculate the mean rather than the median for the JIF? It makes no sense at all. If you didn’t quite understand why from the @statfact tweet above, then look at this:

The Acta Crystallographica Section A effect. The plot shows that this journal had a JIF of 2.051 in 2008, which jumped to 49.926 in 2009 due to a single highly-cited paper. Did every other paper in this journal suddenly get amazingly awesome and highly-cited for this period? Of course not. The median is insensitive to outliers like this.
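A toy example makes the point: one blockbuster paper drags the mean (i.e. the JIF) a long way but leaves the median untouched. The numbers below are made up for illustration.

import statistics

cites = [0, 1, 1, 2, 2, 3, 4, 5] * 25            # 200 ordinary papers
cites_with_outlier = cites + [10000]             # plus one blockbuster

print(statistics.mean(cites), statistics.median(cites))                              # ~2.3 and 2.0
print(statistics.mean(cites_with_outlier), statistics.median(cites_with_outlier))    # ~52 and 2.0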

The answer to why Thomson-Reuters don’t do this is probably ease of computation. The JIF (mean) requires only three numbers for each journal, whereas calculating the median would require citation information for each paper under consideration for each journal. But it’s not that difficult (see below). There’s also a mismatch between the items that bring in citations to the numerator and those that count as “citable items” in the denominator. This opacity is one of the major criticisms of the Impact Factor, and it also makes calculating a median problematic.

Let’s crunch some citation numbers

I had a closer look at citation data for a small number of journals in my field. DC’s citation distribution plot was great (in fact, superior to JIF data) but it didn’t capture the distribution that underlies the JIF. I crunched the IF2012 numbers (released in June 2013) sometime in December 2013. This is shown below. My intention was to redo this analysis more fully in June 2014 when the IF2013 was released, but I was busy, had lost interest and the company said that they would be more open with the data (although I’ve not seen any evidence for this). I wrote about partial impact factors instead, which took over my blog. Anyway, the analysis shown here is likely to be similar for any year and the points made below are likely to hold.

I mainly looked at Nature, Nature Cell Biology, Journal of Cell Biology, EMBO Journal and J Cell Science, using citations in 2012 articles to papers published in 2010 and 2011, i.e. the same criteria as for IF2012.

The first thing that happens when you attempt this analysis is that you realise how unreproducible the Thomson-Reuters JIFs are. This has been commented on in the past (e.g. here), yet I had the same data as the company uses to calculate JIFs and it was difficult to see how they had arrived at their numbers. After some wrangling I managed to get a set of papers for each journal that gave close to the same JIF.

[Figure: mean and median citations (IF2012 window) for each journal]

From this we can look at the citation distribution within the dataset for each journal. Below is a gallery of these distributions. You can see that the data are highly skewed. For example, JCB has a kurtosis of 13.5 and a skewness of 3. For all of these journals, ~2/3 of papers had fewer than the mean number of citations. With this kind of skew, it makes more sense to report the median (as described above). Note that Cell is included here but was not used in the main analysis.
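These summary statistics are easy to reproduce for any set of per-paper citation counts. A short sketch, with toy numbers standing in for the real data:

from scipy.stats import kurtosis, skew

cites = [0, 0, 1, 1, 2, 3, 3, 4, 6, 8, 11, 19, 45, 160]   # toy per-paper citation counts

mean_cites = sum(cites) / len(cites)
print("skewness:", skew(cites))
print("kurtosis:", kurtosis(cites))   # Fisher definition: 0 for a normal distribution
print("fraction of papers below the mean:",
      sum(c < mean_cites for c in cites) / len(cites))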

So how do these distributions look when compared? I plotted each journal compared to JCB. They are normalised to account for the differing number of papers in each dataset. As you can see they are largely overlapping.

[Figure: normalised citation distributions for each journal compared with JCB]

If the distributions overlap so much, how certain can we be that a paper in a journal with a high JIF will have more citations than a paper in a journal with a lower JIF? In other words, how good is the JIF (mean or median) at predicting how many citations a paper published in a certain journal is likely to have?

To look at this, I ran a Monte Carlo analysis comparing a random paper from one journal with a random one from JCB and looked at the difference in number of citations. Papers in EMBO J are indistinguishable from JCB. Papers in JCS have very slightly fewer citations than JCB. Most NCB papers have a similar number of cites to papers in JCB, but there is a tail of papers with higher cites, a similar but more amplified picture for Nature.
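The resampling itself is simple. A minimal sketch, assuming cites_jcb and cites_other hold the per-paper citation counts for the two journals being compared (toy numbers here):

import random

random.seed(0)
cites_jcb = [0, 1, 2, 2, 3, 5, 7, 10, 14, 22]
cites_other = [0, 1, 1, 3, 4, 6, 9, 15, 28, 60]

# difference in citations between one random paper from each journal
diffs = [random.choice(cites_other) - random.choice(cites_jcb)
         for _ in range(10000)]

# how often does the random "other journal" paper out-cite the random JCB paper?
print(sum(d > 0 for d in diffs) / len(diffs))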

[Figure: differences in citations between a randomly drawn paper from each journal and a random JCB paper]

Thomson-Reuters quotes the JIF to 3 d.p. and most journals use this to promote their impact factor (see below). The precision of 3 d.p. is ridiculous when two journals with IFs of 10.822 and 9.822 are indistinguishable when it comes to the number of citations to randomly sampled papers in those journals.

So how big do differences in JIF have to be in order to be able to tell a “Journal X paper” from a “Journal Y paper” (in terms of citations)?

To look at this I ran some comparisons between the journals in order to get some idea of “significant differences”. I made virtual issues of each journal with differing numbers of papers (5, 10, 20, 30) and compared the citations in each via a Wilcoxon rank test, then plotted out the frequency of p-values for 100 of these tests. Please leave a comment if you have a better idea for looking at this. I liked this method over the head-to-head comparison for two papers as it allows these papers the benefit of the (potential) reflected glory of other papers in the journal. In other words, it is closer to what the JIF is about.
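Here is a rough sketch of that procedure in Python, using the Wilcoxon rank-sum test from scipy as a stand-in for the rank test described above (the citation lists are toy data):

import random
from scipy.stats import ranksums

random.seed(0)
cites_a = [0, 1, 2, 2, 3, 5, 7, 10, 14, 22] * 30    # toy per-paper counts, journal A
cites_b = [1, 2, 3, 5, 7, 9, 13, 20, 35, 80] * 30   # toy per-paper counts, journal B

for n in (5, 10, 20, 30):                            # size of each "virtual issue"
    pvals = []
    for _ in range(100):
        _, p = ranksums(random.sample(cites_a, n), random.sample(cites_b, n))
        pvals.append(p)
    print(n, sum(p < 0.05 for p in pvals) / 100)     # fraction of tests below p = 0.05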

OK, so this shows that sufficient sample size is required to detect differences, no surprise there. But at N=20 and N=30 the result seems pretty clear. A virtual issue of Nature trumps a virtual issue of JCB, and JCB beats JCS. But again, there is no difference between JCB and EMBO J. Finally, only ~30% of the time would a virtual issue of NCB trump JCB for citations! NCB and JCB had a difference in JIF of almost 10 (20.761 vs 10.822). So not only is quoting the JIF to 3 d.p. ridiculous, it looks like rounding the JIF to the nearest 5 (or 10) might be better!

This analysis supports the idea that there are different tiers of journal (in Cell Biology at least). But the JIF is the bluntest of tools to separate these journals. A more rigorous analysis is needed to demonstrate this more clearly but it is not feasible to do this while having a dataset which agrees with that of Thomson-Reuters (without purchasing the data from the company).

If you are still not convinced about the shortcomings of the JIF, here is a final example. The IF2013 for Nature increased from 38.597 to 42.351. Let’s have a look at the citation distributions that underlie this rise of 3.8! As you can see below, they are virtually identical. Remember that there’s a big promotion that the journal uses to pull in new subscribers; it seems a bit hollow somehow, doesn’t it? Disclaimer: I think this promotion is a bit tacky, but it’s actually a really good deal… the News stuff at the front and the Jobs section at the back alone are worth ~£40.

Show us the data!

More skewed distributions: The distribution of JIFs in the Cell Biology Category for IF2012 is itself skewed. Median JIF is 3.2 and Mean JIF is 4.8.

Recently, Stephen Curry has called for Journals to report the citation distribution data rather than parroting their Impact Factor (to 3 d.p.). I agree with this. The question is though – what to report?

  • The IF window is far too narrow (2 years + 1 year of citations) so a broader window would be more useful.
  • A comparison dataset from another journal is needed in order to calibrate ourselves.
  • Citations are problematic – not least because they are laggy. A journal could change dramatically and any citation metric would not catch up for ~2 years.
  • Related to this some topics are hot and others not. I guess we’re most interested in how a paper in Journal X compares to others of its kind.
  • Any information reported needs to be freely available for re-analysis and not in the hands of a company. Google Scholar is a potential solution but it needs to be more open with its data. They already have a journal ranking which provides a valuable and interesting alternative view to the JIF.

One solution would be to show per article citation profiles comparing these for similar papers. How do papers on a certain topic in Journal X compare to not only those in Journal Y but to the whole field? In my opinion, this metric would be most useful when assessing scholarly output.

Summary

Thanks for reading to the end (or at least scrolling all the way down). The take home points are:

  • the JIF is based on highly skewed data.
  • the median rather than the mean is better for summarising such distributions.
  • JIF is a very poor indicator of the number of citations a random paper in the journal received!
  • reporting a JIF to 3 d.p. is ridiculous, it would be better to round to the nearest 5 or 10.
  • an open resource for comparing citation data per journal would be highly valuable.

The post title is taken from “Wrong Number” by The Cure. I’m not sure which album it’s from, I only own a Greatest Hits compilation.