“Yeah” Is What We Had

When it comes to measuring the impact of our science, citations are pretty much all we have. Worse, they only say one thing – yeah – with no context. How can we enrich citation data?

Much has been written about how, why and whether we should use metrics for research assessment. If we accept that metrics are here to stay in research assessment (of journals, universities, departments and individuals), I think we should be figuring out better ways to look at the available information.

Citations to published articles are the key metric under discussion. This is because they are linked to research outputs (papers), have some relation to “impact”, and are easily computed; a number of metrics (H-index, impact factor etc.) have been developed to draw information from the data. However, citations have many known problems: for example, they are heavily influenced by the size of the field. What I want to highlight here is what a data-poor resource they are, and to think of ways we could enrich the dataset with minimal modification to our existing databases.

1. We need a way to distinguish a yeah from a no

The biggest weakness of using citations as a measure of research impact is that a citation is a citation. It just says +1. We have no idea if +1 means “the paper stinks” or “the work is amazing!”.  It’s incredible that we can rate shoelaces on Amazon or eBay but we haven’t figured out a way to do this for scientific papers. Here’s a suggestion:

  • A neutral citation is +1
  • A positive citation is +2
  • A negative citation is -1

A neutral citation would be stating a fact and adding a reference to support it, e.g. DNA is a double helix (Watson & Crick, 1953).

A positive citation would be something like: in agreement with Bloggs et al. (2010), we also find x.

A negative citation might be: we have tested the model proposed by Smith & Jones (1977) and find that it does not hold.

One further idea (described here) is to add more context to citations using keywords, such as “replicating”, “using” or “consistent with”. This would also help with searching the scientific literature.
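As a toy illustration, here is how the weighted scheme above could be computed. A minimal Python sketch; the weights are just the suggestion from this post, not any established standard:

# Hypothetical weights, following the suggestion above
WEIGHTS = {"neutral": 1, "positive": 2, "negative": -1}

def enriched_count(citation_types):
    # citation_types is a list like ["neutral", "positive", "negative"]
    return sum(WEIGHTS[c] for c in citation_types)

print(enriched_count(["neutral", "neutral", "positive", "negative"]))  # prints 3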

2. Multiple citations in one article

Because citations are currently a flat +1, there is no way to tell whether the citing paper mentioned the cited paper in passing or was entirely focussed on that one paper.

Another way to think about this is that there are multiple reasons to cite a paper: maybe the method or reagent is being used, maybe they are talking about Figure 2 showing X or Figure 5 showing Y. What if a paper is talking about all of these things? In other words, the paper was very useful. Shouldn’t we record that interest?

Suggestion: A simple way to do this is to count the number of mentions in the text of the paper, rather than just whether the paper appears in the reference list.
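A rough sketch of the counting idea, assuming full text is available and that a reference can be matched as a literal string (real citation styles vary, so this is illustrative only):

import re

def mention_count(fulltext, ref_label):
    # Count in-text mentions of a reference, not just its presence in the reference list
    return len(re.findall(re.escape(ref_label), fulltext))

text = "We used the method of Smith & Jones (1977). As Smith & Jones (1977) showed..."
print(mention_count(text, "Smith & Jones (1977)"))  # prints 2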

3. Division of a citation unit for fair credit to each author

Calculations such as the H-index make no allowance for the position of an author in the author list (used in biological sciences and some other fields to denote contribution to the paper). It doesn’t make sense that the 25th author on a 50-author paper receives the same citation credit as the first or last author. Similarly, the first author on a two-author paper is credited no differently to a middle author on a multi-author paper, even though their per-author contribution is arguably ~25 times greater. The difference in contribution is clear, but the citation credit is not. This needs to be equalised. The citation unit, c, could be divided to achieve fair credit for authors. At the moment c=1, but it could take multiples (or negative values) as described above. Here’s a suggestion:

  • First (and co-first) and last (and co-last) authors share 0.5c, divided equally between them.
  • The remainder, 0.5c, is divided equally between all authors.

For a two-author paper, the first and the last author each get (0.5c/2 + 0.5c/2) = 0.5c.

For a ten-author paper with one first author and one last author, the first and last authors each get (0.5c/2 + 0.5c/10) = 0.3c, and each middle author (e.g. the 5th) gets (0c + 0.5c/10) = 0.05c.

Note that the sum for all authors will always equal c, so credit is equalised across all papers. These citation credits would then be the basis for the H-index and other calculations for individuals.

Most simply, the denominator would be the number of authors, or – if we can figure out a numerical credit system – each author could be weighted according to their contribution.
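As a minimal sketch, the bullet-point scheme above could be computed like this (my own illustration in Python; the function and argument names are made up):

def author_credit(n_authors, n_first=1, n_last=1, c=1.0):
    # 0.5c is shared by first/co-first and last/co-last authors;
    # the other 0.5c is shared equally by everyone
    key_share = 0.5 * c / (n_first + n_last)
    base_share = 0.5 * c / n_authors
    credits = [base_share] * n_authors
    for i in range(n_first):
        credits[i] += key_share
    for i in range(1, n_last + 1):
        credits[-i] += key_share
    return credits

print(author_credit(2))   # [0.5, 0.5]
print(author_credit(10))  # first and last get 0.3, the rest get 0.05
assert abs(sum(author_credit(10)) - 1.0) < 1e-9  # credits always sum to c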

4. Citations to reviews should be downgraded

A citation to a review is not equal to a citation to a research paper, for several reasons. First, reviews are cited at a higher rate because they are a handy catch-all citation, particularly for the Introduction section of papers. This isn’t fair and robs credit from the people who did the work that actually demonstrated what is being discussed. Second, the achievement of publishing a review is nothing in comparison to publishing a paper. Publishing a review involves 1) being asked, 2) writing it, and 3) light peer review and some editing – that’s it! Publishing a research paper involves much more effort: having the idea, getting the money, hiring the people, training the people, getting a result – and we are only at the first panel of Fig. 1A. Not to mention the people-hours and the arduous peer review process. It’s not fair that citations to reviews are treated as equal to citations to papers when it comes to research assessment.

Suggestion: a citation to a review should be worth a fraction (maybe 1/10th) of a citation to a research paper.

In addition, too many reviews are written at the moment, and I don’t think it is because they are particularly useful. Very few actually contribute a new view or new synthesis of an area; most are just a summary of it. Journals like them because they drive up citation metrics. Authors like them because it is nice to be invited to write something – it means people are interested in what you have to say. If citations to reviews were downgraded, there would be less incentive to publish them, and we would have more space for all those real papers that get rejected at journals claiming space is a limitation for publication.

5. Self-citations should be eliminated

If we are going to do all of the above, then self-citation would pretty soon become a problem. Excessive self-citation would be difficult to police, and not many scientists would go for a -1 citation to their own work. So, the simplest thing to do is to eliminate self-citation. Author identification is crucial here. At the moment this doesn’t work well. In ISI and Scopus, whatever algorithm they use keeps missing some papers of mine (and my name is not very common at all). I know people who have been grouped with other people that they have published one or two papers with. For authors with ambiguous names, this is a real problem. ORCID is a good solution and maybe having an ORCID (or similar) should be a requirement for publication in the future.

Suggestion: the company or body that collates citation information needs to accurately assign authors and make sure that research papers are properly segregated from reviews and other publication types.
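With reliable author identifiers, flagging self-citations becomes a simple set operation. A minimal sketch, assuming both papers carry complete ORCID lists (which, as noted above, is not yet the case); the ORCIDs here are made up:

def is_self_citation(citing_orcids, cited_orcids):
    # A self-citation: at least one author appears on both the citing and cited paper
    return bool(set(citing_orcids) & set(cited_orcids))

print(is_self_citation({"0000-0001-2345-6789"},
                       {"0000-0001-2345-6789", "0000-0002-9876-5432"}))  # True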

These were five ideas to enrich citation data and improve research assessment. Do you have any others?

The post title is taken from ‘”Yeah” Is What We Had’ by Grandaddy from their album Sumday.

Round and Round

I thought I’d share a procedure for rotating a 2D set of coordinates about the origin. Why would you want to do this? Well, we’ve been looking at cell migration in 2D – tracking nuclear position over time. Cells migrate at random, and I previously blogged about ways to visualise these tracks more clearly. Part of this earlier procedure was to set the start of each track at (0,0), which gives a random hairball of tracks moving away from the origin. Wouldn’t it be a good idea to orient all the tracks so that each endpoint lies on the same axis? This would simplify the view and allow one to assess how ‘directional’ the cell tracks are. To rotate a set of coordinates, you need a rotation matrix, which converts the x,y coordinates to their new positions x′,y′. This rotation is counter-clockwise.

\(x' = x \cos \theta - y \sin \theta\)

\(y' = x \sin \theta + y \cos \theta\)

However, we need to find \(\theta\) first. To do this, we find the angle between two lines using this formula:

\(\cos \theta = \frac {\mathbf a \cdot \mathbf b}{\left \Vert {\mathbf a} \right \Vert \cdot \left \Vert {\mathbf b} \right \Vert} \)

The maths is kept to a minimum here. If you are interested, look at the code at the bottom.

The two lines (a and b) are formed by the x-axis (origin to some point on the x-axis, i.e. y=0) and by a line running from the origin to the last coordinate in the series. This calculation is done for each track, and each track’s \(\theta\) is then used to rotate that whole track (x,y changed to x′,y′ for each point).
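The original procedure (rotator()) is written in IgorPro and linked below; here is the same idea as a NumPy sketch of my own. Using arctan2 rather than acos sidesteps the 0-to-π ambiguity discussed at the end of this post:

import numpy as np

def rotate_track(xy):
    # xy is an N x 2 array of coordinates, with the track starting at (0,0).
    # Rotate so that the final point lands on the positive x-axis (y = 0).
    x_end, y_end = xy[-1]
    theta = -np.arctan2(y_end, x_end)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return xy @ rot.T

track = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(rotate_track(track)[-1])  # approximately [4.123, 0.0]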

Here is an example of just a few tracks from an experiment. Typically we have hundreds of tracks for each experimental group and the code will blast through them all very quickly (<1 s).


After rotation, the tracks are aligned so that the last point of each lies on the x-axis at y=0. This allows us to see how ‘directional’ the tracks are: with the end points aligned, we can see how convoluted each path was in getting there.

The code to do this is up on Igor Exchange code snippets. A picture of the code is below (markup for code in WordPress is not very clear). See the code snippet if you want to use it.


The weakness of this method is that acos (arccos) only returns values from 0 to π (0 to 180°). There is a correction in the procedure, but everything needs editing if you want to rotate the coordinates onto some other axis. Feedback welcome.

Edit: Jim Prouty and A.G. have suggested two modifications to the code. The first is to use complex waves rather than 2D real waves, then use two native Igor functions, r2polar or p2rect. The second suggestion is to use matrix operations! As is often the case with Igor, there are several ways of doing things. The method described here is long-winded compared to a MatrixOp, and if the waves were huge these solutions would be much, much faster. As it is, our migration movies typically have 60 points and, as mentioned, rotator() blasts through them very quickly. More complex coordinate sets would need something more sophisticated.

The post title is taken from “Round & Round” by New Order from their Technique LP.

Sure To Fall

What does the life cycle of a scientific paper look like?

It stands to reason that after a paper is published, people download and read it, and if it generates sufficient interest, it will begin to be cited. At some point these citations will peak and the interest will die away as the work gets superseded or the field moves on. So each paper has a useful lifespan. When does the average paper start to accumulate citations, when do citations peak, and when do they die away?

Citation behaviours are known to be very field-specific, so to narrow things down I focussed on cell biology and on one area in particular: “clathrin-mediated endocytosis”. It’s an area that I’ve published in – of course this stuff is driven by self-interest. I downloaded data from Web of Science for the 1000 papers on the topic that had accumulated the most citations. Reviews were excluded, as I assume their citation patterns differ from the primary literature. The idea was just to take a large sample of papers on a topic. The data are pretty good, but there are some errors (see below).

Number-crunching (feel free to skip this bit): I imported the data into IgorPro, making a 1D wave for each record (paper). I deleted the last point, corresponding to cites in 2014 (the year is not complete). I aligned all records so that the year of publication was 0. Next, the citations were normalised to the maximum number achieved in the peak year. This allows us to look at the lifecycle in a sensible way. Next, I took out records for papers less than 6 years old, as I reasoned these would not have completed their lifecycle and could contaminate the analysis (it turned out to make little difference). The lifecycles were plotted and averaged. I also wrote a quick function to pull out the peak year for citations post hoc.
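For anyone who wants to reproduce the gist of this, here is a rough Python/NumPy reconstruction of those steps. The original was done in IgorPro; this is my paraphrase, not the original code:

import numpy as np

def average_lifecycle(records, max_age=20):
    # records: list of 1D arrays of citations per year, starting at the publication year.
    # Normalise each paper to its peak year, align at year 0, then average.
    grid = np.full((len(records), max_age), np.nan)
    for i, cites in enumerate(records):
        c = np.asarray(cites, dtype=float)
        if c.max() > 0:
            c = c / c.max()
        n = min(len(c), max_age)
        grid[i, :n] = c[:n]
    return np.nanmean(grid, axis=0)

records = [[0, 5, 12, 9, 4, 2], [1, 3, 8, 8, 5, 1]]   # toy data
print(average_lifecycle(records, max_age=6))          # average normalised lifecycle
print([int(np.argmax(r)) for r in records])           # post-hoc peak year: [2, 2]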

So what did it show?

Citations to a paper go up and then go down, as expected (top left). When cumulative citations are plotted, most of the articles have an initial burst and then level off. The exceptions are ~8 articles that continue to rise linearly (top right). On average, a paper generates its peak citations three years after publication (box plot). The fall after this peak period is pretty linear, and it’s apparently all over somewhere >15 years after publication (bottom left). To look at the decline in more detail, I aligned the papers so that year 0 was the year of peak citations. The average paper loses almost 40% of its peak citations in the following year and then declines steadily (bottom right).

Edit: The dreaded Impact Factor calculation takes the citations to articles published in the preceding 2 years and divides by the number of citable items in that period. This means that each paper only contributes to the Impact Factor in years 1 and 2. This is before the average paper reaches its peak citation period. Thanks to David Stephens (@david_s_bristol) for pointing this out. The alternative 5 year Impact Factor gets around this limitation.
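For clarity, the standard two-year calculation looks like this (using 2014 as the example year):

\(\mathrm{IF}_{2014} = \frac{\text{citations in 2014 to items published in 2012–2013}}{\text{citable items published in 2012–2013}}\)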

Perhaps lifecycle is the wrong term: papers in this dataset don’t actually ‘die’, i.e. go to 0 citations. There is always a chance that a paper will pick up the odd citation. Papers published 15 years ago are still clocking 20% of their peak citations. Looking at papers cited at lower rates would be informative here.

Two other weaknesses that affect precision are that 1) a year is a long time, and 2) publication is subject to long lag times. The analysis would be improved by categorising the records based on the month-year when the paper was published and the month-year when each citation comes in. Papers published in January of one year probably have a different peak than those published in December of the same year, but this is lost when looking at the year alone. Secondly, due to publication lag, it is impossible to know when the peak period of influence for a paper truly was.
Problems in the dataset: some reviews remained despite supposedly being excluded, i.e. they are not properly tagged in the database. Also, some records have citations from years before the article was published! The numbers of citations involved are small enough not to worry this analysis, but it makes you wonder how accurate the whole dataset is. I’ve written before about how complete citation data may or may not be. These sorts of things are a concern for all of us who are judged on these numbers for hiring and promotion decisions.

The post title is taken from ‘Sure To Fall’ by The Beatles, recorded during The Decca Sessions.

Tips from the Blog I

What is the best music to listen to while writing a manuscript or grant proposal? OK, I know that some people prefer silence and certainly most people hate radio chatter while trying to concentrate. However, if you like listening to music, setting an iPod on shuffle is no good since a track by Napalm Death can jump from the speakers and affect your concentration. Here is a strategy for a randomised music stream of the right mood and with no repetition, using iTunes.

For this you need:
A reasonably large and varied iTunes library that is properly tagged*.

1. Set up the first smart playlist to select all songs in your library that you like to listen to while writing. I do this by selecting genres that I find conducive to writing.
Conditions are:
-Match any of the following rules
-Genre contains jazz
-add as many genres as you like, e.g. shoegaze, space rock, dream pop etc.
-Don’t limit and do check live updating
I call this list Writing

2. Set up a second smart playlist that makes a randomised, constantly novel list from the first playlist
Conditions are:
-Match all of the following rules
-Playlist is Writing   //or whatever you called the 1st playlist
-Last played is not in the last 14 days    //this means once the track is played it disappears, i.e. refreshes constantly
-Limit to 50 items selected by random
-Check Live updating
I call this list Writing List

That’s it! Now play from Writing List while you write. The same strategy works for other moods, e.g. for making figures I like to listen to different music and so I have another pair for that.
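For the curious, the same logic can be expressed as a Python sketch outside iTunes (the track structure here is made up for illustration; it is not an iTunes API):

import random
from datetime import datetime, timedelta

def writing_list(library, genres=("jazz", "shoegaze"), days=14, n=50):
    # library: a list of dicts with "genre" and "last_played" keys (hypothetical).
    # Mimics the two smart playlists: filter by genre, drop recently played tracks,
    # then sample up to n at random.
    cutoff = datetime.now() - timedelta(days=days)
    pool = [t for t in library
            if any(g in t["genre"].lower() for g in genres)
            and (t["last_played"] is None or t["last_played"] < cutoff)]
    return random.sample(pool, min(n, len(pool)))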

After a while, the tracks that you’ve skipped (for whatever reason) clog up the playlist. Just select all and delete them from the smart playlist; this refreshes the list and you can go again with a fresh set.

* If your library has only a few tracks, or has plenty of tracks but they are all of a similar genre, this tip is not for you.

Outer Limits

This post is about a paper that was recently published. It was the result of a nice collaboration between me and Francisco López-Murcia and Artur Llobet in Barcelona.

The paper in a nutshell
The availability of clathrin sets a limit for presynaptic function

Clathrin is a three-legged protein that forms a cage around membranes during endocytosis. One site of intense clathrin-mediated endocytosis (CME) is the presynaptic terminal. Here, synaptic vesicles need to be recaptured after fusion, and CME is the main route of retrieval. Clathrin is highly abundant in all cells and is generally thought of as limitless for the formation of multiple clathrin-coated structures. Is this really true? In a neuron, where there is a lot of endocytic activity, maybe the limits are tested.
It is known that strong stimulation of neurons causes synaptic depression – a form of reversible synaptic plasticity where the neuron can only evoke a weak postsynaptic response afterwards. Is depression a vesicle supply problem?

What did we find?
We showed that clathrin availability drops during stimulation that evokes depression. The drop in availability is due to clathrin forming vesicles and moving away from the synapse. We mimicked this by RNAi, dropping the clathrin levels and looking at synaptic responses. We found that when clathrin levels drop, synaptic responses become very small. Fewer vesicles are able to form, and those that do form are smaller. Interestingly, the amount of neurotransmitter (acetylcholine) in the vesicles was much lower than expected from the vesicle volumes measured by electron microscopy. This suggests there is an additional sorting problem in cells with lower clathrin levels.

Killer experiment
A third reviewer was called in (due to a split decision between Reviewers 1 and 2). He/she asked a killer question: all of our data could be due to an off-target effect of RNAi – could we do a rescue experiment? We spent many weeks trying to get the rescue experiment to work, but a second viral infection was too much for the cells, and engineering a virus to express clathrin was very difficult. The referee also asked: if clathrin levels set a limit for synaptic function, why don’t you just express more clathrin? Well, we would if we could! But this gave us an idea… why don’t we just put clathrin in the pipette and let it diffuse out to the synapses, rescuing the RNAi phenotype over time? We did it and, to our surprise, it worked! The neurons went from an inhibited state to wild-type function in about 20 min. We then realised we could use the same method on normal neurons to boost clathrin levels at the synapse and protect against synaptic depression. This also worked! These killer experiments were a great addition to the paper and are a good example of peer review improving a paper.

Fran and Artur did almost all the experimental work. I did a bit of molecular biology and clathrin purification. Artur and I wrote the paper and put the figures together – lots of skype and dropbox activity.
Artur is a physiologist and his lab likes to tackle problems that are experimentally very challenging – work that my lab wouldn’t dare to do – so he’s the perfect collaborator. I have known Artur for years; we were postdocs in the same lab at the LMB in the early 2000s. We tried a collaborative project to inhibit dynamin function in adrenal chromaffin cells at that time, but it didn’t work out. We have stayed in touch and this is our first paper together. The situation in Spain for scientific research is currently very bad, and it deteriorated while the project was ongoing. This has been very sad to hear about, but fortunately we were able to finish this project and we hope to work together more in the future.

We were on the cover!
Now that the scientific literature is online, this doesn’t mean so much anymore, but they picked our picture for the cover. It is a single-cell microculture expressing GFP that was stained for synaptic markers and clathrin. I changed the channels around for artistic effect.

What else?
J Neurosci is slightly different to other journals that I’ve published in recently (my only other J Neurosci paper was published in 2002), for the following reasons:

  1. No supplementary information. The journal did away with this years ago to re-introduce some sanity in the peer review process. This didn’t affect our paper very much. We had a movie of clathrin movement that would have gone into the SI at another journal, but we simply removed it here.
  2. ORCIDs for authors are published with the paper. This gives the reader access to all your professional information and distinguishes authors with similar names. I think this is a good idea.
  3. Submission fee. All manuscripts are subject to a submission fee. I believe this is to defray the costs of editorial work. I think this makes sense, although I’m not sure how I would feel if our paper had been rejected.


López-Murcia, F.J., Royle, S.J. & Llobet, A. (2014) Presynaptic clathrin levels are a limiting factor for synaptic transmission. J. Neurosci., 34: 8618–8629. doi: 10.1523/JNEUROSCI.5081-13.2014

Pubmed | Paper

The post title is taken from “Outer Limits” a 7″ Single by Sleep ∞ Over released in 2010.

All This And More

I was looking at the latest issue of Cell and marvelling at how many authors there are on each paper. It’s no secret that the raison d’être of Cell is to publish the “last word” on a topic (although whether it fulfils that objective is debatable). Definitive work needs to be comprehensive. So it follows that this means lots of techniques and ergo lots of authors. This means it is even more impressive when a dual author paper turns up in the table of contents for Cell. Anyway, I got to thinking: has it always been the case that Cell papers have lots of authors and if not, when did that change?

I downloaded the data for all articles published by Cell (and, for comparison, J Cell Biol) from Scopus. The records required a bit of cleaning. For example, SnapShot papers needed to be removed, and the odd obituary etc. had been misclassified as an article. These could be quickly removed. I then went back through and filtered out ‘articles’ that were less than three pages, as I think it is not possible for a paper to be two pages or fewer in length. The data could then be loaded into IgorPro and boxplots generated per year to show how author number varied over time. Reviews that are misclassified as Articles will still be in the dataset, but I figured these would be minimal.
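The cleaning was done in IgorPro; for illustration, roughly equivalent steps in Python/pandas might look like this (the file name and column names are assumptions based on a typical Scopus CSV export):

import pandas as pd

df = pd.read_csv("scopus_cell.csv")                 # hypothetical export file
df = df[df["Document Type"] == "Article"]           # drop reviews, obituaries etc.
pages = (pd.to_numeric(df["Page end"], errors="coerce")
         - pd.to_numeric(df["Page start"], errors="coerce") + 1)
df = df[pages >= 3]                                 # drop SnapShots and other short items
df["n_authors"] = df["Authors"].str.split(",").str.len()   # author count per paper
print(df.groupby("Year")["n_authors"].median())     # basis for the per-year boxplots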

First off: yes, there are more authors on average for a Cell paper than for a J Cell Biol paper. What is interesting is that both journals had similar numbers of authors when Cell was born (1974), and they crept up together until the early 2000s, when the number of Cell authors kept increasing – or J Cell Biol flattened off, whichever way you look at it.

I think the overall trend to more authors is because understanding biology has increasingly required multiple approaches, and the bar for evidence seems to be getting higher over time. The initial creep to more authors (1974–2000) might be due to a cultural change where people (technicians/students/women) began to get proper credit for their contributions. However, this doesn’t explain the divergence between J Cell Biol and Cell in recent years. One possibility is that Cell takes more non-cell biology papers, and that these papers necessarily have more authors. For example, the polar bear genome was published in Cell (29 authors), and this sort of paper would not appear in J Cell Biol. Another possibility is that J Cell Biol has a shorter and stricter revision procedure, which means that the accumulation of new techniques and new authors over multiple rounds of revision is more limited than it is at Cell. Any other ideas?

I also quickly checked whether more authors means more citations, but found no evidence of such a relationship. For papers published in 2000–2004, the median citation number for papers with 1–10 authors was pretty constant for J Cell Biol. For Cell, these data were noisier: three-author papers tended to be cited a bit more than two-author papers, but four-author papers were lower again.

The number of authors on papers from our lab ranges from 2 to 9, with a median of 3.5. This would put an average paper from our lab in the bottom quartile for JCB and in the lower 10% for Cell in 2013. Ironically, our nine-author paper (an outlier) was published in J Cell Biol. Maybe we need to get more authors on our papers before we can start troubling Cell with our manuscripts…

The Post title is taken from ‘All This and More’ by The Wedding Present from their LP George Best.