Adventures in Code IV: correcting filenames

A large fraction of the time spent on data analysis goes on cleaning, importing and reorganising data – not actually analysing it, but getting it ready to analyse. I’ve been trying to get the idea across to non-coders in the group that strict naming conventions (for example) are important and very helpful to the poor person who has to deal with the data.

missingplot

Things have improved a lot, and datasets that used to take a few hours to clean up are now pretty straightforward. A recent example is shown here. Almost 200 subconditions are plotted out and there is only one missing graph. I suspect the blood sugar levels were getting low in the person generating the data… the cause was a hyphen in the filename instead of an underscore.

These data are read into Igor from CSVs outputted from Imaris. Here comes the problem: the folder and all files within it have the incorrect name.

There are 35 files in each folder, so this clearly needs a computer to fix, even if it were just one folder’s worth at fault. The quickest way is to use the terminal, and there are lots of ways to do it.

Now, as I said, the problem is that the folder name and the filenames both need correcting. Most terminal commands you can quickly find online fail because they try to rename the file and the folder at the same time, and since the folder with the new name doesn’t exist… you get an error.

The solution is to rename the folders first and then the files.


# rename the folders first (note: -maxdepth belongs before the other tests)
find . -maxdepth 2 -type d -name "oldstring*" | while read -r FNAME; do mv "$FNAME" "${FNAME//oldstring/newstring}"; done
# then rename the files inside them
find . -maxdepth 3 -type f -name "oldstring*.csv" | while read -r FNAME; do mv "$FNAME" "${FNAME//oldstring/newstring}"; done

A simple tip, but effective and useful. HT this gist

Part of a series on computers and coding

Tips from the blog X: multi-line commenting in Igor

This is part-tip, part-adventures in code. I found out recently that it is possible to comment out multiple lines of code in Igor and thought I’d put this tip up here.

Multi-line commenting in programming is useful for two reasons:

  1. writing comments (instructions, guidance) that last more than one line
  2. the ability to temporarily remove a block of code while testing

Every computer language has a way to comment out at least one line of code.

In Igor this is “//”, which comments out the whole line, but no more.

ipcomment1

This is the same as in ImageJ macro language.

ijcomment1

Now, commenting out whole sections in FIJI/ImageJ is easy: insert “/*” where you want the comment to start and “*/” where it ends, multiple lines later.

ijcomment2

I didn’t think this syntax was available in Igor, and it isn’t really. I was manually adding “//” for each line I wanted to remove, which was annoying. It turns out that you can use Edit > Commentize to add “//” to the start of all selected lines. The keyboard shortcut in IP7 is Cmd-/. You can reverse the process with Edit > Decommentize or Cmd-\.

ipcomment2

There is actually another way: Igor can conditionally compile code. This is useful if, for example, you write code for both Igor 7 and Igor 6 – you can have IP7-only commands compiled only when the user is running IP7. The same logic can be used to comment out code, as follows.

ipcomment3

The condition #if 0 is never satisfied, so the code does not compile. The equivalent statement for IP7-specific compilation is “#if igorversion()>=7”.
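
To illustrate the syntax, here is a minimal sketch of how these directives look in a procedure file (the comments are placeholders for whatever code you want to exclude or include):

#if 0
	// everything between #if 0 and #endif is ignored by the compiler,
	// so it behaves like a block comment - handy for parking code while testing
#endif

#if igorversion() >= 7
	// code here is only compiled when running Igor 7 or later
#endif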

So there you have it, two ways to comment out code in Igor. These tips were from IgorExchange.

If you want to read more about commenting in different languages and the origins of comments, read here.

This post is part of a series of tips.

Calendars and Clocks

This is a quick post about the punch card feature on GitHub. This is available from Graphs within each repo and is also directly accessible via the API.

I was looking at the punch cards for two of my projects: one is work related and the other is more of a hobby. The punch cards were different (the work one had way more commits, 99, than the hobby, 22). There was an interesting pattern to them. Here they are overlaid. Green is the work repo. Purple is the hobby.

punchcard

It says something about my working day. There are times when I don’t do any committing, i.e. weekends during the day and most early evenings. What was interesting was that I was pretty stringent about doing hobby stuff only at set times: first thing over a coffee, over lunch, or in the evenings.

As self analysis goes, this is pretty lightweight compared to this terrifying post by Stephen Wolfram.

The post title is taken from “Calendars and Clocks” from The Coral’s debut LP

The Digital Cell: Getting started with IgorPRO

This post follows on from “Getting Started“.

In the lab we use IgorPRO for pretty much everything. We have many analysis routines that run in Igor, we have scripts for processing microscope metadata etc, and we use it for generating all figures for our papers. Even so, people in the lab engage with it to varying extents. The main battle is that the use of Excel is pretty ubiquitous.

I am currently working on getting more people in the lab started with using Igor. I’ve found that everyone is keen to learn. The approach so far has been workshops to go through the basics. This post accompanies the first workshop, which is coupled to the first few pages of the Manual. If you’re interested in using Igor read on… otherwise you can skip to the part where I explain why I don’t want people in the lab to use Excel.

IgorPro is very powerful and the learning curve is steep, but the investment is worth it.

WaveMetrics_IGOR_Pro_Logo

These are some of the things that Igor can do:

  • Publication-quality graphics
  • High-speed data display
  • Ability to handle large data sets
  • Curve-fitting, Fourier transforms, smoothing, statistics, and other data analysis
  • Waveform arithmetic
  • Matrix math
  • Image display and processing
  • Combination graphical and command-line user interface
  • Automation and data processing via a built-in programming environment
  • Extensibility through modules written in the C and C++ languages

You can even play games in it!

The basics

The first thing to learn about is the objects in the Igor environment and how they work. There are four basic objects that all Igor users will encounter straight away.

  • Waves
  • Graphs
  • Tables
  • Layouts

All data is stored as waveforms (or waves for short). Waves can be displayed in graphs or tables. Graphs and tables can be placed in a Layout. This is basically how you make a figure.
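
To get a flavour of how these objects fit together, here is a minimal sketch that could be typed into the command window (the wave name is made up, and the graph is assumed to get the default name Graph0):

Make/O/N=100 myWave = sin(p / 10)	// a wave holding 100 points of data
Edit myWave				// show the wave in a table
Display myWave				// show the wave in a graph
NewLayout				// make a layout...
AppendLayoutObject graph Graph0		// ...and add the graph to it - the beginning of a figure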

The next things to check out are the command window (which displays the history), the data browser and the procedure window.

Essential IgorPro

  • Tables are not spreadsheets! This is the most important thing to understand. Tables are just a way of displaying a wave. They may look like a spreadsheet, but they are not.
  • Igor is case insensitive.
  • Spaces. Igor can handle spaces in the names of objects, but IMO they are best avoided.
  • Igor is 0-based, not 1-based (see the short sketch after this list).
  • Logical naming and logical thought – beginners struggle with this and it’s difficult to get right when you are working on a project, but consistent naming of objects makes life easier.
  • Programming versus not programming – you can get a long way without programming, but at some point it will be necessary and it will save you a lot of time.
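
A quick sketch of the 0-based (and case-insensitive) behaviour, using a throwaway wave:

Make/O/N=5 testWave = p		// p is the point index, so testWave holds 0, 1, 2, 3, 4
Print testwave[0]		// prints 0 - the first point is point 0, and the case of the name doesn't matter
Print testWave[4]		// prints 4 - the last point of a 5-point wave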

Pretty soon, you will go beyond the four basic objects and encounter other things. These include: Numeric and string variables, Data folders, Notebooks, Control panels, 3D plots – a.k.a. gizmo, Procedures.

Getting started guide

Why don’t we use Excel?

  • Excel can’t make high quality graphics for publication.
    • We do that in Igor.
    • So any effort in Excel is a waste of time.
  • Excel is error-prone.
    • Too easy for mistakes to be introduced.
    • Not auditable. Tough/impossible to find mistakes.
    • Igor has a history window that allows us to see what has happened.
  • Most people don’t know how to use it properly.
  • Not good for biological data – Transcription factor Oct4 gets converted to a date.
  • Limited to 1048576 rows and 16384 columns.
  • Related: useful link describing some spreadsheet crimes of data entry.

But we do use Excel a lot

  • Excel is useful for quick calculations and for preparing simple charts to show at lab meeting.
  • Same way that Powerpoint is OK to do rough figures for lab meeting.
  • But neither are publication-quality.
  • We do use Excel for Tracking Tables, Databases(!) etc.

The transition is tough, but worth it

Writing formulae in Excel is straightforward, and the first thing you will find is that to achieve the same thing in Igor is more complicated. For example, working out the mean for each row in an array (A1:Y20) in Excel would mean typing =AVERAGE(A1:Y1) in cell Z1 and copying this cell down to Z20. Done. In Igor there are several ways to do this, which itself can be unnerving. One way is to use the Waves Average panel. You need to know how this works to get it to do what you want.
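
For comparison, here is a minimal sketch of one way to do the row-mean example in Igor, assuming the 20 × 25 block of numbers has been loaded as a 2D wave called dataMat (the name is made up):

// row means of a 2D wave: sum along each row and divide by the number of columns (25 here)
MatrixOp/O rowMeans = sumRows(dataMat) / 25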

But before you turn back, thinking I’ll just do this in Excel and then import it… imagine you now want to subtract a baseline value from the data, scale it and then average. Imagine that your data are sampled at different intervals. How would you do that? Dealing with those simple cases in Excel is difficult-to-impossible. In Igor, it’s straightforward.

Resources for learning more Igor:

  • Igor Help – fantastic resource containing the manual and more. Access via Help or by typing ShowHelpTopic “thing I want to search for”.
  • Igor Manual – This PDF is available online or in Applications/Igor Pro/Manual. This used to be distributed as a hard copy… it is now ~3000 pages.
  • Guided Tour of IgorPro – this is a great way to start and will form the basis of the workshops.
  • Demos – Igor comes packed with Demos for most things from simple to advanced applications.
  • IgorExchange – Lots of code snippets and a forum to ask for advice or search for past answers.
  • Igor Tips – I’ve honestly never used these: you can turn on tips in Igor, which reveal help on mouseover.
  • Igor mailing list – topics discussed here are pretty advanced.
  • Introduction to IgorPRO from Payam Minoofar is good. A faster start to learning to program than reading the manual.
  • Hands-on experience!

Part of a series on the future of cell biology in quantitative terms.

The Digital Cell: Getting Started

More on the theme of “The Digital Cell“: using quantitative, computational approaches in cell biology.

So you want to get started? Well, the short version of this post is:

Find something that you need to automate and get going!

Programming

http://www.instruction-manuals.co.uk/imageIM/four/seven/bbc.gif

I make no claim to be a computer wizard. My first taste of programming was the same as that of anyone who went to school in the UK in the 1980s: BBC BASIC. Although my programming only went as far as copying a few examples from the book (right), this experience definitely reduced the “fear of the command line”. My next encounter with coding was learning HTML as an undergraduate. It was not until I was a postdoc that I realised I needed to write scripts in order to get computers to do what I wanted them to do for my research.

Image analysis

I work in cell biology. My work involves a lot of microscopy. From the start, I used computer-based methods to quantify images. My first paper mentions quantifying images, but it wasn’t until I was a PhD student that I first used NIH Image (as it was called then) to extract quantitative information from confocal micrographs. I was also introduced to IgorPRO (version 3!) as a PhD student, but did no programming. That came later. As a postdoc, we used Scanalytics’ IPLab and Igor (as well as a bit of ImageJ as it had become). IPLab had an easy scripting language and it was in this program that I learned to write macros for analysis. At this time there were people in the lab who were writing software in IgorPro and MATLAB. While I didn’t pick up programming in IgorPRO or MATLAB then, it made me realise what was possible.

When I started my own group I discovered that IPLab had been acquired by BD Biosciences and then stripped out. I had hundreds of useless scripts and needed a new solution. ImageJ had improved enormously by this time and so this became our default image analysis program. The first data analysis package I bought was IgorPro (version 6) and I have stuck with it since then. In a future post, I will probably return to whether or not this was a good path.

Getting started with programming

Around 2009, I was still unable to program properly. I needed a macro for baseline subtraction – something really simple – and realised I didn’t know how to do it. We didn’t have just one or two traces to modify, we had hundreds. This was simply not possible by hand. It was this situation that made me realise I needed to learn to program.
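
To give an idea of how small that kind of macro can be – this is just a sketch with made-up wave names, not the code I wrote back then – a baseline subtraction over many traces might look like this in Igor:

Function SubtractBaselines()
	String listOfWaves = WaveList("trace_*", ";", "")	// every wave named trace_* (an assumed naming scheme)
	Variable nWaves = ItemsInList(listOfWaves)
	Variable i, baseline
	
	for(i = 0; i < nWaves; i += 1)
		Wave w = $StringFromList(i, listOfWaves)
		baseline = mean(w, 0, 9)	// baseline = mean of the first ten points (assumes default x scaling)
		w -= baseline			// subtract it from the whole trace
	endfor
End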

…having a concrete problem that is impossible to crack any other way is the best motivator for learning to program.

This might seem obvious, but having a concrete problem that is impossible to crack any other way is the best motivator for learning to program. I know many people who have decided they “want to learn to code” or they are “going to learn to use R”. This approach rarely works. Sitting down and learning this stuff without sufficient motivation is really tough. So I would advise someone wanting to learn programming to find something that needs automation and just get going. Just get something to work!

Don’t worry (initially) about any of the following:

  • What program/language to use – as long as it is possible, just pick something and do it
  • If your code is ugly or embarrassing to show to an expert – as long as it runs, it doesn’t matter
  • About copy-and-pasting from examples – it’s OK as long as you take time to understand what you are doing, this is a quick way to make progress. Resources such as stackoverflow are excellent for this
  • Bugs – you can squish them, they will frustrate you, but you might need some…
  • Help – ask for help. Online forums are great, experts love showing off their knowledge. If you have local expertise, even better!

Once you have written something (and it works)… congratulations, you are a computer programmer!

IMG_2206

Seriously, that is all there is to it. OK, it’s a long way to being a good programmer or even a competent one, but you have made a start. Like Obi-Wan Kenobi says: you’ve taken your first step into a larger world.

So how do you get started with an environment like IgorPro? This will be the topic for next time.

Part of a series on the future of cell biology in quantitative terms.

The Digital Cell: Workflow

The future of cell biology, even for small labs, is quantitative and computational. What does this mean and what should it look like?

My group is not there yet, but in this post I’ll describe where we are heading. The graphic below shows my current view of the ideal workflow for my lab.

Workflow

The graphic is pretty self-explanatory, but to walk you through:

  • A lab member sets up a microscopy experiment. We have standardised procedures/protocols in a lab manual and systems are in place so that reagents are catalogued to minimise error.
  • Data goes straight from the microscope to the server (and is backed up). Images and metadata are held in a database, and object identifiers are used for referencing in electronic lab notebooks (and for auditing).
  • Analysis of the data happens with varying degrees of human intervention. The outputs of all analyses are processed automatically. Code for doing these steps is under version control using git (GitHub).
  • Post-analysis, the processed outputs contain markers for QC and error checking. We can also trace back to the original data and check the analysis. Development of code happens here too, speeding up slow procedures via “software engineering”.
  • Figures are generated using scripts which are linked to the original data with an auditable record of any modification to the image.
  • Project management, particularly of paper writing, is via Trello. Writing papers is done using collaborative tools. Everything is synchronised to enable working from any location.
  • This is just an overview and some details are missing, e.g. backup of analyses is done locally and via the server.

Just to reiterate: my team is not at this point yet, but we are reasonably close. We have not yet implemented three of these things properly in my group, but in our latest project (via collaboration) the workflow has worked as described above.

The output is a manuscript! In the future I can see that publication of a paper as a condensed report will give way to making the data, scripts and analysis available, together with a written summary. This workflow is designed to allow this to happen easily, but this is the topic for another post.

Part of a series on the future of cell biology in quantitative terms.

The Digital Cell

If you are a cell biologist, you will have noticed the change in emphasis in our field.

At one time, cell biology papers were – in the main – qualitative. Micrographs of “representative cells”, western blots of a “typical experiment”… This descriptive style gave way to more quantitative approaches, converting observations into numbers that could be objectively assessed. More recently, as technology advanced, computing power increased and data sets became more complex, we have seen larger scale analysis, modelling, and automation begin to take centre stage.

This change in emphasis encompasses several areas including (in no particular order):

  • Statistical analysis
  • Image analysis
  • Programming
  • Automation allowing analysis at scale
  • Reproducibility
  • Version control
  • Data storage, archiving and accessing large datasets
  • Electronic lab notebooks
  • Computer vision and machine learning
  • Prospective and retrospective modelling
  • Mathematics and physics

The application of these areas is not new to biology and has been worked on extensively for years in certain areas. Perhaps most obviously by groups that identified themselves as “systems biologists”, “computational biologists”, and people working on large-scale cell biology projects. My feeling is that these methods have now permeated mainstream (read: small-scale) cell biology to such an extent that any groups that want to do cell biology in the future have to adapt in order to survive. It will change the skills that we look for when recruiting and it will shape the cell biologists of the future. Other fields such as biophysics and neuroscience are further through this change, while others have yet to begin. It is an exciting time to be a biologist.

I’m planning to post occasionally about the way that our cell biology research group is working on these issues: our solutions and our problems.

Part of a series on the future of cell biology in quantitative terms.

Adventures in code II

I needed to generate a uniform random distribution of points inside a circle and, later, a sphere. This is part of a bigger project, but the code to do this is kind of interesting. There were no solutions available for IgorPro, but StackExchange had plenty of examples in Python and Mathematica. There are many ways to do this. The most popular seems to be to generate a uniform random set of points in a square or cube and then discard those that lie further than the radius from the origin. I didn’t like this idea, because I needed to extend it to spheroids eventually, and as I saw it the computation time saved was minimal.

Here is the version for points in a circle (radius = 1, centred on the origin).

circleCode

This gives a nice set of points, 1000 shown here.

pointsCircle

And here is the version inside a sphere. This code has variable radius for the sphere.

sphereCode

The three waves (xw,yw,zw) can be concatenated and displayed in a Gizmo. The code just plots out the three views.

pointsSphere

My code uses var + enoise(var) to get a random variable from 0 to var. This is because enoise goes from -var to +var. There is an interesting discussion about whether this is a truly flat PDF here.
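
For anyone wanting to try this, here is a minimal sketch of one way to do the circle case (this is the idea rather than a copy of the code in the screenshot above): pick an angle uniformly from 0 to 2π and a radius of sqrt(u), with u uniform on 0–1, so that points are uniform by area. For the sphere, the radius becomes u^(1/3) and the cosine of the polar angle is drawn uniformly from -1 to 1.

Function MakePointsInCircle(nPoints)
	Variable nPoints
	Make/O/D/N=(nPoints) xw, yw
	Make/D/FREE/N=(nPoints) rr, theta
	theta = pi + enoise(pi)			// uniform on 0 to 2*pi
	rr = sqrt(0.5 + enoise(0.5))		// sqrt of a uniform variable on 0 to 1 (radius = 1)
	xw = rr * cos(theta)
	yw = rr * sin(theta)
End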

This is part of a bigger project where I’ve had invaluable help from Tom Honnor from Statistics.

This post is part of a series on esoterica in computer programming.

Adventures in code

An occasional series in esoteric programming issues.

As part of a larger analysis project I needed to implement a short program to determine the closest distance of two line segments in 3D space. This will be used to sort out which segments to compare… like I say, part of a bigger project. The best method to do this is to find the closest approach of one segment to the other when the other is represented as an infinite line. You can then check whether that closest point lies beyond the ends of the segment; if it does, you use the limits of the segment to calculate the distance. There’s a discussion on stackoverflow here. The solutions point to one in C++ and one in MATLAB. The C++ version is easiest to port to Igor due to the similarity of the languages, but the explanation of the MATLAB code was more approachable. So I ported that to Igor to figure out how it works.

The MATLAB version is:

>> P = [-0.43256      -1.6656      0.12533]
P =
   -0.4326   -1.6656    0.1253
>> Q = [0.28768      -1.1465       1.1909]
Q =
    0.2877   -1.1465    1.1909
>> R = [1.1892    -0.037633      0.32729]
R =
    1.1892   -0.0376    0.3273
>> S = [0.17464     -0.18671      0.72579]
S =
    0.1746   -0.1867    0.7258
>> N = null(P-Q)
N =
   -0.3743   -0.7683
    0.9078   -0.1893
   -0.1893    0.6115
>> r = (R-P)*N
r =
    0.8327   -1.4306
>> s = (S-P)*N
s =
    1.0016   -0.3792
>> n = (s - r)*[0 -1;1 0];
>> n = n/norm(n);
>> n
n =
    0.9873   -0.1587
>> d = dot(n,r)
d =
    1.0491
>> d = dot(n,s)
d =
    1.0491
>> v = dot(s-r,d*n-r)/dot(s-r,s-r)
v =
    1.2024
>> u = (Q-P)'\((S - (S*N)*N') - P)'
u =
    0.9590
>> P + u*(Q-P)
ans =
    0.2582   -1.1678    1.1473
>> norm(P + u*(Q-P) - S)
ans =
    1.0710

and in IgorPro:

Function MakeVectors()
	Make/O/D/N=(1,3) matP={{-0.43256},{-1.6656},{0.12533}}
	Make/O/D/N=(1,3) matQ={{0.28768},{-1.1465},{1.1909}}
	Make/O/D/N=(1,3) matR={{1.1892},{-0.037633},{0.32729}}
	Make/O/D/N=(1,3) matS={{0.17464},{-0.18671},{0.72579}}
End

Function DoCalcs()
	WAVE matP,matQ,matR,matS
	// find the null space of (P-Q) - the equivalent of MATLAB's null() - via SVD
	MatrixOp/O tempMat = matP - matQ
	MatrixSVD tempMat
	Make/O/D/N=(3,2) matN
	Wave M_VT
	matN = M_VT[p][q+1]
	MatrixOp/O tempMat2 = (matR - matP)
	MatrixMultiply tempMat2, matN
	Wave M_product
	Duplicate/O M_product, mat_r
	MatrixOp/O tempMat2 = (matS - matP)
	MatrixMultiply tempMat2, matN
	Duplicate/O M_product, mat_s
	// 2x2 rotation matrix - the [0 -1; 1 0] in the MATLAB version
	Make/O/D/N=(2,2) MatUnit
	matUnit = {{0,1},{-1,0}}
	MatrixOp/O tempMat2 = (mat_s - mat_r)
	MatrixMultiply tempMat2,MatUnit
	Duplicate/O M_Product, Mat_n
	Variable nn
	nn = norm(mat_n)
	MatrixOP/O new_n = mat_n / nn
	//new_n is now a vector with unit length
	Variable dd
	dd = MatrixDot(new_n,mat_r)
	//print dd
	//dd = MatrixDot(new_n,mat_s)
	//print dd
	dd = abs(dd)
	// now find v
	// v = dot(s-r,d*n-r)/dot(s-r,s-r)
	variable vv
	MatrixOp/O mat_s_r = mat_s - mat_r
	MatrixOp/O tempmat2 = dd * mat_n - mat_r
	vv = MatrixDot(mat_s_r,tempmat2) / MatrixDot(mat_s_r,mat_s_r)
	//print vv
	//because vv > 1 then closest post is s (because rs(1) = s) and therefore closest point on RS to infinite line PQ is S
	//what about the point on PQ is this also outside the segment?
	// u = (Q-P)'\((S - (S*N)*N') - P)'
	variable uu
	MatrixOp/O matQ_P = matQ - matP
	MatrixTranspose matQ_P
	//MatrixOP/O tempMat2 = ((matS - (matS * matN) * MatrixTranspose(MatN)) - MatrixTranspose(matP))
	Duplicate/O MatN, matNprime
	MatrixTranspose matNprime
	MatrixMultiply matS, matN
	Duplicate/O M_Product, matSN
	MatrixMultiply M_Product, matNprime
	MatrixOP/O tempMat2 = ((matS - M_product) - matP)
	MatrixTranspose tempMat2
	MatrixLLS matQ_P tempMat2
	Wave M_B
	uu = M_B[0]
	// find point on PQ that is closest to RS
	// P + u*(Q-P)
	MatrixOp/O matQ_P = matQ - matP
	MatrixOp/O matPoint = MatP + (uu * MatQ_P)
	MatrixOP/O distpoint = matPoint - matS
	Variable dist
	dist = norm(distpoint)
	Print dist
End

The sticking points were finding the Igor equivalents of

  • null()
  • norm()
  • dot()
  • \ otherwise known as mldivide

Which are:

  • MatrixSVD (answer is in the final two columns of wave M_VT)
  • norm()
  • MatrixDot()
  • MatrixLLS

MatrixLLS wouldn’t accept a mix of single-precision and double-precision waves, so this needed to be factored into the code.

As you can see, the Igor code is much longer. Overall, I think MATLAB handles matrix math better than Igor; it is certainly easier to write. I suspect there is a series of Igor operations that could do what I am trying to do here, but this was an exercise in direct porting.

More work is needed to condense this down and also deal with every possible case. Then it needs to be incorporated into the bigger program. SIMPLE! Anyway, hope this helps somebody.

The post title is taken from the band Adventures In Stereo.

The Great Curve: Citation distributions

This post follows on from a previous post on citation distributions and the wrongness of Impact Factor.

Stephen Curry had previously made the call that journals should “show us the data” that underlie the much-maligned Journal Impact Factor (JIF). However, this call made me wonder what “showing us the data” would look like and how journals might do it.

What citation distribution should we look at? The JIF looks at citations in a year to articles published in the preceding 2 years. This captures a period in a paper’s life, but it misses “slow burner” papers and also underestimates the impact of papers that just keep generating citations long after publication. I wrote a quick bit of code that would look at a decade’s worth of papers at one journal to see what happened to them as yearly cohorts over that decade. I picked EMBO J to look at since they have actually published their own citation distribution, and also they appear willing to engage with more transparency around scientific publication. Note that, when they published their distribution, it considered citations to papers via a JIF-style window over 5 years.

I pulled 4082 papers with a publication date of 2004–2014 from Web of Science (the search was limited to Articles), along with data on the citations that occurred per year. I generated histograms to look at the distribution of citations for each year. Papers published in 2004 are in the top row and papers from 2014 are in the bottom row. The first histogram in each row shows citations in the same year as publication; the next column shows the following year, and so on. The number of papers is on the y-axis and the number of citations on the x-axis. Sorry for the lack of labelling! My excuse is that my code made a plot with “subwindows”, which I’m not too familiar with.

allPlot

What is interesting is that the distribution changes over time:

  • In the year of publication, most papers are not cited at all. This is expected: there is a lag before papers that could cite the work are themselves published, and papers that come out later in the year have less time to pick up a same-year citation.
  • The following year most papers are picking up citations: the distribution moves rightwards.
  • Over the next few years the distribution relaxes back leftwards as the citations die away.
  • The distributions are always skewed. Few papers get loads of citations, most get very few.

Although I truncated the x-axis at 40 citations, there are a handful of papers that are picking up >40 cites per year up to 10 years after publication – clearly these are very useful papers!

To summarise these distributions I generated the median (and the mean – I know, I know) number of citations for each publication year-citation year combination and made plots.
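
For anyone curious, summarising a cohort like this takes only a few lines of Igor. The sketch below is not the code I used; it assumes the counts for one publication year sit in a 2D wave called citMat, with a row per paper and a column per citation year:

Function SummariseCohort()
	Wave citMat				// rows = papers, columns = citation years (assumed layout)
	Variable nYears = DimSize(citMat, 1)
	Variable nPapers = DimSize(citMat, 0)
	Make/O/D/N=(nYears) meanCites, medianCites
	Variable i
	
	for(i = 0; i < nYears; i += 1)
		Duplicate/O/R=[][i,i] citMat, thisYear	// pull out one citation year
		Redimension/N=(nPapers) thisYear	// make it a simple 1D wave
		meanCites[i] = mean(thisYear)
		// median by sorting a copy, to avoid relying on any particular stats operation
		Sort thisYear, thisYear
		if(mod(nPapers, 2) == 1)
			medianCites[i] = thisYear[(nPapers - 1) / 2]
		else
			medianCites[i] = 0.5 * (thisYear[nPapers / 2 - 1] + thisYear[nPapers / 2])
		endif
	endfor
	KillWaves/Z thisYear
End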

citedist

The mean is shown on the left and median on the right. The layout is the same as in the multi-histogram plot above.

Follow along a row and you can again see how the cohort of papers attracts citations, peaks and then dies away. You can also see that some years were better than others in terms of citations, 2004 and 2005 were good years, 2007 was not so good. It is very difficult, if not impossible, to judge how 2013 and 2014 papers will fare into the future.

What was the point of all this? Well, I think showing the citation data that underlie the JIF is a good start. However, citation data are more nuanced than the JIF allows for. So being able to choose how we look at the citations is important to understand how a journal performs. Having some kind of widget that allows one to select the year(s) of papers to look at and the year(s) that the citations came from would be perfect, but this is beyond me. Otherwise, journals would probably elect to show us a distribution for a golden year (like 2004 in this case), or pick a window for comparison that looked highly favourable.

Finally, I think journals are unlikely to provide this kind of analysis. They should, if only because it is a chance for a journal to show how it publishes many papers that are really useful to the community. Anyway, maybe they don’t have to… What this quick analysis shows is that it can be (fairly) easily harvested and displayed. We could crowdsource this analysis using standardised code.

Below is the code that I used – it’s a bit rough and would need some work before it could be used generally. It also uses a 2D filtering method that was posted on IgorExchange by John Weeks.
cdcode

The post title is taken from “The Great Curve” by Talking Heads from their classic LP Remain in Light.