Plotlines: the story behind the graph

On a whim I posted a plot on Twitter. It shows a marathon training schedule. This post explains the story behind the graph.

I downloaded a few different 17-week marathon training schedules. Most were in imperial measurement and/or were written for time at a certain pace, e.g. 30 min Easy Run etc. I wanted to convert one of these schedules into a proper plan where I input my pace and get an idea of the distance I need to run (in metric). This means I can pick routes to run each day without having to think too much about it.

Calculating the running paces was simple using Jack Daniels’ VDot calculator, verifying the predicted paces against my running database. I constructed a spreadsheet from the plan and then did the calculations to get the distances out. Once this was done, I wondered what the rationale behind the schedule was, and the best way to see that was to plot it out.
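The conversion itself is just time divided by pace. If you wanted to do it in R, it might look something like this (the paces and the example session below are made up for illustration, not my actual numbers):

# running paces in min per km for each session type (illustrative values only)
paces <- c(easy = 5.5, marathon = 4.7, threshold = 4.4, interval = 4.0)
# one session from a time-based plan: 10 min easy + 20 min threshold + 10 min easy
session <- data.frame(type = c("easy", "threshold", "easy"),
                      minutes = c(10, 20, 10),
                      stringsAsFactors = FALSE)
session$km <- session$minutes / paces[session$type]  # distance covered in each block
sum(session$km)                                      # total distance, for route planning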

No colour-coding

From this plot, the way that the long run on each Sunday is extended or tapered is reasonably clear. However, I was wondering how intense each of the runs would be. Running 5K at threshold is more intense than a 10K easy run. To look at this I just took the average pace for the session. This doesn’t quite capture intensity, because a 10 min easy run + 2 intervals + 10 min easy run is not as intense as doing 4 intervals, yet the average pace would be similar. But it is close enough. I used a colourscale called VioletOrangeYellow and the result was quite intuitive.

A menu of pain – marathon training schedule

The shorter runs are organised in blocks of intensity while the long runs are about building endurance. From what I understand, the blocks are to do with adapting to the stresses of increasing running load/intensity.

Feedback on the plot was good: runners liked it, non-runners thought it was psychotic.

Small Talk: How big is your lab?

I really dislike being asked “how big is your lab?”. The question usually arises at scientific meetings when you are chatting to someone during a break. Small talk can lead to some banal questions being asked, and that’s fine, but when this question is asked seriously, the person asking really just wants to compare themselves to you in some way. This is one reason why I dislike being asked “how big is your lab?”.

The other reason I don’t like the question is that it can be difficult to answer. I don’t mean that I have so many people in my group that I can’t possibly count them. No, I mean that it can be difficult to give an accurate answer. There’s perhaps a student in the group who is currently writing up, or possibly they’ve handed in their thesis and are awaiting their viva – do they count towards the tally? They are in your lab but they’re not in your lab. Perhaps you jointly supervise someone, or maybe there is someone who is away working in another lab somewhere. I’m guilty of overthinking this, or at least of fretting about giving an incorrect answer. Whatever the circumstance, I think that the size of most research groups is not very stable over time, so I dislike the question because it’s difficult to answer accurately.

I looked at group size recently because the lab had surpassed the milestone of having 50 all-time members and I wanted to see how the group size had varied over time.

Timeline of people in the lab

The first timeline shows the arrival and departure of lab members over time. The role of each person is colour-coded as indicated. Note that some people start in one role and get upgraded: PG to PhD, PhD to post-doc (PDRA). So what is the group size over time?

Group size over time

It turns out that we peaked this year with a team size of 12. The smallest size (besides the period when I started out, when I was on my own!) was at the end of 2012, when I prepared to move the lab to a different university. What has the make-up of the lab been during this time?

Constitution of the lab over time

In this last plot the fraction of the team that are PhD, post-doc etc. is shown over time. This plot is interesting because I can see that it was two years before a PhD student joined the group and also how the lab has become post-doc-heavy in the last 18 months.

So what is the answer to “how big is your lab?”. Well, take your pick. Right now it is 11, with someone having just joined this week. Over the last year it has averaged just over 10. Over the last five years it has been 8 to 9. It’s still not an easy question to answer, even if you can see all the data.

Methods: I have been trying to use R for this type of post, so that sharing the code is more useful, but I drew a blank with this one. I found several tools to plot the first timeline (timevis and vistime). For the integration and breakdown plots, I struggled… I knew exactly how to make those plots in Igor, so that’s what I did. All that was required was a list of the people, their role, their start and end dates, and a few lines of code. I keep a record of this as previously mentioned.
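For anyone who wants to try the group-size calculation in R, a minimal version might look something like this (the names and dates are invented, and the data frame is just one way the records might be kept – this is not the Igor code I actually used):

# one row per lab member with start and end dates (invented example data)
members <- data.frame(
  name  = c("A", "B", "C"),
  start = as.Date(c("2011-03-01", "2012-09-01", "2013-01-15")),
  end   = as.Date(c("2014-08-31", "2016-09-30", "2017-06-30"))
)
days <- seq(min(members$start), max(members$end), by = "day")
# group size on a given day = number of members whose start-end interval covers it
size <- sapply(days, function(d) sum(members$start <= d & members$end >= d))
plot(days, size, type = "s", xlab = "", ylab = "Group size")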

The post title comes from “Small Talk” a track by American Culture on a Split 7″ with Boyracer on Emotional Response Records.

My Blank Pages VI: Programming in Igor Pro

It has been a long time since I wrote a book review.

A few months ago I read on IgorExchange that Martin Schmid had written a book about programming Igor, so I snapped up a copy. I’m a competent Igor programmer, but I was hoping that this book would be useful for lab members who want to learn.

Learning Igor – like learning most IDEs or programming languages – is tough going. There’s a booklet from WaveMetrics (the company that sells Igor Pro) called Getting Started, which is really good. There are a few other guides on the web (Payam’s guide, Thomas Braun’s coding conventions, quantixed’s own translator), but other resources are pretty scarce. The Igor Manual itself is excellent but it’s many, many pages long and is only meant to be consulted. So I was intrigued to see whether Martin Schmid’s book would fill the gap between Getting Started and more advanced guides.

What makes Igor Pro so fantastic is the way that you can use it for so many different things: image processing, statistics, graphing, curve fitting, instrument control and so on. Part of the challenge of writing a book on Igor Programming is deciding what to cover. Schmid deals with this by covering basic programming and core-intermediate topics such as dialogs, loops, string magic etc. The book stops short of any specialised applications. So it’s a really useful intermediate programming guide. It’s a great little book and is recommended for those who want to dig further after doing the Getting Started exercises.

I knew I would learn something from the book because there are always alternative ways to do things in Igor: things that you didn’t know about, or little tricks to do stuff faster. What surprised me was that the first thing mentioned in the book was new to me. The author favours module-static programming. All of my Igor programming has been done in the global pragma, and so far I have avoided this more C-like way of encapsulating the programs that I write. Module-static works well because it eliminates naming conflicts. I have dealt with name conflicts by using static functions which are called from the top of the stack, where the top has a unique name (arguably this is much the same as module-static, but not identical). As the Igor Manual says, “this gets tedious after a while”, and that’s true. In my defence, name conflicts are generally not a problem for the way I work because I favour a reproducible approach: a new experiment is started – one user-written ipf is opened – and the code is run. This means naming conflicts are minimised. The book has actually convinced me that module-static is a good thing, especially since my Igor code is now deployed around the lab and naming conflicts could easily become a problem. It’s an advanced programming technique, but it is dealt with early by the author and it kind of works. After this, more basic programming topics are covered in depth.

There’s always room for improvement: there are several example programs at the back which need to be rekeyed to run, since this is a paper book and no electronic version is available AFAIK. The author has put one up here to save rekeying and another here, but otherwise you need to type in the examples to see what will happen. This is too long-winded. I’ve been spoiled reading texts about R where the examples can all be run from a markdown file inside RStudio. It would’ve been nice if the code was made available for this book. I don’t think it would compromise the value of the book since it is the text that is most valuable.

The book is available at Amazon for £7.99 at the time of writing.

My Blank Pages is a track by Velvet Crush. This is an occasional series of book reviews.

Cloud Eleven: A cloud-based code sharing solution for IgorPro

This post is something of a “how to” guide. The problem is how can you share code with a small team and keep it up-to-date?

For ImageJ, the solution is simple. You can make an ImageJ update site and then push any updated code to the user when they start up ImageJ. For IgorPro, there is no equivalent. Typically I send ipf files to someone and they run the code, but I have to resend them whenever there’s an update. This can cause confusion over which is the latest version.

I’ve tried a bunch of things such as versioning the code (this at least tells you if the person is running an out-of-date version). I also put the code up on GitHub and tell people to pull down the latest version, but again this doesn’t work well. The topic of how to share code comes up perennially on the Igor mailing list and on IgorExchange, so it’s clearly something that people struggle with. I’ve found the solutions offered to be a bit daunting.

My solution is detailed here with some code to make it run.

The details

It is possible to use aliases (shortcuts on Windows) in the user’s WaveMetrics folders to point to external files. Items in the Igor Procedures directory get loaded when you start Igor. Items in User Procedures can get loaded optionally. These aliases sit in the user’s WaveMetrics folder (the exact location depends on the Igor version), which means that the program itself can get updated without overwriting these files.

So, if aliases are created here that point to a cloud-based repo of Igor code, they can be used to:

  1. Optionally load code as the user needs it. Because the code sits in the cloud it can be updated and instantly used by everyone.
  2. Force Igor to load a bit of code to make a little menu item so that the user can pick the code they want to load.

To get this working, I made use of Ryotako’s excellent menu loader which was written for loading the “hidden” WaveMetrics procedures. I created a version like that one which makes a menu of our shared code in alphabetical order. I then made a version that allows the code to be grouped by purpose. People in my group told me they prefer this version. Code just needs to be organised in folders for it to work.

The IgorDistro.ipf just needs to be placed in a folder called IP somewhere in a share that users have access to (unless people are contributing to code development, you can make the share read-only). In the same directory that IP sits in, place a folder called UP with all of your code organised into folders. Aliases need to be created that point the Igor Procedures folder to IP and the User Procedures folder to UP. That’s it! I’ve tested it on IP7 and IP8, on Windows and Mac, using a shared Dropbox as the cloud repo.

Obviously, the correct IgorPro licences need to be in place to share the code with multiple users.

The post title comes from the pseudonymous LP by Cloud Eleven. A great debut album. The track “Wish I” is worth the price of the record alone (assuming people still buy records).

Turn That Heartbeat Over Again: comparing wrist and chest-strap HRM

As a geek, the added bonus of exercise is the fun that you can have with the data you’ve generated. A recent conversation on Twitter about the accuracy of wrist-based HRMs got me thinking… how does a wrist-based HRM compare with a traditional chest-strap HRM? Conventional wisdom says that the chest-strap is more accurate, but my own experience of chest-strap HRMs is that they are a bit unreliable. Time to put it to the test.

I have a Garmin Fēnix 5, which records wrist-based HR, and I have a Garmin chest-strap which uses ANT+ to transmit. I could pick up the ANT+ signal with a Garmin Edge 800, so I could record both datasets simultaneously on the same bike ride. Both the Fēnix and the Edge can record GPS and time (obviously), allowing accurate registration of the data. I also set both devices to receive cadence data via ANT+ from the same cadence/speed sensor so that I could control for (or at least look at) variability in recordings. I rode for ~1 hr (~32 km) to capture enough data. Obviously this is just one trial, but the data give a feel for the accuracy of the two HRMs. Biking, I figured, was a fair activity since upper body and wrist movement is minimal, meaning that the contacts for both HRMs are more likely to stay in place than if I were running.

I’ll get to heart rate last. First, you can see that the GPS recording is virtually identical between the two units – so that’s a good sign. Second, elevation is pretty similar too. There’s a bit of divergence at the beginning and end of the ride; since those parts are over the same stretch of road, neither device looks totally accurate. I’d be inclined to trust the Fēnix here, since it has a newer altimeter in it. Third, cadence looks accurate. Remember, this data is coming off the same sensor, so any divergence is in how it’s being logged. Finally, heart rate. You can see that at the beginning of the ride, both the wrist-based and chest-strap HRMs struggle. Obviously I have no reference here to know which trace is correct, but the chest-strap recording looks like it has an erroneous low reading for several minutes and is otherwise normal. The wrist-based HRM looks like it reads 120-ish bpm from the start and then snaps into reading the right thing after a while. The chest-strap makes best contact once some perspiration has built up, which might explain its readings. The comparisons are shown below in grey. The correlation is OK but not great. Compared to cadence, the data diverge a lot more, which rules out simple logging differences as the cause.

I found a different story in Smart recording mode on the Fēnix. This is a lower frequency recording mode, which is recommended to preserve battery life for long activities.

So what can we see here? Well, the data from the Fēnix are more patchy, but even so, the data are pretty similar except for heart rate. The Fēnix performs badly here. Again, you can see the drop-out of the chest-strap HRM for a few minutes at the start, but otherwise it seems pretty reliable. The comparison graph for heart rate shows how poorly the wrist-based HRM measures heart rate in this mode.

OK, this is just two rides, for one person – not exactly conclusive but it gives some idea about the data that are captured with each HRM.

Conclusions

Wrist-based HRM is pretty good (at the higher sampling rate only), especially considering how the technology works, and chest-strap HRMs can be uncomfortable to wear, so wrist-based HRM may be all you need. If you are training to heart rate zones, or want the best data, a chest-strap HRM is generally more reliable than a wrist-based one. Neither is very good for very short activities (<15 min).

For nerds only

Comparisons like these are quite easy to do in desktop packages like Rubitrack (which I love) or Ascent or others. However, they tend to mask things like missing data points, they smooth out the data, and getting the plots the way you want is not straightforward. So, I took the original FIT files from each unit and used these for processing. There’s a great package for R called cycleRtools. This worked well except for the Smart recording data, which was sampled irregularly – it turns out the package requires monotonic sampling. For that activity I generated a gpx file and parsed the data in R using XML. I found this snippet on the web (modified slightly).

library(XML)
library(plyr)
filename <- "myfile.gpx"
gpx.raw <- xmlTreeParse(filename, useInternalNodes = TRUE)
rootNode <- xmlRoot(gpx.raw)
gpx.rawlist <- xmlToList(rootNode)$trk
gpx.list <- unlist(gpx.rawlist[names(gpx.rawlist) == "trkseg"], recursive = FALSE)
gpx <- do.call(rbind.fill, lapply(gpx.list, function(x) as.data.frame(t(unlist(x)), stringsAsFactors=F)))
names(gpx) <- c("ele", "time", "temp", "hr", "cad", "lat", "lon")
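One thing to note, which isn’t in the original snippet: after unlist(), every column in gpx is character, so the numeric columns need converting before they can be plotted. Something like:

num <- c("ele", "temp", "hr", "cad", "lat", "lon")
gpx[num] <- lapply(gpx[num], as.numeric)  # values arrive as text from the XML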

Otherwise:

library(cycleRtools)
edge <- as.data.frame(read_fit(file = file.choose(), format = TRUE, CP = NULL, sRPE = NULL))
write.csv(edge, file = "edge.csv", row.names = F)

The resulting data frames could be saved out as csv and read into Igor to make the plots. I wrote a quick function in Igor to resample all datasets at 1 Hz. The plots and layout were generated by hand.
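The resampling was done in Igor, but the same idea in R is only a few lines. A sketch, assuming a data frame like edge above with a timestamp column in seconds and an hr column (the column names are a guess and will depend on the FIT export):

# resample an irregular series onto a regular 1 s grid by linear interpolation
resample_1hz <- function(t, y) {
  grid <- seq(ceiling(min(t)), floor(max(t)), by = 1)
  data.frame(t = grid, y = approx(t, y, xout = grid)$y)
}
# e.g. hr1 <- resample_1hz(edge$timestamp, edge$hr)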

The post title comes from “Turn That Heartbeat Over Again” by Steely Dan from Can’t Buy A Thrill

Adventures in Code VI: debugging and silly mistakes

This deserved a bit of further explanation, due to the stupidity involved.

“Debugging is like being the detective in a crime movie where you are also the murderer.” – Filipe Fortes

My code was giving an unexpected result and I was having a hard time figuring out the problem. The unexpected result was that a resampled set of 2D coordinates was not being rotated randomly. I was fortunate to be able to see this, otherwise I would never have found the bug and would probably have propagated the error to other code projects.

I narrowed down the cause but ended up having to write some short code to check that it really did cause the error.

I was making a rotation matrix and then using it to rotate a 2D coordinate set by matrix multiplication. The angle was randomised in the loop. What could go wrong? I looked at this:

theta = pi*enoise(1)
rotMat = {{cos(theta),-sin(theta)},{sin(theta),cos(theta)}}

and thought “two lines – pah – it can be done in one!”. Since the rotation matrix is four numbers in [-1,1], I thought “I’ll just pick those numbers at random – I just want a random angle, don’t I?”

rotMat = enoise(1)

Why doesn’t an alarm go off when this happens? A flashing sign saying “are you sure about that?”…

My checks showed where a single point at (1,0) ends up after matrix multiplication with this method, compared with where it should end up after a proper rotation.

And it’s so obvious when you’ve seen why. The four numbers in the rotation matrix are, of course, not independent.
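Here is a quick way to see it, sketched in R rather than the original Igor: a proper rotation keeps the point (1,0) on the unit circle, while a matrix of four independent random numbers does not.

set.seed(42)
n <- 1000
# rotate (1,0) through a random angle with a proper rotation matrix
proper <- t(sapply(seq_len(n), function(i) {
  theta <- runif(1, -pi, pi)
  R <- matrix(c(cos(theta), sin(theta), -sin(theta), cos(theta)), 2, 2)
  as.vector(R %*% c(1, 0))
}))
# "rotate" (1,0) with a matrix of four independent random numbers in [-1,1]
broken <- t(sapply(seq_len(n), function(i) {
  R <- matrix(runif(4, -1, 1), 2, 2)
  as.vector(R %*% c(1, 0))
}))
range(sqrt(rowSums(proper^2)))  # all ~1: the point stays on the unit circle
range(sqrt(rowSums(broken^2)))  # a wide spread: lengths are not preserved, so not a rotation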

I won’t make that mistake again, and I’m going to think twice when trying to save a line of code like this in the future!

Part of a series on computers and coding.

Measured Steps: Garmin step adjustment algorithm

I recently got a new GPS running watch, a Garmin Fēnix 5. As well as tracking runs, cycling and swimming, it does “activity tracking” – number of steps taken in a day, sleep, and so on. The step goals are set to move automatically and I wondered how it worked. With a quick number crunch, the algorithm revealed itself. Read on if you are interested how it works.

Step screen on the Garmin Fēnix 5

The watch started out with a step target of 7500 steps in one day. I missed this by 2801 and the target got reduced by 560 to 6940 for the next day. That day I managed 12480, i.e. 5540 over the target. So the target went up by 560 to 7500. With me so far? Good. So next I went over the target and it went up again (but this time by 590 steps). I missed that target by a lot and the target was reduced by 530 steps. This told me that I’d need to collect a bit more data to figure out how the goal is set. Here are the first few days to help you see the problem.

Actual steps    Goal    Deficit/Surplus    Adjustment for Tomorrow
        4699    7500              -2801                       -560
       12480    6940               5540                        560
       10417    7500               2917                        590
        2726    8090              -5364                       -530
        6451    7560              -1109                       -220
        8843    7340               1503                        150
        8984    7490               1494                        300
        9216    7790               1426                        290

The data is available for download as a csv via the Garmin Connect website. After waiting to accumulate some more data, I plotted out the adjustment vs step deficit/surplus. The pattern was pretty clear.
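You can see the two scalings by typing the first few rows of the table above into R and dividing each adjustment by that day’s deficit/surplus:

deviation  <- c(-2801, 5540, 2917, -5364, -1109, 1503, 1494, 1426)  # deficit/surplus
adjustment <- c(-560, 560, 590, -530, -220, 150, 300, 290)          # next day's change
round(adjustment / deviation, 2)  # the values cluster around 0.1 and 0.2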

There are two slopes here that pass through the origin. It doesn’t matter what the target was; the adjustment applied is scaled according to how far from the target I was, i.e. the step deficit or surplus. Either a small (0.1) or a large (0.2) scaling was used to adjust the step target for the next day, but how did the watch decide which scale to use?

The answer was to look back at the previous day’s activity as well as the current day.

So if today you exceeded the target and you also exceeded the target yesterday then you get a small scale increase. Likewise if you fell short today and yesterday, you get a small scale decrease. However, if you’ve exceeded today but fell short yesterday, your target goes up by the big scaling. Falling short after exceeding yesterday is rewarded with a big scale decrease. The actual size of the decrease depends on the deficit or surplus on that day. The above plot is coloured according to the four possibilities described here.

I guess there is a logic to this. The goal could quickly get unreachable if it increased by 20% on a run of two days exceeding the target, and conversely, too easy if the decreases went down rapidly with consecutive inactivity. It’s only when there’s been a swing in activity that the goal should get moved by the large scaling. Otherwise, 10% in the direction of attainment is fine.

I have no idea if this is the algorithm used across all of Garmin’s watches, or if other watch manufacturers use different target-setting algorithms.

The post title comes from “Measured Steps” by Edsel from their Techniques of Speed Hypnosis album.

Esoteric Circle

Many projects in the lab involve quantifying circular objects. Microtubules, vesicles and so on are approximately circular in cross section. This quick post is about how to find the diameter of these objects using a computer.

So how do you measure the diameter of an object that is approximately circular? Well, if it was circular you would measure the distance from one edge to the other, crossing the centre of the object. It doesn’t matter along which axis you do this. However, since these objects are only approximately circular, it matters along which axis you measure. There are a couple of approaches that can be used to solve this problem.

Principal component analysis

The object is a collection of points* and we can find the eigenvectors and eigenvalues of these points using principal component analysis. This was discussed previously here. The 1st eigenvector points along the direction of greatest variance and the 2nd eigenvector is normal to the first. The order of eigenvectors is determined by their eigenvalues. We use these to rotate the coordinate set and offset to the origin.

Now the major axis of the object is aligned to the x-axis at y=0 and the minor axis is aligned with the y-axis at x=0 (compare the plot on the right with the one on the left, where the profiles are in their original orientation – offset to zero). We can then find the absolute values of the axis crossing points and, when added together, these represent the major and minor axes of the object. In Igor, this is done using a one-liner to retrieve a rotated set of coords as the wave M_R.

PCA/ALL/SEVC/SRMT/SCMT xCoord,yCoord

To find the crossing points, I use Igor’s interpolation-based level-crossing functions. For example, to store the aggregated diameter in a variable called len:

FindLevel/Q/EDGE=1/P m1c0, 0
len = abs(m1c1(V_LevelX))
FindLevel/Q/EDGE=2/P m1c0, 0
len += abs(m1c1(V_LevelX))

This is just to find one axis (where m1c0 and m1c1 are the 1st and 2nd columns of a 2-column wave m1) and so you can see it is a bit cumbersome.
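The same idea can be sketched in R for anyone not using Igor. Here, xy stands for an n × 2 matrix of points around the outline of one object; the elliptical test data and the helper function are just for illustration, not the code used for the figures.

# made-up elliptical profile standing in for one object's outline
theta <- seq(0, 2 * pi, length.out = 200)
xy <- cbind(17 * cos(theta), 14 * sin(theta)) + 100
pc  <- prcomp(xy)   # centres the points and rotates them onto the principal axes
rot <- pc$x         # column 1 follows the major axis, column 2 the minor axis
# interpolated values of v where u crosses zero (the job FindLevel does in Igor)
cross_values <- function(u, v) {
  i <- which(diff(sign(u)) != 0)     # indices bracketing each zero crossing
  f <- -u[i] / (u[i + 1] - u[i])     # fractional position of the crossing
  v[i] + f * (v[i + 1] - v[i])
}
major <- sum(abs(range(cross_values(rot[, 2], rot[, 1]))))  # where y = 0, read off x
minor <- sum(abs(range(cross_values(rot[, 1], rot[, 2]))))  # where x = 0, read off y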

Anyway, I was quite happy with this solution. It is unbiased and also tells us how approximately circular the object is (because the major and minor axes tell us the aspect ratio or ellipticity of the object). I used it in Figure 2 of this paper to show the sizes of the coated vesicles. However, in another project we wanted to state what the diameter of a vesicle was. Not two numbers, just one. How do we do that? We could take the average of the major and minor axes, but maybe there’s an easier way.

Polar coordinates

The distance from the centre to every point on the edge of the object can be found easily by converting the xy coordinates to polar coordinates. To do this, we first find the centre of the object. This is the centroid \((\bar{x},\bar{y})\) represented by

\(\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_{i} \) and \(\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_{i} \)

for n points, and subtract this centroid from all points to arrange the object around the origin. Now, since the xy coords are represented in the polar system by

\(x_{i} = r_{i}\cos(\phi_{i}) \) and \(y_{i} = r_{i}\sin(\phi_{i}) \)

we can find r, the radial distance, using

\(r_{i} = \sqrt{x_{i}^{2} + y_{i}^{2}}\)

With those values we can then find the average radial distance and report that.

There’s something slightly circular (pardon the pun) about this method because all we are doing is minimising the distance to a central point initially and then measuring the average distance to this minimised point in the latter step. It is much faster than the PCA approach and would be insensitive to changes in point density around the object. The two methods would probably diverge for noisy images. Again in Igor this is simple:

Make/O/N=(dimsize(m1,0)-1)/FREE rW

rW[] = sqrt(m1[p][0]^2 + m1[p][1]^2)

len = 2 * mean(rW)

Here again, m1 is the 2-column wave of coords and the diameter of the object is stored in len.
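For comparison, the polar method in R (again with a made-up elliptical profile standing in for a real object):

theta <- seq(0, 2 * pi, length.out = 200)
xy  <- cbind(17 * cos(theta), 14 * sin(theta)) + 100  # made-up, roughly circular profile
xy0 <- sweep(xy, 2, colMeans(xy))                     # subtract the centroid
r   <- sqrt(rowSums(xy0^2))                           # radial distance of every edge point
len <- 2 * mean(r)                                    # twice the mean radial distance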

How does this compare with the method above? The answer is kind of obvious, but it is equidistant between the major and minor axes. Major axis is shown in red and minor axis shown in blue compared with the mean radial distance method (plotted on the y-axis). In places there is nearly a 10 nm difference which is considerable for objects which are between 20 and 35 nm in diameter. How close is it to the average of the major and minor axis? Those points are in black and they are very close but not exactly on y=x.

So for simple, approximately circular objects with low noise, the ridiculously simple polar method gives us a single estimate of the diameter of the object and this is much faster than the more complex methods above. For more complicated shapes and noisy images, my feeling is that the PCA approach would be more robust. The two methods actually tell us two subtly different things about the shapes.

Why don’t you just measure them by hand?

In case there is anyone out there wondering why a computer is used for this rather than a human wielding the line tool in ImageJ… there are two good reasons.

  1. There are just too many! Each image has tens of profiles and we have hundreds of images from several experiments.
  2. How would you measure the profile manually? This approach shows two unbiased methods that don’t rely on a human to draw any line across the object.

* = I am assuming that the point set is already created.

The post title is taken from “Esoteric Circle” by Jan Garbarek from the LP of the same name released in 1969. The title fits well since this post is definitely esoteric. But maybe someone out there is interested!

Parallel lines: new paper on modelling mitotic microtubules in 3D

We have a new paper out! You can access it here.

The people

This paper really was a team effort. Faye Nixon and Tom Honnor are joint-first authors. Faye did most of the experimental work in the final months of her PhD and Tom came up with the idea for the mathematical modelling and helped to rewrite our analysis method in R. Other people helped in lots of ways. George did extra segmentation, rendering and movie making. Nick helped during the revisions of the paper. Ali helped to image samples… the list is quite long.

The paper in a nutshell

We used a 3D imaging technique called SBF-SEM to see microtubules in dividing cells, then used computers to describe their organisation.

What’s SBF-SEM?

Serial block face scanning electron microscopy. This method allows us to take an image of a cell and then remove a tiny slice, take another image and so on. We then have a pile of images which covers the entire cell. Next we need to put them back together and make some sense of them.

How do you do that?

We use a computer to track where all the microtubules are in the cell. In dividing cells – in mitosis – the microtubules are in the form of a mitotic spindle. This is a machine that the cell builds to share the chromosomes between the two new cells. It’s very important that this process goes right. If it fails, mistakes can lead to diseases such as cancer. Before we started, it wasn’t known whether SBF-SEM had the power to see microtubules, but we show in this paper that it is possible.

We can see lots of other cool things inside the cell too like chromosomes, kinetochores, mitochondria, membranes. We made many interesting observations in the paper, although the focus was on the microtubules.

So you can see all the microtubules, what’s interesting about that?

The interesting thing is that our resolution is really good, and is at a large scale. This means we can determine the direction of all the microtubules in the spindle and use this for understanding how well the microtubules are organised. Previous work had suggested that proteins whose expression is altered in cancer cause changes in the organisation of spindle microtubules. Our computational methods allowed us to test these ideas for the first time.

Resolution at a large scale, what does that mean?

The spindle is made of thousands of microtubules. With a normal light microscope, we can see the spindle but we can’t tell individual microtubules apart. There are improvements in light microscopy (called super-resolution) but even with those improvements, right in the body of the spindle it is still not possible to resolve individual microtubules. SBF-SEM can do this. It doesn’t have the best resolution available though. A method called Electron Tomography has much higher resolution. However, to image microtubules at this large scale (meaning for one whole spindle), it would take months or years of effort! SBF-SEM takes a few hours. Our resolution is better than light microscopy, worse than electron tomography, but because we can see the whole spindle and image more samples, it has huge benefits.

What mathematical modelling did you do?

Cells are beautiful things but they are far from perfect. The microtubules in a mitotic spindle follow a pattern, but don’t do so exactly. So what we did was to create a “virtual spindle” where each microtubule had been made perfect. It was a bit like “photoshopping” the cell. Instead of straightening the noses of actresses, we corrected the path of every microtubule. How much photoshopping was needed told us how imperfect the microtubule’s direction was. This measure – which was a readout of microtubule “wonkiness” – could be done on thousands of microtubules and tell us whether cancer-associated proteins really cause the microtubules to lose organisation.

The publication process

The paper is published in Journal of Cell Science and it was a great experience. Last November, we put up a preprint on this work and left it up for a few weeks. We got some great feedback and modified the paper a bit before submitting it to a journal. One reviewer gave us a long list of useful comments that we needed to address. However, the other two reviewers didn’t think our paper was a big enough breakthrough for that journal. Our paper was rejected*. This can happen sometimes and it is frustrating as an author because it is difficult for anybody to judge which papers will go on to make an impact and which ones won’t. One of the two reviewers thought that because the resolution of SBF-SEM is lower than electron tomography, our paper was not good enough. The other one thought that because SBF-SEM will not surpass light microscopy as an imaging method (really!**) and because EM cannot be done live (the cells have to be fixed), it was not enough of a breakthrough. As I explained above, the power is that SBF-SEM is between these two methods. Somehow, the referees weren’t convinced. We did some more work, revised the paper, and sent it to J Cell Sci.

J Cell Sci is a great journal, published by the Company of Biologists, a not-for-profit organisation that puts a lot of money back into cell biology in the UK. They are preprint friendly, they allow the submission of papers in any format and, most importantly, they have a fast-track*** option. This allowed me to send on the reviews we had received, along with our response to them. They sent the paper back to the reviewer who had given the list of useful comments, and they were happy with the changes we made. It was accepted just 18 days after we sent it in and was online 8 days later. I’m really pleased with the whole publishing experience with J Cell Sci.

 

* I’m writing about this because we all have papers rejected. There’s no shame in that at all. Moreover, it’s obvious from the dates on the preprint and on the JCS paper that our manuscript was rejected from another journal first.

** Anyone who knows something about microscopy will find this amusing and/or ridiculous.

*** Fast-track is offered by lots of journals nowadays. It allows authors to send in a paper that has been reviewed elsewhere along with the peer review file. How the paper has been revised in light of those comments is assessed by the Editor and one peer reviewer.

Parallel lines is of course the title of the seminal Blondie LP. I have used this title before for a blog post, but it matches the topic so well.

Adventures in Code V: making a map of Igor functions

I’ve generated a lot of code for IgorPro. Keeping track of it all has got easier since I started using GitHub – even so – I have found myself writing something only to discover that I had previously written the same thing. I was thinking that it would be good to make a list of all functions that I’ve written to locate long lost functions.

This question was brought up on the Igor mailing list a while back and there are several solutions – especially if you want to look at dependencies. However, this two-liner works to generate a file called funcfile.txt which contains a list of functions and the ipf file that they appear in.

grep "^[ \t]*Function" *.ipf | grep -oE '[ \t]+[A-Za-z_0-9]+\(' | tr -d " " | tr -d "(" > output
for i in `cat output`; do grep -ie "$i" *.ipf | grep -w "Function" >> funcfile.txt ; done

Thanks to Thomas Braun on the mailing list for the idea. I have converted it to work on grep (BSD grep) 2.5.1-FreeBSD which runs on macOS. Use the terminal, cd to the directory containing your ipf files and run it. Enjoy!

EDIT: I did a bit more work on this idea and it has now expanded to its own repo. Briefly, funcfile.txt is converted to tsv and then parsed – using Igor – to json. This can be displayed using some d3.js magic.


Part of a series with code snippets and tips.