Ferrous: new paper on FerriTagging proteins in cells

We have a new paper out. It’s not exactly news, because the paper has been up on bioRxiv since December 2016 and hasn’t changed too much. All of the work was done by Nick Clarke when he was a PhD student in the lab. This post is to explain our new paper to a general audience.

The paper in a nutshell

We have invented a new way to tag proteins in living cells so that you can see them by light microscopy and by electron microscopy.

Why would you want to do that?

Proteins do almost all of the jobs in cells that scientists want to study. We can learn a lot about how proteins work simply by watching them down the microscope, and we want to know their precise location. With light microscopy the cells are alive and we can watch the proteins move around. It’s a great method but it has low resolution, so seeing a protein’s precise location is not possible. We can overcome this limitation by using electron microscopy, which gives us higher resolution, but the proteins are stuck in one location. By correlating images from one microscope to the other, we can watch proteins move and then look at them at high resolution. All we need is a way to label the proteins so that they can be seen in both types of microscope. We do this with tagging.

Tagging proteins so that we can see them by light microscopy is easy. A widely used method is to use a fluorescent protein such as GFP. We can’t see GFP in the electron microscope (EM), so we need another method. Again, there are several tags available, but they all have drawbacks: they are not precise enough, or they don’t work on single proteins. So we came up with a new one and fused it to a fluorescent protein.

What is your EM tag?

We call it FerriTag. It is based on Ferritin, a large protein shell that cells use to store iron. Because iron scatters electrons, this protein shell can be seen by EM as a particle. There was a problem, though. If Ferritin is fused directly to a protein, we end up with a mush. So we changed Ferritin so that it could be attached to the protein of interest by using a drug. This meant that we could put the FerriTag onto the protein we want to image in a few seconds. In the picture on the right you can see how this works to FerriTag clathrin, a component of vesicles in cells.

We can watch the tagging process happening in cells before looking by EM. The movie on the right shows green spots (clathrin-coated pits in a living cell) turning orange/yellow when we do FerriTagging. The cool thing about FerriTag is that it is genetically encoded: the cell makes the tag itself, so we don’t have to introduce it from outside, which would damage the cell.

What can you use FerriTag for?

Well, it can be used to tag many proteins in cells. We wanted to precisely localise a protein called HIP1R which links clathrin-coated pits to the cytoskeleton. We FerriTagged HIP1R and carried out what we call “contextual nanoscale mapping”. This is just a fancy way of saying that we could find the FerriTagged HIP1R and map where it is relative to the clathrin-coated pit. This allowed us to see that HIP1R is found at the pit and surrounding membrane. We could even see small changes in the shape of HIP1R in the different locations.

We’re using FerriTag for lots of projects. Our motivation for making FerriTag was to look at proteins that are important for cell division, and this is what we are doing now.

Is the work freely available?

Yes! The paper is available here under a CC-BY licence. All of the code we wrote to analyse the data and run computer simulations is available here. All of the plasmids needed to do FerriTagging are available from Addgene (a non-profit organisation; there is a small fee), so anyone can use them in the lab to FerriTag their favourite protein.

How long did it take to do this project?

Nick worked for four years on this project. Our first attempt at using ribosomes to tag proteins failed, but Nick then managed to get Ferritin working as a tag. This paper has broken our lab record for longest publication delay from first submission to final publication. The diagram below tells the whole saga.


The publication process was frustratingly slow. It took a few months to write the paper and we submitted to the first journal after Christmas 2016. We got a rapid desk rejection and sent the paper to another journal, where it went out for review. We had two positive referees and one negative one, but we felt we could address the comments and checked with the journal, which said it would consider a revised paper as an appeal. We did some work and resubmitted the paper. Almost six months after first submission the paper was rejected, but with the offer of a rapid (ha!) publication at Nature Communications using the peer review file from the other journal.

Hindsight is a wonderful thing, but I now regret agreeing to transfer the paper to Nature Communications. It was far from rapid. They drafted in a new reviewer who came with a list of new questions and was slow to respond. Sure, a huge chunk of the delay was caused by us doing revision experiments (the revisions took longer than they should have because Nick defended his PhD, was working on other projects and also became a parent). However, the journal was really slow too. The Editor assigned to our paper left the journal, which didn’t help, and the new reviewer took 6 and 7 weeks to respond in successive rounds. Particularly at the end: after the paper was ‘accepted in principle’ it took them three weeks to actually accept the paper (seemingly a week to figure out what a bib file is and another to ask us something about chi-squared tests), a further three weeks to send us the proofs, and then another three weeks until publication. You can see from the graphic that we sent the paper back in the third week of February and incurred only a 9-day delay ourselves, yet the paper was not published until July.

Did the paper improve as a result of this process? Yes and no. We actually added some things in the first revision cycle (for Journal #2) that got removed in subsequent peer review cycles! And the message in the final paper is exactly the same as in the version on bioRxiv, posted 18 months previously. So in that sense, no, it didn’t. It wasn’t a total waste of time, though: the extra reviewer convinced us to add some new analysis which made the paper more convincing in the end. Was this worth an 18-month delay? You can download our paper and the preprint and judge for yourself.

Were we unlucky with this slow experience? Maybe, but I know other authors who’ve had similar (and worse) experiences at this journal. As described in a previous post, the publication lag times are getting longer at Nature Communications. This suggests that our lengthy wait is not unique.

There’s lots to like about this journal:

  • It is open access.
  • It has the Nature branding (which, like it or not, impresses many people).
  • The peer review file is available.
  • The papers look great (in print and online).

But there are downsides too.

  • The APC for each paper is £3300 ($5200). Obviously open access must cost something, but there are cheaper OA journals available (albeit without the Nature branding).
  • Ironically, paying a premium for the Nature reputation is complicated, since the journal covers a wide range of science and its kudos varies depending on subfield.
  • It’s also slow, especially when you consider that papers have often been transferred here from somewhere else.
  • It’s essentially a mega journal, so your paper doesn’t get the same exposure as it would in a community-focused journal.
  • There’s the whole ReadCube/SpringerNature thing…

Overall it was a negative publication experience with this paper. Transferring a paper along with the peer review file to another journal has worked out well for us recently and has been rapid, but not this time. Please leave a comment, particularly if you’ve had a positive experience, to redress the balance.

The post title comes from “Ferrous” by Circle from their album Meronia.

Joining A Fanclub

When I started this blog, my plan was to write about interesting papers or at least blog about the ones from my lab. This post is a bit of both.

I was recently asked to write a “Journal Club” piece for Nature Reviews Molecular Cell Biology, which is now available online. It’s paywalled, unfortunately. It’s also very short, due to the format. For these reasons, I thought I’d expand a bit on the papers I highlighted.

I picked as my subject two papers from Dick McIntosh’s group, published in J Cell Biol in the early 1990s: McDonald et al. 1992 and Mastronarde et al. 1993.

Almost everything we know about the microanatomy of mitotic spindles comes from classical electron microscopy (EM) studies. How many microtubules are there in a kinetochore fibre? How do they contact the kinetochore? These questions have been addressed by EM. McIntosh’s group in Boulder, Colorado has published many classic papers in this area, and there are many more from Conly Rieder, Alexey Khodjakov, Bruce McEwen and others. Even with the advances in light microscopy that have improved spatial resolution (resulting in a Nobel Prize last year), EM is the only way to see individual microtubules within a complex subcellular structure like the mitotic spindle. The title of the piece, Super-duper resolution imaging of mitotic microtubules, is a bit of a dig at the fact that EM still exceeds the resolution available from super-resolution light microscopy. It’s not the first time this gag has been used, but I thought it suited the piece well.

There are several reasons to highlight these papers over other electron microscopy studies of mitotic spindles.

It was the first time that 3D models of microtubules in mitotic spindles were built from electron micrographs of serial sections. This allowed spatial statistical methods to be applied to understand microtubule spacing and clustering. The software that was developed by David Mastronarde to do this was later packaged into IMOD. This is a great software suite that is actively maintained, free to download and is essential for doing electron microscopy. Taking on the same analysis today would be a lot faster, but still somewhat limited by cutting sections and imaging to get the resolution required to trace individual microtubules.
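To give a flavour of the kind of spatial statistics involved, here is a toy sketch (my own illustration, not code from the papers or from IMOD) of a Clark-Evans nearest-neighbour test, which asks whether points in a cross-section are clustered, random or regularly spaced:

```python
# Toy illustration of nearest-neighbour spatial statistics, of the kind
# applied to microtubule positions in spindle cross-sections.
# Points are (x, y) centres in nm; the observed mean nearest-neighbour
# distance is compared with the expectation for a random (Poisson)
# pattern of the same density.
import math
import random

def mean_nn_distance(points):
    """Mean distance from each point to its nearest neighbour."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        nearest = min(
            math.hypot(xi - xj, yi - yj)
            for j, (xj, yj) in enumerate(points) if j != i
        )
        total += nearest
    return total / len(points)

def clark_evans_ratio(points, area):
    """R < 1 suggests clustering, R ~ 1 randomness, R > 1 regular spacing."""
    density = len(points) / area
    expected_random = 1.0 / (2.0 * math.sqrt(density))  # Poisson expectation
    return mean_nn_distance(points) / expected_random

# Example: 50 simulated microtubule centres in a 1000 x 1000 nm section
random.seed(1)
pts = [(random.uniform(0, 1000), random.uniform(0, 1000)) for _ in range(50)]
print(round(clark_evans_ratio(pts, 1000 * 1000), 2))
```

For random points the ratio comes out near 1 (edge effects pull it slightly below); a bundle of clustered microtubules would give a ratio well under 1. The real analyses worked on thousands of traced microtubules in 3D, so this is only a cartoon of the idea.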

The paper actually showed that some of the microtubules in kinetochore fibres travel all the way from the pole to the kinetochore, and that interpolar microtubules occasionally invade the bundle. This was an open question at the time and was really only definitively answered thanks to the ability to digitise and trace individual microtubules using computational methods.

The final thing I like about these papers is that it’s possible to reproduce the analysis. The methods sections are wonderfully detailed and of course the software is available to do similar work. This is in contrast to most papers nowadays, where it is difficult to understand how the work was done in the first place, let alone try to reproduce it in your own lab.

David Mastronarde and Dick McIntosh kindly commented on the piece that I wrote and also Faye Nixon in my lab made some helpful suggestions. There’s no acknowledgement section, so I’ll thank them all here.

References

McDonald, K. L., O’Toole, E. T., Mastronarde, D. N. & McIntosh, J. R. (1992) Kinetochore microtubules in PTK cells. J. Cell Biol. 118, 369–383

Mastronarde, D. N., McDonald, K. L., Ding, R. & McIntosh, J. R. (1993) Interpolar spindle microtubules in PTK cells. J. Cell Biol. 123, 1475–1489

Royle, S. J. (2015) Super-duper resolution imaging of mitotic microtubules. Nat. Rev. Mol. Cell Biol. doi:10.1038/nrm3937 Published online 05 January 2015

The post title is taken from “Joining a Fanclub” by Jellyfish from their classic second and final LP “Spilt Milk”.