Ferrous: new paper on FerriTagging proteins in cells

We have a new paper out. It’s not exactly news, because the paper has been up on bioRxiv since December 2016 and hasn’t changed too much. All of the work was done by Nick Clarke when he was a PhD student in the lab. This post is to explain our new paper to a general audience.

The paper in a nutshell

We have invented a new way to tag proteins in living cells so that you can see them by light microscopy and by electron microscopy.

Why would you want to do that?

Proteins do almost all of the jobs in cells that scientists want to study. We can learn a lot about how proteins work by simply watching them down the microscope, and we want to know their precise location. With light microscopy, the cells are alive and we can watch the proteins move around. It’s a great method but it has low resolution, so pinpointing a protein’s precise location is not possible. We can overcome this limitation by using electron microscopy. This gives us higher resolution, but the proteins are fixed in one location. When we correlate images from one microscope to the other, we can watch proteins move and then look at them at high resolution. All we need is a way to label the proteins so that they can be seen in both types of microscope. We do this with tagging.

Tagging proteins so that we can see them by light microscopy is easy. A widely used method is to use a fluorescent protein such as GFP. We can’t see GFP in the electron microscope (EM), so we need another method. There are several EM tags available, but they all have drawbacks: they are not precise enough, or they don’t work on single proteins. So we came up with a new one and fused it to a fluorescent protein.

What is your EM tag?

We call it FerriTag. It is based on Ferritin which is a large protein shell that cells use to store iron. Because iron scatters electrons, this protein shell can be seen by EM as a particle. There was a problem though. If Ferritin is fused to a protein, we end up with a mush. So, we changed Ferritin so that it could be attached to the protein of interest by using a drug. This meant that we could put the FerriTag onto the protein we want to image in a few seconds. In the picture on the right you can see how this works to FerriTag clathrin, a component of vesicles in cells.

We can watch the tagging process happening in cells before looking by EM. The movie on the right shows green spots (clathrin-coated pits in a living cell) turning orange/yellow when we do FerriTagging. The cool thing about FerriTag is that it is genetically encoded. That means that we get the cell to make the tag itself and we don’t have to put it in from outside which would damage the cell.

What can you use FerriTag for?

Well, it can be used to tag many proteins in cells. We wanted to precisely localise a protein called HIP1R which links clathrin-coated pits to the cytoskeleton. We FerriTagged HIP1R and carried out what we call “contextual nanoscale mapping”. This is just a fancy way of saying that we could find the FerriTagged HIP1R and map where it is relative to the clathrin-coated pit. This allowed us to see that HIP1R is found at the pit and surrounding membrane. We could even see small changes in the shape of HIP1R in the different locations.

We’re using FerriTag for lots of projects. Our motivation to make FerriTag was so that we could look at proteins that are important for cell division and this is what we are doing now.

Is the work freely available?

Yes! The paper is available here under CC-BY licence. All of the code we wrote to analyse the data and run computer simulations is available here. All of the plasmids needed to do FerriTagging are available from Addgene (a non-profit company, there is a small fee) so that anyone can use them in the lab to FerriTag their favourite protein.

How long did it take to do this project?

Nick worked for four years on this project. Our first attempt at using ribosomes to tag proteins failed, but Nick then managed to get Ferritin working as a tag. This paper has broken our lab record for longest publication delay from first submission to final publication. The diagram below tells the whole saga.


The publication process was frustratingly slow. It took a few months to write the paper and then we submitted to the first journal after Christmas 2016. We got a rapid desk rejection and sent the paper to another journal, where it went out for review. We had two positive referees and one negative one, but we felt we could address the comments and checked with the journal, who said that they would consider a revised paper as an appeal. We did some work and resubmitted the paper. Almost six months after first submission, the paper was rejected, but with the offer of a rapid (ha!) publication at Nature Communications using the peer review file from the other journal.

Hindsight is a wonderful thing but I now regret agreeing to transfer the paper to Nature Communications. It was far from rapid. They drafted in a new reviewer who came with a list of new questions. Sure, a huge chunk of the delay was caused by us doing revision experiments (the revisions took longer than they should have because Nick defended his PhD, was working on other projects and also became a parent). However, the journal was really slow. The Editor assigned to our paper left the journal, which didn’t help, and the new reviewer took six and seven weeks to respond in the two rounds of review. Particularly at the end, after the paper was ‘accepted in principle’ it took them three weeks to actually accept the paper (seemingly a week to figure out what a bib file is and another to ask us something about chi-squared tests). Then a further three weeks to send us the proofs, and then another three weeks until publication. You can see from the graphic that we sent back the paper in the third week of February and only incurred a 9-day delay ourselves, yet the paper was not published until July.

Did the paper improve as a result of this process? Yes and no. We actually added some things in the first revision cycle (for Journal #2) that got removed in subsequent peer review cycles! And the message in the final paper is exactly the same as the version on bioRxiv, posted 18 months previously. So in that sense, no, it didn’t. It wasn’t a total waste of time though: the extra reviewer convinced us to add some new analysis which made the paper more convincing in the end. Was this worth an 18-month delay? You can download our paper and the preprint and judge for yourself.

Were we unlucky with this slow experience? Maybe, but I know other authors who’ve had similar (and worse) experiences at this journal. As described in a previous post, the publication lag times are getting longer at Nature Communications. This suggests that our lengthy wait is not unique.

There’s lots to like about this journal:

  • It is open access.
  • It has the Nature branding (which, like it or not, impresses many people).
  • The peer review file is made available.
  • The papers look great (in print and online).

But there are downsides too.

  • The APC for each paper is £3300 ($5200). Obviously open access must cost something, but there are cheaper OA journals available (albeit without the Nature branding).
  • Ironically, paying a premium for this reputation is complicated since the journal covers a wide range of science and its kudos varies depending on subfield.
  • It’s also slow, and especially so when you consider that papers have often transferred here from somewhere else.
  • It’s essentially a mega journal, so your paper doesn’t get the same exposure as it would in a community-focused journal.
  • There’s the whole ReadCube/SpringerNature thing…

Overall it was a negative publication experience with this paper. Transferring a paper along with the peer review file to another journal has worked out well for us recently and has been rapid, but not this time. Please leave a comment, particularly if you’ve had a positive experience, to redress the balance.

The post title comes from “Ferrous” by Circle from their album Meronia.

Pentagrammarspin: why twelve pentagons?

This post has been in my drafts folder for a while. With the World Cup here, it’s time to post it!

It’s a rule that a closed polyhedral shape built from hexagons must also contain twelve pentagons. This post takes a look at why this is true.

First, some examples from nature. The stinkhorn fungus Clathrus ruber has a largely hexagonal layout, with pentagons inserted. The core of HIV has to contain twelve pentagons (shown in red, in this image from the Briggs group) amongst many hexagonal units. My personal favourite, the clathrin cage, can assemble into many buckminsterfullerene-like shapes, but all must contain twelve pentagons with a variable number of hexagons.

The case of clathrin is particularly interesting because clathrin triskelia can assemble into a flat hexagonal lattice on membranes. If clathrin is going to coat a vesicle, twelve pentagons need to be introduced, so quite a bit of rearrangement is needed to close the lattice into a cage.

You can see the same rule in everyday objects. The best example is a football, or soccer ball, if you are reading in the USA.

The classic design of football has precisely twelve pentagons and twenty hexagonal panels. The roadsign for football stadia here in the UK shows a weirdly distorted hexagonal array that has no pentagons. 22,543 people signed a petition to pressurise the authorities to change it, but the Government responded that it was too costly to correct this geometrical error.

So why do all of these assemblies have 12 pentagons?

In the classic text “On Growth and Form” by D’Arcy Wentworth Thompson, polyhedral forms in nature are explored in some detail. In the wonderfully titled On Concretions, Spicules etc. section, the author notes polyhedral forms in natural objects.

One example is Dorataspis, shown left. The layout is identical to the D6 hexagonal barrel assembly of a clathrin cage shown above. There is a belt of six hexagons, plus one hexagon at the top and one at the bottom (eight in total), and twelve pentagons between the hexagons. In the book, there is an explanation of the maths behind why there must be twelve pentagons in such assemblies, but it’s obfuscated in bizarre footnotes in Latin. I’ll attempt to explain it below.

To shed some light on this we need the help of Euler’s formulae. The surface of a polyhedron in 3D is composed of faces, edges and vertices. If we think back to the football, the faces are the pentagonal and hexagonal panels, the edges are the stitching where two panels meet and the vertices are where three edges come together. We can denote the numbers of faces, edges and vertices as f, e and v, respectively; these are 2D, 1D and zero-dimensional objects. Euler’s formula, which is true for all convex polyhedra, is:

\(f - e + v = 2\)

If you think about a cube, it has 6 faces, 12 edges and 8 vertices. So, 6 - 12 + 8 = 2. We can also check the football above. This has 32 faces (twelve pentagons, twenty hexagons), 90 edges and 60 vertices: 32 - 90 + 60 = 2. Feel free to check it with other polyhedra!
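If you would rather let a computer do the arithmetic, here is a quick check in R (a throwaway snippet, not from any paper):

  # Euler's formula: faces - edges + vertices should equal 2
  euler <- function(f, e, v) f - e + v
  euler(f = 6, e = 12, v = 8)    # cube: returns 2
  euler(f = 32, e = 90, v = 60)  # football: returns 2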

Euler found a second formula which is true for polyhedra where three edges come together at a vertex.

\(\sum (6-n)f_{n} = 12\)

In this formula, \(f_{n}\) means the number of n-gons (faces with n sides).

So let’s say we have a dodecahedron, which is a polyhedron made of 12 pentagons. So \(n = 5\) and \(f_{n} = 12\), and you can see that \((6-5)\times 12 = 12\).

Let’s take a more complicated object, like the football. Now we have:

\(((6-6)\times 20) + ((6-5)\times 12) = 12\)

You can now see why the twelve pentagons are needed. Because 6 - 6 = 0, we can add as many hexagons as we like and they add nothing to the left-hand side. As long as the twelve pentagons are there, the equation balances and we have a closed polyhedron. Without them, we don’t. This is the answer to why there must be twelve pentagons in a closed polyhedral assembly.
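You can convince yourself of this numerically too. This R snippet (again, just a sketch) evaluates the left-hand side for twelve pentagons plus various numbers of hexagons:

  # Sum of (6 - n) * f_n for f_5 = 12 pentagons and h hexagons
  for (h in c(0, 2, 20, 10000)) {
    lhs <- (6 - 5) * 12 + (6 - 6) * h
    cat(h, "hexagons: left-hand side =", lhs, "\n")
  }

However many hexagons you add, the sum stays at 12.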

So how did Euler get to the second equation? You might have spotted this yourself from the f, e, v values for the football. Did you notice that the ratio of edges to vertices is 3:2? This is because each edge has a vertex at either end (it is a 1D object) and we are dealing with polyhedra with three edges meeting at each vertex. Counting edge-ends gives \(2e = 3v\), so \(v = \frac{2}{3}e\). Also, each edge is at the boundary of two polygons, so \(e = \frac{1}{2}\sum n f_{n}\). You can check that with the values for the cube or football above. We know that \(f = \sum f_{n}\), which just means that the total number of faces is the sum of the numbers of all the n-gons. This means that:

\(f - e + v = 2\)

can be turned into

\(f - \frac{1}{3}e = \sum f_{n} - \frac{1}{6}\sum n f_{n} = 2\)

Let’s multiply by 6 to get, oh yes

\(\sum (6-n)f_{n} = 12\)

There are some topics for further exploration here:

  • You can add 0, 2 or 10000 hexagons to 12 pentagons to make a polyhedron, but can you add just one?
  • What happens when you add a few heptagons into the array?

Image credits (free-to-use/wiki or as credited):

Clathrus ruber – tineye search didn’t find source.

HIV cores – Briggs Group

Exploded football – Quora

The post title comes from “Pentagrammarspin” by Steve Hillage from the 2006 remaster of his LP Fish Rising

Do It Yourself: Lab Notebook Archiving Project

A while back, the lab moved to an electronic lab notebook (details here and here). One of the drivers for this move was the huge number of hard copy lab notebooks that had accumulated in the lab over more than 10 years. Switching to an ELN solved this problem for the future, but didn’t make the old lab notebooks disappear. So the next step was to archive them and free up some space.

We access the contents of these books fairly regularly so archiving had to mean digitising them as well as putting them into storage. I looked at a few options before settling on a very lo-fi solution.

Option 1: call in the professionals

I got a quote from our University’s preferred data archiving firm. The lab notebooks we use have 188 pages and I had 89 to archive. The quote was over £4000 + VAT for scanning only. This was too expensive and so I next looked at DIY options.

Option 2: scan the books

At the University we have good multifunction devices (MFDs) that will scan documents and store them on a server as a multipage PDF. There are two resolutions at which you can scan, which are good-but-not-amazing quality. The scanners have a feeder which would automate the scan of a lab book, but it would mean destroying the books (which are hardbound) to scan them.

I tried scanning one book using this method. Disassembling a notebook with a razor blade was quite quick, but the scanner struggled with the little printouts that people stick in their lab books. Dealing with jams and misfired scans meant that this was not an option, and I didn’t want to destroy all of the books either.

Option 3: photography rigs

Next, I looked at book scanning projects to see how they were done. In these projects, the books are valuable and so can’t be destroyed, yet the capture must be automated. These projects use a cradle to sit the book in. A platen is pushed against the pages to flatten them, and then two cameras photograph the two facing pages, triggered in sync using an external button or foot pedal. An example of one Raspberry Pi-powered rig is here. Building one of these appealed but would still require some expense (and time and effort). I asked around to see if anyone else wanted to help with the build, thinking that others might want to archive their notebooks too, but I got no takers.

Option 4: the zero-cost solution!

Inspiration came from a student who left my lab and wanted to photograph her lab books for future reference. She captured them on her camera phone by hand in a matter of minutes. Shooting two pages of a book from a single digital camera suspended above the notebook would be a good compromise. Luckily I had access to a digital camera and a few hundred Lego bricks. Total new spend = £0.

I know it looks terrible, but it was pretty effective!

I put the rig on a table (for ergonomic reasons), next to a window, and photographed each book using natural light. It took around 10 min to photograph one lab book. I took the images over a few weeks amongst doing other stuff so that the job didn’t become too onerous. I shot the books at the highest resolution and stored the raw images on the server. I wrote a quick script to stack the images, scale them down to 25% and export them to PDF, making an easy-to-consult PDF file for each lab book (a sketch of this step is below). Everyone in the lab can access these PDFs and if needed can pull down the high-res versions. The lab books have now been stored in a sealed container. We can access the books if needed. However, having looked at the images, I think if something is not readable from the file, it won’t be readable in the hard copy.
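Something along these lines does the job, using the magick package in R (a minimal sketch; the folder and file names are placeholders):

  # Read all the photos for one lab book, scale them down and write one PDF
  library(magick)
  pages <- sort(list.files("labbook_01", pattern = "\\.jpg$", full.names = TRUE))
  stack <- image_read(pages)          # one stack, one frame per page
  small <- image_scale(stack, "25%")  # scale each page down to 25%
  image_write(small, path = "labbook_01.pdf", format = "pdf")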

Was it worth it?

I think so. It took a while to get everything digitised but I’m glad it’s done. The benefits are:

  1. Easy access to all lab books for every member of the lab.
  2. Clearing a load of clutter from my office.
  3. The rig can be rebuilt easily, but is not otherwise sitting around gathering dust.
  4. Some of the older lab books were deteriorating and so capturing them before they got worse was a good idea (see picture above for some sellotape degradation).

The post title is taken from the LP “Do It Yourself” by The Seahorses.

Dividing Line: not so simple division in ctenophores

This wonderful movie has repeatedly popped up in my Twitter feed.

It was taken by Tessa Montague and is available here (tweet is here).

The movie is striking because of the way that cytokinesis starts at one side and moves to the other. Most model systems for cell division have symmetrical division.

Rob de Bruin commented that “it makes total sense to segregate this way”, implying that if a cell just gets cut in half, equal sharing of components is taken care of. This got me thinking…

It does make sense to share n identical objects this way. For example, vesiculation of the Golgi generates many equally sized vesicles. Cutting the cell in half ensures that each cell gets approximately half of the Golgi (although there is another pathway that actively segregates vesicular material, reviewed here). However, for segregation of genetic material – where it is essential that each cell receives one (and exactly one) copy of the genome – a cutting-in-half mechanism simply doesn’t cut it (pardon the pun).

The error rate of such a mechanism would be approximately 50% which is far too high for something so important. Especially at this (first) division as shown in the movie.

I knew nothing about ctenophores (comb jellies) before seeing this movie and with a bit of searching I found this paper. In it, they show that there is indeed a karyokinetic (mitotic) mechanism that segregates the genetic material and that this happens independently of the cytokinetic process, which is actin-dependent. So not so different after all. The asymmetric division and the fact that these divisions are very rapid and synchronised is very interesting. It’s very different to the sorts of cells that we study in the lab. Thanks to Tessa Montague for the amazing video that got me thinking about this.

Footnote: the 50% error rate can be calculated as follows. Although segregation is in 3D, this is a 1D problem. Assume that the cell divides down the centre of the long axis and that object 1 and object 2 are randomly situated along that axis. Each object then has an equal probability of ending up in either cell. There are four equally likely outcomes, each with a 25% chance: object 1 in cell 1 and object 2 in cell 2; object 1 in cell 2 and object 2 in cell 1; both objects in cell 1; both objects in cell 2. In two of these four outcomes, both objects end up in the same cell, so the probability of mis-segregation is 50%. It doesn’t matter how many objects we are talking about or the size of the cell. This is a highly simplified calculation but serves the purpose of showing that another solution is needed to segregate objects with identity during cell division.
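The 50% figure is easy to verify by simulation. Here is a quick R sketch: place the two objects at random along a unit-length axis, cut the cell at the midpoint and count how often both objects end up in the same half.

  # Simulate random positions of two objects along the long axis of the cell
  set.seed(42)
  n <- 100000
  object1 <- runif(n)  # positions between 0 and 1
  object2 <- runif(n)
  same_cell <- (object1 < 0.5) == (object2 < 0.5)
  mean(same_cell)      # ~0.5, i.e. an error rate of about 50%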

The post title comes from “Dividing Line” from the Icons of Filth LP Onward Christian Soldiers.

Frankly, Mr. Shankly

I read about Antonio Sánchez Chinchón’s clever approach of using a Travelling Salesperson algorithm to generate some math-art in R. The follow-up was even nicer in my opinion: Pencil Scribbles. The subject was Boris Karloff as the monster in Frankenstein. I was interested in running the code (available here and here), so I thought I’d run it on a famous scientist.

By happy chance one of the most famous scientists of the 20th Century, Rosalind Franklin, shares a nominative prefix with the original subject. There is also a famous portrait of her that I thought would work well.

I first needed to clear up the background of the portrait because it was too dark.

Now to run the TSP code.

The pencil scribbles version is nicer I think.

The R scripts basically ran out-of-the-box. I was using a new computer that didn’t have XQuartz on it, nor the required packages, but once they were installed I just needed to edit one line to point to a local file in my working directory. The code then ran without any other changes. The outputs, FrankyTSP and Franky_scribbles, didn’t even need to be renamed, given my subject’s name.

Thanks to Antonio for making the code available and so easy to use.

The post title comes from “Frankly, Mr. Shankly” by The Smiths which appears on The Queen is Dead. If the choice of post title needs an explanation, it wasn’t a good choice…

Start Me Up: Endocytosis on demand

We have a new paper out. The title is New tools for ‘hot-wiring’ clathrin-mediated endocytosis with temporal and spatial precision. You can read it here.

Cells have a plasma membrane which is the barrier between the cell’s interior and the outside world. In order to import material from outside, cells have a special process called endocytosis. During endocytosis, cells form a tiny bubble of plasma membrane and pull it inside – taking with it a little pocket of the outside world. This process is very important to the cell. For example, it is one way that cells import nutrients to live. It also controls cell movement, growth, and how cells talk to one another. Because it is so important, cell biologists have studied how endocytosis works for decades.

Studying endocytosis is tricky. Like naughty children, cells simply do not do what they are told. There is no way to make a cell in the lab “do endocytosis”. It does it all the time, but we don’t know when or where on the cell surface a vesicle will be made. Not only that, but when a vesicle is made, we don’t really know what cargo it contains. It would be helpful to cell biologists if we could bring cells under control. This paper shows a way to do this. We demonstrate that clathrin-mediated endocytosis can be triggered, so that we can make it happen on-demand.

Endocytosis on-demand

Using a chemical which diffuses into the cell, we can trigger endocytosis to happen all over the cell. The movie on the right shows vesicles (bright white spots) forming after we add the chemical (at 0:00). The way that we designed the system means that the vesicles that form contain only one type of cargo. This is exciting because it means that we can now deliver things into cells using this cargo. So, we can trigger endocytosis on-demand and we can control the cargo, but we still cannot control where on the plasma membrane this happens.

We solved this problem by engineering a light-sensitive version of our system. With this new version we can use blue light to trigger endocytosis. Whereas the chemical diffused everywhere, the light can be focussed on a narrow region of the cell and endocytosis can be triggered only in that region. This means we control where, as well as when, a vesicle will form.

What does hot-wiring mean?

It is possible to start a car without a key by “hot-wiring” it. This happens in the movies, when the bad guy breaks into a car and just twists some wires together to start the car and make a getaway. To trigger endocytosis we used the cell’s own proteins, but we modified them. We chopped out all the unnecessary parts and just left the bare essentials. We call the process of triggering endocytosis “hot-wiring” because it is similar to just twisting the wires together rather than having a key.

It turns out that movies are not like real life, and hot-wiring a car is actually quite difficult and takes a while. So our systems are more like the Hollywood version than real life!

What is this useful for?

As mentioned above, the systems we have made are useful for cell biologists because they allow cells to be “tamed”. This means that we can accurately study the timing of endocytosis and which proteins are required in a very controlled way. It also potentially means that molecules can be delivered to cells that cannot normally enter. So we have a way to “force feed” cells with whatever we want. This would be most useful for drugs or nanoparticles that are not actively taken up by cells.

Who did the work?

Almost all of the work in the paper was by Laura Wood, a PhD student in the lab. She had help from fellow lab members Nick Clarke, who did the correlative light-electron microscopy, and Sourav Sarkar, who did the binding experiments. Gabrielle Larocque, another PhD student, did some fantastic work to revise the paper after Laura had departed for a post-doc position at another University. We put the paper up on bioRxiv in Summer 2016 and the paper has slowly made its way through peer review to be published in J Cell Biol today.

Wait? I’m a cell biologist! I want to know how this thing really works!

OK. The design is shown to the right. We made a plasma membrane “anchor” and a clathrin “hook”, which is a protein fragment that binds clathrin. The anchor and the hook have an FRB domain and an FKBP domain, and these can be brought together by rapamycin. When the clathrin hook is at the membrane, it is recognised by clathrin and vesicle formation can begin. The main hook we use is the appendage and hinge from the beta2 subunit of the AP2 complex.

Normally AP2, which has four subunits, needs to bind to PIP2 in the plasma membrane and undergo a conformational change to recognise a cargo molecule with a specific motif; only then can clathrin bind the beta2 appendage and hinge. By hot-wiring, we effectively remove all of those other proteins and all of those steps, and just bring the clathrin-binding bit to the membrane when we want. Being able to recreate endocytosis using such a minimalist system was a surprise. In vitro work from Dannhauser and Ungewickell had suggested this might be possible, but it really seems that the steps before clathrin engagement are not a prerequisite for endocytosis.

To make the light inducible version we used TULIPs (tunable light-controlled interacting proteins). So instead of FRB and FKBP we had a LOVpep and PDZ domain on the hook and anchor.

The post title comes from “Start Me Up” by The Rolling Stones. Originally on Tattoo You, but perhaps better known for its use by Microsoft in their Windows 95 advertising campaign. I’ve finally broken a rule that I wouldn’t use mainstream song titles for posts on this blog.

Fusion confusion: new paper on FGFR3-TACC3 fusions in cancer

We have a new paper out! This post is to explain what it’s about.

Cancer cells often have gene fusions. This happens because the DNA in cancer cells is really messed up. Sometimes, chromosomes can break and get reattached to a different one in a strange way. This means you get a fusion between one gene and another, which makes a new gene, called a gene fusion. There are famous fusions that are known to cause cancer, such as the Philadelphia chromosome in chronic myelogenous leukaemia. This rearrangement of chromosomes 9 and 22 results in a fusion called BCR-ABL. There are lots of different gene fusions and, a few years ago, a new fusion was discovered in bladder and brain cancers, called FGFR3-TACC3.

Genes encode proteins and proteins do jobs in cells. So the question is: how are the proteins from gene fusions different to their normal versions, and how do they cause cancer? Many of the gene fusions that scientists have found result in a protein that continues to send a signal to the cell when it shouldn’t. It’s thought that this transforms the cell to divide uncontrollably. FGFR3-TACC3 is no different. FGFR3 can send signals and the TACC3 part probably makes it do this uncontrollably. But, what about the TACC3 part? Does that do anything, or is this all about FGFR3 going wrong?

What is TACC3?

Chromosomes getting shared to the two daughter cells

TACC3, or transforming acidic coiled-coil protein 3 to give it its full name, is a protein important for cell division. It helps to share the chromosomes to the two daughter cells when a cell divides. Chromosomes are shared out by a machine built inside the cell called the mitotic spindle. This is made up of tiny threads called microtubules. TACC3 stabilises these microtubules and adds strength to this machine.

We wondered if cancer cells with FGFR3-TACC3 had problems in cell division. If they did, this might be because the TACC3 part of FGFR3-TACC3 is changed.

We weren’t the first people to have this idea. The scientists that found the gene fusion suggested that FGFR3-TACC3 might bind to the mitotic spindle but not be able to work properly. We decided to take a closer look…

What did you find?

First of all, FGFR3-TACC3 is not actually bound to the mitotic spindle. It is at the cell’s membrane and in small vesicles in the cell. So if it is not part of the mitotic spindle, how can it affect cell division? One unusual thing about TACC3 is that it is a dimer, meaning two TACC3s are stuck together. Stranger than that, these dimers can stick to more dimers and multimerise into a much bigger assembly. When we looked at the normal TACC3 in the cell, we noticed that the amount bound to the spindle had decreased. We wondered whether the FGFR3-TACC3 was hoovering the normal TACC3 off the spindle, preventing normal cell division.

We made the cancer cells express a bit more normal TACC3 and this rescued the faulty division. We also got rid of the FGFR3-TACC3 fusion, and that also put things back to normal. Finally, we made a fake FGFR3-TACC3 which had a dummy part in place of FGFR3 and this was just as good at hoovering up normal TACC3 and causing cell division problems. So our idea seemed to be right!

What does this mean for cancer?

This project was to look at what is going on inside cancer cells and it is a long way from any cancer treatments. Drug companies can develop chemicals which stop cell signalling from fusions; these could work as anti-cancer agents. In the case of FGFR3-TACC3, what we are saying is: even if you stop the signalling, there will still be cell division problems in the cancer cells. So an ideal treatment might be to block TACC3 interactions as well as stopping signalling. This is very difficult to do and is far in the future. Doing work like this is important to understand all the possible ways to tackle a specific cancer and to find any problems with potential treatments.

The people

Sourav Sarkar did virtually all the work for this paper and he is first author. Sourav left the lab before we managed to submit this paper and so the revision experiments requested by the peer reviewers were done by Ellis Ryan.

Why didn’t we post this paper as a preprint?

My group have generally been posting our new manuscripts as preprints while they undergo peer review, but we didn’t post this one. I was reluctant because many cancer journals at the time of submission did not allow preprints. This has changed a bit in the last few months, but back in February several key cancer journals did not accept papers that had appeared first as preprints.

The title of the post comes from “Fusion Confusion” 4th track on the Hazy EP by Dr Phibes & The House of Wax Equations.

Parallel lines: new paper on modelling mitotic microtubules in 3D

We have a new paper out! You can access it here.

The people

This paper really was a team effort. Faye Nixon and Tom Honnor are joint-first authors. Faye did most of the experimental work in the final months of her PhD and Tom came up with the idea for the mathematical modelling and helped to rewrite our analysis method in R. Other people helped in lots of ways. George did extra segmentation, rendering and movie making. Nick helped during the revisions of the paper. Ali helped to image samples… the list is quite long.

The paper in a nutshell

We used a 3D imaging technique called SBF-SEM to see microtubules in dividing cells, then used computers to describe their organisation.

What’s SBF-SEM?

Serial block face scanning electron microscopy. This method allows us to take an image of a cell and then remove a tiny slice, take another image and so on. We then have a pile of images which covers the entire cell. Next we need to put them back together and make some sense of them.

How do you do that?

We use a computer to track where all the microtubules are in the cell. In dividing cells – in mitosis – the microtubules are in the form of a mitotic spindle. This is a machine that the cell builds to share the chromosomes to the two new cells. It’s very important that this process goes right. If it fails, mistakes can lead to diseases such as cancer. Before we started, it wasn’t known whether SBF-SEM had the power to see microtubules, but we show in this paper that it is possible.

We can see lots of other cool things inside the cell too, like chromosomes, kinetochores, mitochondria and membranes. We made many interesting observations in the paper, although the focus was on the microtubules.

So you can see all the microtubules, what’s interesting about that?

The interesting thing is that our resolution is really good, and is at a large scale. This means we can determine the direction of all the microtubules in the spindle and use this for understanding how well the microtubules are organised. Previous work had suggested that proteins whose expression is altered in cancer cause changes in the organisation of spindle microtubules. Our computational methods allowed us to test these ideas for the first time.

Resolution at a large scale, what does that mean?

The spindle is made of thousands of microtubules. With a normal light microscope, we can see the spindle but we can’t tell individual microtubules apart. There are improvements in light microscopy (called super-resolution) but even with those improvements, right in the body of the spindle it is still not possible to resolve individual microtubules. SBF-SEM can do this. It doesn’t have the best resolution available though. A method called electron tomography has much higher resolution. However, to image microtubules at this large scale (meaning one whole spindle), electron tomography would take months or years of effort! SBF-SEM takes a few hours. Our resolution is better than light microscopy and worse than electron tomography, but because we can see the whole spindle and image more samples, it has huge benefits.

What mathematical modelling did you do?

Cells are beautiful things but they are far from perfect. The microtubules in a mitotic spindle follow a pattern, but don’t do so exactly. So what we did was to create a “virtual spindle” where each microtubule had been made perfect. It was a bit like “photoshopping” the cell. Instead of straightening the noses of actresses, we corrected the path of every microtubule. How much photoshopping was needed told us how imperfect the microtubule’s direction was. This measure – which was a readout of microtubule “wonkiness” – could be done on thousands of microtubules and tell us whether cancer-associated proteins really cause the microtubules to lose organisation.
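To give a flavour of the idea, here is a toy version in R (an illustration only, not the analysis code from the paper): treat each microtubule as a vector between its two endpoints and ask how far its direction deviates from the spindle’s pole-to-pole axis. The vectors and the axis below are made up for the example.

  # Angle (in degrees) between a microtubule vector and the spindle axis
  angle_to_axis <- function(v, axis) {
    cosang <- sum(v * axis) / (sqrt(sum(v^2)) * sqrt(sum(axis^2)))
    ang <- acos(min(max(cosang, -1), 1)) * 180 / pi
    min(ang, 180 - ang)  # polarity along the axis doesn't matter here
  }
  axis <- c(1, 0, 0)                   # idealised pole-to-pole axis
  mts <- matrix(rnorm(300), ncol = 3)  # 100 hypothetical microtubule vectors
  wonkiness <- apply(mts, 1, angle_to_axis, axis = axis)
  summary(wonkiness)  # the spread reports how disorganised the spindle is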

The publication process

The paper is published in Journal of Cell Science and the experience was a great one. Last November, we put up a preprint of this work and left it up for a few weeks. We got some great feedback and modified the paper a bit before submitting it to a journal. One reviewer gave us a long list of useful comments that we needed to address. However, the other two reviewers didn’t think our paper was a big enough breakthrough for that journal. Our paper was rejected*. This can happen sometimes and it is frustrating as an author because it is difficult for anybody to judge which papers will go on to make an impact and which ones won’t. One of the two reviewers thought that because the resolution of SBF-SEM is lower than that of electron tomography, our paper was not good enough. The other one thought that because SBF-SEM will not surpass light microscopy as an imaging method (really!**) and because EM cannot be done live (the cells have to be fixed), it was not enough of a breakthrough. As I explained above, the power is that SBF-SEM sits between these two methods. Somehow, the referees weren’t convinced. We did some more work, revised the paper, and sent it to J Cell Sci.

J Cell Sci is a great journal which is published by Company of Biologists, a not-for-profit organisation who put a lot of money back into cell biology in the UK. They are preprint friendly, they allow the submission of papers in any format and, most importantly, they have a fast-track*** option. This allowed me to send on the reviews we had, including our response to them. They sent the paper back to the reviewer who had the list of useful comments and they were happy with the changes we made. It was accepted just 18 days after we sent it in and was online 8 days later. I’m really pleased with the whole publishing experience with J Cell Sci.


* I’m writing about this because we all have papers rejected. There’s no shame in that at all. Moreover, it’s obvious from the dates on the preprint and on the JCS paper that our manuscript was rejected from another journal first.

** Anyone who knows something about microscopy will find this amusing and/or ridiculous.

*** Fast-track is offered by lots of journals nowadays. It allows authors to send in a paper that has been reviewed elsewhere along with the peer review file. How the paper has been revised in light of those comments is assessed by the Editor and one peer reviewer.

Parallel lines is of course the title of the seminal Blondie LP. I have used this title before for a blog post, but it matches the topic so well.

Notes To The Future

Previously I wrote about our move to electronic lab notebooks (ELNs). This post contains the technical details to understand how it works for us. You can even replicate our setup if you want to take the plunge.

Why go electronic?

Lots and lots of lab books and folders.

Many reasons: I wanted to be able to quickly find information in our lab books. I wanted lab members to be able to share information more freely. I wanted to protect against loss of a notebook. I think switching to ELNs is inevitable and, not only that, I needed to do something about the paper notebooks: my group had amassed 100 in 10 years.

We took the plunge and went electronic. To recap, I decided to use WordPress as a platform for our ELN.

Getting started

We had a Linux box on which I could install WordPress. This involved installing phpMyAdmin, registering a MySQL database and then starting up WordPress. If that sounds complicated, it really isn’t. I simply found a page on the web with step-by-step instructions for my box. You could run this on an old computer or even on a Raspberry Pi; it just has to be on a local network.

Next, I set myself up as admin and then created a user account for each person in the lab. Users can have different privileges, and I set all people in the lab to Author. This means they can make, edit and delete posts. Being an Author is better than the other options: Contributors cannot upload images, and Editors can modify other people’s posts. Obviously, authors being able to delete posts is not acceptable for an ELN, so I removed this capability with a plugin (see below).

I decided that we would all write in the same ELN. This makes searching the contents much easier for me, the PI. The people in the lab were a bit concerned about this because they were each used to having their own lab book. It would be possible to set up a separate ELN for each person, but this would be too unwieldy for the PI, so I grouped everyone together. However, it doesn’t feel like writing in a communal notebook because each Author of a post is identifiable, and so it is possible to look at the ELN of just one user as a “virtual lab book”. To do this easily, you need a plugin (see below).

If we lost the WP installation it would be a disaster, so I set up a backup. This is done locally with a plugin (see below). Additionally, I set up an rsync routine on the box that runs weekly to our main lab server. Our main lab server uses ZFS and is backed up to a further, geographically distinct location. So this is pretty indestructible (if that statement is not tempting fate…). The box has a RAID6 array of disks, but in the case of hardware failure plus corruption and complete loss of the array, we would lose one week of entries at most.

Theme

We tried out a few before settling on one that we liked. We might change and tweak this more as we go on.

The one we liked was called gitsta. It looks really nice, like a GitHub page. It is no longer maintained, unfortunately. Many of the other themes we looked at have really big fonts for the posts, which gives a bloggy look but is not conducive to an ELN.

Two things needed tweaking for gitsta to be just right: I wanted the author name to be visible directly after the title and I didn’t want comments to show up. This meant editing the content.php file. Finally, the style.css file needs a comment header containing the name gitsta-child, to allow it to pick up dependencies from gitsta and to show up in your list of themes to select.

The editing is pretty easy, since there are lots of guides online for doing this. If you just want to download our edited version to try it, you can get it from here (I might make some more changes in the future). If you want to use it, just download it, rename the directory as gitsta-child and then place it in WordPress/wp-content/themes/ of your installation – it should be good to go!

Plugins

As you saw above, I installed a few plugins which are essential for full functionality:

  • My Private Site – this plugin locks off the site so that only people with a login can access the site. Our ELN is secure – note that this is not a challenge to try to hack us – it sits inside our internal network and as such is not “on the internet”. Nonetheless, anyone with access to the network who could find the IP could potentially read our ELN. This plugin locks off access to everyone not in our lab.
  • Authors Widget – this plugin allows the addition of a little menu to the sidebar (widget) allowing the selection of posts by one author. This allows us to switch between virtual labbooks for each lab member. Users can bookmark their own Author name so that they only see their labbook if they want.
  • Capability Manager Enhanced – you can edit rights of each level of user or create new levels of user. I used this to remove the ability to delete posts.
  • BackWPup – this allows the local backup of all WP content. It’s highly customisable and is recommended.

Other plugins which are non-essential-but-useful

  • WP Statistics – this is a plugin that allows the admin to see how many visits the ELN has had that day/week etc. This one works on a local installation like ours. Others will not work because they require the site to be on the internet.
  • WP-Markdown – this allows you to write your posts in md. I like writing in md; nobody else in my lab uses this function.

Gitsta wants to use gust rather than the native WP dashboard, but gust and md were too complicated for our needs, so I uninstalled gust.

Using the ELN

Lab members/users/authors make “posts” for each lab book entry. This means we have formalised how lab book entries are done. We already had a guide for best practice for lab book entries in our lab manual, which translates wonderfully to the ELN. It’s nothing earth-shattering, just that each experiment has a title, aim, methods, results and conclusion (just like we were taught in school!). In a paper notebook this is actually difficult to do because our experiments run for days (sometimes weeks) and many experiments run simultaneously. This means you either have to budget pages in the notebook for each separate experiment, interleave entries (which is not very readable) or write up at the end (which is not best practice). With ELNs you just make one entry for each experiment and update all of them as you go along. Problem solved. Edits are possible, it is easy to see what changes have been made, and you can even roll changes back.

Posts are given a title. We have a system in the lab of initials plus numbers for each experiment. This is used for everything associated with that experiment, so the files are easy to find, the films can be located and databases can cross-reference. The ELN also allows us to add categories and tags. So we have wide-ranging categories (these are set by admin) and tags which can be more granular. Each post created by an author is identifiable as such, even without the experiment code in the title. So it is possible to filter the view to see posts:

  • by one lab member
  • on Imaging (or whatever topic)
  • by date or in a date range

Of course you can also search the whole ELN, which is the thing I need most of all because it gets difficult to remember who did what and when. Even lab members themselves don’t remember that they did an experiment two or more years previously! So this feature will be very useful in the future.

WordPress allows pictures to be uploaded and links to be added. Inserting images makes it easy to show examples of how an experiment went. For data that is captured digitally, this is a case of uploading the file. For things that are printed out or are a physical thing, e.g. western blot films or gel doc pictures, we are currently taking a picture and adding these to the post. In theory we can add hard links to data on our server. This is certainly not allowed in many other ELNs, for security reasons.

In many ways the ELN is no different to our existing lab books. Our ELN is not on the internet and as such is not accessible from home without VPN to the University. This is analogous to our current set up where the paper lab books have to stay in the lab and are not allowed to be taken home.

Finally, in response to a question on Twitter after the previous ELN post: how do we protect against manipulation? Well, previously we followed best practice for paper books. We used hardbound books with numbered pages (ensuring pages couldn’t be removed), Tipp-Ex was not allowed, edits had to be done in a different colour pen and dated, etc. I think the ELN is better in many ways. Posts cannot be deleted, and edits are logged and timestamped. User permissions mean I know who has edited what and when. Obviously, as with paper books, if somebody is intent on deception, they can still falsify their own lab records in some way. In my opinion, the way to combat this is regular review of the primary data and also maintaining an environment where people don’t feel like they should deceive.

The post title is taken from “Notes To The Future” by Patti Smith; the version I have was recorded live in St. Mark’s Church, NYC in 2002, and appears on Land (1975-2002). I thought this was appropriate since a lab notebook is essentially notes to your future self. ELNs are also the future of taking notes in the lab.

The Soft Bulletin: Electronic Lab Notebooks

We finally took the plunge and adopted electronic lab notebooks (ELNs) for the lab. This short post describes our choice of software. I will write another post about how it’s going, how I set it up and other technical details.

tl;dr we are using WordPress as our ELN.

First, so you can understand the choice, here is my wishlist of requirements for the perfect ELN.

  1. Easy-to-use. Allow adding pictures and notes easily.
  2. Versioning (ability to check edits and audit changes)
  3. Backup and data security
  4. Ability to export and go elsewhere if required
  5. Free or low cost
  6. Integration with existing lab systems if possible
  7. Open software, future development
  8. Clarity over who owns the software, who owns the data, and where the information is stored
  9. Can be deployed for the entire lab

There are many ELN software solutions available, but actually very few fulfil all of those requirements. So narrowing down the options was quite straightforward in the end. Here is the path I went down.

Evernote

I have used Evernote as my ELN for over a year. I don’t do labwork these days, but I make notes when doing computer programming, data analysis and writing papers. I also use it for personal stuff. I like it a lot, but Evernote is not an ELN solution for a whole lab. First, there is an issue over people using it for work and for personal stuff. How do we archive their lab documents without accessing other data? How do we pay for it? What happens when they leave? These sorts of issues prevent the use of many of the available ELN software packages for a whole lab. I think many ELN software packages would work well for individuals, but I wanted something to deploy for the whole lab, so that I can easily search and find stuff long after a lab member has left, without having to go into different packages to do this.

OneNote

The next most obvious solution is OneNote from Microsoft. Our University provides free access to this package and so using it would get around any pricing problems. Each lab member could use it with their University identity, separating any problems with work/life. It has some nice features (shared by Evernote) such as photographing documents/whiteboards etc and saving them straight to notes. I know several individuals (not whole labs) using this as their ELN. I’m not a big fan of running Microsoft software on Macs and we are completely Apple native in the lab. Even so, OneNote was a promising solution.

I also looked into several other software packages.

I liked the sound of RSpace, but it wasn’t clear to me who they were, why they wanted to offer a free ELN service, where they would store our data and what they might want to do with it. Last year, the scare that Evernote was going to snoop on users’ data made me realise that when it came to our ELNs, we had to host the data ourselves. I didn’t want to trust a company to do this. I also didn’t want to rely on a company to:

  • continue to do what we sign up for, e.g. provide a free software
  • keep updating the software, e.g. so that macOS updates don’t kill it
  • not sell up to an evil company
  • do something else that I didn’t agree with.

As I saw it, this left one option: self-hosting. And even then, there were only two possibilities.

Use a wiki

This is – in many ways – my preferred solution. Wikis have been going for years and they are widely used. I set one up and made a lab notebook entry. It was great. I could edit it and edits were timestamped. It looked OK (but not amazing). There were possibilities to add tables, links etc. However, I thought that writing the markup to make an entry would be a challenge for some people in the lab. I know that wikis are everywhere and that editing them is simple, but I kept thinking of the project student that comes to the lab for a short project. They need to read papers to figure out their project, they have to learn to clone/run gels/image cells/whatever AND then they also have to learn to write in a wiki? Just to keep a log of what they are doing? For just a short stay? I could see this meaning that the ELN would get neglected and things wouldn’t get documented.

I know other labs are using a wiki as an ELN and they do it successfully. It is possible, but I don’t think it would work for us. I also needed to entice people in the lab to convert them from using paper lab notebooks. This meant something that looked nice.

Use WordPress

This option I did not take seriously at first. A colleague told me two years ago that WordPress would be the best platform for an ELN, and I smiled politely. I write this blog on a wordpress dot com platform, but somehow didn’t consider it as an ELN option. After looking for alternatives that we could self-host, it slowly dawned on me that WordPress (a self-hosted installation) actually meets all of the requirements for an ELN.

  1. It’s easy-to-use. My father, who is in his 70s, edits a website using WordPress as a platform. So any person working in the lab should be able to do it.
  2. Versioning. You can see edits and roll back changes if required. Not as granular as a wiki, but still good.
  3. Backup and data security. I will cover our exact specification in a future post. Our ELN is internal and can’t be accessed from outside the University. We have backup and it is pretty secure. Obviously, self-hosting means that if we have a technical problem, we have to fix it. Although I could move it to new hardware very quickly.
  4. Ability to export and go elsewhere if required. It is simple to pack up an xml and move to another platform. The ubiquity of WordPress means that this will always be the case.
  5. Free or low cost. WordPress is free and you can have as many users as you like! The hardware has a cost, but we have that hardware anyway.
  6. Integration with existing lab systems if possible. We use naming conventions for people’s lab book entries and experiments. Moving to WordPress makes this more formal. Direct links to the primary data on our lab server are possible (not necessarily true of other ELN software).
  7. Open software, future development. Again WordPress is ubiquitous and so there are options for themes and plugins to help make it a good ELN. We can also do some development if needed. There is a large community, meaning tweaking the installation is easy to do.
  8. Clarity over who owns the software, who owns the data, and where the information is stored. It’s installed on our machines and so we don’t have to worry about this.
  9. It can be deployed for the whole lab. Details in the follow-up post.

It also looks good and has a more up-to-date feel to it than a wiki. A screenshot of an innocuous lab notebook entry is shown to the right. I’ve blurred out some details of our more exciting experiments.

It’s early days. I started by getting the newer people in the lab to convert. Anyone who had only a few months left in the lab was excused from using the new system. I’m happy with the way it looks and how it works. We’ll see how it works out.

The main benefits for me are readability and being able to look at what people are doing. I’m looking forward to being able to search back through the entries, as this can be a serious timesuck with paper lab notebooks.

Edit 2017-04-26T07:28:43Z: After posting this yesterday, a few other suggestions came through that you might want to consider.

Labfolder: I had actually looked at this and it seems good, but at 10 euros per user per month I thought it was too expensive. I get that good software solutions have a cost and I am not against paying for good software. I’d prefer a one-off cost (well, of course I’d prefer free!).

Mary Elting alerted me to Shawn Douglas’s lektor-based ELN. Again this ticks all of the boxes I mentioned above.

Manuel Théry suggested ELab. Again, I hadn’t seen this and it looks like it meets the criteria.

The Soft Bulletin is an occasional series of posts about software choices in research. The name comes from The Flaming Lips LP of the same name.