Videotape: From Microscope To Figure

I recently did a webinar for ASCB called “From Microscope To Figure”. For posterity, I am re-posting the webinar here with some additional info.

The webinar

The webinar on YouTube

Useful links

There was a request to share the tutorial I showed (in short form) on making montages in ImageJ.

Tutorial on figure making the quantixed way – hosted by CAMDU

Q&A

I didn’t get time to answer all the questions. So here are the remaining questions and answers.

Any recommended resources to help us work out a segmentation pipeline in Fiji?

I would recommend trying Image > Adjust > Auto Threshold, selecting “Try All” in the dialog box, and examining the results for a few sample images. If one of the automatic methods in Fiji works, then you are good to go. If not, pre-processing (filtering) the images can help to get these methods working.
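The automatic methods in the “Try All” grid each derive a cutoff from the image histogram. As a rough illustration of the idea only (this is not Fiji's code, and the pixel values are invented), here is a minimal Otsu threshold in Python on an 8-bit pixel list:

```python
def otsu_threshold(pixels):
    """Otsu's method: pick the cutoff that maximises between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = 0          # background pixel count so far
    sum_bg = 0.0      # background intensity sum so far
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# a fake bimodal image: dim background plus bright foreground
pixels = [10] * 60 + [200] * 40
t = otsu_threshold(pixels)  # cutoff lands between the two populations
```

“Try All” simply runs every such method and tiles the results so you can judge by eye which one suits your images.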

We have had a lot of success with Labkit and ilastik for classification and segmentation. They are definitely worth trying. Ultimately it depends on your application. You will need to experiment!

Can you share with attendees the papers that were cited in this presentation?

  • Kuge O, Dascher C, Orci L, Rowe T, Amherdt M, Plutner H, Ravazzola M, Tanigawa G, Rothman JE, Balch WE. Sar1 promotes vesicle budding from the endoplasmic reticulum but not Golgi compartments. J Cell Biol. 1994 Apr;125(1):51-65. doi: 10.1083/jcb.125.1.51. PMID: 8138575; PMCID: PMC2120015.
  • Ferrandiz N, Downie L, Starling GP, Royle SJ. Endomembranes promote chromosome missegregation by ensheathing misaligned chromosomes. J Cell Biol. 2022 Jun 6;221(6):e202203021. doi: 10.1083/jcb.202203021. Epub 2022 Apr 29. PMID: 35486148; PMCID: PMC9066052.
  • Küey C, Sittewelle M, Larocque G, Hernández-González M, Royle SJ. Recruitment of clathrin to intracellular membranes is sufficient for vesicle formation. Elife. 2022 Jul 19;11:e78929. doi: 10.7554/eLife.78929. PMID: 35852853; PMCID: PMC9337851.
  • Jonkman J, Brown CM, Wright GD, Anderson KI, North AJ. Tutorial: guidance for quantitative confocal microscopy. Nat Protoc. 2020 May;15(5):1585-1611. doi: 10.1038/s41596-020-0313-9. Epub 2020 Mar 31. PMID: 32235926.
  • Miura K, Nørrelykke SF. Reproducible image handling and analysis. EMBO J. 2021 Feb 1;40(3):e105889. doi: 10.15252/embj.2020105889. Epub 2021 Jan 22. PMID: 33480052; PMCID: PMC7849301.
  • Lord SJ, Velle KB, Mullins RD, Fritz-Laylin LK. SuperPlots: Communicating reproducibility and variability in cell biology. J Cell Biol. 2020 Jun 1;219(6):e202001064. doi: 10.1083/jcb.202001064. PMID: 32346721; PMCID: PMC7265319.

Do you hire biologists who don’t know how to code and how do you recommend they learn?

Yes, I do. It’s probably easier to teach a biologist to code than to teach a coder to be a biologist! We have training in our Center for coding as well as a weekly coding drop-in session to help people with coding problems. My University runs courses as well, but these are not biology focussed. There’s no shortage of resources to learn but it helps to have a problem to solve in your project to motivate you to learn – rather than just learning coding for the sake of it.

Are there communities or forums that openly share code for microscopy image analysis?

I highly recommend https://forum.image.sc

I find segmentation, like you said, to be the most difficult. It's really difficult when using the same settings for different samples, say, for different mutations which show a slightly different spatial behavior/clustering. Can we change segmentation parameters in such cases within the same experiment?

Yes, a fixed setting, e.g. counting pixels over a fixed intensity value, will cause this problem. The automated methods adapt the cutoff to each image, so the parameters do change with these auto methods and that's OK, as long as you know which algorithm is used.

Tricks include segmenting a separate channel that doesn’t change in order to quantify the thing that does; or preprocessing the image to make the segmentation task easier.
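To make the adaptation point concrete, here is a sketch in Python (the two “images” are invented, and the simple mean rule stands in for whichever auto method you choose): the same algorithm yields a different cutoff per image, which is the behaviour you want when samples differ in brightness.

```python
def mean_threshold(pixels):
    # a "mean" auto-threshold: the cutoff is the mean intensity,
    # so it is recomputed for each image rather than fixed in advance
    return sum(pixels) / len(pixels)

dim_cell = [5] * 90 + [60] * 10       # hypothetical dim sample
bright_cell = [20] * 90 + [240] * 10  # hypothetical bright sample

t_dim = mean_threshold(dim_cell)        # 10.5
t_bright = mean_threshold(bright_cell)  # 42.0
```

Both cutoffs separate background from foreground in their own image, even though the absolute values differ; that difference is fine to report, because the algorithm, not an arbitrary number, is what stays constant.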

How do you quantify tissue slice images from a confocal using Fiji?

This really depends on the question. It's important to make sure that tissue slices are imaged at a similar depth and intensity (if penetration of label or light is a limitation).

Are you able to share your imagej scripts and R code?

Yes, all code from our papers is available on GitHub at https://github.com/quantixed and each repo is linked from the paper. My ImageJ scripts (for the quantixed update site) are available at https://github.com/quantixed/imagej-macros .

Is there any beginner tutorial Steve recommends for those who do not know how to code?

CAMDU at Warwick (where I am based) have a bunch of tutorials https://warwick.ac.uk/fac/sci/med/research/biomedical/facilities/camdu/training/

I really liked SoloLearn, a phone app that teaches languages like Python. However, having real problems to solve and practise your skills on is the best way to learn.

Do images have to be presented as a rectangle?  In other words, if you have a round cell can you show a circular crop of your image to save space and reduce black space?

I advocate square images for montages, but I think it’s good to (literally) think outside the box! I’m not sure how much space it would save though. It’s true that there’s a lot of information-less space around a circular cell, however my concern would be: what are the authors hiding by cropping the cell so tightly?

Hi Steve, thanks so much for this. Any thoughts on the use of colocalization coefficients, such as Pearson correlation coefficient, or Li’s intensity correlation coefficient, as opposed to more manual threshold-based counting, such as how many green particles are red?

Colocalization is one of the toughest problems in image analysis. It really depends on what you'd like to know and the resolution of your images. Personally I prefer object-based colocalization, where segmented objects are judged to overlap or not. I see colocalization coefficients poorly applied in some papers, but for some applications these can work well.

I use the Digital Cell book as a great source of information for new students. Unfortunately, when I bought the hard copy, I didn’t think to buy the ebook version. Any tips on getting a lab discount for the ebooks so that they can be shared with the students?

Thank you. It’s great to know that the book is useful. A lab discount or class purchase sounds like a good idea. I will mention it to the publisher.

Is Fiji good for performing deconvolution of confocal images?

Traditionally, deconvolution has been done using closed-source software. We recommend DeconvolutionLab2, which can run in Fiji; there are other options using CLIJ.

What are your thoughts about pixel shifts to display co-localization?

I like this method although it depends on the density of objects. For those that don’t know, if an image has two channels and the objects are, let’s say small spots, one channel can be shifted up and left by a few pixels to show that the spots were overlapping. At high spot densities, this doesn’t work because the spots start to overlap with different ones, but otherwise this is a neat trick.
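As a sketch of the trick in Python (plain nested lists standing in for one channel; the spot positions are invented, and in Fiji the Translate command under Image > Transform does the same job):

```python
def shift_channel(img, dy, dx, fill=0):
    """Translate a 2-D image (list of rows) by (dy, dx); edges fill with `fill`."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = img[y][x]
    return out

# two perfectly colocalized spots at (4, 4) in a 9 x 9 image
green = [[0] * 9 for _ in range(9)]
green[4][4] = 255
red = [row[:] for row in green]

# shift the red channel up and left by 2 px so both spots show side by side
red_shifted = shift_channel(red, -2, -2)
```

After the shift, the red spot sits diagonally adjacent to its green partner in the merged display, making the original overlap obvious without relying on colour blending.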

What is the strategy for publication or presentation purposes? Do you choose an 8-bit image, or do you save a 16-bit image as PNG?

Most software, and certainly publishers, will only use 8-bit images (or 24-bit RGB). We output our composite figures at 300 dpi as white-background PNG files when including them in a manuscript. This is more than detailed enough for online display (paper or presentation/poster). For submission to a journal, we send what is required, usually TIFF at 300 dpi or the Illustrator file.

Talking about not changing the image data – isn’t it sometimes necessary to e.g. subtract background before doing any measurements?

Yes, I skimmed over this in the talk. As long as this is in your script, so it's reproducible, this is fine, and in fact best practice. We typically measure a background ROI in the raw image and subtract that from the measurement of interest.
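A minimal sketch of that workflow in Python (the image, ROI coordinates, and intensities are invented for illustration; in practice this lives in an ImageJ macro or script so it is reproducible):

```python
def mean_in_roi(img, y0, x0, y1, x1):
    # mean pixel value inside a rectangular ROI (half-open bounds)
    vals = [img[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(vals) / len(vals)

# synthetic 10 x 10 image: uniform background of 12, a bright 3 x 3 "cell"
img = [[12] * 10 for _ in range(10)]
for y in range(4, 7):
    for x in range(4, 7):
        img[y][x] = 112

cell = mean_in_roi(img, 4, 4, 7, 7)        # 112.0
background = mean_in_roi(img, 0, 0, 3, 3)  # 12.0
corrected = cell - background              # 100.0
```

The key point is that the subtraction happens on the measurements, scripted and recorded, while the raw image on disk is never altered.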

What’s your preferred font for displaying figures?

Helvetica! Seriously, it should be sans serif because those typefaces are easier to read for short text. I stick to 12 point bold for A, B, C etc. and we aim for 10 or 9 point for other labels.

Thanks for the talk! What are your thoughts on using MATLAB for processing and analyzing images? You mainly covered imagej and R, but I have also seen people use MATLAB. What are some advantages and disadvantages to using either of these options? Is it sample specific?

MATLAB is great but it is licensed software, so it is hard for others to reuse your code. This is why I advocate ImageJ and R. The other reason is that there is a large community of users for ImageJ and R, so getting help is easier and finding the tools you require is straightforward.

If there are samples which need to be imaged with different exposure times and/or intensities, how can we keep the acquisition settings the same for all the samples?

If you use settings where the minimum intensity and maximum intensity samples can be imaged with the same parameters, then all is good. So it’s worth getting that set up properly at the start of the session. If you have samples that are so different that this is not possible, you will need to image your control samples using the different settings so that a comparison is possible regardless of the sample.

Is it possible to give the link of your book in the chatbox?

http://cshlpress.com/link/digitalcell.htm

This was very helpful! I am learning to code and I do have to say as a scientist, I think R ggplot is great to use for quantification figures. I feel it is easy to learn.

Thanks. It is great. Very easy to quickly make variations on a plot and lots of help available online if you get stuck.

The post title comes from “Videotape” by Radiohead off their “In Rainbows” album.