Emergence of a geometric pattern of cell fates from tissue-scale mechanics in the Drosophila eye
Data files
Mar 22, 2022 version files, 2.16 GB
- CP_epoch11.pth
- scabrous_strongMutant_data.mat
- scabrous_weakMutant_data.mat
- wildtype1_data.mat
- wildtype2_data.mat
Abstract
Pattern formation of biological structures involves the arrangement of different types of cells in an ordered spatial configuration. In this study, we investigate the mechanism that patterns the Drosophila eye into a precise triangular grid of photoreceptor clusters called ommatidia. Previous studies led to a long-standing biochemical model whereby a reaction-diffusion process, templated by recently formed ommatidia, propagates a molecular prepattern across the eye epithelium. Here, we find that the templating mechanism is instead mechanical in origin: newly born columns of ommatidia serve as a template that spatially patterns cell flows, moving the cells of the epithelium into position to form each new column of ommatidia. Cell flow is generated by a pressure gradient caused by a narrow zone of cell dilation positioned precisely behind the growing wavefront of ommatidia. The cells of the newly formed ommatidial lattice are immobile, deflecting and focusing the flow of other cells. Thus, the self-organization of a regular pattern of cell fates in an epithelium is mechanically driven.
Methods
Samples were imaged with laser confocal microscope systems at room temperature, with z-stacks collected at precise 5 min intervals. We used Leica SP5 (wildtype replicate 1), Zeiss LSM 800 without Airyscan (wildtype replicate 2), and Leica SP8 (scabrous strong and weak mutants) systems.

Wildtype replicate 1 was imaged on the Leica SP5 with a detector size of 1024 x 1024 using the 100x/1.44 NA objective, 3x internal zoom, and the pinhole set to 1 airy unit, resulting in a voxel size of 0.0505 x 0.0505 x 0.5003 µm and an xy field-of-view of 51.72 x 51.72 µm. The sample was scanned at 8000 Hz using the resonance scanner with bidirectional scanning and 16x line averaging. A total depth of 29.71 µm was collected using 60 steps of 0.5 µm each. The E-cadherin-GFP reporter was excited using the 488 nm channel of the argon laser (set to 25%) at 15% and collected on a HyD (GaAsP) detector with 300% gain.

Wildtype replicate 2 was imaged on the Zeiss LSM 800 with a detector size of 634 x 634 using a 63x/1.2 NA objective, 2x internal zoom, and the pinhole set to 1.29 airy units, resulting in a voxel size of 0.08 x 0.08 x 0.7 µm and an xy field-of-view of 50.71 x 50.71 µm. The sample was scanned at 1000 Hz using the point scanner with bidirectional scanning and 4x line averaging. A total depth of 28.7 µm was collected in 42 steps of 0.683 µm each. The E-cadherin-GFP reporter was excited using the 488 nm laser at 0.8% and collected on a GaAsP PMT detector using 652 V of gain.

Both scabrous datasets were collected on the Leica SP8 with a detector size of 704 x 704 using a 63x/1.4 NA oil immersion objective, 2x internal zoom, and the pinhole set to 1 airy unit, resulting in a voxel size of 0.0829 x 0.0829 x 0.4 µm and an xy field-of-view of 61.60 x 61.60 µm. A total depth of 9.2 µm was collected in 24 steps of 0.38 µm each. The E-cadherin-GFP reporter was excited using the 488 nm diode laser at 0.8% and collected on a HyD (GaAsP) detector with 200% gain.
Note that the different wildtype replicates were imaged using different microscope systems and yet had highly reproducible behaviors for a wide spectrum of tissue and cellular features. Moreover, the scabrous mutant discs also exhibited highly reproducible behaviors similar to wildtype. Thus, different imaging platforms had little or no effect on the biological behaviors that we analyzed. All samples experienced photobleaching, evidenced by diminishing GFP signal over the course of imaging, and most likely experienced phototoxicity. We did not systematically test the effect of phototoxicity on the tissue. Generally, bleaching of the GFP reporter leveled off after about 3 or 4 hours of imaging. Laser powers were selected to generate sufficiently bright signal for segmentation after the point when GFP bleaching stabilized. It was not necessary to excite the GFP reporter up to the point of saturating pixel brightness, as is often done with fixed samples. Instead, we found that exciting the GFP reporter such that the signal was easily visible by eye was sufficient for cell segmentation.
Imaging resolution was selected to be sufficient for accurate cell segmentation within the morphogenetic furrow (MF). Cells in the MF are maximally constricted in their apical domains, with the smallest cell diameters being around 0.5 µm. The imaging resolution of wildtype replicate 1 resulted in ~10 pixels along a 0.5 µm cell edge, whereas the imaging resolution of wildtype replicate 2 and both scabrous datasets led to ~6 pixels along a 0.5 µm edge. While we were able to segment wildtype replicate 2 and the scabrous datasets, wildtype replicate 1 was considerably easier to segment because of the higher resolution. We advise that future datasets be collected at this higher resolution.
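As a quick worked check of these figures, the number of pixels spanning a 0.5 µm cell edge follows directly from each dataset's xy pixel size (values taken from the imaging parameters above):

```matlab
0.5 / 0.0505   % ≈ 9.9 px per 0.5 µm edge, wildtype replicate 1 (~10 px)
0.5 / 0.08     % ≈ 6.3 px per 0.5 µm edge, wildtype replicate 2
0.5 / 0.0829   % ≈ 6.0 px per 0.5 µm edge, scabrous datasets
```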
All samples were imaged about halfway between the margin of the eye disc and the equator. We did not specifically image dorsal-only or ventral-only regions. We found this region to be the most mechanically stable. Often, the equatorial region of the disc would slowly sink basally over the course of imaging, requiring extremely large z-volumes to ensure the apical surface remained within the z-stack over the course of the movie. This resulted in excessive bleaching and diminishing signal intensity as the distance between the apical surface and cover slip increased. The average spacing of ommatidia changes as a function of dorsal-ventral (DV) position within the tissue, with tighter spacing near the equator and larger spacing near the margin. Imaging at the same relative DV position in discs from animals of approximately the same age was essential for keeping the spacing of ommatidia similar across datasets.
Image processing
We created an image processing pipeline to segment and track all cells within a chosen field of view (Figure S1). This pipeline involved the following steps, in order: 1) image collection, 2) surface projection, 3) image registration, 4) cell segmentation, and 5) cell tracking. These steps are explained in more detail below.
Surface detection and projection
Cells were fluorescently labeled using an E-cadherin-GFP fusion protein that localizes to the adherens junction (AJ) surface of the eye disc epithelium. The E-cadherin-GFP protein also labels cells in the peripodial membrane, an epithelium of squamous cells located apical to the disc epithelium and separated from the disc epithelium by a fluid-filled lumen (Figure S1). In order to obtain a 2D projection of the AJ plane with signal-to-noise that permits downstream image analysis, and without inclusion of the peripodial membrane signal, we used the open-source MATLAB-based software ImSAnE (Image Surface Analysis Environment) (Heemskerk and Streichan, 2015).
The first step in the surface detection workflow is the surfaceDetection module. We used the MIPDetector class of the planarDetector method of the surfaceDetection module. The MIPDetector and planarEdgeDetector classes are useful for finding approximately planar surfaces, such as the AJ surface in imaginal discs. The MIPDetector relies on finding the brightest z-position for every xy coordinate in the Gaussian derivative of the 3D image stack (i.e., after smoothing with a Gaussian filter). ImSAnE does this in a rescaled version of the 3D image stack in which the pixels have been interpolated so that the physical voxels they represent have a unit aspect ratio (our image voxels were anisotropic, with z-size ~10x larger than xy size; see section above). Because cells are more densely packed within the AJ surface than in the peripodial membrane surface, the AJ surface creates a more continuous and brighter structure in the Gaussian-filtered 3D stack, which allows for detection via the MIPDetector method. This method additionally relies on exclusion of outliers (with regard to z-position) by looking at the distribution of z-positions within local xy neighborhoods. Representative parameters for the MIPDetector are sigma 5, channels 1, zdir -3, maxIthresh 0.05, summedIthresh 0.5, sigZoutliers 1, scaleZoutliers 50, and seedDistance 20. For some time points, it proved impossible to get the MIPDetector to specifically recognize the AJ surface of the disc epithelium without inclusion of portions of the peripodial membrane. This happened at xy coordinates where the dark interiors of large cells (often dividing cells) in the AJ surface aligned with bright portions of the peripodial membrane signal. For these time points, we found that manually masking the peripodial membrane signal using ImageJ allowed us to isolate the AJ surface in ImSAnE.
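For orientation, here is a minimal sketch of this detection step in MATLAB, assuming the ImSAnE tutorial conventions (xp is an ImSAnE project.Experiment object; exact method names may differ between ImSAnE versions); the parameter values are the representative ones listed above:

```matlab
% Rescale the anisotropic stack so interpolated voxels have a unit aspect ratio.
xp.rescaleStackToUnitAspect();

% Representative MIPDetector parameters reported above.
detectOptions = struct( ...
    'sigma', 5, 'channels', 1, 'zdir', -3, ...
    'maxIthresh', 0.05, 'summedIthresh', 0.5, ...
    'sigZoutliers', 1, 'scaleZoutliers', 50, 'seedDistance', 20);
xp.setDetectOptions(detectOptions);

% MIPDetector: brightest z-position per xy coordinate, with outlier exclusion.
xp.detectSurface();
```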
The next step in the surface detection workflow is the surfaceFitting module. We used the tpsFitter method of the surfaceFitter class of the surfaceFitting module. Briefly, the tpsFitter fits thin plate splines to the surface produced by the MIPDetector, which is stored as a 3D point cloud. This method only works for surfaces found using the MIPDetector or planarEdgeDetector classes of the planarDetector method, which have a single z point for every xy coordinate. We set the smoothing parameter to 1000 and used a grid size of 80 x 80 for all datasets. The typical image size ranged from 600 x 600 to 1024 x 1024 pixels. This smaller grid size was chosen because it significantly sped up the spline-fitting process and did not affect the quality of the fit. ImSAnE also allows the surface to be shifted in the z direction using the fitOptions shift method. We shifted the surface by ~10 pixels in the interpolated z-space, corresponding to ~0.5 µm in physical space, in order to intersect the denser region of signal. The final step of the surfaceFitting module is creating a surface pullback using the pullbackStack method. We additionally used the onionOpts class of the pullbackStack method, which creates a ‘z-stack’ around the tpsFitter surface with a specified number of slices at a specified spacing in the interpolated z-stack space; we used 21 slices with a spacing of 1 for all datasets. The onionOpts class generates a max intensity projection of this ‘z-stack’ around the surface. This MIP was used as the final 2D projection of the image for segmentation and further analysis. Using onionOpts was integral to the surface projection workflow, as there are always regions where the surface is not perfectly fit to the brightest portions of AJ signal, and onionOpts helps capture the entirety of the signal in these regions.
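A companion sketch for the fitting and pullback steps, under the same assumptions as above (the 'shift' field and the pullbackStack signature follow the ImSAnE tutorials and may differ between versions):

```matlab
% Thin plate spline fit (tpsFitter), with the shift described above:
% ~10 pixels in interpolated z-space (~0.5 µm) toward the denser signal.
fitOptions = struct('smoothing', 1000, 'gridSize', [80 80], 'shift', 10);
xp.setFitOptions(fitOptions);
xp.fitSurface();
xp.generateSOI();

% Pull back a 21-slice 'z-stack' around the fitted surface (spacing 1 in
% interpolated z) and keep its max intensity projection as the 2D image.
onionOpts = struct('nLayers', 21, 'layerDistance', 1, 'makeIP', 'MIP');
xp.SOI.pullbackStack(xp.stack, xp.currentROI, xp.currentTime, onionOpts);
```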
ImSAnE also includes a surfaceAnalysis module with a Manifold2D class that provides a map between the 3D surface and the 2D projection, along with tools for making distortion-free measurements of areas and paths along the surface. However, we did not use this module for our analysis, because the image registration we used to handle tissue drift (see below) created a new reference frame for the segmented images that is distinct from the reference frame of the surface projection. We tested the error resulting from distance measurements made in the 2D projection, without regard for the 3D curvature of the surface, against distance measurements made using the surfaceAnalysis module. Curvature is greatest along the anterior-posterior (AP) axis, with little curvature along the dorsal-ventral (DV) axis. For distance measurements along the AP axis around the length scale separating cell centroids (~1-2 µm), which is also approximately the distance cells move within the morphogenetic furrow (MF) over 1 hour, we found that disregarding 3D curvature introduced a ~1.5% error compared to making the same measurements with ImSAnE’s surfaceAnalysis module (data not shown). Measurements made in the DV direction, where there is very little curvature, had only a ~0.1% error. The curvature of the surface is greatest in the AP direction around the location of the MF, where we observe periodic cell flows in the anterior direction. Therefore, this error affects our measurements of the velocity of all cells in the MF region and of the magnitude of these periodic cell flows. However, it does not affect our analysis of the DV organization of these cell flows, because there is very little measurement error along that axis.
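This error estimate amounts to comparing projected and 3D distances between pairs of surface points; a minimal sketch, assuming hypothetical variables p and q holding [x y z] surface positions in µm:

```matlab
d2D = norm(p(1:2) - q(1:2));   % distance measured in the flat 2D projection
d3D = norm(p - q);             % distance including the z-profile of the surface
fracErr = (d3D - d2D) / d3D;   % fractional error from ignoring curvature
```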
Image registration and alignment of the AP/DV axes
After creating 2D surface projections using ImSAnE, we next eliminated tissue drift using image registration. The ex vivo tissue had a tendency to drift in the anterior direction while imaging. This was often most severe towards the start of the ex vivo culture. The drift was sometimes so severe that it required adjustment of the imaging field-of-view over the course of imaging, as with wildtype replicate 1, to prevent the region-of-interest near the morphogenetic furrow (MF) from drifting out of the field-of-view. In order to eliminate the effect of tissue drift from our analysis of cell movement, we tested a number of different intensity- and feature-based registration algorithms in MATLAB and ImageJ and found that the Linear Stack Alignment with SIFT plugin for FIJI (Schindelin et al., 2012) produced the most stable movie, i.e., the smoothest movie with the least amount of random jittering between consecutive time points. This plugin is a Java implementation of the Scale-Invariant Feature Transform method (Lowe, 2004). Prior to registration, we padded the images with zeros in order to create a buffer region so that no portion of the image would be translated out of the field-of-view during the registration process. We registered consecutive time points in chronological order using a Rigid transformation and the following parameters. For the Scale Invariant Interest Point Detector, we used a 1.6 pixel kernel for the Gaussian blur, 3 steps per octave, a 64 pixel minimum image size, and a 1024 pixel maximum image size. For the Feature Descriptor, we used a feature descriptor size of 4, 8 feature descriptor orientation bins, and a closest/next-closest ratio of 0.92. For the Geometric Consensus Filter, we used a maximum alignment error of 10 pixels and an inlier ratio of 0.05.
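The zero-padding step is straightforward; a minimal MATLAB sketch (the pad width here is an assumed value, chosen to exceed the largest expected per-frame translation):

```matlab
pad = 100;                                       % buffer width in pixels (assumed)
padded = padarray(frame, [pad pad], 0, 'both');  % Image Processing Toolbox
```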
After removing tissue drift with image registration, the movie was rotated such that the MF was parallel to the y-axis of the image. Since the MF is also parallel to the dorsal-ventral (DV) axis, this ensured that the DV axis was aligned with the y-axis and the anterior-posterior (AP) axis with the x-axis.
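A minimal sketch of this alignment step, assuming a hypothetical variable theta holding the measured angle (in degrees) between the MF and the image y-axis:

```matlab
% Rotate each frame so the MF (and hence the DV axis) runs along the y-axis.
aligned = imrotate(frame, theta, 'bilinear', 'crop');
```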
After creating 2D surface projections using ImSAnE and eliminating tissue drift with image registration, the final pre-processing steps before analysis were cell segmentation and tracking. Cell segmentation was achieved using a combination of a convolutional neural network (CNN) for pixel classification and MATLAB scripts for cell detection and tracking. Cell tracking relied on a MATLAB implementation of the Munkres assignment algorithm (Cao, 2021; Kuhn, 1955). Our code is publicly available (https://github.com/K-D-Gallagher/eye-patterning).
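To illustrate the assignment step, here is a minimal sketch of frame-to-frame tracking by linear assignment on centroid distances; MATLAB's built-in matchpairs (R2019a+) stands in here for the File Exchange Munkres implementation cited above, and the variable names and gating cost are assumptions:

```matlab
% Cost matrix: pairwise distances between cell centroids in frames t and t+1.
C = pdist2(centroidsT, centroidsT1);   % Statistics and Machine Learning Toolbox

% Solve the assignment; costUnmatched gates how far a cell may move per frame.
costUnmatched = 5;                     % assumed value, in the same units as C
M = matchpairs(C, costUnmatched);      % each row pairs a cell in t with one in t+1
```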
We trained a CNN to classify pixels as either cell edges or background/cell interior (https://github.com/K-D-Gallagher/CNN-pixel-classification) (Figure S1). We did this using a PyTorch package that provides transfer learning via pre-trained encoders paired with a variety of untrained decoder architectures that can be trained for a specific pixel classification task (Yakubovskiy, 2020). This allowed us to easily explore a variety of CNN architectures and find the most accurate one for our data. We used a watershed transform to detect cells from the CNN pixel classification output and determined that our CNN was capable of accurately segmenting ~99.5% of cells relative to a manually curated ground truth dataset (see below). However, when we attempted to track cells in datasets with ~0.5% error in cell segmentation, we were only able to accurately track ~80% of cells relative to our ground truth data. Errors in cell segmentation compound geometrically into errors in cell tracking. Therefore, we developed custom MATLAB software to manually correct errors in segmentation. This software uses errors in cell tracking to discover the underlying errors in segmentation. Errors in cell segmentation were corrected using this software until no further errors in cell tracking could be detected. After manually correcting cell segmentation, ~35% of segmented cells could be tracked through the entire course of the movie. The remainder of tracked cells either appeared/disappeared over the boundary of the field-of-view (FOV) (~55% of segmented cells) or appeared/disappeared within the interior of the FOV (~10% of segmented cells). We surmised this latter category to be cell birth and death events (see below). In sum, we could account for the behavior of 100% of segmented cells as either 1) being tracked throughout the entire duration of the movie (~35% of all segmented cells), 2) biological birth/death events (~6% / ~4% of all segmented cells), or 3) appearing/disappearing over a boundary of the FOV (~55% of all segmented cells).
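A minimal sketch of the watershed-based cell detection step, assuming a hypothetical variable edgeProb holding the CNN's per-pixel edge-probability map:

```matlab
smoothed = imgaussfilt(edgeProb, 1);  % light smoothing limits over-segmentation
L = watershed(smoothed);              % catchment basins become candidate cells
isEdge = (L == 0);                    % label 0 marks ridge pixels (cell edges)
```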
There are no publicly available datasets for training CNNs to segment fluorescently labeled epithelial cells. Therefore, to train our CNN, we generated our own ground truth training set. This was done using Ilastik, an open-source, machine learning (random forest classifier) based image processing software that offers segmentation, classification, and tracking workflows (Berg et al., 2019). Mistakes remaining after pixel classification in Ilastik were corrected by hand using the above-mentioned MATLAB software. Relative to this hand-corrected ground truth dataset, Ilastik was capable of accurately segmenting ~93% of cells (compared to ~99.5% with our CNN). The initial manually corrected ground truth training dataset for our CNN comprised approximately 120 images (one wildtype replicate) and progressively grew as new datasets were segmented and manually corrected. We also obtained confocal data of the Drosophila wing imaginal disc and thorax, generously shared by Ken Irvine and Yohanns Bellaïche, respectively. Including these data in our training set not only increased the volume of the training library, which increases CNN model accuracy, but also increased the variability of the training dataset, which confers greater generalizability to new datasets not represented in the training data. Our trained CNN model is publicly available (https://github.com/K-D-Gallagher/CNN-pixel-classification).
The entire FOV was not segmented, in order to reduce the amount of manual correction of segmentation errors required, as this was the rate-limiting step in our pipeline. We only segmented within ~20 µm of the location of the morphogenetic furrow (MF), as this was our region-of-interest for analysis. Because the MF moves towards the anterior edge of the FOV, the number of segmented cells anterior to the MF progressively diminishes over the course of the movie. For example, the segmented FOV of wildtype replicate 1 begins with 732 cells (compared to ~1000 cells in the total image, including non-segmented cells) and ends with 458 segmented cells at the last frame of the movie; this reduction in segmented cells occurs primarily anterior to the MF.
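A minimal sketch of restricting segmentation to this band, assuming hypothetical variables mfX (MF x-position in pixels, per time point) and pxSize (xy pixel size in µm):

```matlab
halfWidth = round(20 / pxSize);   % 20 µm half-width converted to pixels
bandMask = false(size(im));       % im: the 2D projection for this frame
bandMask(:, max(1, mfX-halfWidth):min(end, mfX+halfWidth)) = true;
```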
Usage notes
No missing values