Annotated spiral ganglion neuron training data for object detection
Data files
Jun 09, 2025 version (1.94 GB)
- README.md (2.06 KB)
- sgn_training_data.zip (1.94 GB)
Abstract
Tissue clearing and light-sheet fluorescence microscopy were applied for 3D profiling of intact cochleae. However, spiral ganglion neurons (SGNs) remain relatively understudied compared with hair cells and supporting cells, especially in large animal models. Here we 1) introduced collagenase treatment into the current tissue clearing protocol to enhance uniform antibody staining of SGNs within the pig cochlea, and 2) adopted a deep learning object detection model to locate and count SGNs in large 3D datasets via Spiner (Spiral ganglion neuron profiler). This dataset contains the training, validation, and test data for the deep learning object detection model (RetinaNet) used in the Spiner workflow. Bounding boxes of SGNs are annotated in the dataset.
Training, validation, and test data for RetinaNet model training, with annotated bounding boxes for spiral ganglion neurons in fluorescence images.
Description of the data and file structure
This is the annotated data used for training the RetinaNet model in the Spiner workflow. The trained RetinaNet model was used for the automated detection of spiral ganglion neurons in fluorescence images.
The images in this dataset were acquired with light-sheet fluorescence microscopes. Spiral ganglion neurons (SGNs) were visualized either by immunostaining with PGP9.5 or TuJ1 antibodies or in the autofluorescence channel acquired with 488 nm illumination.
The dataset contains three folders of images (for training, validation, and testing, respectively), bounding box annotations stored in three corresponding CSV files, and a `classes_0.csv` file specifying the class label used for model training. Details are given in the table below.
| Split | Image folder | Annotation CSV |
| --- | --- | --- |
| Train | sgn-train-20231006 | sgn-train.csv |
| Validation | sgn-val-20230927 | sgn-val.csv |
| Test | sgn-test-20240109 | sgn-test.csv |
In the annotation CSV files, each line contains one annotation (a bounding box) for an image, formatted as: `path/to/image,x1,y1,x2,y2,class_name`.

- `path/to/image`: relative path to the image
- `x1,y1`: top-left coordinates of the rectangular bounding box
- `x2,y2`: bottom-right coordinates of the bounding box
- `class_name`: the object label, which is ‘SGN’ throughout this dataset

When an image does not contain any labeled objects, `x1`, `y1`, `x2`, `y2`, and `class_name` are left empty and the annotation is: `path/to/image,,,,,`.
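For reference, the sketch below shows one way to read an annotation CSV with Python's standard library, grouping boxes by image and handling the empty placeholder rows. The file name `sgn-train.csv` and the assumption of integer pixel coordinates are illustrative, not prescribed by the dataset.

```python
import csv

def load_annotations(csv_path):
    """Map each image path to its list of (x1, y1, x2, y2, class_name) boxes."""
    boxes = {}
    with open(csv_path, newline="") as f:
        for image_path, x1, y1, x2, y2, class_name in csv.reader(f):
            boxes.setdefault(image_path, [])   # keep images that have no labeled objects
            if class_name:                     # skip the all-empty placeholder rows
                boxes[image_path].append((int(x1), int(y1), int(x2), int(y2), class_name))
    return boxes

annotations = load_annotations("sgn-train.csv")
print(len(annotations), "images,", sum(len(v) for v in annotations.values()), "boxes")
```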
The `classes_0.csv` file provides the class-to-index mapping used for model training; it contains a single line: `SGN,0`.
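A minimal sketch for reading this mapping into a dictionary, assuming `classes_0.csv` is in the working directory:

```python
import csv

# Read the class-to-index mapping; one "name,index" row per class.
with open("classes_0.csv", newline="") as f:
    class_to_index = {name: int(index) for name, index in csv.reader(f)}

print(class_to_index)  # expected for this dataset: {'SGN': 0}
```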
Code/Software
The dataset can be used directly to train a RetinaNet object detection model with fizyr/keras-retinanet (Python); the annotation and class CSVs follow that library's CSV dataset format.
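As a rough illustration (not part of the dataset itself, and not the authors' exact training script), the snippet below sketches how these CSVs could be passed to keras-retinanet's CSV generator and a RetinaNet model. Exact calls and the training loop vary across keras-retinanet and TensorFlow releases, the repository also ships a `retinanet-train` command-line script that consumes the same CSV format, and all paths and hyperparameters here are assumptions.

```python
# A hedged sketch of training with fizyr/keras-retinanet's Python API.
from keras_retinanet import losses, models
from keras_retinanet.preprocessing.csv_generator import CSVGenerator

# The CSV generators consume the annotation format described above.
train_generator = CSVGenerator("sgn-train.csv", "classes_0.csv", base_dir=".")
val_generator = CSVGenerator("sgn-val.csv", "classes_0.csv", base_dir=".")

# RetinaNet with a ResNet-50 backbone; this dataset has a single class (SGN).
model = models.backbone("resnet50").retinanet(num_classes=train_generator.num_classes())
model.compile(
    loss={"regression": losses.smooth_l1(), "classification": losses.focal()},
    optimizer="adam",
)

# Recent keras-retinanet generators are Keras Sequences and can be passed to fit();
# older releases use fit_generator() instead.
model.fit(train_generator, validation_data=val_generator, epochs=50)
```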