Photographs of 15-day wound closure progress in C57BL/6J mice
Cite this dataset
Yang, Hsin-ya; Bagood, Michelle; Carrion, Hector; Isseroff, Rivkah (2022). Photographs of 15-day wound closure progress in C57BL/6J mice [Dataset]. Dryad. https://doi.org/10.25338/B84W8Q
Evaluating and tracking wound size is a fundamental part of the wound assessment process: accurate location and size estimates enable proper diagnosis and effective treatment. Traditionally, laboratory wound healing studies include a collection of images captured at uniform time intervals showing the wounded area and the healing process in the test animal, often a mouse. These images are then manually inspected to determine key metrics relevant to the study, such as wound size progress. However, this task is time-consuming and laborious, and defining the wound edge is subjective and can vary from one individual to another, even among experts. Furthermore, as our understanding of the healing process grows, so does the need to track these key factors efficiently and accurately at high throughput (e.g., over large-scale and long-term experiments). In this study, we therefore develop a deep learning-based image analysis pipeline that takes in non-uniform wound images and extracts relevant information such as the location of interest, wound-only image crops, and wound periphery size over time. In particular, our work focuses on images of wounded laboratory mice, which are widely used for translationally relevant wound studies, and leverages a commonly used ring-shaped splint present in most images to predict wound size. We apply the method to a dataset that was never meant to be quantified and thus presents many visual challenges. In addition, the dataset was not intended for training deep learning models and so is relatively small, with only 256 images. We compare results to those of expert measurements and demonstrate preservation of information relevant to predicting wound closure despite machine-to-expert and even expert-to-expert variability. The proposed system produced high-fidelity results on unseen data with minimal human intervention.
Furthermore, the pipeline still produces acceptable wound-size estimates as long as fewer than 50% of the images are missing their reference objects.
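The splint serves as a scale reference: since its 16 mm outer diameter is known, a wound area measured in pixels can be converted to physical units. The helper below is an illustrative sketch of that conversion, not the released pipeline code; the function and variable names are hypothetical.

```python
# Hypothetical sketch: converting a wound area measured in pixels to mm^2
# using the splint's known 16 mm outer diameter as a scale reference.
# Names below are illustrative, not from the published pipeline.

def pixels_per_mm(splint_diameter_px: float, splint_diameter_mm: float = 16.0) -> float:
    """Scale factor derived from the detected splint's outer diameter."""
    return splint_diameter_px / splint_diameter_mm

def wound_area_mm2(wound_area_px: float, scale_px_per_mm: float) -> float:
    """Convert a pixel-area measurement to square millimetres."""
    return wound_area_px / (scale_px_per_mm ** 2)

# Example: a splint spanning 320 px gives 20 px/mm, so a wound covering
# 12,000 px^2 corresponds to 30 mm^2.
scale = pixels_per_mm(320.0)
area = wound_area_mm2(12000.0, scale)
```

Because the scale is recomputed per image from the detected splint, the conversion tolerates variation in camera distance between photos.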
The collection of the wound photo dataset is described in J Invest Dermatol. 2021 Apr;141(4S):1071-1075.e4 (https://doi.org/10.1016/j.jid.2020.10.022). Two sterile silicone splints (10 mm inner diameter, 16 mm outer diameter, 1.6 mm thick) were glued to shaved back skin on either side of the spine, and 6-mm excisional, splinted wounds were generated with a biopsy punch and surgical microscissors on young (12-14 weeks old) and aged (22-24 months old) C57BL/6J mice (The Jackson Laboratory). A 16-mm circular plastic coverslip was applied on top of the splint, and the entire wound area was sealed with a transparent, semipermeable dressing (Tegaderm). The animals were housed singly after wounding, and the raw daily wound photos were captured with a cell phone at a fixed distance of 12 cm from day 0 (the surgery day) to day 15 (the experimental endpoint). Detection and localization of the wounded area in the photos were performed by a YOLOv3-based object detection algorithm trained on the dataset. Image cropping was done by an algorithm that reads in the output annotations and crops around the detected midpoint of the wound of interest (https://doi.org/10.1371/journal.pcbi.1009852).
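The crop step described above can be sketched as follows, assuming the detector emits YOLO-style annotation lines (class, x-center, y-center, width, height, all normalized to [0, 1]). This is an illustrative reimplementation under those assumptions, not the code released with the cited paper, and the 224-px crop size is a placeholder.

```python
# Minimal sketch of cropping a fixed-size window around the detected
# wound midpoint, assuming YOLO-format annotations. Illustrative only.
import numpy as np

def crop_around_midpoint(image: np.ndarray, annotation_line: str,
                         crop_size: int = 224) -> np.ndarray:
    """Crop a square patch centred on the detected wound midpoint."""
    h, w = image.shape[:2]
    _, x_c, y_c, *_ = annotation_line.split()
    cx, cy = int(float(x_c) * w), int(float(y_c) * h)
    half = crop_size // 2
    # Clamp the window so it stays fully inside the image bounds.
    x0 = min(max(cx - half, 0), max(w - crop_size, 0))
    y0 = min(max(cy - half, 0), max(h - crop_size, 0))
    return image[y0:y0 + crop_size, x0:x0 + crop_size]

# Example: extract a 224x224 patch centred on a detection in a 480x640 image.
img = np.zeros((480, 640, 3), dtype=np.uint8)
patch = crop_around_midpoint(img, "0 0.5 0.5 0.2 0.2")
```

Clamping the window (rather than padding) keeps every crop the same shape even when the wound sits near an image edge, which simplifies batching downstream.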
These images form time series of photos covering a 15-day wound closure process in C57BL/6J mice. One image is missing (Y8-2, right wound, day 9).
Defense Advanced Research Projects Agency, Award: A20-0427-S002