Data from: Deep learning unlocks X‐ray microtomography segmentation of multiclass microdamage in heterogeneous materials
Kopp, Reed et al. (2021), Data from: Deep learning unlocks X‐ray microtomography segmentation of multiclass microdamage in heterogeneous materials, Dryad, Dataset, https://doi.org/10.5061/dryad.ffbg79cwb
Four-dimensional quantitative characterization of heterogeneous materials using in situ synchrotron radiation computed tomography can reveal 3D sub-micron features, particularly damage, evolving under load, informing the design of improved materials. However, the size and complexity of these datasets increasingly demand time-intensive and subjective semi-automatic segmentation. Here, we present the first deep learning (DL) convolutional neural network (CNN) segmentation of multiclass microscale damage in heterogeneous bulk materials, trained on advanced aerospace-grade composite damage using ≈65,000 trained-human-segmented tomograms. The trained CNN segments complex and sparse (≪1% of volume) composite damage classes to ≈99.99% agreement, delivering objectivity and efficiency that traditional rule-based algorithms cannot approach, with nearly 100% of the human time eliminated. The trained machine performs as well as or better than the human due to ‘machine-discovered’ human segmentation error, with machine improvements manifesting primarily as new damage discovery and segmentation augmentation/extension in artifact-rich tomograms. Interrogating a high-level network hyperparametric space on two material configurations, we find DL to be a disruptive approach to quantitative structure-property characterization, enabling high-throughput knowledge creation (accelerated by two orders of magnitude) via generalizable, ultra-high-resolution feature segmentation.
See Methods section and Supporting Information associated with corresponding Advanced Materials paper (DOI: 10.1002/adma.202107817).
Database of thirty high-resolution synchrotron radiation computed tomography scans of advanced composites. Each scan is provided as a zipped file containing both the raw tomographic images (TIFF) and the corresponding trained-human annotations of polymer damage (TIFF). These image–annotation pairs were used to develop deep learning datasets for semantic segmentation, following the instructions outlined in the ReadMe document. Additional usage details can be found in the corresponding Advanced Materials paper (DOI: 10.1002/adma.202107817).
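As a starting point for building segmentation training pairs, the sketch below pairs raw tomogram slices with their annotation masks inside one scan archive. The folder names (`raw/`, `labels/`) and matching slice filenames are assumptions for illustration only; the actual archive layout is specified in the dataset's ReadMe document.

```python
import zipfile
from pathlib import PurePosixPath

def pair_scan_files(zip_path):
    """Pair raw tomogram slices with annotation masks in one scan archive.

    Hypothetical layout: raw images under 'raw/' and human-segmented
    annotation masks under 'labels/', with matching slice filenames.
    Consult the dataset ReadMe for the real folder and file naming.
    """
    with zipfile.ZipFile(zip_path) as zf:
        # Collect only TIFF entries; each scan archive stores both image types.
        names = [n for n in zf.namelist()
                 if n.lower().endswith((".tif", ".tiff"))]
    raw = {PurePosixPath(n).name: n for n in names if n.startswith("raw/")}
    labels = {PurePosixPath(n).name: n for n in names if n.startswith("labels/")}
    # Keep only slices that have both a raw image and an annotation mask.
    return [(raw[k], labels[k]) for k in sorted(raw.keys() & labels.keys())]
```

The returned (image, mask) path pairs can then be read with any TIFF reader (e.g. `tifffile`) to assemble arrays for semantic-segmentation training.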