MTL subregion segmentation atlas with 7T and 3T multi-modality MRI
Data files
Oct 22, 2025 version (3.64 GB):
- multi_modality_atlas.zip (3.64 GB)
- README.md (2.90 KB)
Abstract
Volumetry of medial temporal lobe (MTL) subregions computed from automatic segmentation in MRI can track neurodegeneration in Alzheimer’s disease. However, MR image quality varies, and poor-quality images can lead to unreliable segmentation of MTL subregions. Considering that different MRI contrast mechanisms and field strengths (jointly referred to as “modalities” here) offer distinct advantages in imaging different parts of the MTL, we developed a multi-modality segmentation model that uses both 7 tesla (7T) and 3 tesla (3T) structural MRI to obtain robust segmentations in poor-quality images. MRI scans in four modalities, namely 3T T1-weighted, 3T T2-weighted, 7T T1-weighted, and 7T T2-weighted (7T-T2w), were collected from 197 participants in a longitudinal aging study at the Penn Alzheimer’s Disease Research Center. The 7T-T2w scan was used as the primary modality, and all other modalities were rigidly registered to it. A model derived from nnU-Net took these registered modalities as input and output subregion segmentations in 7T-T2w space. 7T-T2w images from 25 selected training participants, most of which were of high quality, were manually segmented to train the multi-modality model. Modality augmentation, which randomly replaced certain modalities with Gaussian noise, was applied during training to guide the model to extract information from all modalities. To compare the proposed model with a baseline single-modality model on the full dataset with mixed high/poor image quality, we evaluated the ability of derived volume/thickness measures to discriminate between amyloid-positive mild cognitive impairment (A+MCI) and amyloid-negative cognitively unimpaired (A-CU) groups, as well as the stability of these measures in longitudinal data. The multi-modality model delivered good performance regardless of 7T-T2w quality, whereas the single-modality model under-segmented subregions in poor-quality images.
The multi-modality model generally demonstrated stronger discrimination of A+MCI versus A-CU. Intra-class correlation and Bland-Altman analyses showed that the multi-modality model had higher longitudinal segmentation consistency in all subregions, whereas the single-modality model had low consistency in poor-quality images. The multi-modality MRI segmentation model thus provides an improved biomarker of neurodegeneration in the MTL that is robust to image quality, as well as a framework for other studies that may benefit from multimodal imaging.
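To illustrate the modality augmentation described above, here is a minimal sketch of replacing modality channels with Gaussian noise during training. The function name, the drop probability, and the choice to exempt the primary (7T-T2w) channel are assumptions for illustration; they are not taken from the released code.

```python
import numpy as np

def modality_augmentation(image, primary_channel=0, drop_prob=0.3, rng=None):
    """Randomly replace non-primary modality channels with Gaussian noise.

    image: array of shape (C, X, Y, Z), one channel per registered modality.
    primary_channel: index of the primary modality (never replaced here;
        this exemption is an assumption made for this sketch).
    drop_prob: per-channel replacement probability (assumed value).
    """
    rng = rng or np.random.default_rng()
    out = image.copy()
    for c in range(image.shape[0]):
        if c == primary_channel:
            continue
        if rng.random() < drop_prob:
            # Replace the channel with noise matching its intensity statistics,
            # so the model cannot rely on that modality being present.
            out[c] = rng.normal(image[c].mean(), image[c].std() + 1e-8,
                                size=image[c].shape)
    return out
```

Applied on the fly to each training batch, this encourages the network to extract complementary information from every modality rather than over-relying on any single input.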
Description of the data and file structure
This is a multi-modality brain MRI dataset of the medial temporal lobe (MTL) accompanying the paper on our multi-modality subregion segmentation model. 7T-T2w, 7T-T1w, 3T-T2w, and 3T-T1w data are available at the ROI level for 25 anonymized subjects.
The dataset is contained in the zip file "multi_modality_atlas.zip", which holds four subfolders, each corresponding to a step in the multi-modality segmentation pipeline. Each subfolder and its contents are described below:
- multi_modality_training_atlas: Atlas raw data in nnU-Net format. All subjects were anonymized by assigning pseudo nnU-Net IDs. Two consecutive numbers (e.g., 001 and 002, 017 and 018, etc.) represent the left and right MTL ROIs, respectively, of the same subject. The primary modality is 7T-T2w (channel 0000); the ROIs of the other modalities (7T-T1w inv1 [channel 0001], 7T-T1w inv2 [channel 0002], 3T-T2w [channel 0003], and 3T-T1w [channel 0004]) have been registered to the 7T-T2w ROI for each side. The detailed label correspondence can be found in the file dataset.json.
- multi_modality_trained_model: Trained model in nnU-Net format, provided so that users can run inference with our pipeline on new test data.
- 3tt1_template_roi: The left and right ROIs in 3T-T1w template space that should be used in the ROI-cropping step. The original template file "template.nii.gz", synthetically generated by averaging 29 atlas images, can be downloaded from https://www.nitrc.org/projects/ashs (ashs_atlas_upennpmc_20170810.tar), the dataset published with the paper: Yushkevich, P.A., Pluta, J.B., Wang, H., Xie, L., Ding, S.L., Gertje, E.C., Mancuso, L., Kliot, D., Das, S.R. and Wolk, D.A., 2015. Automated volumetry and regional thickness analysis of hippocampal subfields and medial temporal cortical structures in mild cognitive impairment. Human Brain Mapping, 36(1), pp. 258-287.
- test_results_on_bnu_dataset: The output of running our model on an independent test set. (That dataset is available under the Creative Commons Attribution 4.0 International License, CC BY 4.0.) We acknowledge and credit the original creators as required by the license. Our dataset includes only derived outputs (ROIs and segmentations) and does not redistribute the raw original dataset.
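As an illustration of the nnU-Net channel naming convention described above, the snippet below sketches the expected per-case file layout. The case-ID prefix and helper function are hypothetical; the actual dataset folder name inside the zip may differ, and dataset.json remains the authoritative source for the channel/label mapping.

```python
# Hypothetical sketch of nnU-Net-style file naming for one MTL ROI case.
# Channel suffixes follow the mapping stated in the dataset description.
CHANNELS = {
    "0000": "7T-T2w (primary)",
    "0001": "7T-T1w inv1",
    "0002": "7T-T1w inv2",
    "0003": "3T-T2w",
    "0004": "3T-T1w",
}

def case_filenames(case_id, prefix="case"):
    """Return (image filenames, label filename) for one case.

    nnU-Net stores one image file per modality channel, identified by a
    four-digit suffix, and one label file without a channel suffix.
    The "case" prefix here is an assumption for illustration.
    """
    images = [f"{prefix}_{case_id}_{suffix}.nii.gz" for suffix in sorted(CHANNELS)]
    label = f"{prefix}_{case_id}.nii.gz"
    return images, label
```

For example, `case_filenames("001")` would list five image files (channels 0000 through 0004) plus one label file; the paired case "002" would be the contralateral MTL ROI of the same subject.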
Code/software
The code for this study is available at https://github.com/liyue3780/mmseg.
