Deep learning software and revised 2D model to segment bone in micro-CT scans
Data files
Feb 03, 2026 version files (44.44 GB)
- _BONe_Avizo.zip (43.16 KB)
- _BONe_standalone.zip (80.39 KB)
- _Models.zip (3.55 GB)
- 12R_12U_HF.zip (4.96 GB)
- 19R_19U_HF.zip (12.12 GB)
- 1R_1U_HF.zip (2.03 GB)
- 2R_2U_HF.zip (4.17 GB)
- 5R_5U_HF.zip (4.29 GB)
- 7R_7U_HF.zip (5.02 GB)
- AMNH_Mammals_M-206440_mixed.zip (35.54 MB)
- AMNH_Mammals_M-89009_F.zip (7.99 GB)
- OMNH_Mammals_44262_HRU.zip (11.46 MB)
- OMNH_Mammals_53994_FTFi.zip (17.69 MB)
- OMNH_Mammals_53994_HRU.zip (13.60 MB)
- README.md (30.47 KB)
- UAM_Mamm_24789_FTFi.zip (17.47 MB)
- UAM_Mamm_67696_HF.zip (25.43 MB)
- UAM_Mamm_67696_TFiRU.zip (19.43 MB)
- UF_Mammals_23593-24550_HF.zip (30.37 MB)
- UF_Mammals_31151_HRU.zip (12.34 MB)
- UWBM_Mamm_78743_FTFi.zip (18.28 MB)
- UWBM_Mamm_81969_FTFi.zip (21.72 MB)
- UWBM_Mamm_81969_HRU.zip (20.06 MB)
- ZMB_Mam_30740_HRU.zip (71.55 MB)
Feb 04, 2026 version files (44.64 GB)
- _BONe_Avizo.zip (48.22 KB)
- _BONe_standalone.zip (88.68 KB)
- _Models.zip (3.75 GB)
- 12R_12U_HF.zip (4.96 GB)
- 19R_19U_HF.zip (12.12 GB)
- 1R_1U_HF.zip (2.03 GB)
- 2R_2U_HF.zip (4.17 GB)
- 5R_5U_HF.zip (4.29 GB)
- 7R_7U_HF.zip (5.02 GB)
- AMNH_Mammals_M-206440_mixed.zip (35.54 MB)
- AMNH_Mammals_M-89009_F.zip (7.99 GB)
- OMNH_Mammals_44262_HRU.zip (11.46 MB)
- OMNH_Mammals_53994_FTFi.zip (17.69 MB)
- OMNH_Mammals_53994_HRU.zip (13.60 MB)
- README.md (30.47 KB)
- UAM_Mamm_24789_FTFi.zip (17.47 MB)
- UAM_Mamm_67696_HF.zip (25.43 MB)
- UAM_Mamm_67696_TFiRU.zip (19.43 MB)
- UF_Mammals_23593-24550_HF.zip (30.37 MB)
- UF_Mammals_31151_HRU.zip (12.34 MB)
- UWBM_Mamm_78743_FTFi.zip (18.28 MB)
- UWBM_Mamm_81969_FTFi.zip (21.72 MB)
- UWBM_Mamm_81969_HRU.zip (20.06 MB)
- ZMB_Mam_30740_HRU.zip (71.55 MB)
Abstract
Deep learning (DL) enables automated bone segmentation in micro-CT datasets but can struggle to generalize across developmental stages, anatomical regions, and imaging conditions. We present BP-2D-03, a revised 2D Bone-Pores segmentation model trained on a new dataset comprising 20 micro-CT scans spanning five mammalian species and 142,960 image patches. To handle the substantially larger and more varied dataset, we developed a new DL software interface with modules for training (“BONe DLFit”), prediction (“BONe DLPred”), and evaluation (“BONe IoU”). These tools address issues with prior pipelines, such as slice-level data leakage, high memory usage, and limited multi-GPU support. BONe’s performance was evaluated through three complementary analyses. First, 5-fold cross-validation of the baseline model (U-Net with ResNet-18 backbone and 256-px patches) assessed the effect of dataset composition on model robustness and stability, showing generally high mean Intersection-over-Union (IoU) across folds and replicates. Second, 30 benchmarking experiments tested how model architecture, encoder backbone, and patch size influence segmentation IoU and computational efficiency. U-Net and UNet++ architectures with simple convolutional backbones (e.g., ResNet-18) achieved the highest predictive accuracy and the best performance-efficiency tradeoffs, with top models reaching mean IoU values of ~0.97, whereas transformer-based and atrous-convolution models benefited from larger patches but still underperformed in mean IoU. Third, cross-platform experiments confirmed that BONe produces stable results across different hardware configurations, operating systems, and implementations (Avizo 3D and standalone). Together, these analyses demonstrate that BONe delivers robust baseline performance and reproducible results across platforms.
Dataset DOI: 10.5061/dryad.4j0zpc8qq
Description of the data and file structure
Code and Data
File: _BONe_Avizo.zip
Description: contains the Avizo script files used to train models on micro-CT scans (BONe_DLFit), apply deep learning (DL) models to segment micro-CT scans (BONe_DLPred), and compare overlap between reference and predicted segmentations (BONe_IoU). These modules were designed to integrate with Avizo 2024.2 and its Python 3.11.x virtual environment (preliminary testing suggests compatibility with Avizo 2025.1). If needed, users may customize the functionality of the modules by opening the rc, py, and pyscro files in a Python code editor of their choice. The rc files control the location and appearance of the modules in the menu of modules. Deep learning functionality is controlled by the py and pyscro files.
Installation: The BONe modules for Avizo implement multiprocessing and require an external Python installation (3.12.x) separate from the 3.11.x version that is embedded in Avizo. After installing Python 3.12.x, copy the BONe folder, rc files, and pyscro files into Avizo/Amira’s “share/python_script_objects” folder. In Avizo/Amira, create a new Python virtual environment by opening the Prompt to Package Manager (EDM) terminal. Name the new environment BONe (do NOT “Install Deep Learning Packages”). Close the terminal window, then open a new Prompt to Package Manager terminal and type the following commands to install the required Python libraries:
edm shell -e BONe
pip install torch==2.8.0 torchvision==0.23.0 torchaudio==2.8.0 --index-url https://download.pytorch.org/whl/cu129
pip install segmentation-models-pytorch==0.5.0
pip install segmentation-models-pytorch-3d==1.0.2
pip install torchmetrics==1.8.2
pip install fvcore==0.1.5.post20221221
pip install nvidia-ml-py==13.580.82
exit
exit
Note #1: The instructions above will install PyTorch 2.8.0+cu129. For compatibility with Nvidia Maxwell and Pascal generations of GPUs, please replace the torch installation command with the following:
pip install torch==2.8.0 torchvision==0.23.0 torchaudio==2.8.0 --index-url https://download.pytorch.org/whl/cu126
Note #2: Installation should proceed without issues on Windows 10/11 Pro. However, we encountered difficulties on Linux. Although Avizo 3D 2024.2 installs and works on Linux Ubuntu 22.04 LTS, installation was not successful on 24.04 LTS. In addition, a bug in Linux Avizo 3D 2024.2 prevents the user from switching from the embedded Python environment to a custom one such as BONe. This has been patched in Avizo 3D 2025.1. The fix for users of Linux Avizo 3D 2024.2 is as follows:
- Create the BONe environment following the preceding steps
- Navigate to the custom environment's folder: ~/.edm/envs/BONe
- Back up Avizo’s embedded Python environment: /usr/local/AmiraAvizo3D/2024.2/python/
- Replace the subfolders of Avizo's embedded Python environment with BONe's subfolders
After restarting Avizo/Amira, the three new pyscro scripts can be accessed in the menu of modules under the Python Scripts category. The modules feature a point-and-click interface, which is detailed in the accompanying paper.
File structure:
_BONe_Avizo.zip/
├─ BONe
│ ├─ BONe_DLFit_preprocess.py
│ ├─ BONe_trainer.py
│ ├─ BONe_utils.py
│ ├─ convert_weights_mod.py
│ ├─ custom_3Dfpn.py
│ ├─ custom_3Dmanet.py
│ ├─ custom_3Dunet.py
│ ├─ custom_3Dunetplusplus.py
│ └─ empatches_anisotropic3D.py
├─ BONe_DLFit.pyscro
├─ BONe_DLFit.rc
├─ BONe_DLPred.pyscro
├─ BONe_DLPred.rc
├─ BONe_IoU.pyscro
└─ BONe_IoU.rc
File: _BONe_standalone.zip
Description: contains the standalone versions of the Avizo modules. Each standalone app functions the same as its Avizo module counterpart. BONe_DLFit trains PTH-formatted models on micro-CT scans; BONe_DLPred segments micro-CT scans using compatible models; and BONe_IoU calculates percentage overlap between reference and predicted segmentations. The standalone apps were tested on Windows 10/11 Pro and Ubuntu Linux 22.04 LTS. If needed, users may customize the functionality of each app by opening the py files in a Python code editor of their choice. Each venv file functions as the entry point: it controls installation, checks for Python and PyTorch, and loads the corresponding app. Common functions shared among the apps are stored in the BONe_utils folder. Each remaining app folder consists of: (1) a main.py that handles top-level application control; (2) an app subfolder with files that specify the look and behavior of the graphical interface; and (3) a core subfolder with files that direct data loading, data pre-processing routines, initialization, fitting, prediction, and evaluation.
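The entry-point checks described above (Python version, then PyTorch availability) can be sketched as follows; the function and its return format are illustrative, not the apps' actual code:

```python
import importlib.util
import sys

def environment_ready(required=(3, 12)):
    """Sketch of the checks a venv entry point might run before launching.

    Returns a dict reporting whether the running Python meets the minimum
    version and whether the 'torch' package can be found. Illustrative only.
    """
    python_ok = sys.version_info[:2] >= required
    # find_spec returns None (without importing) when the package is absent
    torch_ok = importlib.util.find_spec("torch") is not None
    return {"python": python_ok, "pytorch": torch_ok}

print(environment_ready())
```

In the real apps, a failed PyTorch check triggers the interactive install prompt described in the installation steps rather than a simple report.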
Installation and operation on Windows:
1. Install Python 3.12.x for full PyTorch support (as of 7/12/2025).
2. Unzip _BONe_standalone.zip to the computer.
3. In the _BONe_standalone folder, double-click the desired app entry point:
   - BONe_DLFit_venv.py (to train a model), or
   - BONe_DLPred_venv.py (to segment a micro-CT scan using a BONe DL model), or
   - BONe_IoU_venv.py (to evaluate a predicted segmentation)
   *Note: step 3 works as intended if py files on your computer are associated with Python 3.12.x. If not, right-click the desired py file and "Open with" Python 3.12.x. Alternatively, open a Terminal or PowerShell window from the _BONe_standalone folder. For a computer with a single Python installation, type: py BONe_DLFit_venv.py. For a computer with multiple Python installations, type: python3.12 BONe_DLFit_venv.py
4. Each app will check for Python 3.12.x and pip. A Python virtual environment named BONe will be created if it is not already present. Each app will then check for PyTorch (2.8.0 as of 7/12/2025) before running. If PyTorch is not detected, the user will be asked to choose a CUDA version of PyTorch to install. Select CUDA 12.9 if your system has a Blackwell-series Nvidia GPU. Older Nvidia GPUs are supported by CUDA 12.6. Users with no Nvidia GPU should select the CPU version of PyTorch, with the following caveats: model segmentation (prediction) is possible using CPU only but with a noticeable speed reduction, and model fitting is impractically slow using CPU only.
5. Each app has a point-and-click interface, which is explained in detail in the accompanying paper.
6. Close the app window to exit.
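When PyTorch is not detected, the apps prompt for a CUDA (or CPU) build; a minimal sketch of how such a prompt might map the user's choice to a PyTorch wheel index URL (the function and mapping below are illustrative, not the apps' actual code):

```python
# Illustrative mapping from the user's GPU choice to a PyTorch wheel index.
WHEEL_INDEX = {
    "cu129": "https://download.pytorch.org/whl/cu129",  # Blackwell-series Nvidia GPUs
    "cu126": "https://download.pytorch.org/whl/cu126",  # older Nvidia GPUs
    "cpu":   "https://download.pytorch.org/whl/cpu",    # no Nvidia GPU
}

def pip_install_command(choice: str) -> list[str]:
    """Build the pip command for the chosen PyTorch build (sketch)."""
    index = WHEEL_INDEX[choice]
    return ["pip", "install", "torch==2.8.0", "torchvision==0.23.0",
            "torchaudio==2.8.0", "--index-url", index]

print(" ".join(pip_install_command("cu126")))
```

The version pins match the ones used in the Avizo installation instructions above.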
Installation and operation on Ubuntu:
1. Install Python 3.12.x for full PyTorch support (as of 7/12/2025). A suggested method of installing Python on Ubuntu 22.04 LTS:
   sudo add-apt-repository ppa:deadsnakes/ppa
   sudo apt update
   sudo apt install python3.12{,-venv,-tk,-dev,-distutils}
2. Unzip _BONe_standalone.zip to the computer (drive must be in ext4 format).
3. Open a Terminal from the _BONe_standalone folder and type one of:
   - python3.12 BONe_DLFit_venv.py (to train a model)
   - python3.12 BONe_DLPred_venv.py (to segment a micro-CT scan)
   - python3.12 BONe_IoU_venv.py (to evaluate a predicted segmentation)
4. Each app will check for Python 3.12.x and pip. A Python virtual environment named BONe will be created if it is not already present. Each app will then check for PyTorch (2.8.0 as of 7/12/2025) before running. If PyTorch is not detected, the user will be asked to choose a CUDA version of PyTorch to install. Select CUDA 12.9 if your system has a Blackwell-series Nvidia GPU. Older Nvidia GPUs are supported by CUDA 12.6. Users with no Nvidia GPU should select the CPU version of PyTorch, with the following caveats: model segmentation (prediction) is possible using CPU only but with a noticeable speed reduction, and model fitting is impractically slow using CPU only.
5. Each app has a point-and-click interface, which is explained in detail in the accompanying paper.
6. Close the app window to exit.
File structure:
_BONe_standalone.zip/
├─ BONe_DLFit/
│ ├─ app/
│ │ ├─ widgets/
│ │ │ ├─ __init__.py
│ │ │ ├─ base.py
│ │ │ ├─ button_widgets.py
│ │ │ ├─ console_widgets.py
│ │ │ ├─ input_widgets.py
│ │ │ ├─ model_widgets.py
│ │ │ ├─ theme.py
│ │ │ └─ training_widgets.py
│ │ ├─ __init__.py
│ │ └─ gui.py
│ ├─ core/
│ │ ├─ __init__.py
│ │ ├─ convert_weights_mod.py
│ │ ├─ data_preprocess.py
│ │ ├─ fitting_loop.py
│ │ ├─ trainer.py
│ │ └─ volume_loader.py
│ └─ main.py
├─ BONe_DLPred/
│ ├─ app/
│ │ ├─ __init__.py
│ │ ├─ gui.py
│ │ ├─ theme.py
│ │ └─ widgets.py
│ ├─ core/
│ │ ├─ __init__.py
│ │ ├─ empatches_anisotropic3D.py
│ │ └─ prediction_loop.py
│ └─ main.py
├─ BONe_IoU/
│ ├─ app/
│ │ ├─ __init__.py
│ │ ├─ gui.py
│ │ ├─ theme.py
│ │ └─ widgets.py
│ ├─ core/
│ │ ├─ __init__.py
│ │ ├─ data_loader.py
│ │ ├─ iou_calculator.py
│ │ ├─ iou_metrics.py
│ │ ├─ mask_utils.py
│ │ └─ utils.py
│ └─ main.py
├─ BONe_utils/
│ ├─ __init__.py
│ ├─ custom_3Dfpn.py
│ ├─ custom_3Dmanet.py
│ ├─ custom_3Dunet.py
│ ├─ custom_3Dunetplusplus.py
│ ├─ model_factory.py
│ └─ utils.py
├─ requirements/
│ ├─ base.txt
│ ├─ cpu.txt
│ ├─ cuda126.txt
│ └─ cuda129.txt
├─ _README.txt
├─ BONe_DLFit_venv.py
├─ BONe_DLPred_venv.py
└─ BONe_IoU_venv.py
File: _Models.zip
Description: contains the current DL models (.pth) and training logs (.txt) for Bone-Pores (BP) segmentation. Models are formatted for PyTorch (version 2.8.0+cu129) and were constructed using the Python library segmentation-models-pytorch (0.5.0). These PTH models are intended for use with the BONe modules in the commercial software Avizo/Amira or the BONe standalone apps. Note: these PTH-formatted models are not interchangeable with the HDF5(.h5)-formatted models used by Avizo's default deep learning modules.
File structure:
_Models.zip/
├─ BP-Models/
│ ├─ BP-2D-03.pth
│ ├─ BP-2D-03_log.txt
│ ├─ BP-DeepLabV3-plus_EffNet-b3_256px_best_epoch_12_val_jaccard_0.9408.pth
│ ├─ BP-DeepLabV3-plus_EffNet-b3_256px_log.txt
│ ├─ BP-DeepLabV3-plus_EffNet-b3_512px_best_epoch_19_val_jaccard_0.9462.pth
│ ├─ BP-DeepLabV3-plus_EffNet-b3_512px_log.txt
│ ├─ BP-DeepLabV3-plus_mit-b1_256px_best_epoch_21_val_jaccard_0.8813.pth
│ ├─ BP-DeepLabV3-plus_mit-b1_256px_log.txt
│ ├─ BP-DeepLabV3-plus_mit-b1_512px_best_epoch_18_val_jaccard_0.9381.pth
│ ├─ BP-DeepLabV3-plus_mit-b1_512px_log.txt
│ ├─ BP-DeepLabV3-plus_resnet18_256px_best_epoch_21_val_jaccard_0.9414.pth
│ ├─ BP-DeepLabV3-plus_resnet18_256px_log.txt
│ ├─ BP-DeepLabV3-plus_resnet18_512px_best_epoch_18_val_jaccard_0.9441.pth
│ ├─ BP-DeepLabV3-plus_resnet18_512px_log.txt
│ ├─ BP-DeepLabV3-plus_resnet50_256px_best_epoch_16_val_jaccard_0.9436.pth
│ ├─ BP-DeepLabV3-plus_resnet50_256px_log.txt
│ ├─ BP-DeepLabV3-plus_resnet50_512px_best_epoch_25_val_jaccard_0.9456.pth
│ ├─ BP-DeepLabV3-plus_resnet50_512px_log.txt
│ ├─ BP-SegFormer_EffNet-b3_256px_best_epoch_21_val_jaccard_0.9430.pth
│ ├─ BP-SegFormer_EffNet-b3_256px_log.txt
│ ├─ BP-SegFormer_EffNet-b3_512px_best_epoch_XX_val_jaccard_0.XXXX.pth
│ ├─ BP-SegFormer_EffNet-b3_512px_log.txt
│ ├─ BP-SegFormer_mit-b1_256px_best_epoch_23_val_jaccard_0.9152.pth
│ ├─ BP-SegFormer_mit-b1_256px_log.txt
│ ├─ BP-SegFormer_mit-b1_512px_best_epoch_11_val_jaccard_0.9401.pth
│ ├─ BP-SegFormer_mit-b1_512px_log.txt
│ ├─ BP-SegFormer_resnet18_256px_best_epoch_21_val_jaccard_0.9424.pth
│ ├─ BP-SegFormer_resnet18_256px_log.txt
│ ├─ BP-SegFormer_resnet18_512px_best_epoch_21_val_jaccard_0.9431.pth
│ ├─ BP-SegFormer_resnet18_512px_log.txt
│ ├─ BP-SegFormer_resnet50_256px_best_epoch_18_val_jaccard_0.9424.pth
│ ├─ BP-SegFormer_resnet50_256px_log.txt
│ ├─ BP-SegFormer_resnet50_512px_best_epoch_22_val_jaccard_0.9434.pth
│ ├─ BP-SegFormer_resnet50_512px_log.txt
│ ├─ BP-U-Net_EffNet-b3_256px_best_epoch_20_val_jaccard_0.9796.pth
│ ├─ BP-U-Net_EffNet-b3_256px_log.txt
│ ├─ BP-U-Net_EffNet-b3_512px_best_epoch_8_val_jaccard_0.9778.pth
│ ├─ BP-U-Net_EffNet-b3_512px_log.txt
│ ├─ BP-U-Net_mit-b1_256px_best_epoch_16_val_jaccard_0.9730.pth
│ ├─ BP-U-Net_mit-b1_256px_log.txt
│ ├─ BP-U-Net_mit-b1_512px_best_epoch_18_val_jaccard_0.9762.pth
│ ├─ BP-U-Net_mit-b1_512px_log.txt
│ ├─ BP-U-Net_resnet18_256px_best_epoch_18_val_jaccard_0.9790.pth
│ ├─ BP-U-Net_resnet18_256px_log.txt
│ ├─ BP-U-Net_resnet18_512px_best_epoch_9_val_jaccard_0.9786.pth
│ ├─ BP-U-Net_resnet18_512px_log.txt
│ ├─ BP-U-Net_resnet50_256px_best_epoch_21_val_jaccard_0.9778.pth
│ ├─ BP-U-Net_resnet50_256px_log.txt
│ ├─ BP-U-Net_resnet50_512px_best_epoch_5_val_jaccard_0.9781.pth
│ ├─ BP-U-Net_resnet50_512px_log.txt
│ ├─ BP-UNet-plus-plus_EffNet-b3_256px_best_epoch_10_val_jaccard_0.9796.pth
│ ├─ BP-UNet-plus-plus_EffNet-b3_256px_log.txt
│ ├─ BP-UNet-plus-plus_EffNet-b3_512px_best_epoch_7_val_jaccard_0.9803.pth
│ ├─ BP-UNet-plus-plus_EffNet-b3_512px_log.txt
│ ├─ BP-UNet-plus-plus_resnet18_256px_best_epoch_12_val_jaccard_0.9769.pth
│ ├─ BP-UNet-plus-plus_resnet18_256px_log.txt
│ ├─ BP-UNet-plus-plus_resnet18_512px_best_epoch_10_val_jaccard_0.9791.pth
│ ├─ BP-UNet-plus-plus_resnet18_512px_log.txt
│ ├─ BP-UNet-plus-plus_resnet50_256px_best_epoch_18_val_jaccard_0.9777.pth
│ ├─ BP-UNet-plus-plus_resnet50_256px_log.txt
│ ├─ BP-UNet-plus-plus_resnet50_512px_best_epoch_13_val_jaccard_0.9791.pth
│ └─ BP-UNet-plus-plus_resnet50_512px_log.txt
├─ Cross-validation/
│ ├─ BP-U-Net_resnet18_256px_Set1_best_epoch_18_val_jaccard_0.9790.pth
│ ├─ BP-U-Net_resnet18_256px_Set1_log.txt
│ ├─ BP-U-Net_resnet18_256px_Set1_seed1701_best_epoch_18_val_jaccard_0.9824.pth
│ ├─ BP-U-Net_resnet18_256px_Set1_seed1701_log.txt
│ ├─ BP-U-Net_resnet18_256px_Set1_seed1864_best_epoch_20_val_jaccard_0.9123.pth
│ ├─ BP-U-Net_resnet18_256px_Set1_seed1864_log.txt
│ ├─ BP-U-Net_resnet18_256px_Set2_best_epoch_13_val_jaccard_0.9755.pth
│ ├─ BP-U-Net_resnet18_256px_Set2_log.txt
│ ├─ BP-U-Net_resnet18_256px_Set2_seed1701_best_epoch_17_val_jaccard_0.9817.pth
│ ├─ BP-U-Net_resnet18_256px_Set2_seed1701_log.txt
│ ├─ BP-U-Net_resnet18_256px_Set2_seed1864_best_epoch_10_val_jaccard_0.9764.pth
│ ├─ BP-U-Net_resnet18_256px_Set2_seed1864_log.txt
│ ├─ BP-U-Net_resnet18_256px_Set3_best_epoch_19_val_jaccard_0.8885.pth
│ ├─ BP-U-Net_resnet18_256px_Set3_log.txt
│ ├─ BP-U-Net_resnet18_256px_Set3_seed1701_best_epoch_4_val_jaccard_0.9784.pth
│ ├─ BP-U-Net_resnet18_256px_Set3_seed1701_log.txt
│ ├─ BP-U-Net_resnet18_256px_Set3_seed1864_best_epoch_10_val_jaccard_0.9784.pth
│ ├─ BP-U-Net_resnet18_256px_Set3_seed1864_log.txt
│ ├─ BP-U-Net_resnet18_256px_Set4_best_epoch_22_val_jaccard_0.8464.pth
│ ├─ BP-U-Net_resnet18_256px_Set4_log.txt
│ ├─ BP-U-Net_resnet18_256px_Set4_seed1701_best_epoch_9_val_jaccard_0.9755.pth
│ ├─ BP-U-Net_resnet18_256px_Set4_seed1701_log.txt
│ ├─ BP-U-Net_resnet18_256px_Set4_seed1864_best_epoch_3_val_jaccard_0.9169.pth
│ ├─ BP-U-Net_resnet18_256px_Set4_seed1864_log.txt
│ ├─ BP-U-Net_resnet18_256px_Set5_best_epoch_10_val_jaccard_0.9016.pth
│ ├─ BP-U-Net_resnet18_256px_Set5_log.txt
│ ├─ BP-U-Net_resnet18_256px_Set5_seed1701_best_epoch_11_val_jaccard_0.9725.pth
│ ├─ BP-U-Net_resnet18_256px_Set5_seed1701_log.txt
│ ├─ BP-U-Net_resnet18_256px_Set5_seed1864_best_epoch_13_val_jaccard_0.8631.pth
│ └─ BP-U-Net_resnet18_256px_Set5_seed1864_log.txt
└─ Linux_v_Windows/
├─ Hopper_linux_avizo_dual-gpu/
│ ├─ BP-U-Net_resnet18_256px_hopper_linux_dual-gpu.pth
│ └─ BP-U-Net_resnet18_256px_hopper_linux_dual-gpu_log.txt
├─ Hopper_linux_avizo_single-gpu/
│ ├─ BP-U-Net_resnet18_256px_hopper_linux_single-gpu.pth
│ └─ BP-U-Net_resnet18_256px_hopper_linux_single-gpu_log.txt
├─ Hopper_linux_standalone_dual-gpu/
│ ├─ BP-U-Net_resnet18_256px_hopper_linux_standalone_dual-gpu.pth
│ └─ BP-U-Net_resnet18_256px_hopper_linux_standalone_dual-gpu_log.txt
├─ Hopper_linux_standalone_single-gpu/
│ ├─ BP-U-Net_resnet18_256px_hopper_linux_standalone_single-gpu.pth
│ └─ BP-U-Net_resnet18_256px_hopper_linux_standalone_single-gpu_log.txt
├─ Hopper_win_avizo_dual-gpu/
│ ├─ BP-U-Net_resnet18_256px_hopper_win_dual-gpu.pth
│ └─ BP-U-Net_resnet18_256px_hopper_win_dual-gpu_log.txt
├─ Hopper_win_avizo_single-gpu/
│ ├─ BP-U-Net_resnet18_256px_hopper_win_single-gpu.pth
│ └─ BP-U-Net_resnet18_256px_hopper_win_single-gpu_log.txt
├─ Hopper_win_standalone_dual-gpu/
│ ├─ BP-U-Net_resnet18_256px_hopper_win_standalone_dual-gpu.pth
│ └─ BP-U-Net_resnet18_256px_hopper_win_standalone_dual-gpu_log.txt
├─ Hopper_win_standalone_single-gpu/
│ ├─ BP-U-Net_resnet18_256px_hopper_win_standalone_single-gpu.pth
│ └─ BP-U-Net_resnet18_256px_hopper_win_standalone_single-gpu_log.txt
├─ Jarvis_linux_avizo_dual-gpu/
│ ├─ BP-U-Net_resnet18_256px_Set1.pth
│ └─ BP-U-Net_resnet18_256px_Set1_log.txt
├─ Jarvis_linux_avizo_single-gpu/
│ ├─ BP-U-Net_ResNet-18_256px_Set1_single-gpu.pth
│ └─ BP-U-Net_ResNet-18_256px_Set1_single-gpu _log.txt
├─ Jarvis_linux_standalone_dual-gpu/
│ ├─ BP-U-Net_ResNet-18_256px_Set1_standalone_dual-gpu.pth
│ └─ BP-U-Net_ResNet-18_256px_Set1_standalone_dual-gpu_log.txt
└─ Jarvis_linux_standalone_single-gpu/
├─ BP-U-Net_ResNet-18_256px_Set1_standalone_single-gpu.pth
└─ BP-U-Net_ResNet-18_256px_Set1_standalone_single-gpu_log.txt
Remaining zip files
The remaining 20 zip files comprise the data used to fit models. Six of the files contain the mouse sample, which has not been uploaded to another repository. Each of these six files contains a scan folder with raw slice data (.tif) and associated metadata (.xtekct and .info files, which can be opened in a text editor), as well as a BP-label folder with reference segmentations (.tif). Eleven of the files correspond to the river otter sample stored in Dryad (https://doi.org/10.5061/dryad.b2rbnzsq4); only the updated reference segmentations are included here. The remaining three files correspond to scans stored in MorphoSource: AMNH M-206440 (ark:/87602/m4/598442), AMNH M-89009 (ark:/87602/m4/430024), and ZMB-MAM-30740 (ark:/87602/m4/M70721). Only the reference segmentations are included for AMNH M-206440 and ZMB-MAM-30740. Because the full AMNH M-206440 scan could not be successfully segmented, only a subvolume was used for training; this subvolume and its corresponding reference segmentation are included here.
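Slice files in each folder follow a zero-padded, four-digit numbering scheme (e.g., _0000.tif through the last slice); a small illustrative helper for generating the expected filenames of one scan:

```python
def slice_filenames(prefix: str, last_index: int) -> list[str]:
    """Generate the expected zero-padded slice filenames for one scan.

    Illustrative helper, not part of the BONe code. `prefix` is the filename
    stem shown in the trees below; `last_index` is the final slice number.
    """
    return [f"{prefix}_{i:04d}.tif" for i in range(last_index + 1)]

# The 1R_1U_HF scan runs from slice 0000 to slice 1791 (1,792 tiles in total).
names = slice_filenames("1R_1U_HF-cropped", 1791)
print(names[0], names[-1], len(names))
```

The same pattern applies to the reference-segmentation (label) files, with the label stem substituted for the scan stem.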
1R_1U_HF.zip/
└─ _Data/
└─ 1R_1U_HF/
├─ _Scan/
│ ├─ 1R_1U_HF.xtekct
│ ├─ 1R_1U_HF-cropped_.info
│ ├─ 1R_1U_HF-cropped_0000.tif
│ ├─ 1R_1U_HF-cropped_XXXX.tif
│ └─ 1R_1U_HF-cropped_1791.tif
└─ BP-label/
├─ 1R_1U_HF-cropped_bone-pores-label_.info
├─ 1R_1U_HF-cropped_bone-pores-label_0000.tif
├─ 1R_1U_HF-cropped_bone-pores-label_XXXX.tif
└─ 1R_1U_HF-cropped_bone-pores-label_1791.tif
2R_2U_HF.zip/
└─ _Data/
└─ 2R_2U_HF/
├─ _Scan/
│ ├─ 2R_2U_HF.xtekct
│ ├─ 2R_2U_HF-cropped_.info
│ ├─ 2R_2U_HF-cropped_0000.tif
│ ├─ 2R_2U_HF-cropped_XXXX.tif
│ └─ 2R_2U_HF-cropped_2111.tif
└─ BP-label/
├─ 2R_2U_HF-cropped_bone-pores-label_.info
├─ 2R_2U_HF-cropped_bone-pores-label_0000.tif
├─ 2R_2U_HF-cropped_bone-pores-label_XXXX.tif
└─ 2R_2U_HF-cropped_bone-pores-label_2111.tif
5R_5U_HF.zip/
└─ _Data/
└─ 5R_5U_HF/
├─ _Scan/
│ ├─ 5R_5U_HF.xtekct
│ ├─ 5R_5U_HF-cropped_.info
│ ├─ 5R_5U_HF-cropped_0000.tif
│ ├─ 5R_5U_HF-cropped_XXXX.tif
│ └─ 5R_5U_HF-cropped_2047.tif
└─ BP-label/
├─ 5R_5U_HF-cropped_bone-pores-label_.info
├─ 5R_5U_HF-cropped_bone-pores-label_0000.tif
├─ 5R_5U_HF-cropped_bone-pores-label_XXXX.tif
└─ 5R_5U_HF-cropped_bone-pores-label_2047.tif
7R_7U_HF.zip/
└─ _Data/
└─ 7R_7U_HF/
├─ _Scan/
│ ├─ 7R_7U_HF.xtekct
│ ├─ 7R_7U_HF-cropped_.info
│ ├─ 7R_7U_HF-cropped_0000.tif
│ ├─ 7R_7U_HF-cropped_XXXX.tif
│ └─ 7R_7U_HF-cropped_2047.tif
└─ BP-label/
├─ 7R_7U_HF-cropped_bone-pores-label_.info
├─ 7R_7U_HF-cropped_bone-pores-label_0000.tif
├─ 7R_7U_HF-cropped_bone-pores-label_XXXX.tif
└─ 7R_7U_HF-cropped_bone-pores-label_2047.tif
12R_12U_HF.zip/
└─ _Data/
└─ 12R_12U_HF/
├─ _Scan/
│ ├─ 12R_12U_HF.xtekct
│ ├─ 12R_12U_HF-cropped_.info
│ ├─ 12R_12U_HF-cropped_0000.tif
│ ├─ 12R_12U_HF-cropped_XXXX.tif
│ └─ 12R_12U_HF-cropped_2047.tif
└─ BP-label/
├─ 12R_12U_HF-cropped_bone-pores-label_.info
├─ 12R_12U_HF-cropped_bone-pores-label_0000.tif
├─ 12R_12U_HF-cropped_bone-pores-label_XXXX.tif
└─ 12R_12U_HF-cropped_bone-pores-label_2047.tif
19R_19U_HF.zip/
└─ _Data/
└─ 19R_19U_HF/
├─ _Scan/
│ ├─ 19R_19U_HF.xtekct
│ ├─ 19R_19U_HF-cropped_.info
│ ├─ 19R_19U_HF-cropped_0000.tif
│ ├─ 19R_19U_HF-cropped_XXXX.tif
│ └─ 19R_19U_HF-cropped_1919.tif
└─ BP-label/
├─ 19R_19U_HF-cropped_bone-pores-label_.info
├─ 19R_19U_HF-cropped_bone-pores-label_0000.tif
├─ 19R_19U_HF-cropped_bone-pores-label_XXXX.tif
└─ 19R_19U_HF-cropped_bone-pores-label_1919.tif
AMNH_Mammals_M-89009_F.zip/
└─ _Data/
└─ AMNH_Mammals_M-89009_F/
├─ Scan_/
│ ├─ AMNH_Mammals_M-89009_F_.info
│ ├─ AMNH_Mammals_M-89009_F_0000.tif
│ ├─ AMNH_Mammals_M-89009_F_XXXX.tif
│ └─ AMNH_Mammals_M-89009_F_4249.tif
└─ BP_label/
├─ AMNH_Mammals_M-89009_F_bone-pores-label_v3_.info
├─ AMNH_Mammals_M-89009_F_bone-pores-label_v3_0000.tif
├─ AMNH_Mammals_M-89009_F_bone-pores-label_v3_XXXX.tif
└─ AMNH_Mammals_M-89009_F_bone-pores-label_v3_4249.tif
AMNH_Mammals_M-206440_mixed.zip/
└─ _Data/
└─ AMNH_Mammals_M-206440_mixed/
└─ BP_label/
├─ AMNH_Mammals_M-206440_mixed_bone-pores-label_v3_.info
├─ AMNH_Mammals_M-206440_mixed_bone-pores-label_v3_0000.tif
├─ AMNH_Mammals_M-206440_mixed_bone-pores-label_v3_XXXX.tif
└─ AMNH_Mammals_M-206440_mixed_bone-pores-label_v3_1671.tif
OMNH_Mammals_44262_HRU.zip/
└─ _Data/
└─ OMNH_Mammals_44262_HRU/
└─ BP_label/
├─ OMNH_Mammals_44262_HRU_bone-pores-label_v3_.info
├─ OMNH_Mammals_44262_HRU_bone-pores-label_v3_0000.tif
├─ OMNH_Mammals_44262_HRU_bone-pores-label_v3_XXXX.tif
└─ OMNH_Mammals_44262_HRU_bone-pores-label_v3_1661.tif
OMNH_Mammals_53994_FTFi.zip/
└─ _Data/
└─ OMNH_Mammals_53994_FTFi/
└─ BP_label/
├─ OMNH_Mammals_53994_FTFi_bone-pores-label_v3_.info
├─ OMNH_Mammals_53994_FTFi_bone-pores-label_v3_0000.tif
├─ OMNH_Mammals_53994_FTFi_bone-pores-label_v3_XXXX.tif
└─ OMNH_Mammals_53994_FTFi_bone-pores-label_v3_2215.tif
OMNH_Mammals_53994_HRU.zip/
└─ _Data/
└─ OMNH_Mammals_53994_HRU/
└─ BP_label/
├─ OMNH_Mammals_53994_HRU_bone-pores-label_v3_.info
├─ OMNH_Mammals_53994_HRU_bone-pores-label_v3_0000.tif
├─ OMNH_Mammals_53994_HRU_bone-pores-label_v3_XXXX.tif
└─ OMNH_Mammals_53994_HRU_bone-pores-label_v3_1808.tif
UAM_Mamm_24789_FTFi.zip/
└─ _Data/
└─ UAM_Mamm_24789_FTFi/
└─ BP_label/
├─ UAM_Mamm_24789_FTFi_bone-pores-label_v3_.info
├─ UAM_Mamm_24789_FTFi_bone-pores-label_v3_0000.tif
├─ UAM_Mamm_24789_FTFi_bone-pores-label_v3_XXXX.tif
└─ UAM_Mamm_24789_FTFi_bone-pores-label_v3_2097.tif
UAM_Mamm_67696_HF.zip/
└─ _Data/
└─ UAM_Mamm_67696_HF/
└─ BP_label/
├─ UAM_Mamm_67696_HF_bone-pores-label_v3_.info
├─ UAM_Mamm_67696_HF_bone-pores-label_v3_0000.tif
├─ UAM_Mamm_67696_HF_bone-pores-label_v3_XXXX.tif
└─ UAM_Mamm_67696_HF_bone-pores-label_v3_1622.tif
UAM_Mamm_67696_TFiRU.zip/
└─ _Data/
└─ UAM_Mamm_67696_TFiRU/
└─ BP_label/
├─ UAM_Mamm_67696_TFiRU_bone-pores-label_v3_.info
├─ UAM_Mamm_67696_TFiRU_bone-pores-label_v3_0000.tif
├─ UAM_Mamm_67696_TFiRU_bone-pores-label_v3_XXXX.tif
└─ UAM_Mamm_67696_TFiRU_bone-pores-label_v3_2320.tif
UF_Mammals_23593-24550_HF.zip/
└─ _Data/
└─ UF_Mammals_23593-24550_HF/
└─ BP_label/
├─ UF_Mammals_23593-24550_HF_bone-pores-label_v3_.info
├─ UF_Mammals_23593-24550_HF_bone-pores-label_v3_0000.tif
├─ UF_Mammals_23593-24550_HF_bone-pores-label_v3_XXXX.tif
└─ UF_Mammals_23593-24550_HF_bone-pores-label_v3_1754.tif
UF_Mammals_31151_HRU.zip/
└─ _Data/
└─ UF_Mammals_31151_HRU/
└─ BP_label/
├─ UF_Mammals_31151_HRU_bone-pores-label_v3_.info
├─ UF_Mammals_31151_HRU_bone-pores-label_v3_0000.tif
├─ UF_Mammals_31151_HRU_bone-pores-label_v3_XXXX.tif
└─ UF_Mammals_31151_HRU_bone-pores-label_v3_1659.tif
UWBM_Mamm_78743_FTFi.zip/
└─ _Data/
└─ UWBM_Mamm_78743_FTFi/
└─ BP_label/
├─ UWBM_Mamm_78743_FTFi_bone-pores-label_v3_.info
├─ UWBM_Mamm_78743_FTFi_bone-pores-label_v3_0000.tif
├─ UWBM_Mamm_78743_FTFi_bone-pores-label_v3_XXXX.tif
└─ UWBM_Mamm_78743_FTFi_bone-pores-label_v3_2149.tif
UWBM_Mamm_81969_FTFi.zip/
└─ _Data/
└─ UWBM_Mamm_81969_FTFi/
└─ BP_label/
├─ UWBM_Mamm_81969_FTFi_bone-pores-label_v3_.info
├─ UWBM_Mamm_81969_FTFi_bone-pores-label_v3_0000.tif
├─ UWBM_Mamm_81969_FTFi_bone-pores-label_v3_XXXX.tif
└─ UWBM_Mamm_81969_FTFi_bone-pores-label_v3_2194.tif
UWBM_Mamm_81969_HRU.zip/
└─ _Data/
└─ UWBM_Mamm_81969_HRU/
└─ BP_label/
├─ UWBM_Mamm_81969_HRU_bone-pores-label_v3_.info
├─ UWBM_Mamm_81969_HRU_bone-pores-label_v3_0000.tif
├─ UWBM_Mamm_81969_HRU_bone-pores-label_v3_XXXX.tif
└─ UWBM_Mamm_81969_HRU_bone-pores-label_v3_1994.tif
ZMB_Mam_30740_HRU.zip/
└─ _Data/
└─ ZMB_Mam_30740_HRU/
└─ BP_label/
├─ ZMB_Mam_30740_HRU_bone-pores-label_v3.info
├─ ZMB_Mam_30740_HRU_bone-pores-label_v3_0000.tif
├─ ZMB_Mam_30740_HRU_bone-pores-label_v3_XXXX.tif
└─ ZMB_Mam_30740_HRU_bone-pores-label_v3_3608.tif
Dataset collection
The deep learning dataset was assembled from three sources (Table 1). First, we included 11 micro-CT scans of long bones from the North American river otter (Lontra canadensis) that were previously analyzed by Lee et al. (2025). Second, we downloaded three scans of long bones from capybara (Hydrochoerus hydrochaeris; AMNH:Mammals:M-206440), leopard (Panthera pardus; AMNH:Mammals:M-89009), and sea otter (Enhydra lutris; ZMB:Mam:30740) from MorphoSource. Third, we collected six new micro-CT scans from a sample of laboratory mice (Mus musculus), described below.
Forty male C57BL/6 mice (4-wk old) were purchased from Charles River Laboratory (Wilmington, MA, USA) and maintained for 25 weeks. After the mice were euthanized, the limb bones (humerus, radius, ulna, femur, tibia, and fibula) were dissected, fixed in 10% neutral buffered formalin for 24 hours, and stored in 70% ethanol. All animal care was conducted in accordance with established guidelines, and all protocols used were approved by Midwestern University’s Institutional Animal Care and Use Committee (IACUC #AZ-4205).
Imaging of the mouse sample
Micro-CT scanning was performed on a Nikon XT H 225 ST (Nikon Metrology Inc., Brighton, MI, USA) at 120–160 kV, 58–112 µA, and 9.1–11.3 µm isotropic voxel size. Each scan included the left humerus and femur from two individuals. Of the 20 scans collected, six were selected for the current deep learning dataset (Table 1).
Table 1. Micro-CT scans included in the deep learning dataset
| Scan ID | Bones | 2D Tiles | Voxel size (µm) | Source |
|---|---|---|---|---|
| 1R 1U | HF | 1,792 | 11.3 | 1 |
| 2R 2U | HF | 2,112 | 9.1 | |
| 5R 5U | HF | 2,048 | 9.1 | |
| 7R 7U | HF | 2,048 | 9.1 | |
| 12R 12U | HF | 2,048 | 9.1 | |
| 19R 19U | HF | 1,920 | 9.1 | |
| AMNH:Mammals:M-89009 | H | 4,250 | 66.8 | 2 |
| AMNH:Mammals:M-206440 | Mixed | 1,672 | 120.7 | 3 |
| OMNH:Mammals:44262 | HRU | 1,662 | 50.0 | 4 |
| OMNH:Mammals:53994 | FTFi | 2,216 | 50.0 | |
| OMNH:Mammals:53994 | HRU | 1,809 | 50.0 | |
| UAM:Mamm:24789 | FTFi | 2,098 | 50.0 | |
| UAM:Mamm:67696 | HF | 1,623 | 50.0 | |
| UAM:Mamm:67696 | TFiRU | 2,321 | 50.0 | |
| UF:Mammals:23593 / 24550 | HF | 1,755 | 50.0 | |
| UF:Mammals:31151 | HRU | 1,660 | 50.0 | |
| UWBM:Mamm:78743 | FTFi | 2,150 | 50.0 | |
| UWBM:Mamm:81969 | FTFi | 2,195 | 50.0 | |
| UWBM:Mamm:81969 | HRU | 1,995 | 50.0 | |
| ZMB:Mam:30740 | HRU | 3,609 | 30.0 | 5 |
Bone abbreviations: F=femur; Fi=fibula; H=humerus; R=radius; T=tibia; U=ulna
Museum abbreviations: AMNH=American Museum of Natural History; OMNH=Sam Noble Oklahoma Museum of Natural History; UAM=University of Alaska Museum of the North; UF=Florida Museum of Natural History; UWBM=University of Washington, Burke Museum; ZMB=Museum für Naturkunde
Source abbreviations: 1= doi.org/10.5061/dryad.4j0zpc8qq; 2= ark:/87602/m4/430024; 3=ark:/87602/m4/598442; 4= doi.org/10.5061/dryad.b2rbnzsq4; 5= ark:/87602/m4/M70721
Preparing the reference masks
The scans were processed in Avizo 3D 2024.2 following an established segmentation protocol (Lee et al., 2025). Bone tissue and pores were identified using Otsu thresholding, filtering, and ambient occlusion, with manual corrections where algorithms misclassified deep concavities.
New Deep Learning Modules for Avizo
We developed three Python-based deep learning modules for Avizo 3D 2024.2.
“BONe DLFit” is a configurable model-fitting module that supports up to 20 scan–reference pairs, performs training/validation splits at the scan level, and enables single- or multi-GPU training through PyTorch’s DataParallel. Users may choose among 2D, 2.5D, or 3D models, nine architectures, and 58 backbones (via segmentation_models_pytorch), with options for patch-based sampling, Z-score or min-max normalization, augmentation (flips, 90° rotations, brightness/contrast adjustments), and custom hyperparameters. The backend computes normalization statistics using an external Python interpreter, initializes the model with user-specified weights, and trains using Adam optimization, Jaccard loss, single-cycle cosine-annealing learning-rate scheduling, and automatic mixed precision. Training and validation proceed in an epoch-based loop with on-the-fly augmentation, GPU/VRAM monitoring, and automatic saving of improved model weights.
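The Jaccard loss used as the optimization objective is one minus the soft intersection-over-union of predicted probabilities and targets; a minimal pure-Python sketch of the idea (illustrative, not the module's PyTorch implementation):

```python
def soft_jaccard_loss(probs, targets, eps=1e-7):
    """1 - soft IoU over flat lists of probabilities and {0,1} targets.

    Illustrative sketch: `eps` guards against division by zero when both
    the prediction and the target are empty.
    """
    inter = sum(p * t for p, t in zip(probs, targets))
    union = sum(probs) + sum(targets) - inter
    return 1.0 - (inter + eps) / (union + eps)

# A perfect prediction gives a loss near 0; a fully disjoint one, near 1.
print(soft_jaccard_loss([1.0, 0.0, 1.0], [1, 0, 1]))
```

Minimizing this quantity directly targets the IoU metric reported during validation.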
“BONe DLPred” performs inference on scans using PyTorch-formatted (PTH) models. The module loads the PTH model with its embedded metadata (architecture, backbone, normalization type, etc.), chunks the input volume in 2D, 2.5D, or 3D, normalizes the chunks, runs prediction in parallel batches (single- or multi-GPU), and reconstructs full-resolution probability maps using overlap-aware merging. Final voxel labels are assigned using user-specified confidence thresholds, and performance/benchmark statistics are reported.
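The chunked, overlap-aware inference strategy can be sketched as follows. The uniform averaging of overlaps and the helper names are illustrative simplifications, not BONe DLPred's exact implementation:

```python
import numpy as np

def merge_overlapping_chunks(shape, chunks, chunk_size):
    """Average per-pixel probabilities from overlapping chunk predictions.

    `chunks` maps (y, x) top-left corners to probability arrays of
    shape (chunk_size, chunk_size); overlap regions are averaged.
    """
    prob = np.zeros(shape, dtype=np.float64)
    weight = np.zeros(shape, dtype=np.float64)
    for (y, x), p in chunks.items():
        prob[y:y + chunk_size, x:x + chunk_size] += p
        weight[y:y + chunk_size, x:x + chunk_size] += 1.0
    return prob / np.maximum(weight, 1.0)

def predict_volume_slice(slice_2d, model, chunk_size=256, stride=192):
    """Tile one slice with overlapping chunks, predict, and merge."""
    h, w = slice_2d.shape
    ys = list(range(0, max(h - chunk_size, 0) + 1, stride))
    xs = list(range(0, max(w - chunk_size, 0) + 1, stride))
    # Ensure the last chunks touch the volume border.
    if ys[-1] != h - chunk_size:
        ys.append(h - chunk_size)
    if xs[-1] != w - chunk_size:
        xs.append(w - chunk_size)
    chunks = {(y, x): model(slice_2d[y:y + chunk_size, x:x + chunk_size])
              for y in ys for x in xs}
    probs = merge_overlapping_chunks((h, w), chunks, chunk_size)
    return (probs >= 0.5).astype(np.uint8)  # user-set confidence threshold
```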
“BONe IoU” replaces an earlier TCL-based IoU calculator with a faster Python module that automatically computes class-wise and mean IoU. The calculation is GPU-accelerated when CUDA is available.
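The metric itself is straightforward; a minimal NumPy version (the module runs the equivalent computation on GPU when CUDA is available) might look like:

```python
import numpy as np

def class_and_mean_iou(pred, ref, num_classes):
    """Class-wise and mean IoU between predicted and reference label maps."""
    ious = []
    for c in range(num_classes):
        p, r = (pred == c), (ref == c)
        inter = np.logical_and(p, r).sum()
        union = np.logical_or(p, r).sum()
        # Classes absent from both maps have undefined IoU.
        ious.append(float(inter / union) if union else float("nan"))
    return ious, float(np.nanmean(ious))
```

For example, two 2x2 label maps that agree on three of four pixels yield class IoUs of 1.0, 0.5, and 0.5, for a mean IoU of about 0.667.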
Standalone versions of “BONe DLFit”, “BONe DLPred”, and “BONe IoU” were developed for users without Avizo. These retain the same interfaces and functionality but operate on folders of TIFF images and run in a packaged Python 3.12.11 environment. Model weights remain fully interchangeable between the Avizo and standalone versions.
Fitting the baseline model: BP-2D-03
The baseline model (BP-2D-03: U-Net with ResNet-18 backbone, 2D fitting mode, 256-px patch size, and random seed 42) was fitted on Training/Validation Pool 1 (Table 2). Model fitting was performed on a high-performance workstation (“Jarvis”: dual RTX PRO 6000 Blackwell Max-Q and 512 GB RAM). Four random patches were extracted from each 2D tile (slice), resulting in a dataset comprising 120,520 training patches and 22,440 validation patches (scan-level split of 81.25:18.75). Data augmentation was enabled and included random flips, rotations in 90° increments, crops, and domain-shift transformations. Z-score normalization was performed on the patches. The model was initialized with ImageNet-trained weights. Training proceeded for 25 epochs using a batch size of 64, an initial global learning rate of 0.001 with cosine-annealing scheduling, Adam optimizer, Jaccard loss as the optimization objective, and IoU as the evaluation metric. To reduce fitting time, dual-GPU mode was enabled.
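The patch sampling, Z-score normalization, and flip/rotation augmentations described above can be sketched in NumPy. Function names and the per-tile sampling logic are illustrative, not BONe DLFit's exact code:

```python
import numpy as np

rng = np.random.default_rng(42)  # the baseline model used random seed 42

def sample_patches(tile, ref_mask, patch_px=256, n_patches=4):
    """Extract n random patches (image + reference mask) from one 2D tile."""
    h, w = tile.shape
    patches = []
    for _ in range(n_patches):
        y = rng.integers(0, h - patch_px + 1)
        x = rng.integers(0, w - patch_px + 1)
        patches.append((tile[y:y + patch_px, x:x + patch_px],
                        ref_mask[y:y + patch_px, x:x + patch_px]))
    return patches

def zscore(patch, mean, std):
    """Z-score normalization using volume-level statistics."""
    return (patch - mean) / (std + 1e-8)

def augment(img, lbl):
    """Random flips and 90-degree rotations, applied identically to both."""
    k = int(rng.integers(0, 4))
    img, lbl = np.rot90(img, k), np.rot90(lbl, k)
    if rng.random() < 0.5:
        img, lbl = np.fliplr(img), np.fliplr(lbl)
    return img, lbl
```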
Table 2. Overview of the 20 scans used for 5-fold cross-validation
| Order | Scan ID | Test Fold |
|---|---|---|
| 1 | UF_Mammals_31151_HRU | 1 |
| 2 | OMNH_Mammals_44262_HRU | 1 |
| 3 | 2R_2U_HF | 1 |
| 4 | OMNH_Mammals_53994_HRU | 1 |
| 5 | UWBM_Mamm_81969_HRU | 2 |
| 6 | UWBM_Mamm_78743_FTFi | 2 |
| 7 | 12R_12U_HF | 2 |
| 8 | AMNH_Mammals_M-206440_mixed | 2 |
| 9 | OMNH_Mammals_53994_FTFi | 3 |
| 10 | UWBM_Mamm_81969_FTFi | 3 |
| 11 | UF_Mammals_23593-24550_HF | 3 |
| 12 | UAM_Mamm_67696_HF | 3 |
| 13 | 19R_19U_HF | 4 |
| 14 | 1R_1U_HF | 4 |
| 15 | AMNH_Mammals_M-89009_F | 4 |
| 16 | 7R_7U_HF | 4 |
| 17 | UAM_Mamm_24789_FTFi | 5 |
| 18 | 5R_5U_HF | 5 |
| 19 | ZMB_Mam_30740_HRU | 5 |
| 20 | UAM_Mamm_67696_TFiRU | 5 |
Model Evaluation
Model performance was assessed through 5-fold cross-validation, repeated across three random seeds to evaluate generalization and stability, producing 15 models in total whose mean IoU (mIoU) scores were averaged across folds. Additional experiments tested 30 combinations of architectures, backbones, and patch sizes using the most stable cross-validation split, with training conditions held constant except when VRAM limits or convergence issues required adjustments to batch size or learning rate.
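A scan-level split of the 20 scans in Table 2 into five test folds can be sketched as follows; the contiguous grouping mirrors the fold assignments in the table, though BONe's actual split logic may differ:

```python
def scan_level_folds(scan_ids, n_folds=5):
    """Partition scans into contiguous test folds (Table 2 ordering).

    Each fold's scans form the test set once; the remaining scans are
    used for training/validation, so no slices from a single scan can
    leak across the split.
    """
    per_fold = len(scan_ids) // n_folds
    folds = [scan_ids[i * per_fold:(i + 1) * per_fold]
             for i in range(n_folds)]
    splits = []
    for k in range(n_folds):
        test = folds[k]
        trainval = [s for s in scan_ids if s not in test]
        splits.append((trainval, test))
    return splits
```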
Related paper
Lee, A. H., Moore, J. M., Vera Covarrubias, B., and Lynch, L. M. (2025). Segmentation of cortical bone, trabecular bone, and medullary pores from micro-CT images using 2D and 3D deep learning models. Anat Rec, 1–23. doi: 10.1002/ar.25633
Changes after Feb 3, 2026:
List of updated files:
_BONe_Avizo.zip
_BONe_standalone.zip
_Models.zip
Additional features and optimizations were added to the BONe Avizo modules and standalone apps. They have been updated from version 1.0.0 to 1.12.6.
BONe DLFit 1.12.6
- (Avizo only) Normalization during volume-statistics calculation is more precise and now matches the normalization used by standalone BONe DLFit as well as by the Avizo and standalone versions of BONe DLPred.
- Calculation of volume statistics for normalization uses 70-85% less RAM. The number of scans processed simultaneously was reduced from 16 to 2, and RAM pressure was further reduced by using a chunked, bitwise-exact method to calculate the volume mean, standard deviation, minimum, and maximum.
- Calculation of volume statistics no longer relies on an external Python executable; that code path was removed.
- Added versioning to the graphical user interface and metadata.
- Added toggle to switch between shuffling scans before the training/validation split (legacy behavior) or not shuffling the scans (new deterministic behavior).
- Added custom 3D stride option for 3D U-Net with ResNet backbones.
- Added optimizer menu with Adam, AdamW, and SGD as options. AdamW exposes a customizable weight decay setting, and SGD exposes customizable weight decay and momentum settings.
- Added more loss functions. The options now include Jaccard, Dice, Lovasz-Softmax Jaccard, Focal, Tversky, and Focal Tversky. Focal loss exposes a gamma setting; Tversky loss exposes Tversky alpha and beta settings; and Focal Tversky loss exposes alpha, beta, and gamma settings.
- Added menus to customize the number of DataLoader workers and the prefetch factor. These settings are not customizable on Windows.
- Adjusted loss calculation to account for future implementation of gradient accumulation (not currently customizable in the user interface).
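The chunked statistics calculation mentioned above can be illustrated with a standard streaming update (Chan et al.'s parallel mean/variance combination). This is a sketch of the general technique, which keeps only one chunk in RAM at a time, not necessarily BONe's exact method:

```python
import numpy as np

def chunked_stats(chunks):
    """Streaming mean/std/min/max over volume chunks.

    Combines per-chunk statistics with a numerically stable running
    update, so the full volume never has to be resident in memory.
    """
    n = 0
    mean = 0.0
    m2 = 0.0  # running sum of squared deviations from the mean
    vmin, vmax = np.inf, -np.inf
    for chunk in chunks:
        c = np.asarray(chunk, dtype=np.float64).ravel()
        cn = c.size
        cmean = c.mean()
        cm2 = ((c - cmean) ** 2).sum()
        # Chan et al. combination of (n, mean, m2) with the new chunk.
        delta = cmean - mean
        total = n + cn
        mean += delta * cn / total
        m2 += cm2 + delta ** 2 * n * cn / total
        n = total
        vmin, vmax = min(vmin, c.min()), max(vmax, c.max())
    return mean, float(np.sqrt(m2 / n)), vmin, vmax
```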
BONe DLPred 1.12.6
- Added versioning to the graphical user interface.
- Added "Full-tile" option to Chunk size setting. Segmentation is performed on the full width and height of the tile (2D/2.5D slice or 3D slab) instead of smaller overlapping chunks. However, the model still retains the original receptive window (patch size) from fitting.
Minor update to models
The main production model BP-2D-03 was updated to BP-2D-03a. It was fitted using BONe DLFit 1.12.6 in Avizo 2025.1.1 (Ubuntu 22.04 LTS) with slightly improved numerical precision during normalization. More importantly, the fitting logs record the substantial reduction in RAM usage. The updated log and weights files are located in the BP-Models folder. A duplicate pair of these files is also stored in the Linux_v_Windows/Jarvis_linux_avizo_dual-gpu_NVMe folder. Finally, an updated version of the model fitted using standalone BONe DLFit 1.12.6 is located in the Linux_v_Windows/Jarvis_linux_standalone_dual-gpu_NVMe folder.
