Data from: BubbleID: A deep learning framework for bubble interface dynamics analysis
Data files (May 23, 2025 version, 946.59 MB)
- ned3-001_BubbleID.zip (946.59 MB)
- README.md (2.98 KB)
Abstract
This paper presents BubbleID, a sophisticated deep learning architecture designed to comprehensively identify both static and dynamic attributes of bubbles within sequences of boiling images. By amalgamating segmentation powered by Mask R-CNN with SORT-based tracking techniques, the framework is capable of analyzing each bubble's location, dimensions, interface shape, and velocity over its lifetime and capturing dynamic events such as bubble departure. BubbleID is trained and tested on boiling images across diverse heater surfaces and operational settings. This paper also offers a comparative analysis of bubble interface dynamics prior to and post-critical heat flux conditions.
Dataset DOI: 10.5061/dryad.ksn02v7gx
Description of the data and file structure
This dataset includes raw bubble images, annotated images, and saved model weights for BubbleID, an open-source package for bubble identification, tracking, and analysis, as well as liquid-vapor interface dynamics analysis. The bubble images were collected with a high-speed camera (Phantom VEO 710L) during pool boiling tests performed in the facility described in Pandey et al. 2024, and were manually annotated for the liquid-vapor interfaces using the labelme package.
Files and variables
File: ned3-001_BubbleID.zip
Description: The zip file includes four folders, i.e., AnnotatedData, Models, TestData, and Tutorials.
The AnnotatedData folder includes two subfolders and a readme file. It contains pool boiling image data and annotations created in labelme:
- Classes-1: Annotations for liquid-vapor interfaces, including the training set, the testing set, and a set from one pool boiling experiment that does not appear in the other folders (Unseen-Test).
- Classes-2: Annotations for bubbles, distinguishing between attached and detached bubbles.
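The annotation files follow the standard labelme JSON schema (a `shapes` list of labeled polygons plus image dimensions). As a minimal sketch of working with one programmatically, the annotation below is inlined as a string with made-up polygon coordinates; the exact contents of the files in Classes-1 and Classes-2 are an assumption here:

```python
import json

# A labelme-style annotation, inlined as a JSON string for illustration;
# an actual dataset file would be read with json.load(open(path)).
raw = """{
  "shapes": [
    {"label": "interface",
     "points": [[10.0, 12.0], [40.0, 15.0], [30.0, 50.0]],
     "shape_type": "polygon"},
    {"label": "interface",
     "points": [[80.0, 20.0], [95.0, 35.0], [85.0, 60.0]],
     "shape_type": "polygon"}
  ],
  "imageHeight": 256,
  "imageWidth": 256
}"""
annotation = json.loads(raw)

def polygon_area(points):
    """Area of a closed polygon via the shoelace formula."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Areas (in pixels^2) of the annotated interface polygons.
areas = [polygon_area(s["points"]) for s in annotation["shapes"]
         if s["shape_type"] == "polygon"]
print(areas)  # [540.0, 262.5]
```

The same loop works unchanged on the dataset's files, since labelme stores polygon vertices as `[x, y]` pairs under each shape's `points` key.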
The Models folder includes PyTorch model checkpoint files (.pth) storing the weights of pretrained models:
- seg_model.pth: pre-trained instance segmentation model
- class_model.pth: pre-trained classification model
The TestData folder includes two subfolders:
- SteadyStateTest for image sequences sampled at 3000 fps.
- TransientSmallTst for image sequences sampled at 150 fps.
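The stated frame rates are what convert tracked motion into physical velocity: a bubble's displacement per frame scales with the sampling rate. A minimal sketch of that conversion follows; the centroid values and the pixel-to-millimeter calibration (`MM_PER_PIXEL`) are hypothetical illustrations, not values from the dataset:

```python
# Successive bubble centroids (in pixels) from a tracker; values are illustrative.
centroids = [(120.0, 200.0), (121.5, 196.0), (123.0, 192.0)]

FPS = 3000.0          # SteadyStateTest sampling rate from this dataset
MM_PER_PIXEL = 0.02   # hypothetical calibration; depends on the optical setup

def mean_speed(points, fps, mm_per_px):
    """Average speed (mm/s) from successive centroid positions."""
    total = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        total += ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
    mean_px_per_frame = total / (len(points) - 1)
    return mean_px_per_frame * fps * mm_per_px

speed = mean_speed(centroids, FPS, MM_PER_PIXEL)
```

At 150 fps (the TransientSmallTst rate) the same pixel displacement corresponds to a 20x lower velocity, which is why the sampling rate of each subfolder matters.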
The Tutorials folder includes a Jupyter notebook illustrating the steps for using the pretrained models and the BubbleID class to save and display data extracted from bubble images/videos.
Code/software
- The images (.jpg) and annotations (.json) can be viewed using the labelme package: install labelme and point it to the directory containing the images and JSON files.
- The tutorial (.ipynb) can be opened with any Jupyter-compatible tool, e.g., Jupyter Notebook or Google Colab. It can be viewed in Google Colab, but there is no simple way to run it there; it runs in Jupyter Notebook when the dependencies are installed correctly.
- The checkpoint files (.pth) can be imported with the PyTorch package in a Python script by specifying the path to the weights through the BubbleID class, e.g., test120 = BubbleID.DataAnalysis(imagesfolder, videopath, savefolder, extension, modelweights, "cpu")