DeepInMiniscope: Deep-learning-powered physics-informed integrated miniscope
Abstract
Mask-based integrated fluorescence microscopy is a compact imaging technique for biomedical research. It can perform snapshot 3D imaging through a thin optical mask with a scalable field of view (FOV) and a thin device profile. Integrated microscopy relies on computational algorithms for object reconstruction, but efficient reconstruction algorithms for large-scale data have been lacking. Here, we developed DeepInMiniscope, a miniaturized integrated microscope featuring a custom-designed optical mask and a multi-stage physics-informed deep-learning model. This design reduces computational resource demands by orders of magnitude and enables fast reconstruction. Our deep-learning algorithm can reconstruct object volumes over 4×6×0.6 mm³. We demonstrated substantial improvements in both reconstruction quality and speed over traditional methods for large-scale data. Notably, we imaged neuronal activity with near-cellular resolution in awake mouse cortex, a substantial leap over existing integrated microscopes. DeepInMiniscope holds great promise for scalable, large-FOV, high-speed 3D imaging applications with a compact device footprint.
https://doi.org/10.5061/dryad.6t1g1jx83
Description of the data and file structure
DeepInMiniscope: Learned Integrated Miniscope
Datasets, models and codes for 2D and 3D sample reconstructions.
The dataset for 2D reconstruction includes test data for green-stained lens tissue.
Input: measured images of green fluorescent stained lens tissue, disassembled into sub-FOV patches.
Output: reconstructed image of the slide containing green lens tissue features.
The dataset for 3D reconstruction includes test data for 3D reconstruction of an in-vivo mouse brain video recording.
Input: time-series standard deviation of the difference-to-local-mean weighted raw video.
Output: reconstructed 4D volumetric video containing the 3D distribution of neural activity.
Files and variables
Download data, code, and sample results
- Download data.zip, code.zip, and results.zip.
- Unzip the downloaded files and place them in the same main folder.
- Confirm that the main folder contains three subfolders: data, code, and results. Inside the data and code folders, there should be subfolders for each test case.
Data
2D_lenstissue
data_2d_lenstissue.mat: Measured images of green fluorescent stained lens tissue, disassembled into sub-FOV patches.
- Xt: stacked 108 sub-FOV patches of the measured image, each centered at one microlens unit, with 720 x 720 pixels per patch. Data dimensions are ordered (batch, height, width, FOV).
- Yt: placeholder variable for the reconstructed object, each patch centered at the corresponding microlens unit, with 180 x 180 voxels per patch. Data dimensions are ordered (batch, height, width, FOV).
reconM_0308: Trained Multi-FOV ADMM-Net model for 2D lens tissue reconstruction.
gen_lenstissue.mat: Lens tissue reconstruction generated by running the model with 2D_lenstissue.py.
- generated_images: stacked 108 sub-FOV reconstructions of the lens tissue sample produced by the Multi-FOV ADMM-Net; the assembled full-sample reconstruction is shown in results/2D_lenstissue_reconstruction.png (a loading sketch follows below).
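The snippet below is a minimal loading sketch, not part of the released code. It assumes the .mat files are in a pre-v7.3 format readable by scipy.io.loadmat (v7.3 files would need h5py instead), and the relative paths follow the folder layout described above.

```python
# Minimal sketch: load and inspect the 2D lens-tissue data.
# Assumes pre-v7.3 .mat files and the data/2D_lenstissue/ layout above.
import scipy.io as sio

data = sio.loadmat("data/2D_lenstissue/data_2d_lenstissue.mat")
Xt = data["Xt"]  # measured sub-FOV patches, (batch, 720, 720, 108)
Yt = data["Yt"]  # reconstruction placeholder, (batch, 180, 180, 108)
print("Xt:", Xt.shape, "Yt:", Yt.shape)

gen = sio.loadmat("data/2D_lenstissue/gen_lenstissue.mat")
rec = gen["generated_images"]  # 108 reconstructed sub-FOVs
print("generated_images:", rec.shape)
```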
3D_mouse
reconM_g704_z5_v4: Trained 3D Multi-FOV ADMM-Net model for 3D sample reconstructions.
t_img_recd_video0003 24-04-04 18-31-11_abetterrecordlong_03560_1_290.mat: Time-series standard deviation of the difference-to-local-mean weighted raw video.
- Xts: test video with 290 frames, each frame containing 6 FOVs with 1408 x 1408 pixels per FOV. Data dimensions are ordered (frames, height, width, FOV).
gen_img_recd_video0003 24-04-04 18-31-11_abetterrecordlong_03560_1_290_v4.mat: Generated 4D volumetric video containing the 3D distribution of neural activity.
- generated_images_fu: frame-by-frame 3D reconstruction of the recorded video in uint8 format. Data dimensions are ordered (batch, FOV, height, width, depth). Each frame contains 6 FOVs, and each FOV has 13 reconstruction depths with 416 x 416 voxels per depth (a loading sketch follows below).
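As a hedged illustration of working with the 4D output, the sketch below loads the generated volume and takes a maximum-intensity projection across depth for one frame and one FOV; the file path and indexing are assumptions based on the shape descriptions above.

```python
# Sketch: load one frame of the reconstructed 4D video and take a
# maximum-intensity projection over the 13 depth planes.
import scipy.io as sio
import matplotlib.pyplot as plt

gen = sio.loadmat(
    "data/3D_mouse/gen_img_recd_video0003 24-04-04 18-31-11_abetterrecordlong_03560_1_290_v4.mat"
)
vol = gen["generated_images_fu"]  # uint8, (batch, FOV, height, width, depth)

frame0_fov0 = vol[0, 0]           # one FOV: (416, 416, 13)
mip = frame0_fov0.max(axis=-1)    # collapse depth to a 2D image
plt.imshow(mip, cmap="gray")
plt.title("Frame 0, FOV 0: max projection over depth")
plt.show()
```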
Variables inside saved model subfolders (reconM_0308 and reconM_g704_z5_v4):
- saved_model.pb: model computation graph including architecture and input/output definitions.
- keras_metadata.pb: Keras metadata for the saved model, including model class, training configuration, and custom objects.
- assets: external files for custom assets loaded during model training/inference. This folder is empty, as the model does not use custom assets.
- variables.data-00000-of-00001: numerical values of model weights and parameters.
- variables.index: index file that maps variable names to weight locations in .data.
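Both model folders are TensorFlow SavedModel directories. A minimal loading sketch, assuming TensorFlow 2.7 and that loading with compile=False needs no extra custom objects (the computation graph is stored in saved_model.pb):

```python
# Sketch: load a trained Multi-FOV ADMM-Net from its SavedModel folder.
# compile=False skips the training configuration, which is not needed
# for inference; the path is an assumption based on the layout above.
import tensorflow as tf

model = tf.keras.models.load_model("data/2D_lenstissue/reconM_0308", compile=False)
model.summary()
```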
Code/software
Set up the Python environment
- Download and install the Anaconda distribution.
- The code was tested with the following packages:
- python=3.9.7
- tensorflow=2.7.0
- keras=2.7.0
- matplotlib=3.4.3
- scipy=1.7.1
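After installing, a quick sanity check (a sketch; it only prints the versions of the packages listed above for comparison against the tested ones):

```python
# Sketch: verify installed package versions against the tested versions.
import sys
import tensorflow as tf
import keras
import matplotlib
import scipy

print("python    ", sys.version.split()[0])  # tested: 3.9.7
print("tensorflow", tf.__version__)          # tested: 2.7.0
print("keras     ", keras.__version__)       # tested: 2.7.0
print("matplotlib", matplotlib.__version__)  # tested: 3.4.3
print("scipy     ", scipy.__version__)       # tested: 1.7.1
```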
Code
2D_lenstissue.py: Python code that runs the Multi-FOV ADMM-Net model to generate 2D reconstruction results; the function of each script section is described at the beginning of that section (see the usage sketch after this list).
lenstissue_2D.m: MATLAB code to display the generated image and reassemble the sub-FOV patches.
sup_psf.m: MATLAB script to load the microlens coordinate data and generate the PSF pattern.
lenscoordinates.xls: Table of microlens unit coordinates.
3D mouse.py: Python code that runs the Multi-FOV ADMM-Net model to generate 3D reconstruction results; the function of each script section is described at the beginning of that section.
mouse_3D.m: MATLAB code to display the reconstructed neural-activity video and compute temporal correlations.
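For orientation, the sketch below mirrors the inference step that 2D_lenstissue.py performs. It is a hypothetical outline assembled from the file descriptions above, not the script itself; the actual script may differ in pre- and post-processing.

```python
# Hypothetical outline of 2D inference: load the trained model, run it on
# the measured sub-FOV patches, and save the reconstructions. Paths and
# variable names follow the descriptions above.
import numpy as np
import scipy.io as sio
import tensorflow as tf

model = tf.keras.models.load_model("data/2D_lenstissue/reconM_0308", compile=False)

Xt = sio.loadmat("data/2D_lenstissue/data_2d_lenstissue.mat")["Xt"].astype(np.float32)
generated = model.predict(Xt)  # (batch, 180, 180, 108) per the description above

sio.savemat("gen_lenstissue.mat", {"generated_images": generated})
# Reassemble and display the sub-FOV patches with lenstissue_2D.m.
```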