Quantum-inspired computational wavefront shaping enables turbulence-resilient distributed aperture synthesis imaging
Data files
Nov 24, 2025 version files 130.11 MB
- QiCWS_code_data.zip (130.11 MB)
- README.md (4.33 KB)
Abstract
Inspired by quantum nonlocal aberration cancellation, this work proposes a computational wavefront shaping (CWS) approach to address the heavy hardware demands of correcting complex aberrations in optical imaging. By exploiting classically correlated light, CWS digitally corrects aberrations on the signal path by introducing a virtual wavefront corrector during the computational propagation of a reference field, entirely bypassing the need for physical corrective elements. The optimal correction is determined by optimizing an image sharpness metric of the computationally reconstructed image, rather than using physical wavefront sensors or interferometric detection. This closed-loop process, encompassing aberration characterization, wavefront correction, and image reconstruction, is performed computationally using only a single-pixel detector, thereby significantly relaxing hardware requirements.
Experimental results (included in this dataset ZIP file) confirm that CWS effectively restores image quality under various aberration conditions. In the proof-of-principle experiments of CWS (in the "Fig 3" file), strong aberrations were introduced as a random phase screen to simulate highly complex scattering media. A diffraction-limited image can be obtained under the guidance of the image-gradient sharpness metric. In the DASI (distributed aperture synthesis imaging) configuration (in the "Fig 4" file), a high-resolution image was computationally recovered without co-phasing or adaptive optics. Compared to conventional imaging and correction methods, CWS shifts the burden of wavefront shaping from hardware to the computational domain. This approach is particularly advantageous given rapidly advancing computing power and algorithms, showing significant promise for applications ranging from biomedical imaging to standoff atmospheric sensing. These findings not only validate the physical principles of CWS but also demonstrate its practical potential in complex optical environments.
Brief summary
This repository contains code for “Quantum-inspired computational wavefront shaping enables turbulence-resilient distributed aperture synthesis imaging”. The optimization is guided by the image sharpness metric through a PyTorch-based optimizer (Adam) and optional GPU acceleration. The code can reconstruct two example datasets corresponding to fig3 and fig4 in the original analysis.
Description of the data and file structure
Top-level files and folders in QiCWS_code_data.zip are listed as follows:
- `opt.py`: main Python script implementing reconstruction and optimization using PyTorch.
- `fig3/`, `fig4/`: dataset folders. Each dataset folder contains `bktsequence.mat`, `phir_array.mat`, and a `mask.mat` file containing the sampling mask which is loaded on the spatial light modulator.
- `intermediate/`: output directory where per-iteration `.mat` files and PNG reconstructions are saved.
- `requirements.txt`: Python dependency list.
File relationships and usage:
- The mask (`S0`) defines which pixels are modulated; the number of positive entries in the mask equals the number of optimized phase variables.
- `phir_array.mat` contains the per-sample modulation phase patterns used in forward-model simulations; `bktsequence.mat` contains the bucket (single-pixel) signals recorded for each pattern.
- The Python pipeline reads the mask, modulation phases, and bucket sequence, reconstructs an image via a ghost-imaging forward model, and then optimizes the mask's phase values to maximize a sharpness metric (implemented as the image gradient).
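The ghost-imaging reconstruction step can be sketched as follows. This is a minimal NumPy illustration of the standard correlation estimator between bucket signals and per-sample reference intensities; the function and variable names are illustrative and do not reproduce the exact code in `opt.py`:

```python
import numpy as np

def ghost_image(fields, bucket):
    """Reconstruct an image by correlating per-sample reference
    intensities with bucket (single-pixel) signals.

    fields: (K, H, W) complex reference fields, one per pattern
    bucket: (K,) scalar bucket-detector readings
    """
    intens = np.abs(fields) ** 2                  # reference intensities I_k(x, y)
    b = bucket - bucket.mean()                    # remove the DC background
    # <Delta B * Delta I> correlation over the K samples
    return np.tensordot(b, intens - intens.mean(axis=0), axes=(0, 0)) / len(b)
```

Because each reconstruction is a simple correlation, the image is a differentiable function of any phase applied to the reference fields, which is what makes the gradient-based optimization below possible.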
Sharing / Access information
Dataset DOI: https://doi.org/10.5061/dryad.5tb2rbph4
Code / Software
This repository provides the Python implementation (opt.py) that reproduces the main workflow. Key details:
- Language: Python 3.8+
- Main libraries: `numpy`, `scipy` (for `.mat` I/O), `matplotlib` (for visualization), and `torch` (PyTorch) for GPU-accelerated computation and optimization.
Typical workflow (high level):
- Load `mask.mat` (or fallback `C16.mat`/`S64.mat`), `phir_array.mat`, and `bktsequence.mat`.
- Use the forward model (FFT-based propagation) to compute a per-sample reference field and reconstruct the image via correlation with bucket signals.
- Compute the sharpness metric (image gradient) from the reconstructed image.
- Use PyTorch Adam to optimize the phase values placed at the mask positions, updating them by gradient descent.
- Save intermediate phase maps and reconstructions to `intermediate/` for inspection.
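The optimization loop in the workflow above can be sketched in simplified form. This is a hedged PyTorch illustration only: the propagation model (a plain 2D FFT), array shapes, and function names are assumptions and do not reproduce `opt.py` exactly:

```python
import torch

def sharpness(img):
    """Image-gradient sharpness metric: sum of squared finite differences."""
    gx = img[:, 1:] - img[:, :-1]
    gy = img[1:, :] - img[:-1, :]
    return (gx ** 2).sum() + (gy ** 2).sum()

def optimize_phases(phir, bucket, mask, steps=200, lr=0.1):
    """Optimize the virtual-corrector phases at the mask positions so that
    the reconstructed ghost image is maximally sharp.

    phir:   (K, H, W) per-sample modulation phases (real tensor)
    bucket: (K,) bucket signals
    mask:   (H, W) boolean sampling mask
    """
    theta = torch.zeros(int(mask.sum()), requires_grad=True)  # one phase per mask pixel
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        corr = torch.zeros(phir.shape[1:], dtype=torch.cfloat)
        corr[mask] = torch.exp(1j * theta)        # virtual wavefront corrector
        # computational propagation of the corrected reference field
        field = torch.fft.fft2(corr * torch.exp(1j * phir))
        intens = field.abs() ** 2
        img = torch.tensordot(bucket - bucket.mean(),
                              intens - intens.mean(0), dims=1)
        loss = -sharpness(img)                    # maximize sharpness
        loss.backward()
        opt.step()
    return theta.detach(), img.detach()
```

The key design point, matching the paper's claim, is that no physical corrector appears anywhere: the correction `exp(1j * theta)` exists only inside the computational propagation, and Adam updates it through automatic differentiation of the sharpness metric.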
How to run
- Create and activate a virtual environment, then install dependencies (Windows PowerShell example):
cd ".\QiCWS_code_data"
python -m venv venv
.\venv\Scripts\Activate.ps1
pip install --upgrade pip
pip install -r requirements.txt
# For GPU acceleration, install a CUDA-compatible PyTorch per instructions on https://pytorch.org/
- Run reconstruction and optimization. Choose dataset `fig3` or `fig4` (it may take a few hours):
# default (fig4)
python opt.py
# specify fig3
python opt.py --dataset fig3
Outputs are written to `intermediate/`:
- `generation_{iter}.mat`: contains `phase_trans` (the mask with current phase values)
- `generation_{iter}.png`: reconstructed image for that iteration
- `final_solution.mat`, `final_solution.png`: saved after the final iteration
Notes on variables and compatibility
`opt.py` searches for the first 2D numeric array in the `mask.mat` file to use as `S0`. If your mask is stored under a specific variable name (e.g., `S64` or `C16`), consider renaming it to a generic `mask` or updating `opt.py` to select that variable.
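That variable search can be sketched as follows (an illustrative helper under the stated behavior, not the exact logic in `opt.py`):

```python
import numpy as np
from scipy.io import loadmat

def load_mask(path):
    """Return the first 2D numeric array found in a MATLAB .mat file."""
    data = loadmat(path)
    for key, val in data.items():
        if key.startswith("__"):                  # skip __header__, __version__, ...
            continue
        if (isinstance(val, np.ndarray) and val.ndim == 2
                and np.issubdtype(val.dtype, np.number)):
            return val
    raise KeyError(f"no 2D numeric array found in {path}")
```

With this fallback, the mask variable's name does not matter, only that it is the first 2D numeric array encountered, which is why renaming to a generic `mask` is the safer option when a file holds several arrays.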
Reproducibility and best practices
- Record the versions of Python, PyTorch, and CUDA used in your run. Set random seeds in `opt.py` if deterministic behavior is required.
- The current implementation performs per-sample FFTs inside a loop (memory efficient). For larger datasets, or to exploit GPU throughput, vectorized batched FFTs can be implemented (faster but higher memory usage).
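The looped-versus-batched trade-off can be illustrated with a minimal sketch: `torch.fft.fft2` operates on the last two dimensions, so batching over the sample axis is a one-line change that produces identical results:

```python
import torch

def fft_loop(fields):
    """Per-sample 2D FFTs in a Python loop (low peak memory)."""
    return torch.stack([torch.fft.fft2(f) for f in fields])

def fft_batched(fields):
    """One batched 2D FFT over the leading sample axis (faster on GPU,
    but holds the whole (K, H, W) result in memory at once)."""
    return torch.fft.fft2(fields)   # fft2 transforms the last two dims
```

Chunking the sample axis (e.g., batches of a few hundred patterns) is a common middle ground between the two.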
