A large-scale coherent 4D imaging sensor
Data files

Feb 13, 2026 version files (30.05 MB):
- Figure_Data.zip (30.04 MB)
- README.md (3.81 KB)
Abstract
Detailed and accurate three-dimensional (3D) mapping of dynamic environments is essential for machines to interface with their surroundings, and for human-machine interaction. Although considerable effort has been spent to create the equivalent of the CMOS image sensor for the 3D world, scalable, high-performance, reliable solutions have proven elusive. Focal plane array (FPA) sensors using frequency modulated continuous wave (FMCW) light detection and ranging (LiDAR) have shown potential to meet all the requirements and also provide direct measurement of radial velocity as a fourth dimension (4D). Prior demonstrations, while promising, have not achieved the simultaneous scale and performance required by commercial applications. In this paper, we present a large-scale, coherent LiDAR FPA enabled by comprehensive chip-scale optoelectronic integration. A 4D imaging camera is built around the FPA and used to acquire point clouds. At the core is a 352x176 pixel two-dimensional FMCW LiDAR FPA comprising over 0.6 million photonic components, all integrated on-chip together with their associated electronics. This represents a fivefold increase in pixel count with respect to previous demonstrations. The pixel architecture combines the outbound and inbound optical path within the pixel in a monostatic configuration, together with coherent detectors and electronics. Frequency modulated light is directed sequentially to groups of pixels by in-plane thermo-optic switches with integrated electronics for driving and calibration. An integrated serial digital interface controls both optical switching and readout synchronously. Point clouds of objects ranging from 4 to 65 meters with per-pixel integration time compatible with frame rates from 3 to 15 fps are shown. This result demonstrates the capabilities of FMCW LiDAR FPA sensors as enablers of ubiquitous, low cost, compact coherent 4D imaging cameras.
Dataset DOI: 10.5061/dryad.6t1g1jxcm
Description of the data and file structure
All of the data used to generate the figures in the publication is included in the attached archive file (.zip).
The archive contains multiple folders whose names match the figures in the associated manuscript (e.g., data from Figure 3c is in the folder named "Fig_3c").
Each folder contains datasets (in .csv form) and, where needed, Python scripts in Jupyter Notebook form. Running these scripts together with the corresponding data from the same directory reproduces the figures in the paper that match the folder name.
Note: No data is available for Fig. 1 and Fig. 2.
Below is a summary of the data for the main-text figures in the archive.
Folder "Fig_3a": .csv data together with a Jupyter Notebook that generates the pointcloud in Fig. 3a. The columns in the .csv are named x_m, y_m, and z_m; they correspond to the Z, X, and Y axes in the plotted figure.
Folder "Fig_3b4x": .csv data together with a Jupyter Notebook that generates the pointcloud in Fig. 3b. The columns in the .csv are named x_m, y_m, and z_m; they correspond to the Z, X, and Y axes in the plotted figure.
Folder "Fig_3c": .csv data together with a Jupyter Notebook that generates the velocity pointcloud in Fig. 3c. The columns in the .csv are named x_m, y_m, z_m, and vel_f_m; they correspond to the Z, X, and Y axes and the velocity of the points in the plotted figure.
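As a minimal sketch of how one of the pointcloud .csv files can be loaded with pandas: the column names (x_m, y_m, z_m) and the axis mapping follow the folder descriptions above, but the inline CSV here is an illustrative stand-in, not the real dataset.

```python
import io
import pandas as pd

# Stand-in CSV with the column layout described above (not real data).
csv_text = "x_m,y_m,z_m\n4.1,0.2,1.5\n4.3,-0.1,1.4\n"
df = pd.read_csv(io.StringIO(csv_text))

# Per the folder descriptions, x_m maps to the plotted Z axis
# (range direction), y_m to the plotted X axis, z_m to the plotted Y axis.
plot_x = df["y_m"]
plot_y = df["z_m"]
plot_z = df["x_m"]
print(plot_z.tolist())
```

To use the real data, replace the `io.StringIO(...)` argument with the path to the .csv inside the corresponding folder.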
Folder "Fig_3def": Contains the pictures used in Fig. 3d, e, and f. No data available.
Folder "Fig_4a": .csv data together with a Jupyter Notebook that generates the histogram in Fig. 4a. The column in the .csv is named data; it corresponds to the optical power at the pixels.
Folder "Fig_4b": .csv data together with a Jupyter Notebook that generates the histogram in Fig. 4b. The column in the .csv is named data; it corresponds to the Kappa values. The fit is also generated in the Jupyter script.
Folder "Fig_4c": .csv data together with a Jupyter Notebook that generates the plot in Fig. 4c. The columns in the .csv are named XX, ratio_, YY, YY_fit, optical_signal_power, and popt. The script omits the 'nan' values, which exist because the arrays have different lengths. The fit was performed elsewhere; the processed data included here is used to generate the plots directly.
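When columns of different lengths are stored in one .csv, the shorter columns are padded with NaN; a sketch of how such padding can be dropped per column before plotting (column names follow the description above, the data is a stand-in):

```python
import io
import pandas as pd

# Stand-in CSV where column YY is shorter than XX and is NaN-padded.
csv_text = "XX,YY\n0.0,1.0\n1.0,2.0\n2.0,\n"
df = pd.read_csv(io.StringIO(csv_text))

# Drop the NaN padding per column, recovering arrays of their true lengths.
xx = df["XX"].dropna().to_numpy()
yy = df["YY"].dropna().to_numpy()
print(len(xx), len(yy))  # 3 2
```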
Folder "Fig_4d": .csv data together with a Jupyter Notebook that generates the pointcloud in Fig. 4d. The columns in the .csv are named x_m, y_m, and z_m; they correspond to the Z, X, and Y axes in the plotted figure.
Folder "Fig_4e": Contains the pictures used in Fig. 4e. No data available.
Folder "Fig_4fg": Two .csv data files together with a Jupyter Notebook that generate the histograms in Fig. 4f and 4g. The columns in the .csv files correspond to the data (distance error and velocity error, respectively) obtained from different targets: Retro, 50 %, 10 %, and 5 %.
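A sketch of plotting such per-target error histograms with Matplotlib: the target column names follow the description above, but the inline values are made up and only two targets are shown.

```python
import io
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

# Stand-in CSV: one column of distance errors per target (not real data).
csv_text = "Retro,50 %\n0.01,0.03\n-0.02,0.05\n0.00,\n"
df = pd.read_csv(io.StringIO(csv_text))

# Overlay one histogram per target, dropping NaN padding in shorter columns.
fig, ax = plt.subplots()
for target in df.columns:
    ax.hist(df[target].dropna(), bins=10, alpha=0.5, label=target)
ax.set_xlabel("Distance error (m)")
ax.set_ylabel("Count")
ax.legend()
fig.savefig("fig4f_sketch.png")
```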
The folders named 'FigExt_##' are organized similarly: each stores its data in .csv form and contains one Jupyter notebook file that loads the data and generates the figures associated with the folder name.
Files and variables
File: Figure_Data.zip
Description: The main archive file containing the subfolders for each Figure in the manuscript.
Code/software
All scripts are run with Python 3.
All scripts rely only on widely available free libraries (NumPy, pandas, Matplotlib, Plotly).
Each folder contains one Python notebook and the corresponding data files in .csv format. The notebook generates the figure in the manuscript from the accompanying data.
