Hypermultiplexed integrated-photonics-based optical tensor processor
Data files
Jun 03, 2025 version files 29.40 GB
-
Fig_3.zip
149.83 KB
-
Fig_S9-Precision_Check.ipynb
303.90 KB
-
Fig2.zip
367.07 KB
-
Fig4.zip
6.68 MB
-
Fig5.zip
8.08 GB
-
README.md
34.18 KB
-
Supplementary.zip
21.31 GB
-
Supplementary2.zip
5.11 MB
Abstract
The escalating data volume and complexity resulting from the rapid expansion of artificial intelligence (AI), the internet of things (IoT), and 5G/6G mobile networks are creating an urgent need for energy-efficient, scalable computing hardware. Here we demonstrate a hypermultiplexed integrated-photonics-based tensor optical processor (HITOP) that can perform trillions of operations per second (TOPS) at an energy efficiency of 40 TOPS/W. Space-time-wavelength three-dimensional (3D) optical parallelism enables O(N²) operations per clock cycle using O(N) modulator devices. The system is built with wafer-fabricated III/V micron-scale lasers and high-speed thin-film lithium-niobate electro-optics for encoding at tens of femtojoules per symbol. The lasing threshold incorporates an analog inline rectifier (ReLU) nonlinearity for low-latency activation. The system's scalability is verified with machine learning models of 405,000 parameters. A combination of high clock rates, energy-efficient processing, and programmability unlocks the potential of light for large-scale AI accelerators in applications ranging from the training of large AI models to real-time decision making in edge deployment.
https://doi.org/10.5061/dryad.kprr4xhgj
This repo shares the data used to produce the figures in the journal paper Hypermultiplexed Integrated-Photonics-based Optical Tensor Processor (link: https://arxiv.org/abs/2401.18050). The code for data analysis is also provided. For a clearer description of the data, please refer to our paper.
Description of the data and file structure
- All the data and code are organized by figure: each compressed file is named after the corresponding figure and contains all the data used to draw it.
- Notice: be careful when accessing the files inside; all the data-processing code depends on the exact path of each file. A missing file will cause errors when re-processing the data.
- The file structure is as follows; we left out the tree for some of the files (data of the inference, etc.).
- If there is data that is not mentioned, it is intermediate data saved during processing and does not affect the final result.
Fig 2
│ Fig2_prepared.ipynb
│ PIV VCSEL PD.csv
│ RTOFunc.py
│ your_output_file.mat
│
└─11012023-VCSEL-OSA
C26 (1).CSV
C26.CSV
C27 (1).CSV
C27.CSV
C28.CSV
C282 (1).CSV
C282.CSV
C29.CSV
C30.CSV
C31-2.CSV
C31.CSV
C32.CSV
C33.CSV
C34.CSV
C35.CSV
C36.CSV
C37.CSV
W20231102_132050.CSV
- All data can be regenerated using the code `Fig2_prepared.ipynb`
- The spectrum data is stored under the folder `11012023-VCSEL-OSA`
  - Each `.csv` contains one spectrum as wavelength (nm, 1st column) vs power (dBm, 2nd column)
- The experimental nonlinear response comes from the file `PIV VCSEL PD.csv`
  - 1st column is the driving voltage (V)
  - 2nd column is the driving current (A)
  - 3rd column is the output photonic voltage (mV)
- The bandwidth measurement is generated by `your_output_file.mat`
  - `freq` (VCSEL) and `freq2` (LNM) are the spectrum frequencies, unit Hz
  - `final` and `final2` are the measured responses, unit dB (already normalized)
  - `finalsmooth` and `finalsmooth2` are the smoothed responses, unit dB
- The file `RTOFunc.py` mainly provides assisting functions for instrument control and data processing
  - Functions `prepare_RTO`, `AutoAdjust`, and `ReadWF` are used for instrument control and can be ignored
  - Function `normal`: normalizes the input data to the range [0,1]
  - Function `findcorr`: calculates the correlation between two waveforms; can be ignored
  - Function `plotWF`: plots two waveforms with the same time axis
  - Function `saveWF`: saves the waveform in binary format
  - Function `averageWF`: averages the input data using mean (1), mode (2), or median (3)
  - Function `LNres`: samples the input signal by averaging over each time step
  - Function `errorate`: calculates the error rate of the signal against the given reference
  - Function `getslope`: calculates the slope of the response with the input as a variable
  - Function `Fitcurve`: finds one period of a multi-period signal, calculates the slope of the extracted signal, and returns the extracted signal and the linear-fit parameters
  - Function `ResponseFit`: generates the fitted linearized signal with the given parameters
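A minimal sketch of reading one spectrum file, under the assumption (stated above) that each `.CSV` in `11012023-VCSEL-OSA` holds two header-less columns, wavelength (nm) and power (dBm); adjust `skip` if your export carries instrument header rows:

```python
# Hedged loader sketch for the OSA spectrum files; the two-column,
# no-header layout is an assumption taken from the description above.
import csv
import io

def load_spectrum(fileobj, skip=0):
    """Return (wavelengths_nm, powers_dbm) from a two-column CSV."""
    rows = [r for r in csv.reader(fileobj) if r][skip:]
    wl = [float(r[0]) for r in rows]
    p = [float(r[1]) for r in rows]
    return wl, p

def peak_wavelength(wl, p):
    """Wavelength at maximum power -- a quick per-spectrum sanity check."""
    return wl[max(range(len(p)), key=p.__getitem__)]

# Demo on synthetic data; replace io.StringIO with open("C26.CSV") for real files.
demo = io.StringIO("1549.8,-40.1\n1550.0,-3.2\n1550.2,-41.0\n")
wl, p = load_spectrum(demo)
print(peak_wavelength(wl, p))  # 1550.0
```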
Fig 3
│ Fig3_prepared.ipynb
│
└─1k_rand
1k_measured_data.csv
1k_truth_data.csv
- All data can be regenerated using the code `Fig3_prepared.ipynb`
- The time-trace data for the random-data multiplication is stored under the folder `1k_rand`
  - `1k_truth_data.csv` contains the normalized truth values to be compared with `1k_measured_data.csv` (also normalized) for accuracy characterization. Both datasets are unitless, with the x-axis in time (nanoseconds).
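The accuracy characterization amounts to comparing the two normalized, unitless traces point by point. A hedged sketch (the notebook's exact metric may differ; this just shows the comparison):

```python
# Hedged sketch: error statistics between the measured and truth traces.
import math

def error_stats(measured, truth):
    """Return (rmse, max_abs_error) between two equal-length unitless traces."""
    assert len(measured) == len(truth)
    errs = [m - t for m, t in zip(measured, truth)]
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return rmse, max(abs(e) for e in errs)

# Demo on three synthetic samples in [0, 1].
rmse, emax = error_stats([0.10, 0.52, 0.98], [0.0, 0.5, 1.0])
print(round(rmse, 4), round(emax, 4))  # 0.06 0.1
```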
Fig 4
├─Fig4
│ │ Fig4_prepared.ipynb
│ │
│ └─TimeTrace
│ index_MNIST_cal.bin
│ index_MNIST_ori.bin
│ input_cali_cal.bin
│ input_cali_ori.bin
│ LNM_7data_ori.bin
│ LNM_cali_index.bin
│ LNM_cali_value.bin
│ LNM_data_cali.bin
│ LNM_data_norm.bin
│ LNM_data_ori.bin
│ MarkerTI.bin
│ ref_7data_ori.bin
│ ref_MNIST_cal.bin
│ ref_MNIST_ori.bin
│ ref_random_ori.bin
│ response_cali_cal.bin
│ response_cali_ori.bin
│ response_MNIST_cal.bin
│ response_MNIST_ori.bin
│ response_random_cal.bin
│ response_random_ori.bin
│ TIindex.bin
│ TIresponse_MNIST_cal.bin
│ TIresponse_MNIST_ori.bin
│ VCSEL_7data_ori.bin
│ VCSEL_data_ori.bin
│
└─model
MNIST_Nonlinear_Scaled_2_Layers_100_weights_12_95.86_testacc.mat
MNIST_Table_2_Linear_weights_1_91.93_testacc.mat
mnist_testset_images.mat
mnist_testset_labels.mat
- All data can be regenerated using the code `Fig4_prepared.ipynb`
- The time-trace data for the random-data multiplication is stored under the folder `TimeTrace` in binary format; all units are photonic voltage (V) and time step (s)
  - `response_MNIST_cal.bin`: response after the modulator with an MNIST image encoded
  - `ref_MNIST_ori.bin`: simulated response used as a reference
  - `TIresponse_MNIST_cal.bin`: integrated real response
  - `index_MNIST_cal.bin`: integration window
  - Other files are not needed
- The data for all the matrices are generated in Fig S8
- All the models and input data used are stored in the folder `model`
  - `mnist_testset_images`: the images encoded onto the VCSEL, with key `mnist_testset_array`
  - `mnist_testset_labels`: the corresponding labels, with key `mnist_testset_labels`
  - `MNIST_Table_2_Linear_weights_1_91.93_testacc`: single-layer inference model with the single key `W1`
  - `MNIST_Nonlinear_Scaled_2_Layers_100_weights_12_95.86_testacc`: double-layer inference model with 2 keys, `W1` (layer 1) and `W2` (layer 2)
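A hedged sketch of loading one of the `TimeTrace` waveforms: the `.bin` files are raw binary, but the element dtype is not stated here, so little-endian float64 is an assumption; check how `RTOFunc.saveWF` writes the files before relying on it.

```python
# Hedged sketch: read a raw binary waveform (photonic voltage in V).
# The "<f8" dtype is an assumption, not documented in this README.
import numpy as np

def read_waveform(path, dtype="<f8"):
    """Load a raw binary waveform file as a 1-D array."""
    return np.fromfile(path, dtype=dtype)

# Demo of the assumed byte layout on synthetic data (np.frombuffer,
# since np.fromfile expects a real file on disk):
wf = np.array([0.0, 0.5, 1.0])
print(np.frombuffer(wf.tobytes(), dtype="<f8").tolist())  # [0.0, 0.5, 1.0]
```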
Fig 5
├─Fig5
│ │ Fig5_prepared.ipynb
│ │
│ ├─EMNIST-1000
│ │ 0_layer1.svg
│ │ 1Dkernal1.svg
│ │ 1Dkernal2.svg
│ │ 1Dkernal3.svg
│ │ 1Dkernal4.svg
│ │ 1Dkernal499.svg
│ │ 1Dkernal5.svg
│ │ 1Dkernal500.svg
│ │ 1_layer1.svg
│ │ 2nd_2d0.svg
│ │ 2nd_2d1.svg
│ │ 2nd_2d2.svg
│ │ 2nd_2d24.svg
│ │ 2nd_2d25.svg
│ │ 2nd_2d3.svg
│ │ 2nd_2d4.svg
│ │ 2_layer1.svg
│ │ 3_layer1.svg
│ │ 497_layer1.svg
│ │ 498_layer1.svg
│ │ 499_layer1.svg
│ │ 4_layer1.svg
│ │ 5_layer1.svg
│ │ C-outvec.svg
│ │ C.svg
│ │ dataprocess.ipynb
│ │ diff_index.dat
│ │ EMNIST_digital_blue.svg
│ │ EMNIST_digital_winter.svg
│ │ EMNIST_distribution.svg
│ │ EMNIST_optical_blue.svg
│ │ EMNIST_optical_winter.svg
│ │ layer2.svg
│ │ mul_TIindex.dat
│ │ response_TI_C35_background.dat
│ │ run0_ref_TI_C35_data_cal.dat
│ │ run0_response_TI_C35_data_cal.dat
│ │ run1000_ref_TI_C35_data_cal.dat
│ │ run1000_response_TI_C35_data_cal.dat
│ │ (similar files)
│ │ S-outvec.svg
│ │ S.svg
│ │ U-outvec.svg
│ │ U.svg
│ │ U_input.svg
│ │ wrong_index.dat
│ │
│ └─Fashion-1000
│ │ Bag.svg
│ │ Bag_out_digital.svg
│ │ Bag_out_optical.svg
│ │ Data process_paper.ipynb
│ │ dataProcess_DAQ_fashion.ipynb
│ │ diff_index.dat
│ │ FashionMNIST_digital
│ │ FashionMNIST_digital.pdf
│ │ FashionMNIST_digital.svg
│ │ FashionMNIST_distribution.svg
│ │ FashionMNIST_optical.pdf
│ │ FashionMNIST_optical.svg
│ │ MNIST_BPD_scope.ipynb
│ │ Sandal.svg
│ │ Sandal_out_digital.svg
│ │ Sandal_out_optical.svg
│ │ T-shirt.svg
│ │ T-shirt_out_digital.svg
│ │ T-shirt_out_optical.svg
│ │
│ └─fashion-1000
│ │ MNIST_BPD_scope.ipynb
│ │ mul_TIindex.dat
│ │ response_TI_C35_background.dat
│ │ run0_ref_TI_C35_data_cal.dat
│ │ run0_response_TI_C35_data_cal.dat
│ │ run100_ref_TI_C35_data_cal.dat
│ │ run100_response_TI_C35_data_cal.dat
│ (similar files)
│ │ wrong_index.dat
│ │
│ └─diff
│ run126_ref_TI_C35_data_cal.dat
│ run126_response_TI_C35_data_cal.dat
│ (similar files)
│
└─model
EMNIST_images.mat
EMNIST_labels.mat
EMNIST_Table_2_VCSEL_weights_1_0.51_testacc.mat
FashionMNIST_linear_Scaled_2_Layers_100_weights_12_93.33_testacc.mat
FMNIST_images.dat
FMNIST_label.dat
- All data can be regenerated using the code `Fig5_prepared.ipynb`
- All the `.pdf` and `.svg` files are generated plots and can be ignored
- The data for the EMNIST matrix is stored in the folder `EMNIST-1000`; all signals are recorded as photonic voltage (V) with a time step of 10 ns
  - Each pair of `run*_response_TI_C35_data_cal.dat` and `run*_ref_TI_C35_data_cal.dat` corresponds to a set of measurement data containing one time trace for 1 or more images
  - `mul_TIindex.dat` is the time-trace index for the time integrator, indicating the integration window
  - `wrong_index.dat` lists the indices of the images with wrong inference results
  - The `diff` folder contains the images that yield wrong results
- The data for the FashionMNIST matrix is stored in the folder `Fashion-1000\fashion-1000`; the content of each data file is formatted similarly to the EMNIST data
- All the models and input data are stored in the folder `model`
  - `EMNIST_images.mat` & `FMNIST_images.dat` are the full image datasets used for the EMNIST and FashionMNIST inference; they are not needed in data processing
  - `EMNIST_labels.mat` (with key `label`) & `FMNIST_label.dat` are the corresponding labels for EMNIST and FashionMNIST
  - `EMNIST_Table_2_VCSEL_weights_1_0.51_testacc.mat` is the pre-trained model for EMNIST inference with 2 keys, `W1` (layer 1) and `W2` (layer 2)
  - `FashionMNIST_linear_Scaled_2_Layers_100_weights_12_93.33_testacc.mat` is the pre-trained model for FashionMNIST inference with 2 keys, `W1` (layer 1) and `W2` (layer 2)
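A hedged sketch of how a two-layer model with keys `W1`/`W2` (loaded digitally, e.g. via `scipy.io.loadmat`) would be applied for reference inference; the shapes below are illustrative, not taken from the `.mat` files:

```python
# Hedged sketch: digital reference pass for a two-layer model,
# y = ReLU(x @ W1) @ W2. Shapes here are illustrative assumptions.
import numpy as np

def two_layer_inference(x, W1, W2):
    """Flattened image batch -> class scores, with a rectifier between layers."""
    h = np.maximum(x @ W1, 0.0)  # layer 1 + ReLU (cf. the inline optical rectifier)
    return h @ W2                # layer 2 (linear readout)

rng = np.random.default_rng(0)
x = rng.random((2, 784))               # two flattened 28x28 images
W1 = rng.standard_normal((784, 100))   # illustrative layer-1 weights
W2 = rng.standard_normal((100, 10))    # illustrative layer-2 weights
print(two_layer_inference(x, W1, W2).shape)  # (2, 10)
```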
Supplementary Information
\---Supplementary
├─model
│ MNIST_Nonlinear_Scaled_2_Layers_100_weights_12_95.86_testacc.mat
│ MNIST_Table_2_Linear_weights_1_91.93_testacc.mat
│ mnist_testset_images.mat
│ mnist_testset_labels.mat
│
└─SI
│ Supplementary_prepared.ipynb
│
├─FigS10
│ ├─1ch-C31-C37
│ │ │ dataProcess_DAQ.ipynb
│ │ │ LNM_cali_index.csv
│ │ │ LNM_value.csv
│ │ │ MNIST_DAQ.ipynb
│ │ │ mul_TIindex.dat
│ │ │ ref_ramp_ori.csv
│ │ │ response_C34_ramp_ori.csv
│ │ │ response_cali_ori.csv
│ │ │ run0_ref_TI_C34_data_cal.dat
│ │ │ run0_ref_TI_C35_data_cal.dat
│ │ │
│ │ ├─0713
│ │ │ MNIST_DAQ.ipynb
│ │ │
│ │ ├─C31-93.5%
│ │ │ 1layer_200im_cal_1.1.A.mat
│ │ │ 1layer_200im_cal_1.1.B.mat
│ │ │ 1layer_200im_cal_1.1.C.mat
│ │ │ 1layer_200im_cal_1.1.D.mat
│ │ │ 1layer_200im_cal_back_1.1.A.mat
│ │ │ 1layer_200im_cal_back_1.1.B.mat
│ │ │ 1layer_200im_cal_back_1.1.C.mat
│ │ │ 1layer_200im_cal_back_1.1.D.mat
│ │ │ LNM_cali_index.csv
│ │ │ LNM_value.csv
│ │ │ mul_TIindex.dat
│ │ │ ref_ramp_ori.csv
│ │ │ response_C34_ramp_ori.csv
│ │ │ response_cali_ori.csv
│ │ │ run0_ref_TI_C34_data_cal.dat
│ │ │ run0_ref_TI_C35_data_cal.dat
│ │ │
│ │ ├─C31-test
│ │ │(similar structure)
│ │ ├─C32-91
│ │ │(similar structure)
│ │ ├─C33-92
│ │ │(similar structure)
│ │ ├─C34
│ │ │(similar structure)
│ │ ├─C34-test
│ │ │(similar structure)
│ │ ├─C35-92.5-delay
│ │ │(similar structure)
│ │ ├─C36-93.5
│ │ │(similar structure)
│ │ └─C37-93
│ │ │(similar structure)
│ │
│ ├─2ch-2layer-MNIST-500-C31-C36
│ │ │ 2layer_20im_cal_1.1.A.mat
│ │ │ 2layer_20im_cal_1.1.B.mat
│ │ │ dataProcess_DAQ.ipynb
│ │ │ dataProcess_DAQ_1000.ipynb
│ │ │ input_cali_ori.dat
│ │ │ LNM_cali_index.dat
│ │ │ LNM_value.dat
│ │ │ MNIST_DAQ.ipynb
│ │ │ ref_ramp_ori.dat
│ │ │ response_C34_ramp_ori.dat
│ │ │ response_C35_ramp_ori.dat
│ │ │ response_cali_ori.csv
│ │ │ run0_ref_TI_C34_data_cal.dat
│ │ │ run0_ref_TI_C35_data_cal.dat
│ │ │
│ │ ├─0-20-1000
│ │ │(similar structure)
│ │ ├─100-120-1000
│ │ │(similar structure)
│ │ ├─120-140-1000
│ │ │(similar structure)
│ │ ├─140-160-1000
│ │ │(similar structure)
│ │ ├─160-180-1000
│ │ │(similar structure)
│ │ ├─180-200-1000
│ │ │(similar structure)
│ │ ├─20-40-1000
│ │ │(similar structure)
│ │ ├─200-220-1000
│ │ │(similar structure)
│ │ ├─220-240-1000
│ │ │(similar structure)
│ │ ├─240-260-1000
│ │ │(similar structure)
│ │ ├─260-280-1000
│ │ │(similar structure)
│ │ ├─280-300-1000
│ │ │(similar structure)
│ │ ├─300-320-1000
│ │ │(similar structure)
│ │ ├─320-340-1000
│ │ │(similar structure)
│ │ ├─340-360-1000
│ │ │(similar structure)
│ │ ├─360-380-1000
│ │ │(similar structure)
│ │ ├─380-400-1000
│ │ │(similar structure)
│ │ ├─40-60-1000
│ │ │(similar structure)
│ │ ├─400-420-1000
│ │ │(similar structure)
│ │ ├─420-440-1000
│ │ │(similar structure)
│ │ ├─440-460-1000
│ │ │(similar structure)
│ │ ├─460-480-1000
│ │ │(similar structure)
│ │ ├─480-500-1000
│ │ │(similar structure)
│ │ ├─60-80-1000
│ │ │(similar structure)
│ │ ├─80-100-1000
│ │ │(similar structure)
│ │ ├─back
│ │ │ 2layer_20im_cal_back_1.1.A.mat
│ │ │ 2layer_20im_cal_back_1.1.B.mat
│ │ │ mul_TIindex.dat
│ │ └─model
│ │ mul_TIindex.dat
│ │
│ └─2ch-2layer-MNIST-500-C34-C35
│ │ dataProcess_DAQ.ipynb
│ │ dataProcess_DAQ_1000.ipynb
│ │ MNIST_DAQ.ipynb
│ │
│ ├─0-20-1000
│ (similar structure)
│ ├─100-120-1000
│ (similar structure)
│ ├─120-140-1000
│ (similar structure)
│ ├─140-160-1000
│ (similar structure)
│ ├─160-180-1000
│ (similar structure)
│ ├─180-200-1000
│ (similar structure)
│ ├─20-40-1000
│ (similar structure)
│ ├─200-220-1000
│ (similar structure)
│ ├─220-240-1000
│ (similar structure)
│ ├─240-260-1000
│ (similar structure)
│ ├─260-280-1000
│ (similar structure)
│ ├─280-300-1000
│ (similar structure)
│ ├─300-320-1000
│ (similar structure)
│ ├─320-340-1000
│ (similar structure)
│ ├─340-360-1000
│ (similar structure)
│ ├─360-380-1000
│ (similar structure)
│ ├─380-400-1000
│ (similar structure)
│ ├─40-60-1000
│ (similar structure)
│ ├─400-420-1000
│ (similar structure)
│ ├─420-440-1000
│ (similar structure)
│ ├─440-460-1000
│ (similar structure)
│ ├─460-480-1000
│ (similar structure)
│ ├─480-500-1000
│ (similar structure)
│ ├─60-80-1000
│ (similar structure)
│ ├─80-100-1000
│ (similar structure)
│ ├─back
│ │ 2layer_20im_cal_back_1.1.A.mat
│ │ 2layer_20im_cal_back_1.1.B.mat
│ │ mul_TIindex.dat
│ │
│ └─model
│ FashionMNIST_linear_Scaled_2_Layers_100_weights_12_93.33_testacc.mat
│ FMNIST_images.dat
│ FMNIST_label.dat
│ MNIST_Nonlinear_Scaled_2_Layers_100_weights_12_95.86_testacc.mat
│ mnist_testset_images.mat
│ mnist_testset_labels.mat
│ mul_TIindex.dat
│
├─FigS3
│ │ balanceHLIM-1480-1640.dat
│ │ balanceHLIM-1540-1555.dat
│ │ balanceHLIM-waveguide-1480-1640-true.dat
│ │ balanceHLIM-waveguide-1480-1640.dat
│ │ balanceHLIM-waveguide-1540-1555-true.dat
│ │ balanceHLIM-waveguide-1540-1555.dat
│ │
│ └─linear
│ │ data1_cal.bin
│ │ data1_norm.bin
│ │ data1_ori.bin
│ │ data2_cal.bin
│ │ data2_norm.bin
│ │ data2_ori.bin
│ │ input_LNM1.bin
│ │ input_LNM2.bin
│ │ response_LNM1.bin
│ │ response_LNM2.bin
│ │
│ ├─10000p-4.2con-start1-er1.39
│ │ └─mul
│ │ data1_ori.bin
│ │ data2_cal.bin
│ │ data2_norm.bin
│ │ data2_ori.bin
│ │ mul_cal_data1.bin
│ │ mul_cal_data2.bin
│ │ mul_cal_mul.bin
│ │ mul_ori_data1.bin
│ │ mul_ori_data2.bin
│ │ mul_ori_mul.bin
│ │
│ ├─1000p-4.2con-start1-er1.6
│ │ └─mul
│ │ data1_ori.bin
│ │ data2_cal.bin
│ │ data2_norm.bin
│ │ data2_ori.bin
│ │ mul_cal_data1.bin
│ │ mul_cal_data2.bin
│ │ mul_cal_mul.bin
│ │ mul_ori_data1.bin
│ │ mul_ori_data2.bin
│ │ mul_ori_mul.bin
│ │
│ ├─100p-1-4.2con-er1.3
│ │ │ cut_input_LNM2.bin
│ │ │ cut_response_LNM2.bin
│ │ │ cut_smooth_response_LNM2.bin
│ │ │ cut_x_LNM2.bin
│ │ │ data2_calinput.bin
│ │ │ data2_calresponse.bin
│ │ │ data2_interpo.bin
│ │ │ data2_norm befor interpo.bin
│ │ │ norm_smooth_response_LNM2.bin
│ │ │ ori_input_LNM2.bin
│ │ │ ori_response_LNM2.bin
│ │ │
│ │ └─mul
│ │ data1_ori.bin
│ │ data2_cal.bin
│ │ data2_norm.bin
│ │ data2_ori.bin
│ │ mul_cal_data1.bin
│ │ mul_cal_data2.bin
│ │ mul_cal_mul.bin
│ │ mul_ori_data1.bin
│ │ mul_ori_data2.bin
│ │ mul_ori_mul.bin
│ │
│ ├─20p-1-er1.3-thermal4.2-cut500head
│ │ │ cut_input_LNM2.bin
│ │ │ cut_response_LNM2.bin
│ │ │ cut_smooth_response_LNM2.bin
│ │ │ cut_x_LNM2.bin
│ │ │ data2_calinput.bin
│ │ │ data2_calresponse.bin
│ │ │ data2_interpo.bin
│ │ │ data2_norm befor interpo.bin
│ │ │ norm_smooth_response_LNM2.bin
│ │ │ ori_input_LNM2.bin
│ │ │ ori_response_LNM2.bin
│ │ │
│ │ └─mul
│ │ data1_ori.bin
│ │ data2_cal.bin
│ │ data2_norm.bin
│ │ data2_ori.bin
│ │ mul_cal_data1.bin
│ │ mul_cal_data2.bin
│ │ mul_cal_mul.bin
│ │ mul_ori_data1.bin
│ │ mul_ori_data2.bin
│ │ mul_ori_mul.bin
│ │
│ ├─20p-1-er1.8-thermal4.2-1000head100tail
│ │ │ cut_input_LNM2.bin
│ │ │ cut_response_LNM2.bin
│ │ │ cut_smooth_response_LNM2.bin
│ │ │ cut_x_LNM2.bin
│ │ │ data2_calinput.bin
│ │ │ data2_calresponse.bin
│ │ │ data2_interpo.bin
│ │ │ data2_norm befor interpo.bin
│ │ │ norm_smooth_response_LNM2.bin
│ │ │ ori_input_LNM2.bin
│ │ │ ori_response_LNM2.bin
│ │ │
│ │ └─mul
│ │ data1_ori.bin
│ │ data2_cal.bin
│ │ data2_norm.bin
│ │ data2_ori.bin
│ │ mul_cal_data1.bin
│ │ mul_cal_data2.bin
│ │ mul_cal_mul.bin
│ │ mul_ori_data1.bin
│ │ mul_ori_data2.bin
│ │ mul_ori_mul.bin
│ │
│ ├─20points-1-er1.2-thermal4.3-quadarate
│ │ │ cut_input_LNM2.bin
│ │ │ cut_response_LNM2.bin
│ │ │ cut_smooth_response_LNM2.bin
│ │ │ cut_x_LNM2.bin
│ │ │ data2_calinput.bin
│ │ │ data2_calresponse.bin
│ │ │ data2_interpo.bin
│ │ │ data2_norm befor interpo.bin
│ │ │ norm_smooth_response_LNM2.bin
│ │ │ ori_input_LNM2.bin
│ │ │ ori_response_LNM2.bin
│ │ │
│ │ └─mul
│ │ data1_ori.bin
│ │ data2_cal.bin
│ │ data2_norm.bin
│ │ data2_ori.bin
│ │ mul_cal_data1.bin
│ │ mul_cal_data2.bin
│ │ mul_cal_mul.bin
│ │ mul_ori_data1.bin
│ │ mul_ori_data2.bin
│ │ mul_ori_mul.bin
│ │
│ ├─20points-1-er1.7-thermal4.1-extinc
│ │ │ cut_input_LNM2.bin
│ │ │ cut_response_LNM2.bin
│ │ │ cut_smooth_response_LNM2.bin
│ │ │ cut_x_LNM2.bin
│ │ │ data2_calinput.bin
│ │ │ data2_calresponse.bin
│ │ │ data2_interpo.bin
│ │ │ data2_norm befor interpo.bin
│ │ │ norm_smooth_response_LNM2.bin
│ │ │ ori_input_LNM2.bin
│ │ │ ori_response_LNM2.bin
│ │ │
│ │ └─mul
│ │ data1_ori.bin
(similar structure)
│ │
│ └─Result-20points-1.6
│ cut_input_LNM2.bin
│ cut_response_LNM2.bin
│ cut_smooth_response_LNM2.bin
│ cut_x_LNM2.bin
│ data1_ori.bin
│ data2_cal.bin
│ data2_calinput.bin
│ data2_calresponse.bin
│ data2_interpo.bin
│ data2_norm befor interpo.bin
│ data2_norm.bin
│ data2_ori.bin
│ mul_cal_data1.bin
│ mul_cal_data2.bin
│ mul_cal_mul.bin
│ mul_ori_data1.bin
│ mul_ori_data2.bin
│ mul_ori_mul.bin
│ norm_smooth_response_LNM2.bin
│ ori_input_LNM2.bin
│ ori_response_LNM2.bin
│
├─FigS4
│ ├─LNM
│ │ ├─linear
│ │ │ cut_input_LNM2.bin
│ │ │ cut_x_LNM2.bin
│ │ │
│ │ └─random
│ │ │ data2_cal.bin
│ │ │ data2_ori.bin
│ │ │ mul_cal_mul.bin
│ │ │ mul_ori_mul.bin
│ │ │ mul_ref.bin
│ │ │
│ │ └─0.45
│ │ input_cali_cal.csv
│ │ input_cali_ori.csv
│ │ input_data_cal.csv
│ │ input_data_ori.csv
│ │ LNM_cali_index.csv
│ │ LNM_data_cali.csv
│ │ LNM_data_norm.csv
│ │ LNM_data_ori.csv
│ │ LNM_value.csv
│ │ response_cali_cal.csv
│ │ response_cali_ori.csv
│ │ response_data_cal.csv
│ │ response_data_ori.csv
│ │
│ └─VCSEL
│ input_10MHz.bin
│ response_10MHz.bin
│
└─FigS8
LNM_data_ori.csv
mul_TIindex.csv
ref_C34_data_cal.dat
response_C34_I1_data_cal.dat
response_C34_I2_data_cal.dat
response_C34_RF_data_cal.dat
- All data can be regenerated using the code `Supplementary_prepared.ipynb`
  - Function `waveformcut`: extracts part of the time trace
  - Function `norm`: normalizes the waveform into the range [0,1]
  - Function `errorate`: calculates the error between the result and the reference
  - Function `ave`: averages the waveform over each time step
  - Function `samplingData`: down-samples the data to filter out noise, similar to `ave`
  - Function `imagegen`: generates the image time trace loaded on the LNM
  - Function `weightsgen`: generates the weight time trace loaded on the VCSEL
  - Function `sampleTI`: simulates the integration of the experimental results
  - Function `digitTI`: simulates the integration of the reference signal
  - Function `sampleTImultiple`: time-integrates multiple images
  - Function `loadnn`: loads the pre-trained model
  - Function `readDAQ`: loads the time trace (`.mat` file; check the key in `Supplementary_prepared.ipynb`)
- All the models and input data are stored in the folder `model`
- The data for all 7 channels is stored in the folder `FigS10\1ch-C31-C37`; all signals are recorded as photonic voltage (V) with a time step of 10 ns
  - `1layer_200im_cal_1.1.(A,C,D).mat` can be ignored; they hold the data for triggering acquisition and integration
  - `1layer_200im_cal_back_1.1.*.mat` is the background signal for calibration, with the same structure as the experimental results above
  - `LNM_cali_index.csv` is the voltage index for the linearity calibration
  - `LNM_value.csv` is the corresponding data for the linearity calibration
  - `mul_TIindex.dat` is the trigger for integration
  - `ref_ramp_ori.csv` is the reference calibration index
  - `response_C34_ramp_ori.csv` is the original calibration index
  - `response_cali_ori.csv` is the original response
  - `run0_ref_TI_C34_data_cal.dat` is the simulated reference for the C34 channel
  - `run0_ref_TI_C35_data_cal.dat` is the simulated reference for the C35 channel
- The data for the furthest channels is stored in the folder `FigS10\2ch-2layer-MNIST-500-C31-C36`, with a similar file structure
- The data for the adjacent channels is stored in the folder `FigS10\2ch-2layer-MNIST-500-C34-C35`, with a similar file structure
- The data for the wavelength response is stored in the folder `FigS3`
  - `balanceHLIM-1480-1640.dat` is the response of the MZM, with the first column as wavelength (nm) and the second column as photonic voltage (V)
- The data for the linear calibration is stored in the folder `FigS3\linear\20p-1-er1.3-thermal4.2-cut500head`; other files in the `linear` folder can be ignored
- The data for the VCSEL encoding accuracy is stored in the folder `FigS4\VCSEL`; all data here is recorded as voltage (V) over time (s)
- The data for the Lithium Niobate modulator encoding accuracy is stored in the folder `FigS4\LNM\random\0.45`; only `input_data_ori.csv` and `response_data_cal.csv` are used, and other files can be ignored. All data here is recorded as voltage (V) over time (s)
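Two of the helpers described above are simple enough to sketch; these are hedged reimplementations for orientation only, and the originals in `Supplementary_prepared.ipynb` may differ in edge-case handling:

```python
# Hedged reimplementations of `norm` and `ave` as described above.
def norm(wf):
    """Scale a waveform into the range [0, 1]."""
    lo, hi = min(wf), max(wf)
    return [(v - lo) / (hi - lo) for v in wf]

def ave(wf, step):
    """Average the waveform over each time step of `step` samples."""
    return [sum(wf[i:i + step]) / step
            for i in range(0, len(wf) - step + 1, step)]

print(norm([1.0, 2.0, 3.0]))         # [0.0, 0.5, 1.0]
print(ave([1.0, 3.0, 2.0, 4.0], 2))  # [2.0, 3.0]
```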
Supplementary2
\---Supplementary2
1.7b-0.5-0.5.csv
1.7b-0.5-0.5_1g.csv
1.7b-0.5-0.5_5g.csv
10G.csv
1G.csv
5G.csv
imagedata.csv
precision.ipynb
- `precision.ipynb` contains the code to regenerate the time-trace calibration plot, MNIST images, error histogram, and error-analysis plot
- `imagedata.csv` contains the encoded MNIST image
- `1.7b-0.5-0.5.csv`, `1.7b-0.5-0.5_1g.csv`, and `1.7b-0.5-0.5_5g.csv` are the raw data for 10 GHz, 1 GHz, and 5 GHz encoding
- `10G.csv`, `1G.csv`, and `5G.csv` are the time-trace data after calibration
Fig_S9-Precision_Check.ipynb
- This is the simulation code for neural-network inference accuracy under different bit precisions
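The core of a bit-precision sweep is a uniform quantizer. A hedged sketch, assuming signals normalized to [0, 1]; the notebook's exact quantization scheme may differ:

```python
# Hedged sketch: quantize a normalized signal to b bits (2**b levels).
def quantize(x, bits):
    """Round each value in [0, 1] to the nearest of 2**bits uniform levels."""
    levels = 2 ** bits - 1
    return [round(v * levels) / levels for v in x]

x = [0.0, 0.3, 0.6, 1.0]
print([round(v, 3) for v in quantize(x, 2)])  # [0.0, 0.333, 0.667, 1.0]
```

Sweeping `bits` over, say, 1 to 8 and re-running inference on the quantized inputs gives one accuracy curve per precision.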
Sharing/Access information
- For the inference demonstration, all the data was accessed from the MNIST dataset, downloaded and imported directly from PyTorch.
- For more information, see: https://docs.pytorch.org/vision/main/datasets.html
Code Description
- To run the data-processing code, please check that you have installed all the required libraries and imported them correctly
- Notice: be careful when accessing the files inside; all the data-processing code depends on the exact path of each file. A missing file will cause errors when re-processing the data
- Before running, check where you stored and extracted the files, and change `FileDic` accordingly. With the same file-path tree, the code should work; this is the only variable you need to change to re-process the data and redraw the figures
- Most of the plots are generated as SVG files, so their size may look different from the published version. To check more details, open the generated SVG files in vector-graphics software and resize the figures
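The path convention above can be sketched as follows; `FileDic` is the variable named in the notebooks, while the joining helper and the example location are illustrative assumptions:

```python
# Hedged sketch: point FileDic at your extract location and build every
# data path from it, so only one variable changes between machines.
from pathlib import Path

FileDic = Path("/data/HITOP")  # <- change this to where you extracted the archives

def data_path(*parts):
    """Build a data-file path under FileDic (illustrative helper)."""
    return FileDic.joinpath(*parts)

print(data_path("Fig2", "PIV VCSEL PD.csv").as_posix())
# /data/HITOP/Fig2/PIV VCSEL PD.csv
```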
Fig 4
Fig 4b
- The data for the whole time trace is provided
- To check the time trace at different time windows, change `head` and `tail` in the `Slice` section to different time steps and rerun the `Slice` section
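The `Slice` step boils down to a window over the recorded trace; a minimal sketch where `head` and `tail` match the notebook's variable names and the rest is illustrative:

```python
# Hedged sketch of the Slice step: head/tail select a window (in time steps).
def slice_trace(trace, head, tail):
    """Return trace[head:tail], the time window to inspect or re-plot."""
    return trace[head:tail]

trace = list(range(10))          # stand-in for the recorded waveform
print(slice_trace(trace, 2, 5))  # [2, 3, 4]
```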
Fig 4C
- Here we only show how we process the data, using the example digit `7`
- To check other examples, see Fig S8
Fig 4d-e
- All the matrices given are generated by Fig S8
Fig 5
Fig 5a
- Similar to Fig 4a, but with the dataset switched; you can try switching the dataset and the loaded models to get a similar figure
Fig 5b
- To get this distribution, you need to run the section `Fig 5c` first, then run the section `Fig 5b`
- To check other examples, change `loopstart` to a different index to select other images
Fig 5c
- We provide the whole measured dataset for this matrix
- Notice: this takes about 1 hour to run on an RTX 3070 GPU
Fig 5d-e
- Notice: this takes about 5 minutes to run on an RTX 3070 GPU
- After running Fig 5e, you can run Fig 5d and change `loopstart` to check the results for different images
Fig S3
- We provide the original data
- The left plot corresponds to the `left` section and the right plot corresponds to the `right` section
Fig S4
- We provide the original data
- The left plots correspond to the `VCSEL left` section, and the right plots correspond to the `LNM right` section
Fig S5-7
- We provide the original data
Fig S8
- We provide the original data
- Running all sections in sequence should produce all the plots
Fig S9
- We provide the simulation code for the neural network trained for MNIST with different bit-precision.
- By running all the code blocks, you can generate each curve for the different datasets and put them together, as shown in the supplementary information
Fig S10
Fig S10a
- Here we provide the data for generating all 7 matrices.
- To check different channels, go to the path `<your path>\SI\FigS10\1ch-C31-C37`, select the corresponding channel folder, then change `filepath` to your desired folder and rerun the code
Fig S10b
- For the adjacent channel, the two sections are linked to 2 adjacent channels, and each section yields a set of confusion matrices.
Fig S10c
- Same as Fig S10b, but for the furthest channels in HITOP: the two sections are linked to the 2 furthest channels, and each section yields a set of confusion matrices
