Data from: Machine learning without a processor: Emergent learning in a nonlinear analog network
Abstract
The capabilities of digital artificial neural networks grow rapidly with their size, but so do the time and energy required to train them. The tradeoff is far better for brains, where the constituent parts (neurons) update their analog connections in ignorance of the actions of other neurons, eschewing centralized processing. Recently introduced analog electronic contrastive local learning networks (CLLNs) share this important decentralized property. However, their capabilities have been limited because existing implementations are linear. This dataset includes experimental demonstrations of a nonlinear CLLN, establishing a new paradigm for scalable learning. Included here are the data and scripts required to generate Figures 2-6 of the manuscript titled "Machine learning without a processor: Emergent learning in a nonlinear analog network".
https://doi.org/10.5061/dryad.8w9ghx3vx
Data from experiments using a nonlinear Contrastive Local Learning Network (CLLN).
Description of the data and file structure
Figures are generated by the numbered MATLAB scripts. Data and helper scripts are packaged in separate zip folders and must be uncompressed for the scripts to run properly: unzip them into the file structure below and ensure the MATLAB path includes both folders (a path-setup sketch follows the listing).
>helper_scripts/ {all .m files: Experiment2.m, ExperimentGroup.m, logic_truth_table3.m, makeOrthonormalModes.m, makePlotPrettyNow.m, network_project_superclass2.m}
>data/{all .mat files}
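A minimal path-setup sketch, assuming the two folders were unzipped into the current working directory (adjust the paths to your local locations):

% Add both unzipped folders to the MATLAB path so the numbered
% figure scripts can find the helper classes and the data files.
addpath('helper_scripts');
addpath('data');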
Data stored in OscilloscopeData.mat: a single array, T2, with columns Time (sec), Input Voltage (V), and Output Voltage (V).
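As an illustrative sketch (the array name T2 and the column order are as documented above), the trace can be loaded and plotted with:

% Load the oscilloscope trace and plot input/output voltage vs. time.
load('OscilloscopeData.mat');   % loads the array T2
t    = T2(:,1);                 % Time (sec)
vin  = T2(:,2);                 % Input Voltage (V)
vout = T2(:,3);                 % Output Voltage (V)
plot(t, vin, t, vout);
xlabel('Time (s)'); ylabel('Voltage (V)'); legend('Input', 'Output');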
Data stored in all other .mat files is contained within an "experiment" object (class defined in Experiment2.m), which represents a single experimental run and includes the following data fields. All units, conversions, and other calculations needed to create the figures are built into the included scripts (a unit-conversion sketch follows this field list).
MES: number of measurement steps for this experiment.
TRA: number of training datapoints
TST: number of testing datapoints
SOR: number of sources (inputs) used
TAR: number of targets (outputs) used
ETA: 128 × nudge factor. Unitless.
TLOC (TAR×2): for each output (target) node, its location in the network (row/column), with indices starting at 0.
SLOC (SOR×2): for each input (source) node, its location in the network (row/column), with indices starting at 0.
NODEMULT: maximum voltage measurable on nodes of the network (Volts)
GATEMULT: maximum voltage measurable on gate capacitors of the network (Volts)
TRAIN ((SOR+TAR)×TRA): training dataset. Rows are sources then targets; columns are datapoints. Values are 0-1; multiply by NODEMULT to get values in volts.
TEST ((SOR+TAR)×TST): testing dataset. Rows are sources then targets; columns are datapoints. Values are 0-1; multiply by NODEMULT to get values in volts.
HorizontalCapacitors, VerticalCapacitors (4×4×MES): gate voltage values (stored on capacitors) for each horizontal/vertical edge (first two indices are row/column) at each measurement step (third index). Values are 0-1; multiply by GATEMULT to get values in volts.
LearnTimes (1×MES): time spent learning between successive measurement steps (microseconds).
TestFreeState (4×4×MES×TST): node values (first two indices are row/column) of the entire Free network at each measurement step (third index) for each test datapoint (fourth index). Values are 0-1; multiply by NODEMULT to get values in volts.
TestClampedState (4×4×MES×TST): node values (first two indices are row/column) of the entire Clamped network at each measurement step (third index) for each test datapoint (fourth index). Values are 0-1; multiply by NODEMULT to get values in volts.
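For illustration, a minimal unit-conversion sketch; the filename example.mat and the variable name expt are hypothetical placeholders, and the field access assumes the fields above are public properties of the Experiment2 class:

% Load one experimental run and convert stored 0-1 values to volts.
S = load('example.mat');                                % hypothetical filename
expt = S.expt;                                          % hypothetical variable name (class Experiment2)
freeVolts  = expt.TestFreeState * expt.NODEMULT;        % Free-network node voltages (V), 4x4xMESxTST
gateVolts  = expt.HorizontalCapacitors * expt.GATEMULT; % gate voltages (V), 4x4xMES
trainVolts = expt.TRAIN * expt.NODEMULT;                % training data in volts, (SOR+TAR)xTRA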
Sharing/Access information
Data were taken in the experiments described in detail in the manuscript.
Code/Software
Scripts were written using MATLAB R2020b on Windows 10.
Methods are described in detail in the associated manuscript.