Data from: Illusory speeding-up and slowing-down of objects moving at constant speed emerges from natural motion detection algorithms
Data files: Jan 27, 2025 version (12.59 KB)
Abstract
The footsteps illusion is a perceptual illusion in which two bars moving at the same constant speed over a striped background are seen as alternately accelerating and decelerating, like footsteps. The cortical mechanisms that give rise to footsteps and similar illusions remain to be fully understood and may reveal important neural computations. Using an implementation of the biologically inspired correlational model of motion detection, the 2DMD, this study had three aims: first, to reproduce perceptual speed oscillations in model simulations; second, to map empirical reports of multiple illusion configurations onto model outputs; and third, to infer from the successful model the role of multi-scale spatiotemporal channels in perception. We developed a 2DMD implementation with an added global (single-value) frame-by-frame dynamic readout to quantify the continuous and oscillating response components. We confirmed that the expected signature oscillatory motion response corresponded to the footsteps illusion, demonstrating that its amplitude varied according to empirically measured illusion strength. We showed that, with a global readout, the inherent pattern and contrast dependence of correlation detectors is sufficient to reproduce this surprising perceptual illusion. This evidence suggests that space-time correlation may be a fundamental sensory computation. Across species, filtering and global pooling operations might be adapted to process various complex phenomena.
README: Illusory speeding-up and slowing-down of objects moving at constant speed emerges from natural motion detection algorithms: simulation code & data
https://doi.org/10.5061/dryad.f4qrfj74m
Description of the data and file structure
Three data files from previously published experiments are included; they provide the empirical data plotted in comparison with the simulations. They are:
Kitaoka&Anstis_Footsteps-Table1-data02_UploadJan2025.csv
This file summarises the illusion-strength reports from seven participants (columns 3-9), each presented with 27 configurations of the footsteps illusion, numbered 1-27 in the first column. Ratings are on a scale of 1-10, with values slightly above 10 permitted for stronger illusions. Columns 10 and 11 hold the means and standard deviations. The names of the illusion variants are given in column 2; three cases (10, 11 and 18) are recreated in the simulations in the manuscript, in Figures 1 and 4.
Sunaga_et_al_Fig4rawJan2025.csv
This file contains data from the experiment by Sunaga et al. (2008), in which nine observers reported footsteps-illusion strength using a spatial-displacement comparison. The observers' data are in columns 2-10, and the three tested conditions are labelled in column 13. The third condition (drifting gratings; data in the last eight rows) is the speed manipulation tested in the manuscript and displayed in Figure 5.
Sunaga_et_al_Fig6raw_UploadJan2025.csv
This file contains data for sine- and square-wave grating backgrounds from the same nine participants, identified by number in column 1. The spatial and temporal conditions for the square- and sine-wave gratings are tested separately, for comparisons made in the upper and lower visual field relative to the centre, all labelled and recorded in columns 3-6. The averages are used as the empirical plots in Figure 6 of the manuscript.
Code/software
The submitted code folder contains MATLAB scripts written for MATLAB (MathWorks, 2022). They are organized into four categories.
A. Stimulus generation and preparation
anTwoDCorrMotDetBaseParamsB.m
Run this script first to predefine all the model background parameters for spatial and temporal operations at their default values. It generates the structure 'emd', which is used by the later functions; the model cannot be run without it.
GenSqrGrat.m
Generate an image sequence of drifting square waves as input into the motion detection model
INPUTS: (Lx,Ly,Fr, depth, hparams, vparams) - [i.e. x and y dimension lengths, number of frames, bit depth of output image, horizontal frequency and motion parameters and equivalent vertical parameters]
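For readers without MATLAB, the operation of GenSqrGrat.m can be illustrated with a minimal Python/NumPy sketch; the parameter names and the drift convention here are assumptions for illustration, not taken from the repository code:

```python
import numpy as np

def gen_square_grating(Lx=256, Ly=256, Fr=100, freq=4, speed=1):
    """Drifting vertical square-wave grating, luminance in [-1, 1].

    freq  : cycles across the horizontal extent (assumed parameterisation)
    speed : horizontal drift in pixels per frame
    """
    x = np.arange(Lx)
    frames = np.empty((Fr, Ly, Lx))
    for t in range(Fr):
        # sign of a sinusoid gives a square wave; shift by t*speed pixels
        row = np.sign(np.sin(2 * np.pi * freq * (x - t * speed) / Lx))
        frames[t] = np.tile(row, (Ly, 1))
    return frames
```

Each successive frame is the previous one translated by `speed` pixels, which is what makes the sequence suitable input for a motion detector.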
GenGratDrift.m
Generate an image sequence of drifting sinusoidal waves as input into the motion detection model
INPUTS: (Lx,Ly,Fr, depth, hparams, vparams) - [i.e. x and y dimension lengths, number of frames, bit depth of output image, horizontal frequency and motion parameters and equivalent vertical parameters]
GenSqrGratOverlay.m
Generate a grating background like GenSqrGrat.m, with a horizontal overlay region that occludes the grating.
INPUTS: (Lx,Ly,Fr, depth, hparams, vparams) - [i.e. x and y dimension lengths, number of frames, bit depth of output image, horizontal frequency and motion parameters, and equivalent vertical parameters; the overlay parameters are hard-wired in this version according to the Kitaoka & Anstis (2021) 'clearing in the forest' configuration]
GenFootstepBar.m
Generate a moving bar to simulate the footsteps illusion. Can be a single black/white bar or both.
INPUTS: (Lx,Ly,Fr, depth, aparams, bparams) - [i.e. x and y dimension lengths, number of frames, bit depth of output image, and size and motion parameters for the first (a) and second (b) bar]
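A minimal Python/NumPy sketch of the kind of output GenFootstepBar.m produces — one bar moving at constant speed on a uniform mid-grey background. The placement and parameter names below are illustrative assumptions, not the MATLAB function's:

```python
import numpy as np

def gen_footstep_bar(Lx=256, Ly=256, Fr=100, bw=16, bh=32, speed=1.0,
                     value=1.0, y0=64):
    """One moving bar on a zero (mid-grey) background, luminance in [-1, 1].

    bw, bh : bar width and height in pixels
    speed  : horizontal speed in pixels per frame
    value  : bar luminance (+1 white bar, -1 black bar)
    y0     : top edge of the bar (hypothetical placement parameter)
    """
    frames = np.zeros((Fr, Ly, Lx))
    for t in range(Fr):
        x0 = int(round(t * speed)) % (Lx - bw)  # wrap to stay in frame
        frames[t, y0:y0 + bh, x0:x0 + bw] = value
    return frames
```

Two such sequences (value = +1 and value = -1, at different rows) can be summed or overlaid to form the black/white bar pair of the footsteps stimulus.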
GenCombo.m
Combine two generated stimuli into a single one using either a linear rule or other nonlinear combination.
INPUTS: (backIM, forIM, CombRule) - [image to place at the back, image at the front, code for combination rule]
ConvMovToImSeq.m
Code to convert a video file (e.g. avi or mov) into an image sequence for computing model responses.
INPUTS: (filename,seq_name,imtype_out) - [input video file name, name for the output image sequence and output image file type]
B. Core EMD correlation functions
MakeFilter.m
Make a 2D spatial filter to be used for bandpass filter convolutions
INPUTS: (dPhi, filtParams) - [sampling distance in pixels and filter spatial and bandwidth parameters]
OUTPUTS: [ oFilti Norms ] - [output filter and normalisation factors containing max and min values]
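The exact filter is determined by filtParams; as an illustration of a 2D spatial bandpass kernel of the general kind applied before correlation, here is a difference-of-Gaussians sketch in Python/NumPy (this particular construction is an assumption, not necessarily the repository's filter):

```python
import numpy as np

def make_dog_filter(size=15, sigma_c=1.5, sigma_s=3.0):
    """2D difference-of-Gaussians bandpass kernel (centre minus surround)."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    d2 = xx**2 + yy**2
    centre = np.exp(-d2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-d2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    kernel = centre - surround
    kernel -= kernel.mean()  # zero DC response: bandpass, not lowpass
    return kernel
```

Removing the mean gives the kernel zero response to uniform luminance, so only spatial structure at intermediate scales drives the detector inputs.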
ExclReg.m
An operation to identify indices of spatial regions at the edges to be excluded, also used for colourmaps
INPUTS: (inIM, MM) - [input image and size of the spatial filter kernel]
OUTPUTS: [ExcInd ExBinIM ] - [indices of the excluded region and the output image with exclusions applied]
DoLowPass.m
Do temporal low-pass filtering on the input time sequence, implemented with a difference-quotient method.
INPUTS: (InSeries, nSteps, tau_cor) - [time series in, number of intermediate steps in filter and low pass time constant]
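The difference-quotient method is a forward-Euler discretisation of a first-order low pass. A single-step Python/NumPy sketch (omitting the nSteps subdivision of the repository function):

```python
import numpy as np

def do_low_pass(series, tau=10.0, dt=1.0):
    """First-order temporal low pass by the forward-Euler
    (difference-quotient) update y[t] = y[t-1] + dt/tau * (x[t] - y[t-1])."""
    series = np.asarray(series, dtype=float)
    out = np.empty_like(series)
    out[0] = series[0]
    a = dt / tau
    for t in range(1, len(series)):
        out[t] = out[t - 1] + a * (series[t] - out[t - 1])
    return out
```

The filtered output lags the input by roughly tau frames, and it is this lag that supplies the delay arm of the correlation detector.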
DoFilterArray.m
Run spatial filter on the image before the correlation steps
INPUTS: (inIMseq, dPhi, filtParams) - [input image sequence, sampling distance of detector in pixels and filter spatial parameters]
DoEMDArrays.m
Run core EMD correlation computation across image sequences and generate the outputs
INPUTS: (fImage,tau, dPhi, emd) - [filtered image as input, temporal filter constant, sampling distance in pixels, emd parameters ]
OUTPUTS: [oEMDiiH oEMDiiV oEMDiiR oEMDiiThet ] - [horizontal and vertical output matrices, in polar coordinates the response magnitude and motion directions]
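The correlation step at the heart of the model is the classic Hassenstein-Reichardt elementary motion detector. A minimal 1D opponent-detector sketch in Python/NumPy, illustrating the principle rather than the repository's 2D implementation:

```python
import numpy as np

def lowpass(x, tau=4.0):
    """Forward-Euler first-order low pass along the time axis (axis 0)."""
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    a = 1.0 / tau
    for t in range(1, len(x)):
        y[t] = y[t - 1] + a * (x[t] - y[t - 1])
    return y

def reichardt_1d(stim, tau=4.0, dphi=1):
    """Opponent EMD response for a (frames, pixels) stimulus.

    Each detector correlates the delayed (low-passed) signal from its left
    input with the undelayed right input, minus the mirror-symmetric term;
    positive output signals rightward motion.
    """
    lp = lowpass(stim, tau)
    left, right = stim[:, :-dphi], stim[:, dphi:]
    return lp[:, :-dphi] * right - left * lp[:, dphi:]
```

Because the output is a product of filtered luminance signals, it inherently depends on the pattern and contrast of the input — the property the manuscript identifies as sufficient, with a global readout, to reproduce the illusion.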
GlobMotionFita.m
A function that fits a dynamic function to the frame-by-frame output of the correlation model. It uses nlinfit to run an iterative fit.
INPUTS: (params,x) - [starting parameters specified in the paper and the x vector of input]
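The repository fits its dynamic function iteratively with nlinfit. To illustrate the same kind of readout fit, here is a Python/NumPy sketch that assumes a hypothetical model — a constant level plus a sinusoidal oscillation — and, because the oscillation frequency is held fixed, solves it by linear least squares rather than iteration:

```python
import numpy as np

def fit_global_motion(t, y, freq):
    """Fit y = offset + amp*sin(2*pi*freq*t + phase) for a known freq.

    With freq fixed, a*sin(w) + b*cos(w) = amp*sin(w + phase) makes the
    model linear in (offset, a, b), so ordinary least squares suffices.
    (GlobMotionFita.m instead fits its model iteratively via nlinfit.)
    """
    w = 2 * np.pi * freq * t
    A = np.column_stack([np.ones_like(t), np.sin(w), np.cos(w)])
    (offset, a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    amp = np.hypot(a, b)       # oscillation amplitude
    phase = np.arctan2(b, a)   # oscillation phase
    return offset, amp, phase
```

The recovered offset and amplitude correspond to the continuous and oscillating response components that the manuscript's readout quantifies.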
C. Simulations run for paper figures
SimFig2TestBar_NoIteration.m
Run the correlation estimation for a pair of bars (black and white), generate global responses from each frame for the largest responses defined by Gp, and plot the results. Used for Figure 2.
SimFig4TestPairBar.m
Complete simulation to compare output strength for three variants of the footsteps illusion and generate Figure 4 in the paper.
SimFig5TestParametric.m
Simulation to run through illusion predictions as speed is varied across 8 values. Used to generate Figure 5.
SimFig6Test_MultiChannelSimulations.m
Simulation of multichannel responses comparing sine- and square-wave backgrounds. Loops run through multiple parameter values, and the figures are generated after the plots.
D. Additional supporting code and external functions
skycolormap.m
A colourmap used to generate easily visible plots
remove_empty_bars.m
External function used during visualisation of data
nlinfit.m
A variant of MATLAB's standard nonlinear fitting function, used to control the fit specifications
INPUTS: (X,y,model,beta0) - [input x values and data/y values, model function and initial parameters]
OUTPUTS: [beta,r,J] - [output parameters, residuals and Jacobian of fits]
GroupedBarWithSD.m
Bespoke plotting function with specified colours
ColorMapZAxis.m
Adjusting colormap function for use with 3D plots
Access information
Other publicly accessible locations of the data:
- None
Data was derived from the following sources:
Kitaoka, A., & Anstis, S. (2021). A review of the footsteps illusion. Journal of Illusion, 2. https://doi.org/10.47691/joi.v2.5612
Sunaga, S., Sato, M., Arikado, N., & Jomoto, H. (2008). A Static Geometrical Illusion Contributes Largely to the Footsteps Illusion. Perception, 37(6), 902–914. https://doi.org/10.1068/p5689
Methods
For the simulations of the experimental data from Kitaoka & Anstis (2021) and Sunaga et al. (2008), corresponding input stimuli were generated using bespoke functions written in MathWorks MATLAB 2022 on PCs running Windows 10. The generated footsteps illusion (FI) image sequences recreated the three manipulations of spatial configuration, speed and background grating waveform, using matrices of 256 x 256 pixels (w x h) at 8-bit luminance depth, played over 100 frames (f). Each stimulus consisted of two moving bars (black and white) and a greyscale luminance grating background with values running from -1 (black) to 1 (white). Each foreground bar was characterised by its size BW x BH (typically 16 x 32 for a vertically oriented bar) and its speed BS in pixels per frame. The generated stimulus was approximately matched to the relative scales and proportions in the original studies. The stimulus matrix I(x,y,t) was used as input to the models, with x, y and t denoting horizontal position, vertical position and time, in pixels and frames respectively.
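The stimulus construction described above can be sketched in Python/NumPy as follows; the grating period, bar row positions and drift convention are illustrative assumptions, whereas the repository's MATLAB generators expose these as parameters:

```python
import numpy as np

Lx = Ly = 256    # stimulus width x height in pixels
Fr = 100         # number of frames
BW, BH = 16, 32  # bar width x height in pixels
BS = 1           # bar speed in pixels per frame

x = np.arange(Lx)
I = np.empty((Fr, Ly, Lx))
for t in range(Fr):
    # stationary square-wave grating background, luminance in [-1, 1]
    frame = np.tile(np.sign(np.sin(2 * np.pi * x / 32.0)), (Ly, 1))
    # white and black bars drifting rightward at the same constant speed
    x0 = (t * BS) % (Lx - BW)
    frame[80:80 + BH, x0:x0 + BW] = 1.0     # white bar (row placement assumed)
    frame[144:144 + BH, x0:x0 + BW] = -1.0  # black bar (row placement assumed)
    I[t] = frame
```

Although both bars move at identical constant speed, their contrast against the stripes they cross alternates in opposite phase, which is what drives the oscillating model response.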
The included code runs the simulations that generate the plots in Figures 2-6 of the published work.