
Data from: Simple visual stimuli are sufficient to drive responses in action observation and execution neurons in the macaque ventral premotor cortex

Cite this dataset

De Schrijver, Sofie; Decramer, Thomas; Janssen, Peter (2024). Data from: Simple visual stimuli are sufficient to drive responses in action observation and execution neurons in the macaque ventral premotor cortex [Dataset]. Dryad. https://doi.org/10.5061/dryad.kwh70rzc7

Abstract

Neurons responding during action execution and action observation were discovered in the ventral premotor cortex three decades ago. However, the visual features that drive the responses of Action Observation/Execution Neurons (AOENs) have not yet been identified. We investigated the neural responses of AOENs in ventral premotor area F5c of four macaques during the observation of action videos and crucial control stimuli. The large majority of AOENs showed highly phasic responses during the action videos, with a preference for the moment at which the hand made contact with the object. They also responded to an abstract shape moving towards but not interacting with an object, even when the shape moved on a scrambled background, implying that most AOENs in F5c do not require the perception of causality or a meaningful action. Additionally, the majority of AOENs responded to static frames of the videos. Our findings show that very elementary stimuli, even without a grasping context, are sufficient to drive responses in F5c AOENs.

README: Simple visual stimuli are sufficient to drive responses in action observation and execution neurons in the macaque ventral premotor cortex

This README file was generated on 2024-04-04 by Sofie De Schrijver

GENERAL INFORMATION

  1. Title of Dataset: Dataset for 'Simple visual stimuli are sufficient to drive responses in action observation and execution neurons in the macaque ventral premotor cortex'

  2. Author Information
    Principal Investigator (Research Facilities) Information
    Name: Peter Janssen
    ORCID: 0000-0002-8463-5577
    Institution: KU Leuven
    Address: Laboratory for Neuro- and Psychophysiology
    Department of Neurosciences
    ON2 - Herestraat 49
    3000 Leuven (Belgium)
    Email: peter.janssen@kuleuven.be

    First Author Information
    Name: Sofie De Schrijver
    ORCID: 0000-0002-9063-8790
    Institution: KU Leuven
    Address: Laboratory for Neuro- and Psychophysiology
    Department of Neurosciences
    ON2 - Herestraat 49
    3000 Leuven (Belgium)
    Email: sofie.deschrijver@kuleuven.be

  3. Date of data collection:
    2019-02-21 to 2023-05-11

  4. Geographic location of data collection:
    Leuven/Flemish Brabant, Belgium

  5. Information about funding sources that supported the collection of the data:
    Fonds Wetenschappelijk Onderzoek (FWO): G.097422N
    KU Leuven: C14/18/100
    KU Leuven: C14/22/134
    The funders had no role in study design, data collection and interpretation,
    or the decision to submit the work for publication.

SHARING/ACCESS INFORMATION

  1. Licenses/restrictions placed on the data: N/A

  2. Links to publications that cite or use the data: None yet

  3. Links to other publicly accessible locations of the data: N/A

  4. Links/relationships to ancillary data sets: N/A

    Was data derived from another source? No

  5. Recommended citation for this dataset:
    De Schrijver, Sofie; Decramer, Thomas; Janssen, Peter (2024). Data from: Simple visual stimuli
    are sufficient to drive responses in action observation and execution neurons in the macaque
    ventral premotor cortex [Dataset]. Dryad. https://doi.org/10.5061/dryad.kwh70rzc7

DATA & FILE OVERVIEW

  1. Description of dataset

The following .mat files contain the filtered data acquired during the study, organized per figure. The name of each file includes the task, the animal name, and the date and hour of collection.

Monkey1 = Isaac, Monkey2 Right = Sky1, Monkey2 Left = Sky2, Monkey3 = Loki, Monkey4 = Vino

For each .mat file, there is a corresponding .nex5 file that contains the sorted spikes.
For files generated after 2020, there is an extra .mat file named *_ReplayedMUA.mat. This file contains the complete multi-unit activity, because only part of it was recorded and stored in the original .mat file due to a change in the recording settings.
Additionally, .mat files with the name 'mgrasp_dark' were generated starting from June 2020. This task is the same grasping task, but performed in the dark.
These data were not used to generate figures.

  2. File List:

Figure 2 and supplementary Figures 1,5:
Isaac_vgrasp_20190221_1121_B
Isaac_vgrasp_20190225_1101_B
Isaac_vgrasp_20190226_1052_D
vgrasp_Sky1_20200107_1143_C
vgrasp_Sky1_20200109_1032_A
vgrasp_Sky1_20200110_1007_A
vgrasp_Sky2_20200615_1015_A
vgrasp_Sky2_20200616_0925_A
vgrasp_Sky2_20200617_0924_A
vgrasp_Loki_20210525_1000_B
vgrasp_Loki_20210528_0929_A
vgrasp_Loki_20210531_0938_A
vgrasp_Vino_20221014_0955_A
vgrasp_Vino_20221017_0940_A
vgrasp_Vino_20221027_1052_B

Figure 3:
Isaac_mirrormovies_20190226_1059_A

Figures 4,5,6,7 and supplementary Figures 1,2,3,4,5:
Isaac_mirrormovies_20190221_1127_A
Isaac_mirrormovies_20190225_1107_B
Isaac_mirrormovies_20190226_1059_A
mirrormovies_Sky1_20200107_1103_B
mirrormovies_Sky1_20200109_1044_A
mirrormovies_Sky1_20200110_1038_B
mirrormovies_Sky2_20200615_1048_A
mirrormovies_Sky2_20200616_0958_A
mirrormovies_Sky2_20200617_1006_A
mirrormovies_Loki_20210525_1037_B
mirrormovies_Loki_20210528_0950_A
mirrormovies_Loki_20210531_1004_A
mirrormovies_Vino_20221014_1028_A
mirrormovies_Vino_20221017_1014_A
mirrormovies_Vino_20221027_1127_A

Supplementary Figure 3:
mirrormovies_Loki_20230511_0940_A

METHODOLOGICAL INFORMATION

  1. Description of methods used for collection/generation of data

Surgery and Recording Procedures:
Four male rhesus monkeys (Macaca mulatta, 8 kg) were implanted with a titanium head post that was fixed to the skull with dental acrylic and titanium screws. After training in a passive fixation task and a grasping task, a 96-channel microelectrode Utah array with 1.5 mm electrode length and an electrode spacing of 400 µm (4x4 mm; Blackrock Neurotech, UT, USA) was inserted under general anesthesia, guided by stereotactic coordinates and anatomical landmarks. We inserted the arrays using a pneumatic inserter (Blackrock Neurotech) with a pressure of 1.034 bar and an implantation depth of 1 mm. During all surgical procedures, the monkey was kept under propofol anesthesia (10 mg/kg/h) and strict aseptic conditions. Postoperative anatomical scans (Siemens 3T scanner, 0.6 mm resolution) verified the position of the Utah array in ventral premotor area F5c (Figure 1A, top left panel), contralateral to the monkey’s working hand. The remaining three panels of Figure 1A show the exact location of the five implantations in the four monkeys. The posterior edge of the 4x4 mm arrays was located 0 to 5 mm anterior to a vertical line extending down from the spur of the arcuate sulcus. Thus, our recording sites covered a substantial part of the inferior frontal convexity.
Due to an implant failure in the second monkey, another Utah array was implanted in area F5c of the other hemisphere (‘Monkey 2 Left’ in Figure 1A). All surgical and experimental procedures were approved by the ethical committee on animal experiments of the KU Leuven and performed according to the National Institutes of Health’s Guide for the Care and Use of Laboratory Animals and EU Directive 2010/63/EU.
During a recording session, data were collected using a 96-channel digital headstage (Cereplex M) connected to a digital neural processor and sent to a Cerebus data acquisition system (Blackrock Neurotech, UT, USA). Single- and multi-unit signals were high-pass filtered (750 Hz) and sampled at 30 kHz. The threshold to detect multi-unit activity was set to 95% of the average noise level. The data were subsequently sorted offline with a refractory period of 1 ms to isolate single units, using the Offline Sorter software (Plexon, Inc., Dallas, TX, USA). Overall, we had a good yield over the implanted array in each monkey, with approximately 60 channels with detectable single units in every session.

Experimental setup:
The monkey was trained to sit upright in a primate chair with his head fixed during all experimental procedures. During the recording of neuronal activity, the monkey had to perform two different tasks sequentially in blocks (typically 20 trials per condition): a grasping task and a passive fixation task.
For the grasping task, a custom-built object containing three identical small spheres was placed in front of the monkey at a 28 cm viewing distance (ref. 55). The spheres (2.5 cm diameter) were attached to a disk (15 cm diameter) with springs, allowing the monkey to pull the spheres. Each sphere contained a blue LED that could be turned on and off individually, and was positioned at an angle of 120 degrees relative to the other two spheres. In the center of the disk, a green LED served as the fixation point, and the dimming of this green LED was the go signal for grasping. During the grasping task (Figure 1B), the monkey had to grasp one of the three identical spheres (indicated with the blue LED) in a pseudorandom order. The position of the hand was monitored using infrared laser beams, which were interrupted when the hand was in the resting position. An infrared-based camera system (Eyelink 1000; SR Research, Ontario, Canada) monitored the eye movements to ensure fixation on the object during each trial.
For the passive fixation task, a display (17.3 inch) was placed in front of the monkey at the same viewing distance as the object in the grasping task. The monkey had to maintain fixation on a red dot in the center of the screen during the presentation of different videos. Eye movements were monitored to ensure fixation inside a ~2 degree fixation window, using the same infrared-based camera system as in the VGG task. A photodiode attached to the lower right corner of the screen registered the onset of each video by detecting a bright square (not visible to the monkey) that appeared simultaneously with the onset of the video. Photodiode pulses were sampled at 30 kHz on the Cerebus data acquisition system to allow synchronization with the neural data.

Visually guided and memory-guided grasping task (VGG and MGG):
In every recording session, the monkey had to perform a delayed visually guided reach-to-grasp task (Figure 1B). To start a trial, the monkey had to place its hand on a resting position in complete darkness. After 500 ms of fixation on a green LED in the center of the disk, an external light illuminated the object. At the same time, a blue LED appeared on one of the three spheres, indicating the sphere to be grasped. After a variable time (700-1000 ms), the green LED dimmed (i.e., the go cue), instructing the monkey to release the resting position, grasp the sphere with the illuminated blue LED, and pull it to obtain a juice reward. The complete movement, from releasing the resting position to pulling the object, could last maximally 1000 ms, to ensure the shortest and most efficient reach trajectory. During the grasping task, the opposite hand was gently restrained to avoid movement. In the memory-guided version of the grasping task (MGG), all events were identical to the VGG task, except that the light above the object and the blue LED on the target both went off after 300 ms, so that the animal had to grasp and pull the object in the dark after a delay of 700 to 1000 ms. To avoid any influence of the reward on the activity around the pull of the object, the reward was administered at least 40 ms after the detection of the pull.

Action observation task:
To initiate a trial, the monkey had to fixate on a small red dot that appeared in the center of the screen. After 300 ms of passive fixation, a video started. The monkey had to maintain its gaze on the fixation point during the presentation of the stimulus (15.5 x 10.3 visual degrees). In total, twelve conditions (one video per condition) were shown in pseudorandom order during the task (Figure 1C, supplemental material). In brief, the stimulus set included six videos filmed from the point of view of the monkey (Viewpoint 1) and six videos filmed from the side (Viewpoint 2). Both viewpoints included a monkey and a human performing the VGG task (‘Monkey grasp’ and ‘Human grasp’, respectively), a human performing the same task without pulling the sphere (‘Human touch’), and a video without any movement and without a human or monkey hand visible (‘Static’). After pulling the sphere, the hand moved back to the starting position. Additionally, four videos were shown in which an ellipse (a scrambled version of the monkey hand; major axis ± 95 mm, minor axis ± 43 mm, as in ref. 12) moved towards the object with the same kinetic parameters as the hand in the action videos. Since the period before movement onset in the ellipse videos differed from that in the action videos, the total length of the ellipse videos was slightly different from that of the action videos. The background of the video was either the natural background (‘Ellipse’) or a scrambled version of the natural background (‘SCR background’). We also presented videos of a static frame of the Human grasp action video, in which the hand was either halfway towards the object or interacting with the object. All videos were made using the object with the three spheres from the grasping task and lasted between 2.6 and 3.5 s. Both arms of the monkey were restrained during the action observation task to prevent movement.

  2. Methods for processing the data

All data were analyzed using custom-written MATLAB scripts (R2019b; The MathWorks, MA, USA). For each trial, we calculated the net spike rate in 50 ms bins by subtracting the baseline activity (the average spike rate in a 200 ms interval before object onset or before onset of the action video) from the spike rate after stimulus start (either object or action video). We analyzed three recording sessions for each implantation. Because the recording signal was unstable in the first weeks after implantation, we considered all spikes recorded on different days as different units. However, we verified that the results were essentially the same when analyzing a single recording session for each of the three implantations. All analyses were performed on single-unit activity (SUA) and multi-unit activity (MUA) independently, and averaged across the three spheres that had to be grasped. Task-related neurons were significantly positively modulated (at least five spikes/s, and three standard errors above baseline activity for at least 150 ms) during the VGG task in any of three epochs of the task (Go cue, Lift of the hand, and Pull). Action Observation/Execution Neurons (AOENs) were defined as task-related (VGG) and significantly positively modulated (at least five spikes/s, and three standard errors above baseline activity for at least 200 ms) during passive viewing of any of the action videos. Likewise, neurons were considered significantly negatively modulated during a task when the minimal spike rate was no more than five spikes/s and the average activity was at least three standard errors below the baseline activity for at least 200 ms. The average net spike rate was calculated for 15 to 35 repetitions per condition for each task.
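As an illustration of this binning and baseline-correction step, here is a minimal MATLAB sketch on simulated spike times; the variable names (spikeTimes, onset) and the 3 s analysis window are assumptions for illustration, not the exact analysis code.

```matlab
% Minimal sketch of the net-spike-rate computation described above.
% Simulated data; in the real files, spike timestamps come from elecSpikes
% and stimulus onset from Cue (vgrasp) or PhotoEvents (mirrormovies).
rng(1);
spikeTimes = sort(rand(1, 300) * 4000);       % spike timestamps in ms (simulated)
onset      = 1000;                            % stimulus onset in ms (assumed)
binWidth   = 50;                              % bin width in ms
baseWin    = 200;                             % baseline window before onset, in ms

% Baseline: average spike rate in the 200 ms before stimulus onset (spikes/s)
baseRate = sum(spikeTimes >= onset - baseWin & spikeTimes < onset) / (baseWin / 1000);

% Net spike rate per 50 ms bin after stimulus start (spikes/s)
edges   = onset : binWidth : onset + 3000;    % 3 s analysis window (assumed)
counts  = histcounts(spikeTimes, edges);
netRate = counts / (binWidth / 1000) - baseRate;
```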
To assess the selectivity of our AOEN sample, we calculated the average response to each action video as the average spike rate in a 200 ms interval around the maximum, and then calculated a two-way analysis of variance on these average responses with factors viewpoint (Viewpoint 1 and Viewpoint 2) and action type (Human touch, Human grasp, and Monkey grasp). This way, we tested for viewpoint selectivity and for congruence of the action during execution and observation. Additionally, we calculated the d' selectivity index for each neuron to quantify viewpoint selectivity:

d' = (mean(VP1) - mean(VP2)) / sqrt((var(VP1) + var(VP2)) / 2),

with VP1 = viewpoint 1 and VP2 = viewpoint 2. We determined the preferred video as the one eliciting the highest firing rate for each SUA or MUA site. To capture the phasic nature of the responses during action observation, we used the MATLAB function ‘findpeaks’ to detect peaks in the normalized (divided by the maximum) net firing rate to each video, with a minimal prominence (i.e., the decline in spike rate on either side of the peak) of 0.8, discarding sites that had more than three peaks due to noisy responses. We analyzed the peaks identified by the MATLAB function regardless of the time epoch, to account for the variable length of the videos.
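A minimal sketch of these two measures on hypothetical per-trial responses (all numbers below are made up for illustration); 'findpeaks' with 'MinPeakProminence' requires the Signal Processing Toolbox.

```matlab
% d' viewpoint-selectivity index on hypothetical trial responses
respVP1 = [12 15 11 14 13];               % responses for Viewpoint 1 (spikes/s)
respVP2 = [ 8  9  7 10  8];               % responses for Viewpoint 2 (spikes/s)
dPrime  = (mean(respVP1) - mean(respVP2)) / ...
          sqrt((var(respVP1) + var(respVP2)) / 2);

% Peak detection on a normalized net firing rate (one value per 50 ms bin)
netRate  = [0.5 1.2 6.8 9.5 2.1 0.8 1.0 0.6];   % hypothetical response profile
normRate = netRate / max(netRate);              % normalize by the maximum
[pks, locs] = findpeaks(normRate, 'MinPeakProminence', 0.8);
tooNoisy = numel(pks) > 3;                % sites with more than three peaks were discarded
```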
To characterize the degree of tuning during passive viewing of the action videos, we calculated the full width at half maximum (FWHM) of the spike rate around the peak response to the preferred action video. For all AOENs, we then plotted the x and y positions of the hand with respect to the object, and the Euclidean distance (in pixels) to the object, 50 ms before the peak response occurred (to account for the latency of the neuronal response). A Kruskal-Wallis one-way ANOVA was used to test whether the neurons showed a significant preference for one of the three movement intervals: approaching the object, interacting with the object, and receding from the object.
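A minimal FWHM sketch on a hypothetical binned response profile (the half-maximum crossing is taken at bin resolution here; any interpolation used in the actual analysis is not specified in this README):

```matlab
% Full width at half maximum of a binned response (hypothetical values)
binWidth = 50;                               % ms per bin
netRate  = [2 5 14 28 35 26 12 6 3];         % net spike rate around the peak (spikes/s)

[peakVal, peakIdx] = max(netRate);
half  = peakVal / 2;
left  = find(netRate(1:peakIdx) < half, 1, 'last') + 1;           % first bin above half max
right = peakIdx - 1 + find(netRate(peakIdx:end) < half, 1) - 1;   % last bin above half max
fwhm  = (right - left + 1) * binWidth;       % FWHM in ms (150 for these values)
```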
Additionally, to test whether static frames of the action video could account for the observed responses, we plotted the peak firing rate during the preferred action video against the peak firing rate during the static frame videos (in which the hand was either touching the object or halfway along its trajectory towards the object). We then calculated the Pearson correlation coefficient between the two peak firing rates (action video and static frame video) for each static frame video. This control experiment was performed in a subset of the recorded F5c sample.
Because almost no AOEN responded during the entire video, but rather during specific epochs, we compared the maximal spike rate during the preferred action video to the maximal spike rate during the corresponding ellipse video with the normal background. Note that our analysis ignored the exact timing of the maximal firing rate, because the action videos and the ellipse videos differed in length. Ellipse neurons were defined as AOENs with a maximal spiking response to the ellipse video of at least 50% of the maximal spike rate during viewing of the preferred action video (analogous to ref. 12). Furthermore, to assess whether the F5c ellipse neurons were selective for the direction or the orientation of the movement, we used a Mann-Whitney U test to compare the average activity of each neuron during the approaching and the receding phase of the ellipse when it moved on the scrambled background. Since our action execution task only included object grasping, and in line with ref. 2, we defined strictly congruent AOENs as neurons that showed a significant preference for Human grasp over Human touch (based on a two-way ANOVA with factors perspective and action type; main effect of action type, p < 0.05). Broadly congruent AOENs were defined as neurons in which the main effect of action type was not significant. All post-hoc tests were calculated with Tukey’s honestly significant difference procedure.
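A hedged sketch of this congruence test in MATLAB (Statistics and Machine Learning Toolbox), on made-up per-trial responses; the grouping labels and trial counts are assumptions for illustration:

```matlab
% Two-way ANOVA with factors viewpoint and action type, plus Tukey's HSD
resp      = [14 12 15 13 12 14 6 5 7 9 8 10 8 7 9 5 4 6]';   % hypothetical responses (spikes/s)
viewpoint = [repmat({'VP1'}, 9, 1); repmat({'VP2'}, 9, 1)];
action    = repmat([repmat({'HumanGrasp'},  3, 1); ...
                    repmat({'MonkeyGrasp'}, 3, 1); ...
                    repmat({'HumanTouch'},  3, 1)], 2, 1);

[p, ~, stats] = anovan(resp, {viewpoint, action}, ...
                       'varnames', {'viewpoint', 'actiontype'}, 'display', 'off');
% A significant main effect of action type (p(2) < 0.05) with a grasp > touch
% preference would mark a neuron as strictly congruent; a non-significant
% main effect of action type marks it as broadly congruent.
c = multcompare(stats, 'Dimension', 2, 'CType', 'hsd', 'Display', 'off');
```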
Suppression AOENs were defined as neurons that were significantly positively modulated during the action execution task and significantly negatively modulated during the action observation task (as in ref. 8). To assess whether suppression AOENs also responded to the ellipse control video, we calculated the Pearson correlation coefficient between the average spike rate in a 200 ms interval around the most inhibitory activity during the preferred action video and the average spike rate in the corresponding interval of the ellipse video. In contrast to excitatory AOENs, suppression AOENs did not exhibit a phasic response to the videos. Therefore, we calculated the average spike rate in an interval instead of using the spike rate in one bin. For each of these AOENs, we defined the preferred action video as the action video with the lowest net spike rate during the movement of the hand.
To investigate whether muscle activity contributed to the neural responses observed during the action observation task, we measured the electromyographic (EMG) activity of the thumb and biceps muscles of the hand used in the VGG task during passive fixation of the action videos. The EMG signal was recorded in Monkey 3 using dry self-adhesive electrodes. The ground electrode was placed next to the recording electrode on the biceps muscle. Data were obtained with a multi-channel amplifier (EMG100C, BIOPAC Systems, Inc., CA, USA) and sampled at 10,000 Hz with a gain of 5000. After applying a bandpass filter between 2 and 30 Hz, the rectified EMG signal was aligned to the neural data. We then correlated the rectified EMG signal (in 50 ms bins) with the spiking activity of each AOEN in a 1000 ms interval (500 ms around the peak response and 500 ms one second before the peak response).
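A minimal sketch of this EMG preprocessing on simulated data; the Butterworth filter order and the zero-phase filtering (filtfilt) are assumptions, since the README only specifies the 2-30 Hz passband:

```matlab
% Band-pass filter, rectify, bin, and correlate EMG with spiking activity
fs     = 10000;                               % EMG sampling rate (Hz)
rawEMG = randn(fs * 3, 1);                    % 3 s of simulated raw EMG
                                              % (real data: column 4 of anInputData)
[b, a]  = butter(4, [2 30] / (fs / 2), 'bandpass');   % 2-30 Hz band-pass (order assumed)
rectEMG = abs(filtfilt(b, a, rawEMG));        % zero-phase filtering + rectification

binSamp = fs * 0.05;                          % samples per 50 ms bin
emgBins = mean(reshape(rectEMG, binSamp, []), 1)';    % binned rectified EMG
spkBins = randn(numel(emgBins), 1);           % stand-in for binned spike rates
r       = corr(emgBins, spkBins);             % Pearson correlation coefficient
```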

  3. Instrument- or software-specific information needed to interpret the data: All data analyses were performed in MATLAB (MathWorks, MA).

  4. Quality-assurance procedures performed on the data

For behavioral monitoring, we also continuously recorded the right eye position of each animal using an infrared-based camera system (Eye Link II, SR Research, Ontario, Canada), sampling the pupil position at 500 Hz.
We replicated all results in five implantations in four monkeys to ensure the robustness of the results.

  5. People involved with sample collection, processing, analysis and/or submission:

The same operator (Sofie De Schrijver) performed all of the experimental work and the data processing/analyses.

DATA-SPECIFIC INFORMATION

  1. Number of variables, label, description and units (for all the uploaded .mat files):

Each .mat file contains two variables that are necessary for the analysis of the data: cerebusDataA and tnsTrials. The third variable, 'tnsData', is only needed when analyzing the EMG data in the following file: mirrormovies_Loki_20230511_0940_A.
Some .mat files have an additional variable, 'cerebusDataB', which has the same structure as cerebusDataA but covers additional electrodes. When numbering electrodes, the electrodes of cerebusDataB follow those of cerebusDataA: if cerebusDataA contains 128 electrodes and cerebusDataB 64 electrodes, the electrodes of cerebusDataB are electrodes 129 through 192.
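A hedged MATLAB sketch of this numbering rule (the container type of elecSpikes and the concatenation direction are assumptions; adapt after inspecting a file):

```matlab
% Build one electrode-indexed spike list, letting cerebusDataB follow cerebusDataA
S = load('mirrormovies_Sky1_20200107_1103_B.mat');   % example file from the list above
spikesPerElec = S.cerebusDataA.elecSpikes;           % assumed: one entry per electrode

if isfield(S, 'cerebusDataB')
    % cerebusDataB continues the numbering, e.g. electrodes 129:192
    % when cerebusDataA holds electrodes 1:128
    spikesPerElec = [spikesPerElec, S.cerebusDataB.elecSpikes];
end
```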

A) cerebusDataA contains the neural data
Action execution task (vgrasp)
var1: elecSpikes; spike timestamps of each electrode. _0 in the name indicates multi-unit activity. Each timestamp is the time (in milliseconds) at which a spike event crossed the set threshold.
var2: Cue; timing of the Cue (in milliseconds) used to align the neural data to the task data.
other vars: N/A for this task.
Action observation task (mirrormovies)
var1: elecSpikes; spike timestamps of each electrode. _0 in the name indicates multi-unit activity. Each timestamp is the time (in milliseconds) at which a spike event crossed the set threshold.
var2: PhotoEvents; timing of the photoevents (in milliseconds) used to align the neural data to the task data. Each photoevent is recorded when a bright white square is presented on the screen.
var3: anInputData; the 4th column contains the raw EMG data
other vars: N/A for this task.

B) tnsTrials contains the task data
Action execution task (vgrasp)
var1: Index; trial number
var2: Start; start of the trial
var3: Stop; stop of the trial
var4: Answer; answer is 1 if the animal performs the trial correctly.
var5: TargetObject; sphere that needs to be grasped [0:3]
var11: Light; external light turns on and illuminates the object.
var12: Target; when the monkey pulls the correct sphere.
var14: Cue; two values that indicate when the cue light goes on and off. So the second value is the go cue.
other vars: N/A for this task
Action observation task (mirrormovies)
var1: Index; trial number
var2: Start; start of the trial
var3: Stop; stop of the trial
var4: Answer; answer is 1 if the animal performs the trial correctly.
var5: Stimulus; name of the video that is shown (POV = point-of-view of the monkey = Viewpoint 1; Side = filmed from the side = Viewpoint 2). Each stimulus name with its corresponding name in the paper: Human_Grip = Human Grasp; Human_Fist = Human touch; Obi_Monkey = Monkey; Human_Nogo = Static; Object_Ellipse = Ellipse; SCR_Ellipse = SCR background; Human_staticgrip = Static Interaction; Human_StaticApproach = Static Approach; Human_Disappear was not used in this study.
var11: PhotoEvents; timing of the photoevents (in milliseconds), three in a correct trial: (1) start of the trial, (2) start of the video, (3) end of the video. See the alignment sketch after this list.
other vars: N/A for this task
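A hedged sketch of aligning one electrode's spikes to video onset using the second photoevent; the struct-array layout of tnsTrials and the cell-array layout of elecSpikes are assumptions based on the descriptions above:

```matlab
% Align spikes of electrode 1 to video onset in the first correct trial
S      = load('mirrormovies_Sky1_20200107_1103_B.mat');   % example file
spikes = S.cerebusDataA.elecSpikes{1};     % timestamps in ms (layout assumed)

t = find([S.tnsTrials.Answer] == 1, 1);    % first correct trial (layout assumed)
videoOnset = S.tnsTrials(t).PhotoEvents(2);               % (2) = start of the video
aligned = spikes(spikes >= videoOnset - 500 & ...
                 spikes <= videoOnset + 3500) - videoOnset;   % ms relative to onset
```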

For files generated after 2020, there is an extra .mat file named *_ReplayedMUA.mat, containing a single variable, 'cerebusDataA'. Use this variable whenever the *_ReplayedMUA.mat file is present; otherwise, the correct multi-unit activity is stored in the general .mat file. The variable 'elecSpikes' has the same structure in both .mat files.
Each .mat file has a corresponding .nex5 file that contains the timestamps (in milliseconds) of the sorted spikes per electrode.
The variable structure is the same for the mgrasp_dark.mat files and the vgrasp.mat files, except that var11 in the tnsTrials variable contains two values: one for when the external light goes on and one for when it goes off.
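The multi-unit selection rule stated above can be expressed as a small sketch (the file name shown is one example from the file list; isfile requires MATLAB R2017b or later):

```matlab
% Prefer *_ReplayedMUA.mat for multi-unit activity when it exists
baseFile = 'mirrormovies_Loki_20210525_1037_B';     % example session
muaFile  = [baseFile '_ReplayedMUA.mat'];

if isfile(muaFile)
    M = load(muaFile);                  % complete multi-unit activity
else
    M = load([baseFile '.mat']);        % MUA stored in the general file
end
muaSpikes = M.cerebusDataA.elecSpikes;  % same structure in both files
```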

All data shown in the paper are from ventral premotor area F5c. However, not all electrodes included in the files were implanted in this area. The following list gives, per session date, the electrodes located in the ventral premotor area (see the lookup sketch after the list).
The electrode numbering is the same in the action execution (vgrasp) and the action observation (mirrormovies) tasks.

20190221 - elec 1:96
20190225 - elec 1:64
20190226 - elec 1:64
20200107 - elec 97:128
20200109 - elec 97:128
20200110 - elec 1:96
20200615 - elec 1:96
20200616 - elec 1:96
20200617 - elec 1:96
20210525 - elec 1:96
20210528 - elec 1:96
20210531 - elec 1:96
20221014 - elec 193:256
20221017 - elec 193:256
20221027 - elec 1:64 & elec 97:128
20230511 - elec 1:64 & elec 97:128
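For convenience, the list above can be turned into a lookup table; a minimal sketch (the containers.Map wrapper is our own suggestion, not part of the dataset):

```matlab
% F5c electrode numbers per session date, transcribed from the list above
f5cElecs = containers.Map( ...
    {'20190221','20190225','20190226','20200107','20200109','20200110', ...
     '20200615','20200616','20200617','20210525','20210528','20210531', ...
     '20221014','20221017','20221027','20230511'}, ...
    {1:96, 1:64, 1:64, 97:128, 97:128, 1:96, ...
     1:96, 1:96, 1:96, 1:96, 1:96, 1:96, ...
     193:256, 193:256, [1:64 97:128], [1:64 97:128]});

elecs = f5cElecs('20221027');   % -> [1:64 97:128]
```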

  2. Missing data codes: None

  3. Abbreviations used: N/A

Methods

We recorded single-unit and multi-unit activity in four macaques that were implanted with a 96-channel Utah array (Blackrock Neurotech, UT, USA) in ventral premotor area F5c. The neural signals were high-pass filtered (750 Hz) and sampled at 30 kHz. The data were subsequently sorted offline with a refractory period of 1 ms to isolate single units, using the Offline Sorter software (Plexon, Inc., Dallas, TX, USA). Neural data were recorded while the monkeys performed an action execution task and an action observation task to assess the visual selectivity of action observation/execution neurons (AOENs) in area F5c.

Funding

Research Foundation - Flanders, Award: G.097422N

KU Leuven, Award: C14/18/100

KU Leuven, Award: C14/22/134