An intracortical brain-machine interface based on macaque ventral premotor activity
Abstract
The majority of brain-machine interface (BMI) studies have focused on decoding intended movements from neural activity of the primary motor (M1) and dorsal premotor cortex (PMd). The ventral premotor cortex (PMv), and more specifically area F5c, has been implicated in object grasping and action observation, and may represent an alternative for motor BMI control due to its phasic modulation during action observation. Using chronically implanted Utah arrays in F5c, PMd, and M1 in two male macaques, we compared the efficacy of controlling a motor BMI based on the neural activity of each area. PMv decoding reached similar or even higher success rates than M1 and PMd in a 2D cursor control task, especially when controlling for the number of motion-selective channels used by the decoder. We found similar results during a 2D robot avatar control task in a simulated 3D environment. At both the multi-unit and the population level, neural responses in all areas were highly similar during the training phase (passive observation of cursor movements) and the online decoding phase, and only a small subset of neurons modulated their selectivity for the direction of motion. Thus, ventral premotor area F5c may represent an alternative for online motor BMI control.
This README.md file was generated on 2026-01-22 by Sofie De Schrijver
Data for the study 'An intracortical brain-machine interface based on macaque ventral premotor activity'
[Access this dataset on Dryad, DOI: 10.5061/dryad.jsxksn0qd]
This dataset was generated to assess the effectiveness of a brain-computer interface (BCI) based on ventral premotor activity. During BCI control in this study, the subjects tried to move a cursor or a virtual avatar robot arm on a screen using only their brain activity. It has been shown that this can be done with neural activity from the primary motor cortex (M1) and the dorsal premotor cortex (PMd), but it was unknown whether this also works with activity from the ventral premotor cortex (PMv), and more specifically from ventral premotor area F5c. Therefore, this study compares the BCI performance of two subjects while they use their own neural activity from either PMv, PMd, or M1. The study employed three different tasks that all had the same goal: move the cursor or avatar from a central starting point on a 2D screen to one of eight possible targets that appear on the screen. The three tasks are:
A) a Parallel task, in which the subject not only controlled the cursor movements with the BCI but also performed the movements himself by sequentially touching the center and the target.
B) a Passive cursor task in which the subject only controlled the cursor movements with the BCI; the subject did not move his hands in this task.
C) a Passive avatar task that was identical to the Passive cursor task, except that the cursor was replaced by a virtual robot avatar arm and the subject had to control the avatar movements with the BCI. The Passive avatar task proved quite difficult to execute with a BCI, so a second version of this task was created in which the avatar movements were assisted, helping the subject reach the target.
The study was conducted in two male macaques that were implanted with three 96-electrode microelectrode arrays (one in each of the three areas: PMv, PMd, and M1). Overall, this study showed that neural activity from PMv can be used for BCI control in cursor control tasks with and without hand movements, and that a PMv-based BCI can reach similar performance to a PMd-based and an M1-based BCI. Furthermore, the study showed that neural activity in all three areas was rather similar during training (passively observing the cursor moving on the screen) and decoding (actively controlling the cursor movements with a BCI decoder).
Description of the data and file structure
- Description of dataset
The Data.zip folder contains the data of Monkey V and Monkey L in their respective folders. Each subject folder contains four subfolders (one for each task). Each task subfolder contains all the data files associated with that specific task in that specific subject. All .mat files correspond to the filtered data acquired during the study.
The name of each file includes the task, the animal name, date of collection and hour of collection.
Animal name: Monkey L = Loki, Monkey V = Vino
Tasks: For each recording session, there is a training file and a decoding file for each task (see the .txt files labeled 'fnames' in each monkey's folder to distinguish between training and decoding files). There are three separate tasks (plus an assisted variant of the avatar task), listed here under their common names:
a. Parallel task --> 'vgrasp_touch_centerout'
b. Passive cursor task --> 'fix_centerout'
c. Passive avatar task --> 'fix_centerout_avatar'
d. Passive assisted avatar task --> 'fix_centerout_avatar'
Note: distinguish between the assisted and non-assisted avatar tasks by looking at the two-letter code used for each file in the .txt file named 'fnames_decoding'. In this .txt file, all recorded decoding files are listed with a one- or two-letter code below each file name. If the code contains an 'N', the file contains data from a Passive avatar task, whereas a 'T' indicates that the file contains data from a Passive assisted avatar task.
The data described here were used for a brain-computer interface (BCI). For each task, there is a training file in which neural and behavioral data were collected from a subject. With this training dataset, a BCI decoder was trained. The BCI decoder was then used during the decoding file (the decoder translates the neural signals into motor commands in real time). Each recording session therefore contains two .mat files: a training file and a decoding file.
For tasks a, b, and c, the decoder used during the decoding file was trained on the training file of the same session. The decoder for task d was identical to the decoder trained for task c.
General structure of the data folder:
Data.zip
| - subject name/
| | - fnames_training.txt (list of all training files with behavioral data)
| | - fnames_decoding.txt (list of all decoding files with behavioral data)
| | - fnames_training_spikes.txt (list of all training files with neural and behavioral data)
| | - fnames_decoding_spikes.txt (list of all decoding files with neural and behavioral data)
| | - task name/
| | | - [Task]_[Subject]_[YYYYMMDD]_[HHMM]_[SessionID]_pkl.mat
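The file names can be split into their parts with a short Python helper. This is a sketch only: the regular expression and the field labels ('task', 'subject', etc.) are our own, not part of the dataset.

```python
import re

# Hypothetical parser for names of the form
# [Task]_[Subject]_[YYYYMMDD]_[HHMM]_[SessionID]_pkl.mat
# as illustrated by the examples in the file list below.
FNAME_RE = re.compile(
    r"(?P<task>[a-z_]+)_(?P<subject>[A-Za-z]+)_"
    r"(?P<date>\d{8})_(?P<time>\d{4})_(?P<session>[A-Z])_pkl\.mat$"
)

def parse_fname(fname):
    """Split a data file name into task, subject, date, time, and session ID."""
    m = FNAME_RE.match(fname)
    if m is None:
        raise ValueError(f"unexpected file name: {fname}")
    return m.groupdict()
```

For example, `parse_fname("fix_centerout_Loki_20210827_1439_B_pkl.mat")` yields the task `fix_centerout`, subject `Loki`, date `20210827`, time `1439`, and session `B`.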
- File List
A. files of the Parallel task
vgrasp_touch_centerout_model_[Subject][YYYYMMDD][HHMM]_[SessionID]_pkl.mat
Example: vgrasp_touch_centerout_model_Loki_20210823_1111_A_pkl.mat (96 files of monkey L and 8 files of monkey V)
Description: These files contain the behavioral (and neural) data recorded while the subject performed the Parallel task. Each recording session has two files (one training and one decoding file) that were recorded one after the other on the same day. The files have been converted from .pkl files into .mat files that can be opened in Matlab. The variables included in each .mat file are similar between the different tasks and are listed below the file list.
B. files of the Passive cursor task
fix_centerout_[Subject][YYYYMMDD][HHMM]_[SessionID]_pkl.mat
Example: fix_centerout_Loki_20210827_1439_B_pkl.mat (78 files of monkey L and 66 files of monkey V)
Description: These files contain the behavioral (and neural) data recorded while the subject performed the Passive cursor task. Each recording session has two files (one training and one decoding file) that were recorded one after the other on the same day. The files have been converted from .pkl files into .mat files that can be opened in Matlab. The variables included in each .mat file are similar between the different tasks and are listed below the file list.
C. files of the Passive avatar task
fix_centerout_avatar_[Subject][YYYYMMDD][HHMM]_[SessionID]_pkl.mat
Example: fix_centerout_avatar_Loki_20220622_1037_G_pkl.mat (63 files of monkey L and 104 files of monkey V)
Description: These files contain the behavioral (and neural) data recorded while the subject performed the Passive avatar task. Each recording session has two or three files (one training and one or two decoding files) that were recorded one after the other on the same day. The files have been converted from .pkl files into .mat files that can be opened in Matlab. The variables included in each .mat file are similar between the different tasks and are listed below the file list.
D. fnames txt files (for each monkey)
fnames_Decoding.txt
fnames_Decoding_spikes.txt
fnames_Training.txt
fnames_Training_spikes.txt
Description: These files contain a list of all recorded files (all tasks) per monkey, divided into the training files ('_Training') and the decoding files ('_Decoding'). The files with '_spikes' list only the files that contain neural data, while the files without '_spikes' list all files regardless of the presence of neural data. The .txt files can be opened in any text editor. Below each file name in these .txt files, a one- or two-letter code is written: one letter for the Parallel and the Passive cursor tasks, and two letters for the Passive avatar task. For all tasks, the first letter indicates which brain area's neural signals were used for decoding:
P = PMd (dorsal premotor cortex)
Q = M1 (primary motor cortex)
O = PMv (ventral premotor cortex)
This first letter also indicates the electrode configuration (i.e. which electrodes, also called channels (ch), belong to which implanted brain area). This information is necessary when comparing neural responses between different brain areas:
P
ch1-64 and ch97-128 = PMd
ch65-96 and ch129-192 = PMv
ch193-256 = M1
Q
ch1-64 and ch97-128 = M1
ch65-96 and ch129-192 = PMd
ch193-256 = PMv
O
ch1-64 and ch97-128 = PMv
ch65-96 and ch129-192 = M1
ch193-256 = PMd
For the Passive avatar task, there is a second letter indicating if the decoding was assisted (Passive assisted avatar task) or not (Passive avatar task). This second letter is thus only written down in the '_decoding' .txt file:
T = assisted
N = not assisted
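The electrode configurations and letter codes above can be expressed as a small Python lookup. This is a sketch; the dictionary layout and function names are our own, and channel numbers are assumed to be 1-based as in the README.

```python
# Electrode configuration per first letter of the code (from the README):
# each letter maps brain areas to inclusive 1-based channel ranges.
CONFIG = {
    "P": {"PMd": [(1, 64), (97, 128)], "PMv": [(65, 96), (129, 192)], "M1": [(193, 256)]},
    "Q": {"M1": [(1, 64), (97, 128)], "PMd": [(65, 96), (129, 192)], "PMv": [(193, 256)]},
    "O": {"PMv": [(1, 64), (97, 128)], "M1": [(65, 96), (129, 192)], "PMd": [(193, 256)]},
}

# Second letter (avatar decoding files only): assisted or not.
ASSISTED = {"T": True, "N": False}

def area_of_channel(config_letter, ch):
    """Return the brain area that channel `ch` belongs to under a configuration."""
    for area, ranges in CONFIG[config_letter].items():
        if any(lo <= ch <= hi for lo, hi in ranges):
            return area
    raise ValueError(f"channel {ch} outside range 1-256")

def parse_code(code):
    """Interpret the one- or two-letter code written under each file name."""
    areas = {"P": "PMd", "Q": "M1", "O": "PMv"}
    info = {"decoded_area": areas[code[0]]}
    if len(code) == 2:
        info["assisted"] = ASSISTED[code[1]]
    return info
```

For instance, the code 'OT' would indicate a PMv-based decoder in the assisted avatar task, and under configuration 'O' channel 200 belongs to PMd.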
E. Variable list in the .mat files
Training and decoding .mat files
the 'data' variable contains a list with all trials, with specifications for each trial
- trial = trial number
- start = start of the trial (time is always shown in ms with respect to the start of the recording file)
- stop = end of the trial (ms)
- answer = 1 if the trial was correct; any other value indicates an incorrect trial
- targetObject = one of eight possible target positions
- cue = when the cue turned on and off (the second value is the go-cue) (ms)
- target = moment at which the target appears on the screen (ms)
- photoEvents = timepoints at which the target was visible on the touchscreen (ms)
- robotPhotoEvents = photoEvents on the screen that showed the robot simulation (ms); only applicable to the Passive avatar task
- restkey = moment at which the monkey lifted its hand from the restkey (ms); only applicable to the Parallel task
- cursorTrajectory = timestamps of the cursor movements and the corresponding position (x,y coordinates) of the cursor during the movement
- cuePosition = x,y coordinates of the cue on the screen
- targetPosition = x,y coordinates of the target on the screen
- muaA = list of spike timings (i.e., crossings of the spike threshold) for each electrode (ms)
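Although the .mat files are intended for Matlab, they can also be read in Python with scipy (an assumption on our part; the helper names below are our own, not part of the dataset).

```python
def load_trials(path):
    """Load one *_pkl.mat file and return the per-trial entries of `data`.

    Assumes scipy is installed. `squeeze_me=True` and `struct_as_record=False`
    turn the MATLAB structs into objects whose fields (trial, start, stop,
    answer, ...) are accessible as plain attributes.
    """
    from scipy.io import loadmat  # imported lazily; scipy is an assumption here
    mat = loadmat(path, squeeze_me=True, struct_as_record=False)
    return list(mat["data"])

def correct_trials(trials):
    """Filter the trial list down to correct trials (answer == 1)."""
    return [t for t in trials if t.answer == 1]
```

Usage would look like `correct_trials(load_trials("fix_centerout_Loki_20210827_1439_B_pkl.mat"))` to keep only the correct trials of one session file.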
F. Missing data codes: None
Sharing/Access information
- Recommended citation for this dataset:
De Schrijver S*; Ramirez JG*; Iregui S; Aerbelien E; De Schutter J; Theys T; Decramer T; Janssen P (2025), An intracortical brain-machine interface based on macaque ventral premotor activity. The paper associated with this dataset had not yet been published as of January 22, 2026.
