Data for: An accurate and rapidly calibrating speech neuroprosthesis
Data files

Oct 09, 2024 version (58.92 MB total):
- README.md (1.90 KB)
- t15_copyTask.pkl (57.79 MB)
- t15_personalUse.pkl (1.13 MB)
Abstract
Brain-computer interfaces can enable communication for people with paralysis by transforming cortical activity associated with attempted speech into text on a computer screen. Communication with brain-computer interfaces has been restricted by extensive training requirements and limited accuracy. A 45-year-old man with amyotrophic lateral sclerosis (ALS) with tetraparesis and severe dysarthria underwent surgical implantation of four microelectrode arrays into his left ventral precentral gyrus 5 years after the onset of the illness; these arrays recorded neural activity from 256 intracortical electrodes. We report the results of decoding his cortical neural activity as he attempted to speak in both prompted and unstructured conversational contexts. Decoded words were displayed on a screen and then vocalized with the use of text-to-speech software designed to sound like his pre-ALS voice. On the first day of use (25 days after surgery), the neuroprosthesis achieved 99.6% accuracy with a 50-word vocabulary. Calibration of the neuroprosthesis required 30 minutes of cortical recordings while the participant attempted to speak, followed by subsequent processing. On the second day, after 1.4 additional hours of system training, the neuroprosthesis achieved 90.2% accuracy using a 125,000-word vocabulary. With further training data, the neuroprosthesis sustained 97.5% accuracy over a period of 8.4 months after surgical implantation, and the participant used it to communicate in self-paced conversations at a rate of approximately 32 words per minute for more than 248 cumulative hours. In a person with ALS and severe dysarthria, an intracortical speech neuroprosthesis reached a level of performance suitable to restore conversational communication after brief training.
The New England Journal of Medicine (2024)
Nicholas S. Card, Maitreyee Wairagkar, Carrina Iacobacci, Xianda Hou, Tyler Singer-Clark, Francis R. Willett, Erin M. Kunz, Chaofei Fan, Maryam Vahdati Nia, Darrel R. Deo, Aparna Srinivasan, Eun Young Choi, Matthew F. Glasser, Leigh R. Hochberg, Jaimie M. Henderson, Kiarash Shahlaie, Sergey D. Stavisky*, and David M. Brandman*.
- “*” denotes co-senior authors
Overview
This repository contains the data necessary to reproduce the results of the paper “An Accurate and Rapidly Calibrating Speech Neuroprosthesis” by Card et al. (2024), N Engl J Med.
The code is written in Python and is hosted on GitHub (link in the Related Works section).
The data can be downloaded from this Dryad repository. Please download the data files and place them in the data directory of the GitHub code (a quick loading check is sketched below).
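As a check that the files are in place, something like the following should load them. This is a minimal sketch, assuming the files are standard Python pickles (as the .pkl extension suggests); the loading code in the GitHub repository is authoritative.

```python
import pickle
from pathlib import Path

# Directory layout assumed by the analysis code in the GitHub repository.
DATA_DIR = Path("data")

# Both release files load as ordinary pickles.
with open(DATA_DIR / "t15_copyTask.pkl", "rb") as f:
    copy_task = pickle.load(f)

with open(DATA_DIR / "t15_personalUse.pkl", "rb") as f:
    personal_use = pickle.load(f)

# Inspect the top-level structure before relying on any field names.
print(type(copy_task), type(personal_use))
```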
Data is currently limited to what is necessary to reproduce the results in the paper. We intend to share additional data, including neural data, in the coming months. All included data has been anonymized and contains no identifiable information.
Version 1 release files:
t15_copyTask.pkl
- Data from Copy Task trials during evaluation blocks (1,718 total trials) necessary for reproducing the online decoding performance plots (Figure 2).
- Copy Task data includes, for each trial: cue sentence, decoded phonemes and words, trial duration, and RNN-predicted logits (see the inspection sketch after this list).
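The container layout of t15_copyTask.pkl is not documented here, so inspect it before indexing into it. A minimal sketch (the structure checks below are assumptions, not a specification):

```python
import pickle

with open("data/t15_copyTask.pkl", "rb") as f:
    copy_task = pickle.load(f)

# The pickle may deserialize to a dict of per-field arrays or a list of
# per-trial records; print its shape before assuming either.
if isinstance(copy_task, dict):
    print("top-level keys:", list(copy_task.keys()))
elif isinstance(copy_task, list):
    print("number of trials:", len(copy_task))  # 1,718 trials expected
    print("first trial record:", copy_task[0])
else:
    print("unexpected container type:", type(copy_task))
```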
t15_personalUse.pkl
- Data from Conversation Mode (22,126 total sentences) necessary for reproducing Figure 4.
- Conversation Mode data includes, for each trial: the number of decoded words, the sentence duration, and the participant’s rating of how correct the decoded sentence was (see the rate-calculation sketch after this list).
- Specific decoded sentences from Conversation Mode are not included to protect the participant’s privacy.
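Because each Conversation Mode trial carries a word count and a duration, the self-paced speaking rate reported in the abstract (approximately 32 words per minute) can be recomputed from this file. A sketch under assumed field names ("num_words" and "duration_s" are hypothetical; check the pickle's actual structure first):

```python
import pickle

with open("data/t15_personalUse.pkl", "rb") as f:
    personal_use = pickle.load(f)

def words_per_minute(num_words: int, duration_s: float) -> float:
    """Decoding rate for a single sentence, in words per minute."""
    return 60.0 * num_words / duration_s

# Aggregate rate across all 22,126 sentences. The field names below are
# hypothetical -- adapt them to the pickle's actual structure.
# total_words = sum(s["num_words"] for s in personal_use)
# total_minutes = sum(s["duration_s"] for s in personal_use) / 60.0
# print(f"overall rate: {total_words / total_minutes:.1f} wpm")
```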