Dryad

Data from: Encoding of speech modes and loudness in ventral precentral gyrus

Data files

Feb 23, 2026 version (2.47 GB)


Abstract

The ability to vary the mode and loudness of speech is an important part of the expressive range of human vocal communication. However, the encoding of these behaviors in the ventral precentral gyrus (vPCG) has not been studied at the resolution of neuronal firing rates. We investigated this in two participants who had intracortical microelectrode arrays implanted in vPCG as part of a speech neuroprosthesis clinical trial. Neuronal firing rates in vPCG modulated strongly as a function of attempted mimed, whispered, normal, or loud speech. At the neural ensemble level, mode/loudness and phonemic content were encoded in distinct neural subspaces. Attempted mode/loudness could be decoded from vPCG with 94% and 89% accuracy for the two participants, and neural preparatory activity enabled 80% decoding accuracy as early as 640 ms and 270 ms before speech onset, respectively. We then developed a closed-loop loudness decoder that achieved 94% online accuracy in modulating the output of a brain-to-text speech neuroprosthesis according to attempted loudness. These findings demonstrate the feasibility of decoding mode and loudness from vPCG, paving the way for speech neuroprostheses that synthesize more expressive speech.
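At its core, the mode/loudness decoding described above is a multi-class classification of neural ensemble firing rates. The abstract does not specify the study's actual decoder or feature pipeline, so the following is only a minimal illustrative sketch, using synthetic firing rates and a simple nearest-class-mean classifier; the unit count, trial counts, and rate statistics are all assumptions, not values from the dataset:

```python
import numpy as np

# Hypothetical illustration: classify attempted speech mode/loudness
# (mimed, whispered, normal, loud) from per-trial firing-rate vectors.
# All numbers below are synthetic and do not come from the dataset.
rng = np.random.default_rng(0)
modes = ["mimed", "whispered", "normal", "loud"]
n_units = 64              # number of recorded units (assumed)
n_train, n_test = 40, 10  # trials per mode (assumed)

# Each mode gets its own mean firing-rate pattern across units.
centers = rng.normal(loc=10.0, scale=3.0, size=(len(modes), n_units))

def simulate_trials(n_per_mode):
    """Draw noisy firing-rate vectors around each mode's mean pattern."""
    X = np.vstack([centers[k] + rng.normal(0.0, 1.0, size=(n_per_mode, n_units))
                   for k in range(len(modes))])
    y = np.repeat(np.arange(len(modes)), n_per_mode)
    return X, y

X_train, y_train = simulate_trials(n_train)
X_test, y_test = simulate_trials(n_test)

# Nearest-class-mean decoder: estimate one mean rate vector per mode,
# then assign each test trial to the closest mean (Euclidean distance).
class_means = np.stack([X_train[y_train == k].mean(axis=0)
                        for k in range(len(modes))])
dists = np.linalg.norm(X_test[:, None, :] - class_means[None, :, :], axis=2)
y_pred = dists.argmin(axis=1)

accuracy = (y_pred == y_test).mean()
print(f"mode/loudness decoding accuracy (synthetic): {accuracy:.2f}")
```

The study's decoder presumably operated on real vPCG features (and, for the closed-loop experiment, ran online); this sketch only shows the shape of the classification problem, not the method actually used.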