Mechanisms of individualized fMRI neuromodulation for visual perception and visual imagery
Data files (Dec 05, 2024 version; 60.05 GB total)
- 10_segmentation_CSF_mask.zip (9.84 KB)
- Data.zip (60.05 GB)
- README.md (2.86 MB)
Abstract
Neuromodulation is a growing precision-medicine approach to modulating neural activity that can be used to treat neuropsychiatric and general pathophysiologic conditions. We developed individualized fMRI neuromodulation (iNM) to study the mechanisms of visuospatial perception modulation, with the long-term goal of applying it in low-vision patient populations with cortical blindness or visuospatial impairment preceding subjective cognitive impairment. To determine these mechanisms, we developed a direction and coherence discrimination task that engages the visual perception (VP), visual imagery (VI), selective extero-interoceptive attention (SEIA), and motor planning (MP) networks. Participants discriminated between upward and downward motion at full and subthreshold coherence, under iNM or control (no iNM) conditions. We determined the blood-oxygen-level-dependent (BOLD) magnitude as the area under the curve (AUC) for the VI, SEIA, and MP encoded networks and used a decoder to predict the stimulus from brain maps.
https://doi.org/10.5061/dryad.ngf1vhj1m
The following figure depicts our Individualized real-time functional MRI closed-loop neuromodulation (iNM) Intervention.
A. iNM strengthens visual perception and visual imagery networks: 1) and 2) High-resolution anatomical images were acquired and registered to the Siemens console computer; 3) Each participant's individualized MT and MST networks were delineated and contoured; 4) iNM data extracted from the individualized networks were preprocessed, and general linear modeling was used to decode each coherence level in real time every TR = 2 sec (the yellow line denotes the time series during green, purple, and black periods representing up or down direction at coherence level C100, C33, and the baseline-random motion period, respectively); 5) The BOLD signal intensity and spatial extent (individualized network) for each direction were computed via a GLM, and beta weights were updated (see Fig. 1B); 6) The iNM interface shows the extent of the circle filled, which directly corresponds to the percent upregulation (red fill) or downregulation (blue fill) of the BOLD signal intensity and spatial extent.
B. Task design: The study was performed over two days in which 5 control and 5 iNM runs were completed in an alternating fashion. Each run followed a temporal sequence of 20-second coherent motion blocks interleaved with 10-second baseline-random motion blocks. The level of coherence was counterbalanced across runs and blocks, ensuring that the same coherence levels were never stacked back-to-back.
C. Computation of the iNM signal: The neuromodulation stimulus was calculated as the percent change of the BOLD signal intensity during the task relative to the baseline-random motion block of each control run, which served as a reference. If the percent BOLD signal change during a 2-sec interval was higher than the 10th percentile of the reference, the iNM interface indicated a 10% upregulation; if it was lower than the 10th percentile, the interface indicated downregulation.
D. Brain activation maps show increased activation in the iNM condition compared to control: GLM-generated activation maps for each coherence level and motion direction in the iNM (top) and control (no iNM; bottom) conditions are shown; voxelwise p-value = 0.05, cluster size > 20 voxels.
Description of the data and file structure
In the root directory (Data), each participant's data is organized into individual folders. These participant folders contain subfolders for specific aspects of the study. Notably, instances where participants repeated the task in a different visual field quadrant are indicated by the prefix TT_Q in the subfolder names. Within each quadrant's folder, there are two subfolders labeled down and up, corresponding to the task direction. Each of these subfolders contains two additional folders named control and iNM, which correspond to the control and neuromodulation arms of the study. Due to HIPAA compliance regarding the sharing of MRI data that could potentially be reconstructed into facial images, and because certain steps in the AFNI preprocessing pipeline require the presence of the skull, we have shared only the preprocessed files. The preprocessing code we used is included in the repository for your reference.
Data/
├── HS002/
│   └── ...
├── HS003/
│   ├── TT_Q1/
│   │   └── preprocessedTT27/
│   │       ├── up/
│   │       │   ├── control/
│   │       │   │   └── ...
│   │       │   └── iNM/
│   │       │       └── ...
│   │       └── down/
│   │           ├── control/
│   │           │   └── ...
│   │           └── iNM/
│   │               └── ...
│   └── TT_Q3/
│       └── ...
├── HS004/
│   └── ...
├── ...
└── HS013/
    └── ...
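Given the quadrant → direction → condition nesting described above, a small Python helper (hypothetical, not part of the repository; the preprocessed-data subfolder level is omitted for brevity) can enumerate the condition folders for one participant:

```python
import os

def condition_dirs(root, participant, quadrant):
    """Yield (direction, condition, path) tuples for one participant/quadrant,
    following the Data/<participant>/<quadrant>/<direction>/<condition> layout."""
    for direction in ("up", "down"):
        for condition in ("control", "iNM"):
            yield direction, condition, os.path.join(
                root, participant, quadrant, direction, condition)

# Example: build the expected paths for participant HS003, quadrant TT_Q1.
paths = list(condition_dirs("Data", "HS003", "TT_Q1"))
```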
Code Overview
Before any code is run, AFNI must be installed on the computer. Instructions to install AFNI can be found here: AFNI Installation Link
In addition, Python must also be installed on the computer. The following Python packages are needed for analysis:
pandas == 2.1.3
numpy == 1.26.2
0. Preprocessing
The script "preprocessing.sh" reads the AFNI HEAD/BRIK files and preprocesses them. The following steps are performed:
(1) Convert dicom to HEAD/BRIK
(2) Detection of outliers, despiking
(3) Slice time correction
(4) Alignment of functional volumes (to the first volume of the first run)
(5) Skull-stripping of anatomical data and Talairach transformation using the TT_N27+tlrc template dataset in AFNI
(6) Warp functional data to Talairach
(7) Blurring with fwhm=6mm
(8) Masking
(9) Segmentation with 3dSeg
(10) Scaling to have mean voxel time series of 100
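For example, step (10), scaling each voxel time series to a temporal mean of 100, amounts to the following (a numpy sketch of the idea, not the actual AFNI call):

```python
import numpy as np

def scale_to_mean_100(ts):
    """Scale a voxel time series so its temporal mean is 100,
    mirroring the intent of the AFNI scaling step."""
    ts = np.asarray(ts, dtype=float)
    return 100.0 * ts / ts.mean()

# Toy voxel time series in arbitrary scanner units.
signal = np.array([980.0, 1000.0, 1020.0])
scaled = scale_to_mean_100(signal)  # temporal mean of the result is 100
```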
1. Generalized Linear Model (GLM) Labels
The script "2_generate GLM_labels.py" generates labels for each coherence level and direction that will be used to run the GLMs.
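As a rough sketch of what such labels look like (assuming the 20-s task / 10-s baseline block structure and TR = 2 s described elsewhere in this README; function and variable names are illustrative, not the script's actual code):

```python
import numpy as np

TR = 2                   # repetition time in seconds
TASK_S, BASE_S = 20, 10  # coherent-motion and baseline-random block durations

def block_labels(block_order):
    """Expand an ordered list of block names, e.g. ['C100_up', 'C33_down'],
    into a per-TR label vector with 'baseline' interleaved after each block."""
    labels = []
    for name in block_order:
        labels += [name] * (TASK_S // TR)
        labels += ["baseline"] * (BASE_S // TR)
    return np.array(labels)

labels = block_labels(["C100_up", "C33_down"])  # 30 TR labels total
```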
2. GLM
The script "3_run_GLM.py" runs the GLM. We ran two GLMs for each coherence level and direction:
(1) One in which 5 control runs (runs 3-4-5-6-7) were concatenated
(2) One in which 5 iNM runs (runs 8-9-10-11-12) were concatenated
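Conceptually, each GLM fits per-condition beta weights to the concatenated run data. A minimal numpy version of that fit on toy data (illustrative only; the repository scripts use AFNI's GLM tools, and real designs convolve block timings with a hemodynamic response function):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trs = 60

# Toy design matrix: one boxcar task regressor (20 s on / 10 s off at TR = 2 s)
# plus an intercept column.
task = (np.arange(n_trs) % 15 < 10).astype(float)
X = np.column_stack([task, np.ones(n_trs)])

# Simulated voxel time series with a true task beta of 2.5 on a baseline of 100.
y = 2.5 * task + 100.0 + 0.1 * rng.standard_normal(n_trs)

# Ordinary least squares recovers the beta weights.
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
```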
3. Group Analysis
The script "4_group_analysis.py" runs 3dREMLfit on the individual-subject GLMs for group-level comparison.
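At its core, a group analysis tests whether per-subject beta weights differ from zero at each voxel. A plain one-sample t-test over subjects sketches the idea (a simplified stand-in: AFNI's tools additionally model serial correlation and per-subject variance; the beta values below are made up):

```python
import numpy as np

def one_sample_t(betas):
    """One-sample t statistic across subjects for a single voxel's beta weights."""
    b = np.asarray(betas, dtype=float)
    n = b.size
    return b.mean() / (b.std(ddof=1) / np.sqrt(n))

# Hypothetical per-subject betas for one voxel (8 subjects).
t = one_sample_t([1.2, 0.9, 1.4, 1.1, 0.8, 1.3, 1.0, 1.2])
```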
4. Anatomical ROIs
GLM-generated activation maps contain large clusters that span multiple brain regions. To extract the BOLD signal from each brain region, the first step was to generate anatomical datasets for each region. The script "5_generate_anat_ROIs.py" generates these anatomical masks.
5. Intersection Masks
The script "6_intersection_masks.py" generates masks that encompass the intersection of functional clusters and anatomical regions. These intersection masks can be obtained using 3dcalc.
(1) The first step is to obtain datasets for the functional clusters using 3dClusterize. Functional clusters for the iNM condition were based on activation maps from the iNM condition, and functional clusters for the control condition were based on activation maps from the control condition.
(2) 3dcalc was then used to multiply the anatomical ROIs with the functional clusters just obtained, yielding the intersection masks.
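The 3dcalc step is just a voxelwise product of two binary masks; in numpy terms (toy 1-D masks for illustration; real masks are 3-D AFNI datasets):

```python
import numpy as np

# Toy binary masks over five voxels.
anat_roi = np.array([1, 1, 0, 0, 1])  # anatomical region mask
func_clu = np.array([0, 1, 1, 0, 1])  # functional cluster mask

# Voxelwise product keeps only voxels present in both masks,
# analogous to: 3dcalc -a anat -b clust -expr 'a*b'
intersection = anat_roi * func_clu  # -> [0, 1, 0, 0, 1]
```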
6. Find Local Maxima
The goal was to generate tables of activated regions in each condition. The script "7_extrema_mema.py" runs the command 3dExtrema for each cluster in the activation maps of the corresponding condition to create these tables.
(1) Functional clusters obtained previously through 3dClusterize were masked and fed into 3dExtrema to find the local maximum. For each cluster, an Excel file stores: the anatomical region containing the functional cluster; the number of voxels in that anatomical region; the number of voxels intersecting between the anatomical region and the functional cluster; the ratio of intersecting voxels to total anatomical-region voxels; the X, Y, and Z coordinates of the local maximum; and the z-score of the local maximum.
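The per-cluster quantities in those tables can be sketched as follows (a numpy stand-in for 3dExtrema; the values and names are illustrative):

```python
import numpy as np

# Hypothetical z-scores of the voxels in one functional cluster.
z = np.array([0.0, 2.1, 3.7, 1.2, 3.5, 0.5])

# Hypothetical voxel counts for the anatomical region and its overlap
# with the functional cluster.
anat_voxels = 120
intersect_voxels = 45

peak_index = int(np.argmax(z))          # location of the local maximum
peak_z = float(z[peak_index])           # z-score of the local maximum
ratio = intersect_voxels / anat_voxels  # intersection / anatomical voxels
```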
Subjects
Eight healthy, right-handed volunteers (4 males, 4 females, age range = 25-31) were recruited into this 3-day study after providing informed consent in accordance with the Baylor College of Medicine Institutional Review Board. Exclusion criteria included prior and current medical or psychiatric diagnoses, intake of any medications, and general contraindications against MRI examinations. Participants had normal or corrected-to-normal visual acuity with MRI-compatible glasses. At the end of each study day, participants were compensated for their time.
MRI and fMRI Pulse Sequence Parameters
Structural and functional brain imaging was performed at the Core for Advanced Magnetic Resonance Imaging at Baylor College of Medicine, Houston, Texas, using a 3.0 T Siemens Prisma (Siemens, Erlangen, Germany). We used a 20-channel head/neck receiver-array coil to acquire images. A T1-weighted 3D magnetization-prepared gradient-echo (MPRAGE) sequence acquired 192 high-resolution axial slices [field-of-view (FOV) = 245 x 245 mm²; base resolution = 256 x 256; repetition time (TR) = 1,200 ms; echo time (TE) = 2.66 ms; flip angle (FA) = 12°]. Functional data consisted of 33 interleaved axial slices acquired using an Echo Planar Imaging (EPI) sequence (FOV = 200 x 200 mm²; voxel size = 3.1 x 3.1 x 3.0 mm; TR = 2,000 ms; flip angle = 90°; number of volumes = 244).
Real-time fMRI Neuromodulation Acquisition
Turbo-BrainVoyager (TBV; 2.0; Brain Innovation, Maastricht, The Netherlands) software was used to perform the following five preprocessing computations on EPI images acquired at every repetition time (TR): 1) 3D motion correction; 2) incremental linear detrending to remove BOLD signal drifts; 3) statistical brain map displays generated from a general linear model (GLM), along with beta weights (BOLD signal intensity values) for each condition; 4) extraction of average BOLD signal intensity values from the individualized networks acquired in the Day 1 scans (see Data Analysis); and 5) presentation of the network-average BOLD signal intensity via the neuromodulation interface (Figure 1A). The individualized real-time fMRI neuromodulation (iNM) interface steps are summarized in Figure 1. To increase the signal-to-noise ratio (SNR), we used an exponential moving average (EMA) algorithm to high-pass filter the ROI BOLD average and suppress low-frequency noise components such as scanner drifts and physiological noise effects (e.g., heart rate and respiration). The EMA output was then low-pass filtered via a Kalman filter to eliminate high-frequency noise.
Task Design
A random dot kinematogram (RDK) was presented in the lower quadrant of each subject's right visual field while they fixated on a dot in the middle of the screen. The RDK displayed upward or downward motion at either fully coherent or subthreshold levels. Four levels of motion coherence were chosen for this study: 100%, 84%, 66%, and 33%. Here we focus on the full and subthreshold coherence levels, denoted C100 and C33 throughout this paper. While fixating centrally, participants tracked the direction of RDK motion through their peripheral vision as it alternated between coherent directions and random motion. In both the control and neuromodulation conditions, participants were asked to superimpose the upward or downward direction of motion centrally via visual imagery while tracking the direction of motion peripherally. In the iNM condition, the central dot served as the neuromodulation interface: when the central dot filled with red, it corresponded to successful visual imagery of the upward or downward direction of motion. Blocks of directional motion were interleaved with blocks of random motion, during which subjects were asked to rest by disengaging from the imagery while continuing to fixate on the central dot.
Study Structure
Our study included two sessions, each consisting of ten functional (echo planar imaging; EPI) scans: five control (no iNM) scans alternating with five neuromodulation (iNM) scans. Each EPI scan lasted 8 minutes and 12 seconds and included eight continuous periods. Within each period, subjects were cued to imagine motion perception as either up or down, depending on the RDK displaying one of the four coherence levels. Each coherent motion block lasted 20 secs and was interleaved with a baseline-random motion block (10 secs). The direction of coherent motion blocks was randomly counterbalanced across runs, following three rules: 1) each coherent motion block occurred twice during each period; 2) a coherent motion block was never followed by one of the same coherence level; and 3) each run consisted of a unique block order.
Neuromodulation Paradigm
Neuromodulation was conveyed by the color and fill extent of a circle, representing the magnitude and spatial extent of BOLD activity in each subject's targeted network. The neuromodulation signal was calculated by comparing the percent BOLD signal change (PSC) generated during each control run for each coherence block with the rest block that preceded it. The BOLD PSC was calculated from each participant's individualized areas every 2 seconds as follows: BOLD PSC_i(j) = 100% x [ROI BOLD during up or down direction selectivity_i(j) - ROI BOLD during rest_i(j)] / ROI BOLD during rest_i(j), where i represents the coherence level (C100; C84; C66; C33) and j represents the 2-sec time interval used to compute the BOLD PSC at coherence level i. The neuromodulation presented at each TR was computed by comparing the current PSC value with a PSC reference range binned in 25% steps of BOLD increase or decrease: -100%; -75%; -50%; -25%; 0; 25%; 50%; 75%; 100%. During the iNM run following each control run, if the PSC at a given time point was within the reference range or higher than its maximum value, the circle was filled with red, representing upregulation of the targeted ROI BOLD signal that controlled visual perception and imagery. If the PSC was lower than the minimum value, the circle was filled with blue, representing downregulation of that signal.
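A minimal sketch of this computation (numpy-free Python; the task and rest BOLD values are made up, and the thresholding is simplified to the red/blue decision described above):

```python
def bold_psc(task_bold, rest_bold):
    """Percent signal change of task BOLD relative to the preceding rest block:
    PSC = 100 * (task - rest) / rest."""
    return 100.0 * (task_bold - rest_bold) / rest_bold

def feedback_color(psc, ref_min=-100.0, ref_max=100.0):
    """Map a PSC value to the interface color: red for values within or above
    the reference range (upregulation), blue for values below it."""
    return "blue" if psc < ref_min else "red"

# Hypothetical 2-sec interval: task BOLD 103.0 vs. rest BOLD 100.0.
psc = bold_psc(task_bold=103.0, rest_bold=100.0)  # 3.0 percent
color = feedback_color(psc)                       # "red" (upregulation)
```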
- Allam, Anthony Kaspa; Allam, Vincent; Reddy, Sandesh et al. (2024). Mechanisms of individualized fMRI neuromodulation for visual perception and visual imagery. Zenodo. https://doi.org/10.5281/zenodo.10161884
- Allam, Anthony Kaspa; Allam, Vincent; Reddy, Sandesh et al. (2024). Mechanisms of individualized fMRI neuromodulation for visual perception and visual imagery. Zenodo. https://doi.org/10.5281/zenodo.10161883
- Allam, Anthony; Allam, Vincent; Reddy, Sandy et al. (2024). Individualized functional magnetic resonance imaging neuromodulation enhances visuospatial perception: a proof-of-concept study. Philosophical Transactions of the Royal Society B: Biological Sciences. https://doi.org/10.1098/rstb.2023.0083
