Data from: Dissecting abstract, modality-specific and experience-dependent coding of affect in the human brain
Data files
Jan 08, 2024 version files, 26.91 GB
- dameca.tar
- README.md
Abstract
Emotion and perception are tightly intertwined, as affective experiences often arise from the appraisal of sensory information. Nonetheless, whether the brain encodes emotional instances using a sensory-specific code or in a more abstract manner is unclear. Here, we answer this question by measuring the association between emotion ratings collected during a unisensory or multisensory presentation of a full-length movie and brain activity recorded in typically-developed, congenitally blind and congenitally deaf participants. Emotional instances are encoded in a vast network encompassing sensory, prefrontal, and temporal cortices. Within this network, the ventromedial prefrontal cortex stores a categorical representation of emotion independent of modality and previous sensory experience, and the posterior superior temporal cortex maps the valence dimension using an abstract code. Sensory experience more than modality impacts how the brain organizes emotional information outside supramodal regions, suggesting the existence of a scaffold for the representation of emotional states whose functioning is shaped by sensory input during development.
README: Dissecting abstract, modality-specific, and experience-dependent coding of affect in the human brain
https://doi.org/10.5061/dryad.vdncjsz2k
Description of the data and file structure
Behavioral data and code
behavioral/code/ -> all MATLAB functions needed to analyze the behavioral data (i.e., categorical and valence ratings collected during the auditory, visual, and multisensory experiments).
behavioral/data/categorical_audio/ -> 20 folders (one for each participant) storing the categorical annotations of emotion collected during the auditory experiment. The folder also includes subs_demographics.csv, which stores participants' demographics. In each participant's folder, there are 6 MATLAB workspace files (_run0?.mat) storing ratings of the affective experience for each run. In each workspace, there is a structure called ReMoTa_Output, which contains all behavioral data and experiment details. Specifically:
-ReMoTa_Output.Date_Experiment stores the date of the experiment.
-ReMoTa_Output.MovieFile stores the path to the stimulus file.
-ReMoTa_Output.MovieHeightInDeg is the height of the video in visual degrees.
-ReMoTa_Output.MovieWidthInDeg is the width of the video in visual degrees.
-ReMoTa_Output.Ratings stores the emotion annotations provided by the participant.
-ReMoTa_Output.RatingsSamplingFrequency is the sampling frequency of emotion annotations in Hz.
-ReMoTa_Output.ResponseTime is the timing of the button press (not very useful).
-ReMoTa_Output.StepsInIntensity is the number of levels of intensity that could be specified for each emotional instance (if == 1, then only presence/absence).
-ReMoTa_Output.Subject is the participant id.
-ReMoTa_Output.TaggingCategories stores the labels (in Italian) of the emotion categories used in the experiment. The order reflects the rows in ReMoTa_Output.Ratings.
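As a minimal sketch of how to inspect one of these workspaces (the participant folder and file names below are placeholders; use the actual names in your local copy of the data):

```matlab
% Load one behavioral run and pull out the fields described above.
% NOTE: 'sub001' and 'sub001_run01.mat' are hypothetical names for illustration.
run_file = fullfile('behavioral', 'data', 'categorical_audio', 'sub001', 'sub001_run01.mat');
ws = load(run_file);                                  % loads the ReMoTa_Output structure
ratings = ws.ReMoTa_Output.Ratings;                   % emotion annotations (rows = categories)
labels  = ws.ReMoTa_Output.TaggingCategories;         % Italian emotion labels, one per row of Ratings
fs      = ws.ReMoTa_Output.RatingsSamplingFrequency;  % sampling frequency of annotations in Hz
```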
behavioral/data/categorical_audiovideo/ -> 22 folders (one for each participant) storing the categorical annotations of emotion collected during the multisensory experiment. The folder also includes subs_demographics.csv, which stores participants' demographics. In each participant's folder, there are 6 MATLAB workspace files (_run0?.mat) storing ratings of the affective experience for each run. In each workspace, there is a structure called ReMoTa_Output, which contains all behavioral data and experiment details. The organization of the data structure is identical across conditions (please refer to the previous section for further details).
behavioral/data/categorical_video/ -> 20 folders (one for each participant) storing the categorical annotations of emotion collected during the visual experiment. The folder also includes subs_demographics.csv, which stores participants' demographics. In each participant's folder, there are 6 MATLAB workspace files (_run0?.mat) storing ratings of the affective experience for each run. In each workspace, there is a structure called ReMoTa_Output, which contains all behavioral data and experiment details. The organization of the data structure is identical across conditions (please refer to the previous section for further details).
behavioral/data/valence_audio/ -> 20 folders (one for each participant) storing the valence ratings collected during the auditory experiment. The folder also includes subs_demographics.csv, which stores participants' demographics. In each participant's folder, there are 6 MATLAB workspace files (_run0?.mat) storing ratings of the affective experience for each run. In each workspace, there is a structure called ReMoTa_Output, which contains all behavioral data and experiment details. The organization of the data structure is identical across conditions (please refer to the previous section for further details).
behavioral/data/valence_audiovideo/ -> 21 folders (one for each participant) storing the valence ratings collected during the multisensory experiment. The folder also includes subs_demographics.csv, which stores participants' demographics. In each participant's folder, there are 6 MATLAB workspace files (_run0?.mat) storing ratings of the affective experience for each run. In each workspace, there is a structure called ReMoTa_Output, which contains all behavioral data and experiment details. The organization of the data structure is identical across conditions (please refer to the previous section for further details).
behavioral/data/valence_video/ -> 21 folders (one for each participant) storing the valence ratings collected during the visual experiment. The folder also includes subs_demographics.csv, which stores participants' demographics. In each participant's folder, there are 6 MATLAB workspace files (_run0?.mat) storing ratings of the affective experience for each run. In each workspace, there is a structure called ReMoTa_Output, which contains all behavioral data and experiment details. The organization of the data structure is identical across conditions (please refer to the previous section for further details).
Neuroimaging data and code
fmri/code/ -> fmri_preprocessing.sh is the bash script to preprocess raw fMRI data. Requires AFNI, ANTs, and FSL. All other *.m files are MATLAB functions needed to analyze the fMRI data (e.g., conjunction_univariate.m, mvpa_classification.m).
fmri/data/audio/ -> this folder stores preprocessed fMRI data for 21 participants (????_sub-???_allruns-cleaned_reml2mni.nii.gz) collected during the auditory experiment. The filename indicates the participant's sensory experience (i.e., blind for congenitally blind individuals; ctrl for typically-developed people) and the id, which matches what we report in Table 2 (e.g., sub-033). In addition, this folder stores the single-participant results of the voxelwise encoding analysis (????_sub-???_allruns-cleaned_encoding_results_categorical_nn1.nii.gz), as well as the results of the non-parametric combination at the group level (????_audio_encoding_npc_fwe_nn1.nii.gz) and the comparison between the categorical and dimensional models in terms of fitting brain activity (????_audio_adjr2_cat-dim.nii.gz).
fmri/data/audiovideo/ -> this folder stores preprocessed fMRI data for 10 participants (ctrl_sub-???_allruns-cleaned_reml2mni.nii.gz) collected during the multisensory experiment. The filename indicates the participant's id, which matches what we report in Table 2 (e.g., sub-012). In addition, this folder stores the single-participant results of the voxelwise encoding analysis (ctrl_sub-???_allruns-cleaned_encoding_results_categorical_nn1.nii.gz), as well as the results of the non-parametric combination at the group level (ctrl_audiovideo_encoding_npc_fwe_nn1.nii.gz) and the comparison between the categorical and dimensional models in terms of fitting brain activity (ctrl_audiovideo_adjr2_cat-dim.nii.gz).
fmri/data/video/ -> this folder stores preprocessed fMRI data for 19 participants (????_sub-???_allruns-cleaned_reml2mni.nii.gz) collected during the visual experiment. The filename indicates the participant's sensory experience (i.e., deaf for congenitally deaf individuals; ctrl for typically-developed people) and the id, which matches what we report in Table 2 (e.g., sub-020). In addition, this folder stores the single-participant results of the voxelwise encoding analysis (????_sub-???_allruns-cleaned_encoding_results_categorical_nn1.nii.gz), as well as the results of the non-parametric combination at the group level (????_video_encoding_npc_fwe_nn1.nii.gz) and the comparison between the categorical and dimensional models in terms of fitting brain activity (????_video_adjr2_cat-dim.nii.gz).
fmri/data/gm_010_final.nii.gz -> the mask used in voxelwise encoding analysis.
fmri/data/vmpfc_neurosynth_mask_nn1_gm_masked.nii.gz -> a vmPFC mask obtained from neurosynth, which has been employed in testing the association between average activity of this region and the emotion model.
fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1.nii.gz -> a map showing the overlap between brain regions encoding the emotion model across groups and conditions.
fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20.nii.gz -> a mask of the emotion network used to classify participants' sensory experience and stimulus modality.
fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20_mpfc.nii.gz -> a mask of mPFC used to classify participants' sensory experience and stimulus modality.
fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20_rsts.nii.gz -> a mask of right STS used to classify participants' sensory experience and stimulus modality.
fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20_lsts.nii.gz -> a mask of left STS used to classify participants' sensory experience and stimulus modality.
fmri/data/mvpa_classifier_results_feature_relevance.nii.gz -> a map showing feature relevance for classifying participants' sensory experience and the stimulus modality from the activity of regions encoding the emotion model.
fmri/data/avg_across_groups_and_conditions_mpfc_masked_zscore_significant_emotions.nii.gz -> a map showing the average standardized fitting coefficients of emotions significantly classified from mPFC activity across groups and conditions.
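To work with any of the NIfTI files above in MATLAB, the Tools for NIfTI and ANALYZE image package (listed in the Code/Software section) can be used. A minimal sketch, assuming the toolbox is on the MATLAB path (if your toolbox version does not read .gz files directly, gunzip the file first):

```matlab
% Load the gray matter mask used in the voxelwise encoding analysis
% and count the voxels it contains.
nii  = load_untouch_nii('fmri/data/gm_010_final.nii.gz');
mask = nii.img > 0;                           % logical mask of in-analysis voxels
fprintf('Voxels in mask: %d\n', nnz(mask));
```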
False positive rate code
fpr_simulation/ -> this folder stores the code used to verify that our NPC method yields a false positive rate in line with what is expected based on the alpha level. fpr_simulation.m is the main script to run; generate_synthetic_fmri.m is a function that generates synthetic fMRI timeseries. categorical_encoding_matrix_tpshuffle.mat stores the emotion model used in voxelwise encoding analyses; this workspace contains two variables: categorical_encoding_matrix, the actual encoding matrix, and categorical_encoding_matrix_null, the null encoding matrix generated by shuffling the timepoints before convolution. R2_audiovideo_subjects.mat contains the fitting values of the emotion model across all voxels and participants in the multisensory condition (variable: timeserie_R2) and their xyz coordinates in MNI space (variable: coordinates). npc_fpr_1000_exp_10_part_80_vox_1614_tps_16_pred_2001_perms.mat is the workspace storing the results of the simulation for 1,000 experiments, 10 participants, 80 voxels, 1,614 timepoints, 16 predictors, and 2,000 permutations. Relevant variables in the workspace are:
-all_fwc_pvalues is a 2D matrix (experiments by voxels) storing the familywise corrected pvalues for each voxel and experiment.
-categorical_encoding_matrix is a 3D matrix (timepoints by predictors by permutations) storing the encoding model and its permuted versions based on timepoint shuffling.
-experiment_data is a 3D matrix (timepoints by voxels by participants) storing fmri data from one simulated experiment. a new matrix of simulated data is generated for each experiment.
-fpr is the false positive rate obtained from the simulation after correction for multiple comparisons.
-fpr_ci is the 95% confidence interval of the false positive rate.
-rsquared is a 3D matrix (participants by voxels by permutations) storing the coefficient of determination obtained by fitting the encoding model to fmri data in one simulated experiment. a new matrix of coefficients is obtained from each experiment.
-significant_experiments is a logical column array with the number of elements equal to the number of simulated experiments. At each position, the value is 0 (false) if no voxel passes the familywise corrected threshold (fwc_alpha) or 1 (true) if at least one voxel reaches statistical significance (a false positive experiment).
npc_fpr_1000_exp_10_part_80_vox_1614_tps_16_pred_2001_perms_shuffle_after_convolution.mat is the workspace storing the results of the same simulation, except that timepoint shuffling is applied after convolution (i.e., the incorrect method). Therefore, the variable names are identical to those in the npc_fpr_1000_exp_10_part_80_vox_1614_tps_16_pred_2001_perms.mat workspace.
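Given the variables described above, the stored false positive rate can be recomputed directly from the familywise corrected p-values. A minimal sketch, assuming fwc_alpha = 0.05:

```matlab
% Recompute the false positive rate across simulated experiments:
% an experiment counts as a false positive if any voxel survives correction.
load(fullfile('fpr_simulation', ...
    'npc_fpr_1000_exp_10_part_80_vox_1614_tps_16_pred_2001_perms.mat'), ...
    'all_fwc_pvalues');
fwc_alpha   = 0.05;                                % assumed alpha level
significant = any(all_fwc_pvalues < fwc_alpha, 2); % experiments x 1 logical
fpr_check   = mean(significant);                   % should match the stored fpr variable
```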
Experiment Info
In the main folder, the file experiment_info.mat stores information about the fMRI and behavioral experiments that most scripts require. These details are the tagging categories (i.e., experiment_info.emotion_categories), the number of timepoints in the fMRI acquisition for each run (i.e., experiment_info.fmri_runs_duration), the sampling frequency in the behavioral experiment (i.e., experiment_info.behavioral_sampling_freq), and the temporal resolution of the fMRI acquisition in seconds (i.e., experiment_info.fmri_tr_in_sec).
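A minimal sketch of loading and inspecting these shared experiment details:

```matlab
% Load the experiment details used by most analysis scripts.
load('experiment_info.mat', 'experiment_info');
disp(experiment_info.emotion_categories);        % tagging categories
disp(experiment_info.fmri_runs_duration);        % timepoints per fMRI run
disp(experiment_info.behavioral_sampling_freq);  % behavioral sampling frequency (Hz)
disp(experiment_info.fmri_tr_in_sec);            % fMRI repetition time (s)
```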
Code/Software
In the main folder, the behavioral_parent_script.m and fmri_parent_script.m files contain examples to replicate the analyses reported in our work. For the code to run properly, you need SPM12 (https://www.fil.ion.ucl.ac.uk/spm/software/spm12/) and MATLAB Tools for NIfTI and ANALYZE image (https://it.mathworks.com/matlabcentral/fileexchange/8797-tools-for-nifti-and-analyze-image) in the MATLAB path. Also, the chaotic system toolbox (https://it.mathworks.com/matlabcentral/fileexchange/1597-chaotic-systems-toolbox) and the functions mpcdf.m, mpinv, and wachter.m (https://brainder.org/2021/04/05/how-many-principal-components/) should be added to the MATLAB path. For exporting figures we use the export_fig MATLAB package (https://it.mathworks.com/matlabcentral/fileexchange/23629-export_fig) and colorbrewer colormaps (https://it.mathworks.com/matlabcentral/fileexchange/45208-colorbrewer-attractive-and-distinctive-colormaps). All analyses are implemented in MATLAB R2022a.
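The path setup can be sketched as follows; all install locations below are placeholders to be replaced with your local paths:

```matlab
% Add the required dependencies to the MATLAB path.
% '/path/to/...' entries are placeholders for local install locations.
addpath('/path/to/spm12');                    % SPM12
addpath('/path/to/nifti_tools');              % Tools for NIfTI and ANALYZE image
addpath('/path/to/chaotic_systems_toolbox');  % chaotic systems toolbox
addpath('/path/to/brainder_functions');       % mpcdf.m, mpinv, wachter.m
addpath('/path/to/export_fig');               % export_fig package
addpath('/path/to/colorbrewer');              % colorbrewer colormaps
```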
Description of scripts and functions for the analysis of behavioral data:
-behavioral_parent_script.m
running the behavioral_parent_script.m file performs all behavioral analyses and creates the individual figures. Specifically, the script evaluates the most frequent emotions for each condition. It also generates emotion-by-emotion representational dissimilarity matrices (RDMs) from categorical annotations collected under the three experimental conditions and affective norms reported in Warriner et al., 2013. Then, it computes Kendall's tau correlation between all pairings of RDMs to assess the similarity of emotions in a latent space across conditions and between behavioral annotations and affective norms. Lastly, it computes principal components (PCs) from categorical ratings and estimates the correlation between PCs and behavioral valence ratings.
-behavioral/code/analyze_emotion_ratings.m
the analyze_emotion_ratings.m function takes as input the parent directory storing categorical annotations of emotions from multiple participants and produces single-participant and group-level timeseries (i.e., the number of participants reporting an emotional instance for each timepoint) of emotion ratings downsampled to fmri resolution.
usage:
[ratings,downsampled_ratings,aggregated_ratings] = analyze_emotion_ratings('behavioral/data/categorical_audio/',experiment_info.fmri_runs_duration,experiment_info.behavioral_sampling_freq,experiment_info.fmri_tr_in_sec)
-------
INPUT:
behavioral/data/categorical_audio/: path to behavioral data
experiment_info.fmri_runs_duration: duration of fmri runs
experiment_info.behavioral_sampling_freq: sampling frequency of behavioral data
experiment_info.fmri_tr_in_sec: repetition time of fmri scan
-------
OUTPUT:
ratings: each participant's emotion annotations not downsampled to the fmri temporal resolution
downsampled_ratings: each participant's emotion annotations downsampled to the fmri temporal resolution
aggregated_ratings: group-level emotion annotations downsampled to the fmri temporal resolution
-behavioral/code/analyze_valence_ratings.m
the analyze_valence_ratings.m function takes as input the parent directory storing valence scores from multiple participants and produces single-participant and group-level timeseries (i.e., the median valence for each timepoint) of valence downsampled to fmri resolution.
usage:
[ratings,downsampled_ratings,aggregated_ratings] = analyze_valence_ratings('behavioral/data/valence_audio/',experiment_info.fmri_runs_duration,experiment_info.behavioral_sampling_freq,experiment_info.fmri_tr_in_sec)
-------
INPUT:
behavioral/data/valence_audio/: path to behavioral data
experiment_info.fmri_runs_duration: duration of fmri runs
experiment_info.behavioral_sampling_freq: sampling frequency of behavioral data
experiment_info.fmri_tr_in_sec: repetition time of fmri scan
-------
OUTPUT:
ratings: each participant's valence ratings not downsampled to the fmri temporal resolution
downsampled_ratings: each participant's valence ratings downsampled to the fmri temporal resolution
aggregated_ratings: group-level valence ratings downsampled to the fmri temporal resolution
-behavioral/code/jaccard_agreement.m
this script computes the agreement between participants in emotion annotations and its significance. Agreement is computed using Jaccard coefficients, and significance is established through permutation testing. The script also produces the figures reported in the Supplementary Materials of our paper.
-behavioral/code/prepare_encoding_regressors_new.m
the prepare_encoding_regressors_new.m function creates the encoding model and its null version for voxelwise encoding analyses. This version of the script also computes principal components from categorical ratings of emotion and sets the optimal number of PCs using the Wachter method. Please note that the prepare_encoding_regressors.m function is an older version that does not include the Wachter method.
[encoding_matrix,encoding_matrix_null,encoding_matrix_pc,encoding_matrix_pc_null,pc_coefficients,explained_variance,n_optimal_components] = prepare_encoding_regressors_new(aggregated_ratings,agreement_threshold,hrf_convolution,hrf_parameters,fmri_tr_in_sec,add_intercept,scaling,n_perm,random_seed,do_pc)
-------
INPUT:
aggregated_ratings: group-level timeseries of emotion ratings (typically the output of analyze_emotion_ratings.m)
agreement_threshold: the minimum number of participants reporting an emotional instance in the same timepoint (e.g., 2)
hrf_convolution: if set to 'yes' then emotion ratings are convolved using a hemodynamic response function.
hrf_parameters: the parameters for hrf convolution (e.g., [6 16 1 1 6 0 32] the standard in SPM12).
fmri_tr_in_sec: the temporal resolution of the fmri acquisition.
add_intercept: if set to 'yes' the intercept is added to the encoding model.
scaling: if set to 'yes' the encoding matrix is scaled based on overall maximum agreement across subjects.
n_perm: number of permutations of the timepoints under the null hypothesis (e.g., 2000).
random_seed: the randomization seed for reproducibility (e.g., 15012018).
do_pc: if set to 'yes' the function also computes principal components and determines the optimal number of PCs using the Wachter method.
-------
OUTPUT:
encoding_matrix: the encoding matrix based on categorical ratings of emotion.
encoding_matrix_null: the null encoding matrix obtained from timepoint shuffling of the original encoding matrix.
encoding_matrix_pc: the encoding matrix based on principal components (i.e., dimensional model).
encoding_matrix_pc_null: the null encoding matrix based on principal components.
pc_coefficients: the coefficients of PCs.
explained_variance: the variance explained by each PC.
n_optimal_components: the optimal number of components according to the Wachter method.
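Putting the example values listed above together, a call could look like the following sketch (aggregated_ratings comes from analyze_emotion_ratings.m, and experiment_info from experiment_info.mat; the specific argument values are only the illustrative ones given above):

```matlab
% Build the categorical and PC-based encoding models plus their null versions,
% using the example values from the INPUT descriptions above.
[encoding_matrix, encoding_matrix_null, encoding_matrix_pc, encoding_matrix_pc_null, ...
 pc_coefficients, explained_variance, n_optimal_components] = ...
    prepare_encoding_regressors_new(aggregated_ratings, ...   % group-level ratings
        2, ...                          % agreement_threshold
        'yes', [6 16 1 1 6 0 32], ...   % hrf_convolution and SPM12 default HRF parameters
        experiment_info.fmri_tr_in_sec, ...
        'yes', 'yes', ...               % add_intercept, scaling
        2000, 15012018, 'yes');         % n_perm, random_seed, do_pc
```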
Description of scripts and functions for the analysis of fmri data:
-fmri_parent_script.m
running the fmri_parent_script.m file performs all fmri analyses. Specifically, it performs voxelwise encoding analysis at the single-participant level and estimates group-level significance using a non-parametric combination approach. Univariate conjunction analyses are also performed on group-level results obtained from all groups and conditions, and the script provides results for univariate contrasts between groups and conditions. It also performs multivoxel pattern classification of participants' sensory experience and stimulus modality, as well as crossdecoding of valence from regions encoding the emotion model.
-fmri/code/voxelwise_encoding_cluster_corr_new.m
the voxelwise_encoding_cluster_corr_new.m function performs voxelwise encoding at the single-participant level.
usage:
voxelwise_encoding_results = voxelwise_encoding_cluster_corr_new(encoding_matrix,encoding_matrix_null,p_forming_thr,nn_type,fwe_threshold,fmri_parent_directory,input_files,output_suffix,fmri_mask_file,demean_fmri,n_cpus,save_to_disk)
-------
INPUT:
encoding_matrix: the encoding matrix, typically the output of the prepare_encoding_regressors_new.m function.
encoding_matrix_null: permuted versions of the encoding matrix also coming from prepare_encoding_regressors_new.m function.
p_forming_thr: the cluster defining threshold (e.g., 0.001).
nn_type: the type of connection to determine a cluster. nn_type = 1 means connected if faces touch; nn_type = 2 means connected if faces or edges touch; nn_type = 3 means connected if faces, edges or corners touch.
fwe_threshold: the familywise corrected threshold (e.g., 0.05).
fmri_parent_directory: the parent directory storing preprocessed single-participant fmri data.
input_files: the name of nifti files (e.g., 'ctrl_sub-*_allruns-cleaned_reml2mni.nii').
output_suffix: the suffix of the output filename (e.g., '_encoding_results_categorical_nn1').
fmri_mask_file: a mask to limit the search for significance (e.g., 'fmri/data/gm_010_final.nii').
demean_fmri: if 'yes' mean center the fmri signal.
n_cpus: number of cpus for parallel computing.
save_to_disk: if 'yes' voxelwise encoding results are saved to disk.
-------
OUTPUT:
voxelwise_encoding_results: a matrix storing voxelwise encoding results for each voxel.
-fmri/code/voxelwise_encoding_group_analysis_npc.m
the voxelwise_encoding_group_analysis_npc.m function computes group results for categorical ratings using non-parametric combination.
usage:
voxelwise_group_results = voxelwise_encoding_group_analysis_npc(parent_dir,single_sub_matfiles,fmri_mask_file,output_filename,fwe_threshold,cluster_forming_thr,nn_type)
-------
INPUT:
parent_dir: the folder with MATLAB workspaces storing the results of the voxelwise encoding analysis for each participant. Typically the results of the voxelwise_encoding_cluster_corr_new.m function (e.g., 'fmri/data/audio/').
single_sub_matfiles: the prefix of single-participant MATLAB workspaces (e.g., 'ctrl_*_encoding_results_categorical_nn1.mat').
fmri_mask_file: a mask to limit the search for significance (e.g., 'fmri/data/gm_010_final.nii').
output_filename: the suffix of the output filename (e.g., 'ctrl_audio_encoding_npc_fwe_nn1.nii').
fwe_threshold: the familywise corrected threshold (e.g., 0.05).
cluster_forming_thr: the cluster defining threshold (e.g., 0.001).
nn_type: the type of connection to determine a cluster.
-------
OUTPUT:
voxelwise_group_results: a matrix storing the family-wise corrected pvalue for each voxel.
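The two steps above can be chained as in the following sketch, using only the example argument values listed in the INPUT descriptions (the encoding matrices come from prepare_encoding_regressors_new.m; the CPU count is arbitrary):

```matlab
% Step 1: single-participant voxelwise encoding with cluster correction.
voxelwise_encoding_results = voxelwise_encoding_cluster_corr_new( ...
    encoding_matrix, encoding_matrix_null, ...
    0.001, 1, 0.05, ...                              % cluster threshold, nn_type, FWE threshold
    'fmri/data/audio/', 'ctrl_sub-*_allruns-cleaned_reml2mni.nii', ...
    '_encoding_results_categorical_nn1', ...
    'fmri/data/gm_010_final.nii', 'yes', 4, 'yes');  % mask, demean, n_cpus, save

% Step 2: group-level significance via non-parametric combination.
voxelwise_group_results = voxelwise_encoding_group_analysis_npc( ...
    'fmri/data/audio/', 'ctrl_*_encoding_results_categorical_nn1.mat', ...
    'fmri/data/gm_010_final.nii', 'ctrl_audio_encoding_npc_fwe_nn1.nii', ...
    0.05, 0.001, 1);
```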
-fmri/code/conjunction_univariate.m
the conjunction_univariate.m function performs conjunctions between group-level results obtained from non-parametric combination. please refer to equations e-h in the manuscript for further details.
usage:
conjunction_results = conjunction_univariate(path_to_univariate_results,save_to_disk)
-------
INPUT:
path_to_univariate_results: specify the path pointing to all group-level voxelwise encoding results (e.g., 'fmri/data/*/*_encoding_npc_fwe_nn1.nii').
save_to_disk: if 'yes' conjunction results are saved to disk.
-------
OUTPUT:
conjunction_results: a structure storing results of univariate conjunction analyses.
-fmri/code/voxelwise_group_comparison.m
the voxelwise_group_comparison.m function compares the fitting obtained for the full emotion model between conditions and/or groups.
usage:
results = voxelwise_group_comparison(condition_a,condition_b,group_a,group_b,fmri_mask_file,nn_type,n_perm,cluster_forming_thr,number_of_cpus,save_to_disk)
-------
INPUT:
condition_a: the experimental condition of the first sample (e.g., audio).
condition_b: the experimental condition of the second sample (e.g., audio).
group_a: the sensory experience of the first sample (e.g., ctrl).
group_b: the sensory experience of the second sample (e.g., blind).
fmri_mask_file: a mask to limit the search for significance (e.g., 'fmri/data/gm_010_final.nii').
nn_type: the type of connection to determine a cluster.
n_perm: number of permutations for establishing statistical significance (e.g., 2000).
cluster_forming_thr: the cluster defining threshold (e.g., 0.001).
number_of_cpus: number of cpus for parallel computing.
save_to_disk: if 'yes' results are saved to disk.
-------
OUTPUT:
results: a structure storing results of univariate contrasts.
-fmri/code/mvpa_classification.m
the mvpa_classification.m function is used to classify participants' sensory experience and the modality through which the emotion elicitation paradigm was administered, based on regions encoding the emotion model. we use an SVM classifier and the F1 score as the performance metric.
usage:
mvpa_classifier_output = mvpa_classification(parent_dir, nn_type, roi_file, n_folds, n_features, n_perms, performance_metric, save_to_disk, output_filename)
-------
INPUT:
parent_dir: the directory storing the single-participant workspaces obtained from the voxelwise_encoding_cluster_corr_new.m (e.g., 'fmri/data').
nn_type: the connection type used to define a cluster in single-participant analyses (e.g., 'nn1').
roi_file: a mask to determine the voxels used to perform the classification (e.g., 'fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20.nii').
n_folds: number of folds for the cross-validation procedure (e.g., 5).
n_features: number of features (i.e., voxels) that are used in the classification analysis (e.g., 1000).
n_perms: number of permutations for establishing the statistical significance of the classification. (e.g., 2000).
performance_metric: the metric used to evaluate classifier performance (e.g., 'f1score').
save_to_disk: if 'yes' results are saved to disk.
output_filename: a filename for the results (e.g., 'fmri/data/mvpa_classifier_results').
-------
OUTPUT:
mvpa_classifier_output: a structure storing results of multivariate classification.
-fmri/code/crossdecoding_ridge.m
the crossdecoding_ridge.m function is employed to crossdecode valence from regions significantly associated with the emotion model. this tests whether some brain areas map valence in a supramodal manner. one can explore different masks to assess the spatial specificity of the results (e.g., the entire network encoding the emotion model, the mpfc roi).
usage:
crossdecoding_results = crossdecoding_ridge(fmri_data_dir, file_prefix, mask_file, random_seed, valence_data_dir, experiment_info, ridge_values, n_perm, save_output)
-------
INPUT:
fmri_data_dir: the directory storing preprocessed single-participant fmri data (e.g., 'fmri/data').
file_prefix: the prefix of single-participant nifti files (e.g., '_reml2mni.nii').
mask_file: a mask to determine the voxels used to crossdecode valence (e.g., 'fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20.nii').
random_seed: the randomization seed for reproducibility (e.g., 14051983).
valence_data_dir: the parent directory storing behavioral ratings of valence (e.g., 'behavioral/data').
experiment_info: the variable containing experiment details (this is the structure stored in the experiment_info.mat workspace).
ridge_values: the penalization values to be tested in the crossvalidation procedure (e.g., logspace(-3,2,1000)).
n_perm: number of permutations for establishing the statistical significance (e.g., 2000).
save_output: if 'yes' results are saved to disk.
-------
OUTPUT:
crossdecoding_results: a structure storing the results of the crossdecoding of valence.
-fmri/code/emotion_decoding_and_coefficients_similarity_in_mpfc.m
the emotion_decoding_and_coefficients_similarity_in_mpfc.m script performs the decoding of emotional instances from mpfc regression coefficients.
-fmri/code/multiclass_classifier_performance.m
the multiclass_classifier_performance.m function is used to estimate performance metrics in the context of multiclass classification.
usage:
[macro_metrics, weighted_metrics, micro_metrics, single_class_metrics, classifier_errors] = multiclass_classifier_performance(confusion_matrix)
-------
INPUT:
confusion_matrix: a confusion matrix resulting from a classification procedure.
-------
OUTPUT:
macro_metrics: stores - in the following order - the macro accuracy, the macro precision, the macro recall and the macro f1 score for all the evaluated confusion matrices.
weighted_metrics: stores - in the following order - the weighted average accuracy, precision, recall and f1 score for all the evaluated confusion matrices. the averages are weighted by the number of elements in each class.
micro_metrics: in multiclass classification micro precision, recall and f1 score are the same number. this is what micro_metrics stores for each evaluated confusion matrix.
single_class_metrics: stores - in the following order - the accuracy, precision, recall, and f1 score of each class and for all the evaluated confusion matrices.
classifier_errors: stores - in the following order - the number of true positives, true negatives, false positives and false negatives of each class and for all the evaluated confusion matrices.
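A minimal usage sketch with a hypothetical confusion matrix (the 3x3 values below are invented for illustration, and the row/column orientation is assumed to follow the function's convention):

```matlab
% Evaluate a toy 3-class confusion matrix with the function described above.
cm = [8 1 1; ...   % hypothetical counts for class 1
      2 7 1; ...   % hypothetical counts for class 2
      0 2 8];      % hypothetical counts for class 3
[macro_metrics, weighted_metrics, micro_metrics, single_class_metrics, classifier_errors] = ...
    multiclass_classifier_performance(cm);
```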
-fmri/code/neurosynth_vmpfc.m
the neurosynth_vmpfc.m script estimates the relationship between average activity of vmpfc and the emotion model.