Dual-feature selectivity enables bidirectional coding in visual cortical neurons
Data files
Nov 11, 2025 version (25 GB total)
- batch_001.zip (951.53 MB)
- batch_002.zip (951.83 MB)
- batch_003.zip (950.27 MB)
- batch_004.zip (951.44 MB)
- batch_005.zip (948.76 MB)
- batch_006.zip (952.83 MB)
- batch_007.zip (952.89 MB)
- batch_008.zip (953.29 MB)
- batch_009.zip (951.62 MB)
- batch_010.zip (951.12 MB)
- batch_011.zip (951.25 MB)
- batch_012.zip (951.82 MB)
- batch_013.zip (952.23 MB)
- batch_014.zip (949.65 MB)
- batch_015.zip (949.87 MB)
- batch_016.zip (949.70 MB)
- batch_017.zip (948.90 MB)
- batch_018.zip (949.57 MB)
- batch_019.zip (951.71 MB)
- batch_020.zip (953.51 MB)
- README.md (10.84 KB)
- v1_imagenet_ordered_indices.npz (1.96 GB)
- v1_imagenet_ordered_responses.npz (764.07 MB)
- v1_leis.npz (222.63 MB)
- v1_meis.npz (221.73 MB)
- v1_rendered_ordered_indices.npz (271.48 MB)
- v1_rendered_ordered_responses.npz (158.27 MB)
- v4_imagenet_ordered_indices.npz (901.28 MB)
- v4_imagenet_ordered_responses.npz (366.23 MB)
- v4_leis.npz (457.20 MB)
- v4_meis.npz (458.13 MB)
- v4_rendered_ordered_indices.npz (125.06 MB)
- v4_rendered_ordered_responses.npz (76.28 MB)
Abstract
This dataset contains neural recordings and computational analyses supporting the identification of dual-feature selectivity in visual cortex. We recorded spiking activity from macaque visual areas V1 (458 neurons) and V4 (394 neurons) while animals viewed naturalistic images, as well as from mouse visual cortex areas V1 (598 neurons), LM (350 neurons), and LI (126 neurons). Using functional digital twin models (deep learning-based predictive models trained on these recordings), we systematically characterized neuronal selectivity across the full dynamic range of responses. The dataset includes: (1) 200,000 synthetically rendered scenes (236×236 pixels, PNG format) used to probe neuronal responses; (2) optimized most and least exciting inputs (MEIs/LEIs) generated through gradient-based synthesis for each neuron; (3) indices identifying the most and least activating natural images (MAIs/LAIs) from large-scale screening of ImageNet and of the rendered scenes; (4) predicted neuronal activation profiles across all stimuli; and (5) metadata including baseline firing rates and response reliability metrics. These data reveal that many visual neurons exhibit bidirectional selectivity: they respond strongly to preferred features while being systematically suppressed by distinct non-preferred features, modulating around elevated baseline firing rates. This coding strategy appears conserved across species (macaque and mouse) and visual areas (from primary to higher-order cortex), suggesting a general principle of sensory coding that balances representational capacity with interpretable single-neuron responses.
Dataset DOI: 10.5061/dryad.q573n5tx3
Description of the data and file structure
This dataset contains the material needed to replicate the findings from "Dual-feature selectivity enables bidirectional coding in visual cortical neurons" (Franke, Karantzas et al., 2025). The data includes neuronal response predictions, optimized images, and naturalistic image sets used to characterize feature selectivity in macaque visual areas V1 and V4.
Files and variables
Files: batch_001.zip through batch_020.zip (20 files)
Description: This dataset contains 200,000 synthetically rendered images of 3D objects, generated using the Kubric framework. The images are distributed across 20 zip files (batch_001.zip through batch_020.zip), each containing 10,000 images. When extracted, all zip files create a unified directory structure with images consolidated in a single rendered_data folder.
Each image is a 236×236 pixel RGB PNG file depicting a single object from one of 10 object classes. The scenes are systematically varied to support robust computer vision research, with diversity along multiple dimensions:
- Scale: Objects appear at various sizes within the frame
- Orientation: Objects are rendered from multiple viewpoints and rotations
- Position: Objects are placed at different locations within the scene
- Lighting: Diverse lighting conditions and environments
- Texture: Each object instance is overlaid with a texture sampled from the Describable Textures Dataset (DTD), providing rich surface appearance variation
This controlled variability makes the dataset well-suited for tasks such as object recognition, texture analysis, scale-invariant feature learning, and robustness evaluation under varying visual conditions.
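To work with the rendered set locally, the 20 batches can be extracted into a single destination. The sketch below is a minimal example assuming the archives sit in the working directory; it relies only on the consolidated rendered_data folder described above (the exact image filenames inside it are not specified here, and the destination folder name is hypothetical):

```python
import zipfile
from pathlib import Path

from PIL import Image  # pip install pillow

# Extract all 20 batches; the archives share a unified directory
# structure, so extracting into one destination consolidates the
# images under a single rendered_data/ folder.
dest = Path("kubric_scenes")  # hypothetical destination folder
for i in range(1, 21):
    with zipfile.ZipFile(f"batch_{i:03d}.zip") as zf:
        zf.extractall(dest)

# Inspect one of the extracted PNGs (236x236 RGB).
first_png = next((dest / "rendered_data").rglob("*.png"))
img = Image.open(first_png).convert("RGB")
assert img.size == (236, 236)
```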
Predicted Neural Response Files
This dataset includes eight NPZ files containing predicted neuronal responses from the digital twin models to large-scale image screening:
File Structure
Each NPZ file contains keys in the format unit_0, unit_1, ..., unit_N, where N corresponds to the number of neurons in the recorded population. Each key maps to an array of predicted responses or image indices for that specific neuron.
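For example, the per-neuron arrays can be accessed by key with NumPy; np.load reads NPZ members lazily, so individual units can be pulled without decompressing the whole archive:

```python
import numpy as np

data = np.load("v1_imagenet_ordered_responses.npz")

print(len(data.files))                # number of unit_* keys, one per neuron
resp = data["unit_0"]                 # predicted responses for neuron 0
print(resp.shape, resp[0], resp[-1])  # sorted ascending: min first, max last
```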
V1 Response Files
v1_imagenet_ordered_responses.npz (443 neurons)
- Contains predicted firing rates for each V1 neuron in response to 1,281,167 ImageNet-1K training images
- Responses are sorted in ascending order for each neuron
- Used to identify Most Activating Images (MAIs) and Least Activating Images (LAIs) from natural image statistics
v1_imagenet_ordered_indices.npz (443 neurons)
- Contains the ImageNet image indices corresponding to the sorted responses in v1_imagenet_ordered_responses.npz
- Enables retrieval of the specific images that elicited particular response levels
- The bottom indices identify LAIs; the top indices identify MAIs
v1_rendered_ordered_responses.npz (443 neurons)
- Contains predicted firing rates for each V1 neuron in response to 200,000 synthetically rendered Kubric scenes
- Responses are sorted in ascending order for each neuron
- Rendered scenes provide controlled variation in object shape, scale, orientation, position, lighting, and texture
v1_rendered_ordered_indices.npz (443 neurons)
- Contains the rendered image indices corresponding to the sorted responses in v1_rendered_ordered_responses.npz
- Maps response rankings to specific rendered scenes
V4 Response Files
v4_imagenet_ordered_responses.npz (205 neurons)
- Contains predicted firing rates for each V4 neuron in response to 1,281,167 ImageNet-1K training images
- Responses are sorted in ascending order for each neuron
- V4 neurons exhibit selectivity for more complex visual features than V1
v4_imagenet_ordered_indices.npz (205 neurons)
- Contains the ImageNet image indices corresponding to the sorted responses in v4_imagenet_ordered_responses.npz
v4_rendered_ordered_responses.npz (205 neurons)
- Contains predicted firing rates for each V4 neuron in response to 200,000 synthetically rendered Kubric scenes
- Responses are sorted in ascending order for each neuron
v4_rendered_ordered_indices.npz (205 neurons)
- Contains the rendered image indices corresponding to the sorted responses in v4_rendered_ordered_responses.npz
Preprocessing
All images were preprocessed prior to response prediction:
- V1 images: Center-cropped to 167×167 pixels (2.65°), downsampled to 93×93 pixels, converted to grayscale, masked by the population receptive field, and normalized to an ℓ2 norm of 12.0
- V4 images: Center-cropped to 200×200 pixels (bottom center region), downsampled to 100×100 pixels, masked by the population receptive field, and normalized to an ℓ2 norm of 40.0
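As a rough illustration of the V1 pipeline, a minimal sketch follows. The receptive-field mask (rf_mask) is not shipped with this dataset and is assumed here to be a 93×93 array in [0, 1]; details such as the resampling filter are likewise assumptions:

```python
import numpy as np
from PIL import Image

def preprocess_v1(img: Image.Image, rf_mask: np.ndarray) -> np.ndarray:
    """Sketch of the V1 preprocessing described above; rf_mask is a
    hypothetical 93x93 population receptive-field mask in [0, 1]."""
    # Center-crop to 167x167 pixels (2.65 degrees of visual angle).
    w, h = img.size
    left, top = (w - 167) // 2, (h - 167) // 2
    img = img.crop((left, top, left + 167, top + 167))
    # Downsample to 93x93 and convert to grayscale (filter choice assumed).
    img = img.resize((93, 93), Image.LANCZOS).convert("L")
    x = np.asarray(img, dtype=np.float32)
    # Mask by the population receptive field, then fix the l2 norm at 12.0.
    x = x * rf_mask
    return 12.0 * x / np.linalg.norm(x)
```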
Usage
These files enable:
- Identification of neurons' most and least activating images at scale
- Analysis of dual-feature selectivity (excitatory and suppressive tuning)
- Construction of 2D similarity spaces based on MAI and LAI features (Figure 9)
- Population-level analysis of shared feature selectivity (Figure 10)
- Validation of model predictions against recorded responses (Figure 7)
The sorted structure allows efficient extraction of response extremes without loading entire response matrices into memory.
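A minimal sketch of that pattern, pulling the top and bottom K ImageNet images for one example neuron (mapping the indices back to image files depends on how ImageNet-1K is enumerated locally):

```python
import numpy as np

K = 16           # number of extremes per side
unit = "unit_0"  # example neuron

resp = np.load("v1_imagenet_ordered_responses.npz")[unit]
idx = np.load("v1_imagenet_ordered_indices.npz")[unit]

# Responses are sorted ascending, so the head of each array gives the
# Least Activating Images (LAIs) and the tail the Most Activating
# Images (MAIs) for this neuron.
lai_idx, lai_resp = idx[:K], resp[:K]
mai_idx, mai_resp = idx[-K:], resp[-K:]
```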
Optimized Synthetic Images (MEIs and LEIs)
This dataset includes four NPZ files containing gradient-optimized synthetic images that maximally excite or suppress individual neurons. These images were generated using the digital twin models as described in the "Optimization of most and least exciting images" section of the Methods.
File Structure
Each NPZ file contains three arrays per neuron unit:
- unit_N_images: Optimized image pixels (10 seeds × height × width × channels)
- unit_N_alphas: Optimization trajectory parameters (10 seeds × optimization_steps)
- unit_N_activations: Predicted neuronal responses during optimization (10 seeds × optimization_steps)
where N corresponds to the neuron index. Each neuron has 10 independent optimization runs starting from different random noise initializations (seeds) to ensure feature robustness and avoid local minima.
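For instance, the 10 optimization seeds for one neuron can be loaded and ranked by their final predicted activation (key names follow the format above):

```python
import numpy as np

meis = np.load("v1_meis.npz")

n = 0  # example neuron index
images = meis[f"unit_{n}_images"]            # (10, height, width, channels)
activations = meis[f"unit_{n}_activations"]  # (10, optimization_steps)

# Rank seeds by the activation reached at the final optimization step.
best_seed = int(np.argmax(activations[:, -1]))
best_mei = images[best_seed]
print(images.shape, "best seed:", best_seed)
```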
V1 Optimization Files
v1_meis.npz (443 neurons, 10 seeds each)
- Most Exciting Inputs (MEIs) for V1 neurons
- Gradient-optimized images that maximize predicted neuronal response
- Generated through 256 optimization steps using the Adam optimizer (learning rate: 0.05)
- Optimization performed in pixel space with direct modification of grayscale values
- Images are 93×93 pixels, normalized to ℓ2 norm of 12.0
- Multi-crop augmentation applied during optimization (4 random crops per step)
- MEIs reveal orientation, spatial frequency, and phase selectivity (Figure 4)
v1_leis.npz (443 neurons, 10 seeds each)
- Least Exciting Inputs (LEIs) for V1 neurons
- Gradient-optimized images that minimize predicted neuronal response
- Same optimization procedure as MEIs, but with gradient descent to minimize activation
- LEIs reveal structured suppressive features, including orthogonal orientations, shifted spatial frequencies, and alternative texture patterns
- Demonstrate feature-specific inhibition beyond classical cross-orientation suppression
V4 Optimization Files
v4_meis.npz (205 neurons, 10 seeds each)
- Most Exciting Inputs (MEIs) for V4 neurons
- Gradient-optimized images that maximize predicted neuronal response
- Generated through 256 optimization steps using the Adam optimizer (learning rate: 0.05)
- Optimization performed in the Fourier frequency domain (phase spectrum only)
- Amplitude spectrum constrained to match the mean amplitude from 10,000 ImageNet images
- Images are 100×100 pixels (RGB), normalized to ℓ2 norm of 40.0
- Multi-crop augmentation applied during optimization
- MEIs reveal complex features including curved contours, textured surfaces, color combinations, eye-like configurations, and branching structures (Figure 5)
v4_leis.npz (205 neurons, 10 seeds each)
- Least Exciting Inputs (LEIs) for V4 neurons
- Gradient-optimized images that minimize predicted neuronal response
- Same optimization procedure as MEIs, but minimizing activation
- LEIs show coherent alternative feature configurations: different contour arrangements, color combinations, and texture patterns
- Demonstrate systematic suppression of distinct non-preferred features in V4
Optimization Details
- Algorithm: Gradient-based optimization using the digital twin models
  - V1 model: ConvNeXt-v2-tiny architecture with Gaussian readout
  - V4 model: Adversarially trained ResNet50 with Gaussian readout
- Starting point: Random noise images
- Steps: 256 iterations
- Optimizer: Adam with learning rate 0.05
- Augmentation: Multi-crop strategy with 4 crops per iteration, Gaussian-distributed centers (μ=0.5, σ=0.15)
- Constraint: Fixed ℓ2 norm matching natural image preprocessing
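The loop below is a minimal PyTorch sketch of this procedure under the listed settings; `model` (a digital twin mapping images to one neuron's predicted activation) is assumed, and the multi-crop augmentation is approximated with circular shifts rather than the released crop-and-resize code (see the repository linked under Code Availability):

```python
import torch

def optimize_image(model, maximize=True, n_steps=256, lr=0.05,
                   n_crops=4, size=93, norm=12.0):
    """Sketch of MEI/LEI optimization; not the released implementation."""
    x = torch.randn(1, 1, size, size, requires_grad=True)  # random-noise start
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        # Multi-crop augmentation, approximated here by jittering the
        # image with Gaussian-distributed offsets (sigma = 0.15 of the
        # image size, centered on the image).
        shifted = []
        for _ in range(n_crops):
            dy, dx = (torch.randn(2) * 0.15 * size).round().int().tolist()
            shifted.append(torch.roll(x, shifts=(dy, dx), dims=(2, 3)))
        activation = model(torch.cat(shifted)).mean()
        loss = -activation if maximize else activation  # MEI vs. LEI
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.mul_(norm / x.norm())  # project back onto the fixed l2 norm
    return x.detach()
```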
Validation
The optimized images were validated using:
- In vivo recordings: Model-predicted extreme stimuli accurately ranked in recorded response distributions (Figure 7)
- Independent evaluator models: Cross-model validation confirmed MEIs/LEIs reflected genuine neuronal tuning rather than optimization artifacts (Figure 8)
Usage
These files enable:
- Visualization of preferred and non-preferred features for individual neurons (Figures 4, 5)
- Analysis of dual-feature selectivity across the neuronal population
- Comparison of excitatory and suppressive feature structure
- Investigation of systematic relationships between MEIs and LEIs
- Verification that multiple optimization seeds converge to similar feature representations
Code Availability
Code for generating MEIs and LEIs, including the optimization procedures and augmentation strategies, is available at: https://github.com/enigma-brain/dualneuron
Related Files
These optimized images complement the large-scale screening results in:
- v1_imagenet_ordered_*.npz and v4_imagenet_ordered_*.npz (Most/Least Activating Images from natural images)
- v1_rendered_ordered_*.npz and v4_rendered_ordered_*.npz (Most/Least Activating Images from rendered scenes)
Together, the gradient-optimized (MEI/LEI) and screened (MAI/LAI) approaches provide converging evidence for dual-feature selectivity in visual cortex neurons.
