Collective detection based on visual information in animal groups

Cite this dataset

Davidson, Jacob D. et al. (2021). Collective detection based on visual information in animal groups [Dataset]. Dryad. https://doi.org/10.5061/dryad.sbcc2fr2h

Abstract

We investigate key principles underlying individual and collective visual detection of stimuli, and how these relate to the internal structure of groups. While the individual and collective detection principles are generally applicable, we employ a model experimental system of schooling golden shiner fish (Notemigonus crysoleucas) to relate theory directly to empirical data, using computational reconstruction of the visual fields of all individuals. This reveals how the external visual information available to each group member depends on the number of individuals in the group, the position within the group, and the location of the external visually detectable stimulus. We find that in small groups, individuals have detection capability in nearly all directions, while in large groups, occlusion by neighbours causes detection capability to vary with position within the group. To understand the principles that drive detection in groups, we formulate a simple, generally applicable model that captures how visual detection properties emerge from geometric scaling of the space occupied by the group and occlusion caused by neighbours. We use these insights to discuss principles that extend beyond our specific system, such as how collective detection depends on individual body shape and on the size and structure of the group.

Methods

PI: Iain D. Couzin, Max Planck Institute of Animal Behavior, icouzin@ab.mpg.de

The videos used in this analysis come from a previous study:
Katz, Y., Tunstrøm, K., Ioannou, C.C., Huepe, C., Couzin, I.D., 2011. Inferring the structure and dynamics of interactions in schooling fish. Proceedings of the National Academy of Sciences 108, 18720–18725. https://doi.org/10.1073/pnas.1107583108

Usage notes

File contents
saved_data_and_results.zip:
Processed results needed to reproduce the results in the paper. See the following IPython notebooks to read in these results and make the plots:
'4 - Data figures.ipynb': https://github.com/jacobdavidson/collectivedetection/blob/main/4%20-%20Data%20figures.ipynb
'5 - Model.ipynb': https://github.com/jacobdavidson/collectivedetection/blob/main/5%20-%20Model.ipynb

How to read in the .pkl and .pklz files in this folder (also see the IPython notebooks above):

import gzip
import pickle

# Visual detection results per trial (gzipped pickle)
[grid_allseen, grid_allseen_problow, grid_allseen_probhigh, grid_allseen_blind,
 processing_skipvalue] = pickle.load(gzip.open('detectionresults.pklz', 'rb'))

# Processed track data: pairwise distances, group state, headings, orientations,
# body and eye coordinates (gzipped pickle)
[grid_frontbackdist, grid_sidesidedist, grid_groupstates, grid_groupheading,
 grid_orientations, grid_tailcoords, grid_positions, grid_groupcentroid,
 grid_lefteye, grid_righteye] = pickle.load(gzip.open('data-grid.pklz', 'rb'))

# Histogram bins, distributions, and example data used for the figures
[degreebins, distbins, d1D, exampledata] = pickle.load(open('distributions+exampledata.pkl', 'rb'))

# Individual- and group-level means and standard deviations
[indiv_mean, indiv_std, group_mean, group_std] = pickle.load(open('ind-groupmeans.pkl', 'rb'))
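A quick way to sanity-check a download before plotting (a minimal sketch; the file name and the five-element structure come from the lines above, while the printed summary is purely illustrative):

import gzip
import pickle

names = ['grid_allseen', 'grid_allseen_problow', 'grid_allseen_probhigh',
         'grid_allseen_blind', 'processing_skipvalue']
objects = pickle.load(gzip.open('detectionresults.pklz', 'rb'))
assert len(objects) == len(names)  # five objects, as listed above
for name, obj in zip(names, objects):
    print(name, type(obj))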

Individual trial zip files 
10-fish-0066.zip, 10-fish-0105.zip, 10-fish-0126.zip, 30-fish-0084.zip, 30-fish-0115.zip, 30-fish-0120.zip, 70-fish-0103.zip, 70-fish-0107.zip, 70-fish-0124.zip, 150-fish.zip 

See the associated code files to read in and process these data, run the updated visual detection algorithm, and save the results in condensed form (a sketch for running the notebooks non-interactively follows this list):
1 - Process data.ipynb:
https://github.com/jacobdavidson/collectivedetection/blob/main/1%20-%20Process%20data.ipynb
2 - Detection-run simulation.ipynb:
https://github.com/jacobdavidson/collectivedetection/blob/main/2%20-%20Detection-run%20simulation.ipynb
3 - Process Detection results.ipynb:
https://github.com/jacobdavidson/collectivedetection/blob/main/3%20-%20Process%20Detection%20results.ipynb
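
The numbering gives the intended running order. If you prefer to execute the notebooks non-interactively, a minimal sketch using jupyter nbconvert is below (this workflow is an assumption about your setup, not something the repository prescribes; it requires Jupyter to be installed):

import subprocess

notebooks = [
    '1 - Process data.ipynb',
    '2 - Detection-run simulation.ipynb',
    '3 - Process Detection results.ipynb',
]
for nb in notebooks:
    # execute each notebook in place, in order
    subprocess.run(['jupyter', 'nbconvert', '--to', 'notebook',
                    '--execute', '--inplace', nb], check=True)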

Each zip file contains a folder with video files and the associated tracking .h5 files,
e.g. for the trial labeled 0066 with 10 fish (10-fish-0066.zip), the files are
0066_000-calib.mov
0066_000_fov.h5
0066_001-calib.mov
0066_001_fov.h5
0066_002-calib.mov
0066_002_fov.h5
0066_003-calib.mov
0066_003_fov.h5
0066_004-calib.mov
0066_004_fov.h5
0066_005-calib.mov
0066_005_fov.h5
0066_006-calib.mov
0066_006_fov.h5
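
A minimal sketch for pairing each tracking file with its video after unzipping (the folder name is hypothetical; the pairing simply follows the file-name pattern above):

import glob
import os

trial_dir = '10-fish-0066'  # hypothetical folder name after extracting 10-fish-0066.zip
for h5_path in sorted(glob.glob(os.path.join(trial_dir, '*_fov.h5'))):
    # each segment's tracking file NNNN_SSS_fov.h5 pairs with the video NNNN_SSS-calib.mov
    mov_path = h5_path.replace('_fov.h5', '-calib.mov')
    print(h5_path, '<->', mov_path, 'video present:', os.path.exists(mov_path))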

For 150-fish, there is only one trial, and the data are already joined into a single large .h5 file.

Each .h5 file contains the output for fish tracked with Schooltracker (Couzin lab, Princeton University), manually corrected to maintain the identities of individual fish over the trial. These files also contain angular-area estimates from the Fovea software (Colin Twomey); however, in the 2021 publication in the Journal of the Royal Society Interface, only the following fields from the .h5 files are used:
- /fields/x, /fields/y:  x and y head positions of tracked fish
- /fields/heading_x, /fields/heading_y:  heading of each fish
- /fields/body_midline_x, /fields/body_midline_y:  fish midline x and y coordinates 
- /fields/body_length:  fish body length
- /fields/left_eye_x, /fields/left_eye_y, /fields/right_eye_x, /fields/right_eye_y:  tracked positions of the left and right eyes of each fish
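
A minimal sketch for reading these fields (using h5py is an assumption about your toolchain, and the array shapes are illustrative; the notebooks above show the full processing):

import h5py
import numpy as np

with h5py.File('0066_000_fov.h5', 'r') as f:
    x = f['/fields/x'][:]            # head x position of each tracked fish
    y = f['/fields/y'][:]            # head y position
    heading = np.stack([f['/fields/heading_x'][:],
                        f['/fields/heading_y'][:]], axis=-1)
    body_length = f['/fields/body_length'][:]
    left_eye = np.stack([f['/fields/left_eye_x'][:],
                         f['/fields/left_eye_y'][:]], axis=-1)
    right_eye = np.stack([f['/fields/right_eye_x'][:],
                          f['/fields/right_eye_y'][:]], axis=-1)

print(x.shape, heading.shape)  # dimensions depend on the number of fish and frames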

Funding

Heidelberg Academy of Sciences and Humanities

Office of Naval Research, Award: N00014-19-1-2556

Deutsche Forschungsgemeinschaft, Award: EXC 2117-422037984

National Science Foundation, Award: Graduate Research Fellowship

National Science Foundation, Award: IOS-1355061

MindCORE

Struktur- und Innovationsfonds für die Forschung of the State of Baden-Württemberg

Max Planck Institute of Animal Behavior