Data and code from: 3D-SOCS: synchronized video capture for posture estimation
Data files
Apr 24, 2025 version files (7.52 GB total):
- README.md (18.79 KB)
- reproducibility_3D_SOCS.zip (1.59 GB)
- SampleDataset.zip (5.69 GB)
- supplementary_videos.zip (243.13 MB)
Abstract
This repository provides the data and code necessary to reproduce the manuscript "Peering into the world of wild passerines with 3D-SOCS: synchronized video capture for posture estimation". It also contains sample datasets for running the code, along with bounding box and keypoint annotations.
The collection of large behavioral datasets on wild animals in natural habitats is vital for studies in ecology and evolution. Recent progress in machine learning and computer vision, combined with inexpensive microcomputers, has unlocked a new frontier of fine-scale markerless measurement.
Here, we leverage these advancements to develop a 3D Synchronized Outdoor Camera System (3D-SOCS): an inexpensive, mobile, and automated method for collecting behavioral data on wild animals using synchronized video frames from Raspberry Pi-controlled cameras. Accuracy tests demonstrate that 3D-SOCS' markerless tracking can estimate postures with a 3 mm tolerance.
To illustrate its research potential, we place 3D-SOCS in the field and conduct a stimulus presentation experiment. We estimate 3D postures and trajectories for multiple individuals of different bird species, and use these data to characterize the visual field configuration of wild great tits (Parus major), a model species in behavioral ecology. We find their optic axes at approximately ±60° azimuth and −5° elevation. Furthermore, birds exhibit functional lateralization, favoring the right eye when viewing a conspecific stimulus, and show individual differences in lateralization. We also show that birds' convex hulls predict body weight, highlighting 3D-SOCS' potential for non-invasive population monitoring.
3D-SOCS is a first-of-its-kind camera system for research on wild animals, with exciting potential to measure fine-scale behavior and morphology in wild birds.
This repository provides the data and code necessary to reproduce the manuscript “Peering into the world of wild passerines with 3D-SOCS: synchronized video capture for posture estimation” by **Michael Chimento**, **Alex Hoi Hang Chan**, Lucy M. Aplin & Fumihiro Kano. Bold denotes co-first authorship. Note: this is separate from the code necessary to run 3D-SOCS yourself, which can be found at this GitHub repository.
The 3D tracking pipeline and system accuracy validation were performed in Python; any questions about these should be directed to Alex Chan (hoi-hang.chan at uni-konstanz.de). Bayesian statistical analyses, figures, and tables were all produced in R; any questions about these (or about the Python scripts that control the data collection system) should be directed to Michael Chimento (mchimento at ab.mpg.de). We provide required packages, directory contents, and column descriptions for all analyses below.
Contents of reproducibility_3D_SOCS.zip
Directory | Description |
---|---|
analysis_python | Analysis scripts run in Python, including the system accuracy test reproduction, gaze calculation, and saccade analysis |
analysis_R | Analysis scripts run in R, including statistical models and figures |
data | All input data for reproducing results |
output | Output directory for all results and figures |
Util | Utility Python scripts used by the Python analyses |
equipment_list.ods | Spreadsheet of the equipment we used, along with prices and links (working as of June 2024) |
R Requirements
To reproduce the R code, the following packages are required (versions tested):
tidyverse==2.0.0
ggpubr==0.6.0
ggExtra==0.10.1
MASS==7.3-60.0.1
rethinking==2.40
kableExtra==1.4.0
HDInterval==0.2.4
Python Requirements
To reproduce the Python code, the following packages are required (versions tested):
python==3.8
opencv-python==4.9.0.80
numpy==1.24.4
pandas==1.3.5
matplotlib==3.7.5
natsort==8.4.0
scipy==1.10.1
tqdm==4.66.4
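
As a quick sanity check (a minimal sketch, not part of the original pipeline), the installed versions can be printed and compared against the list above:

```python
# Print installed versions of the tested packages for comparison
# against the versions listed above.
import cv2          # opencv-python
import matplotlib
import numpy
import pandas
import scipy

for name, module in [("opencv-python", cv2), ("numpy", numpy), ("pandas", pandas),
                     ("matplotlib", matplotlib), ("scipy", scipy)]:
    print(f"{name}=={module.__version__}")
```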
Analysis
Contents of R code in analysis_R
Filename | Description |
---|---|
Data_prepare_for_mixture_model.R | Preprocess data for fitting the Gaussian mixture model for azimuth/elevation |
Fig2_heatmap_histogram.R | Create Figure 2: heatmap and histogram of azimuth/elevation |
Fig3_azimuth_over_time.R | Create Figure 3: azimuth over trial time |
Model_azimuth_elevation.R | Fit the Gaussian mixture model for azimuth/elevation |
Model_azimuth_elevation_validation.R | Validate the Gaussian mixture model for azimuth/elevation with simulated data |
Model_lateralization.R | Fit the logistic model of eye lateralization; create the bottom of Table 1, Table S10, and Figure S8 |
Model_hull_weight.R | Fit the linear model of body weight predicted by estimated convex hull; create Tables S12 and S13, Figure 4, and Figure S9 |
Table_azimuth_elevation.R | Create Table S6 and the top of Table 1 |
SystemAccuracyTests.R | Summarize all results of the system accuracy tests from the Python scripts' output; generate results for Tables 2, S3, S4, S6, S7, S8, and Figure S4 |
Contents of Stan code in analysis_R/stan_models
Filename | Description |
---|---|
azimuth_elevation.stan | Stan code for the mixture model that estimates head azimuth/elevation and mixing proportions per stimulus |
azimuth_elevation_grand_mean.stan | Stan code for the mixture model that estimates head azimuth/elevation and mixing proportions pooled across all stimuli |
Contents of Python code in analysis_python
Filename | Description |
---|---|
RunSystemAccuracy_wild.py | Runs the system accuracy test for data collected in the wild; simulates different numbers of cameras and computes accuracies |
RunSystemAccuracy_barn.py | Runs the system accuracy test for data collected in the barn; simulates different numbers of cameras and computes accuracies |
RunSystemAccuracy_barn_stereo.py | Runs the system accuracy test for data collected in the barn; modify line 24 between “defualt” and “stereo” to evaluate all cameras or only the stereo cameras |
GazeAnalysisPipeline.py | Runs the gaze analysis using all 3D tracking results from 3D-SOCS; generates JoinedGaze_3s.csv for further analysis in R |
SaccadeAnalysis.py | Extracts saccades and generates Figure S3 |
Data
Contents in data/cleaned_rda
Filename | Description |
---|---|
df_look3s.rda | Main dataframe for the stimulus experiment; each row represents one frame of video data |
VolumeComparison.rda | Dataframe with estimated convex hull volumes used in the population monitoring section |
Column names/descriptions for df_look3s.rda.
Column | Description |
---|---|
Date | String: YYYY_MM_DD of observation |
DateEvent | Integer: running index of event for that day |
VideoName | String: filename of associated video |
Stim | String: stimulus name |
VideoFrame | Integer: running index of frame number for event |
RealTime | String: HH:MM:SS.fff time of the frame |
StimTriggerTime | String: time stimulus was triggered within that event. |
Azimuth | Real: Measured head azimuth in degrees |
Elevation | Real: Measured head elevation in degrees |
StartFrame | Integer: Start frame of event |
EndFrame | Integer: End frame of event |
DistanceObject | Real: distance from stimulus |
BodyAzimuth | Real: Measured body azimuth in degrees |
HeadPos_x | Real: x coordinate of head position |
HeadPos_y | Real: y coordinate of head position |
HeadPos_z | Real: z coordinate of head position |
Species | String: species of individual |
Ring | String: ring number of individual |
Sex | String: Sex of individual (M,F,Unknown) |
Last_captured | String: date of last measurement |
Site | String: location where last caught |
Age | Integer: Age of bird in years |
AbsAzm | Real: absolute value of Azimuth |
numeric_ID | Integer: numeric representation of Ring |
radian_azimuth | Real: Head azimuth in radians |
radian_elevation | Real: Head elevation in radians |
numeric_trial | Integer: running index of event for that individual across all days |
numeric_stim | Integer: numeric representation of Stim |
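
For illustration, the .rda files can also be read from Python; this is a minimal sketch assuming the pyreadr package (not listed in the requirements above) and assuming the stored R object is named df_look3s:

```python
# Hypothetical sketch: load df_look3s.rda from Python with pyreadr
# (pyreadr is an extra dependency, not listed in the requirements above).
import numpy as np
import pyreadr

result = pyreadr.read_r("data/cleaned_rda/df_look3s.rda")
df = result["df_look3s"]  # assumed object name inside the .rda file

# Consistency check implied by the column descriptions:
# radian_azimuth should be Azimuth converted from degrees to radians.
print(np.allclose(df["radian_azimuth"], np.deg2rad(df["Azimuth"]), equal_nan=True))
```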
Column names/descriptions for VolumeComparison.rda.
Column | Description |
---|---|
Species | String: species |
Sex | String: Sex |
PIT | String: Bird identity |
mean_wing | Real: Mean wing length (mm) taken from measurements when not nestling |
mean_tarsus | Real: Mean tarsus length (mm) taken from measurements when not nestling |
mean_weight | Real: Mean body weight (g) taken from measurements when not nestling |
BodyConvexVolume | Real: Estimated convex hull volume of the body in mm^3 |
HeadConvexVolume | Real: Estimated convex hull volume of the head in mm^3 |
BodySurfaceArea | Real: Estimated convex hull surface area of the body in mm^2 |
HeadSurfaceArea | Real: Estimated convex hull surface area of the head in mm^2 |
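
For intuition about the population monitoring result, a simple ordinary least-squares fit of weight against convex hull surface area can be sketched in Python; the actual analysis is the Bayesian linear model in Model_hull_weight.R, so this stand-in (again assuming pyreadr and the object name VolumeComparison) is only illustrative:

```python
# Illustrative OLS stand-in for the hull-weight model fitted in R.
import numpy as np
import pyreadr

df = pyreadr.read_r("data/cleaned_rda/VolumeComparison.rda")["VolumeComparison"]
x = df["BodySurfaceArea"].to_numpy(dtype=float)  # mm^2
y = df["mean_weight"].to_numpy(dtype=float)      # g

slope, intercept = np.polyfit(x, y, deg=1)  # highest degree first
print(f"weight ~ {intercept:.2f} + {slope:.6f} * BodySurfaceArea")
```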
Contents in data/fits
This folder contains Bayesian model fits from Stan.
Filename | Description |
---|---|
fit_lateralization/fit_lateralization.rda | Model fit for logistic model of lateralization |
fit_lateralization/ulam_cmdstanr….csv | Saved samples from fit_lateralization.rda |
fit_mixture.rda | Model fit for the Gaussian mixture model of visual field usage |
fit_mixture_gm.rda | Model fit for the “grand mean” Gaussian mixture model of visual field usage |
fit_validation.rda | Model fit for the validation of the Gaussian mixture model |
fit_hull_weight.rda | Model fit for the linear model where weight is predicted by convex hull surface area |
Contents in data/3DTracking
The 3D tracking dataset contains all 3D coordinates used in the 3D-SOCS paper, organized into sub-trials. The dataset is structured as follows:
Data
└── 3DTracking
    └── {Date}
        └── {trials}
            ├── metadata.csv
            └── data
                ├── Out3DDict.csv
                ├── Out3DDict.p
                ├── NumCam2_Out3DDict.csv
                ├── NumCam2_Out3DDict.p
                ├── ...
                ├── NumCam6_Out3DDict.csv
                └── NumCam6_Out3DDict.p
Each trial folder follows the naming convention “bird_{PIT-ID}_trial_{trial number}_YYYY-MM-DD HH_MM_SS.FFFFFF”. PIT-ID is the unique alphanumeric ID from the bird's passive integrated transponder, and the trial number is a running tally of the unique visits to 3D-SOCS each day.
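
As an illustration, the PIT-ID, trial number, and timestamp can be recovered from a folder name with a regular expression; a minimal sketch, assuming the naming convention above holds exactly (the example folder name is hypothetical):

```python
# Parse trial folder names of the form
# "bird_{PIT-ID}_trial_{trial number}_YYYY-MM-DD HH_MM_SS.FFFFFF".
import re
from datetime import datetime

def parse_trial_folder(name):
    m = re.match(r"bird_([A-Za-z0-9]+)_trial_(\d+)_(.+)$", name)
    if m is None:
        raise ValueError(f"unexpected folder name: {name}")
    pit_id, trial, timestamp = m.groups()
    return pit_id, int(trial), datetime.strptime(timestamp, "%Y-%m-%d %H_%M_%S.%f")

# Hypothetical example:
print(parse_trial_folder("bird_0123ABCD_trial_4_2023-11-01 09_15_30.123456"))
```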
Column names/descriptions for metadata.csv. Contains frame acquisition metadata for each sub-trial.
Column name | Description |
---|---|
“mill1/2/3/4/5/6” | Exact date-time at which each frame was captured by the corresponding camera |
“stimulus_on” | Boolean (True/False) specifying whether the stimulus is presented |
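
For example, the per-camera timestamps can be used to quantify how tightly the six cameras are synchronized; a minimal sketch, assuming the mill1–mill6 columns parse as datetimes:

```python
# Per-frame synchronization spread across the six cameras, from metadata.csv.
import pandas as pd

cams = [f"mill{i}" for i in range(1, 7)]
meta = pd.read_csv("metadata.csv", parse_dates=cams)

# Milliseconds between the earliest and latest capture time of each frame.
spread_ms = (meta[cams].max(axis=1) - meta[cams].min(axis=1)).dt.total_seconds() * 1000
print(spread_ms.describe())
```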
Column names/descriptions for CSV files (NumCam[2-6]_Out3DDict.csv and Out3DDict.csv).
Contains 3D tracking results for each sub-trial. Out3DDict is identical to NumCam6_Out3DDict and contains the data used in the final gaze analysis presented in the manuscript. The other NumCam files were used for accuracy validation with an artificially reduced number of cameras.
Column name | Description |
---|---|
“Frame” | Frame Number |
“ID” | Arbitrary tracking ID for a bird within a trial |
“Keypoint” | Name of detected keypoint |
“x” | x coordinate of the keypoint, in the world coordinate system |
“y” | y coordinate of the keypoint, in the world coordinate system |
“z” | z coordinate of the keypoint, in the world coordinate system |
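
As a sketch, the long-format CSV can be reshaped so that each (Frame, ID) pair becomes one row with keypoint coordinates as columns:

```python
# Pivot the long-format 3D tracking CSV to wide format:
# one row per (Frame, ID), one column per keypoint/axis combination.
import pandas as pd

df = pd.read_csv("Out3DDict.csv")
wide = df.pivot_table(index=["Frame", "ID"], columns="Keypoint",
                      values=["x", "y", "z"])
print(wide.head())
```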
Names/descriptions for pickle files (NumCam[2-6]_Out3DDict.p and Out3DDict.p). Contains 3D tracking results for each sub-trial.
- Same data as the CSV files above, but in pickle format; each can be loaded as a nested Python dictionary: {frame:{id:{keypoint:[x,y,z]}}}
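
A minimal loading sketch, assuming the nested dictionary layout above:

```python
# Load a 3D tracking pickle and walk the nested dictionary
# {frame: {id: {keypoint: [x, y, z]}}}.
import pickle

with open("Out3DDict.p", "rb") as f:
    tracks = pickle.load(f)

first_frame = min(tracks)  # earliest frame key
for bird_id, keypoints in tracks[first_frame].items():
    for name, (x, y, z) in keypoints.items():
        print(first_frame, bird_id, name, x, y, z)
```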
Contents in data/SystemAccuracy
All data required to reproduce the system accuracy test comparing 3D-SOCS and SMART-BARN. The dataset is structured as follows:
data
└── SystemAccuracy
    ├── Raspi
    │   ├── 2024_02_13
    │   │   ├── data
    │   │   └── data_stereo
    │   └── 2024_02_13_wide
    │       ├── data
    │       └── data_stereo
    ├── Vicon
    ├── Wild
    └── MetaData.csv
Column names/descriptions for MetaData.csv. Contains metadata for each trial collected in the system accuracy test.
Column name | Description |
---|---|
“TrialName” | Name of the video taken using 3D-SOCS |
“TrialFolder” | Folder of the 3D-SOCS trial, either 2024_02_13 (narrow FOV) or 2024_02_13_wide (wide FOV) |
“ViconTrial” | Name of recording from SMART-BARN |
“TrialType” | Type of trial. “DefaultQR” and “DefaultBird” are short sequences capturing the original positions of the QR code and the bird, while “Bird” denotes the main validation trials with the taxidermy great tit |
Contents for Raspi/{date}/data/. Contains calibration and 3D tracking results from 3D-SOCS for each trial.
- All data files used to reproduce the system accuracy test
- 3D tracking trials are stored as pickles and can be loaded as a Python dictionary: {frame:{id:{keypoint:[x,y,z]}}}
- Intrinsic and extrinsic parameters for each camera are also stored as pickles (see the reprojection sketch below)
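
As an illustration of how such calibration data is typically used, the sketch below reprojects a 3D point into one camera view with OpenCV; the filename and the keys inside the calibration pickle are assumptions, not a documented layout of this dataset, so inspect the pickles before relying on this:

```python
# Hypothetical reprojection sketch: project one 3D world point into a camera.
# The pickle layout (rvec, tvec, camera_matrix, dist_coeffs) is assumed.
import pickle

import cv2
import numpy as np

with open("calibration_cam1.p", "rb") as f:  # hypothetical filename
    calib = pickle.load(f)

point_3d = np.array([[0.0, 0.0, 500.0]])  # one point in world coordinates
image_point, _ = cv2.projectPoints(
    point_3d,
    calib["rvec"], calib["tvec"],                  # extrinsics (assumed keys)
    calib["camera_matrix"], calib["dist_coeffs"],  # intrinsics (assumed keys)
)
print(image_point.squeeze())  # 2D pixel coordinates
```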
Contents for Vicon. All data and measurements from SMART-BARN in the system accuracy test.
- All data files used to reproduce system accuracy test
- We refer to the SMART-BARN publication and the Vicon documentation for detailed descriptions of the file types (https://help.vicon.com/space/Nexus216/11611993/PDF+downloads+for+Vicon+Nexus)
Contents for Wild. All data for the multi-view annotations in the wild.
- JSON of 3D and 2D estimates
- Original images and the full dataset can be found in the sample dataset zip.
Contents of supplementary_videos.zip
- Supplementary Video S1: Qualitative results for multi-individual 3D tracking with 3D-SOCS. The left-hand panels show detected 2D keypoints across six synchronized cameras. Detected 3D points were reprojected back to 2D and are shown in the right panel, with lines representing the foveal projections, 60 degrees left and right of the bill-tip projection. The video is slowed to 0.5x for visualization.
- Supplementary Video S2: Example of a great tit looking at the on-screen mealworm stimulus. Detected 3D points were reprojected back to 2D, with lines representing the foveal projections, 60 degrees left and right of the bill tip.
- Supplementary Video S3: Example sequence from the system accuracy test, in which a taxidermy great tit was systematically rotated in space. The detected 3D keypoint estimates from 3D-SOCS are reprojected to 2D. The video is shown at 4x speed.
Contents of SampleDataset.zip
Dataset used as an exemplar for the GitHub repository; please see the documentation contained in this archive.
We develop and use a markerless 3D tracking system to estimate the posture of wild passerine birds (great tits and blue tits) in the field, and demonstrate the capabilities of this system using a stimulus-display experiment.