Computational analyses of dynamic visual courtship display reveal diet-dependent and plastic male signaling in Rabidosa rabida wolf spiders
Data files

Jan 20, 2023 version files (9.12 MB)
- README.md (1.79 KB)
- total_raw.csv (9.12 MB)

Aug 10, 2023 version files (18.46 MB)
- pose_cluster.csv (9.34 MB)
- README.md (3.56 KB)
- total_raw.csv (9.12 MB)
- vae_hdbscan_result.csv (1.74 KB)

Nov 15, 2023 version files (18.47 MB)
- pose_cluster.csv (9.34 MB)
- README.md (4.33 KB)
- total_raw.csv (9.12 MB)
- trad_hdbscan_result.csv (1.60 KB)
- vae_hdbscan_result.csv (1.74 KB)
Abstract
Quantifying variation in dynamic motions has long been a challenge for understanding how such displays function in animal communication. The traditional approach depends on labor-intensive manual identification and annotation by experts, but recent progress in computational techniques gives researchers tools for rapid, objective, and reproducible quantification of dynamic visual displays. In the present study, we used machine learning algorithms to investigate the effects of diet manipulation on dynamic visual components of male courtship displays in Rabidosa rabida wolf spiders. Our results suggest that (i) the computational approach can reveal variation in the dynamic visual display between high- and low-diet males that is not clearly captured by the traditional approach, and (ii) males may plastically alter their courtship display according to the body size of the females they encounter. With this study, we add an example of how recent computational techniques can be used to understand the evolution of animal behavior.
- 4 Python scripts, 1 R script, and 4 CSV files are included.
- 0_raw_data_process.py
- fill the non-observed values with the initial position of each feature (a hedged sketch of this filling step follows this entry)
- create GIF and PNG figures describing the visual display
- require the following packages
- numpy, pandas, seaborn, matplotlib, math
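A minimal pandas sketch of the gap-filling step described above; the column names come from total_raw.csv, but the grouping and filling logic is an illustrative assumption rather than the exact code in 0_raw_data_process.py.

```python
import pandas as pd

# Illustrative sketch: replace frames where a feature was not digitized with
# that feature's first observed (initial) position, separately for each male.
raw = pd.read_csv("total_raw.csv")

coord_cols = [f"{axis}_{i}" for axis in ("x", "y") for i in range(1, 9)]

def fill_with_initial(series: pd.Series) -> pd.Series:
    observed = series.dropna()
    return series.fillna(observed.iloc[0]) if not observed.empty else series

for col in coord_cols:
    raw[col] = raw.groupby("id")[col].transform(fill_with_initial)
```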
- 1_rabidosa_pose_cluster.py
- cluster the foreleg posture in each frame (a hedged sketch of this step follows this entry)
- using UMAP and HDBSCAN
- require the following packages
- umap, hdbscan, pickle, pandas, numpy, tensorflow, seaborn, matplotlib, scipy, sklearn
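A minimal sketch of the per-frame pose clustering step; the feature set (the six inter-segment angles) and all parameter values are illustrative choices, not necessarily those used in 1_rabidosa_pose_cluster.py.

```python
import pandas as pd
import umap
import hdbscan

raw = pd.read_csv("total_raw.csv")

# Illustrative per-frame pose features: the six inter-segment angles.
angle_cols = ["ang_fp", "ang_pt", "ang_tm", "ang_mt", "ang_p1", "ang_p2"]
X = raw[angle_cols].to_numpy()

# n_neighbors = 50 is one of the settings reported in pose_cluster.csv.
embedding = umap.UMAP(n_neighbors=50, n_components=2,
                      random_state=42).fit_transform(X)
labels = hdbscan.HDBSCAN(min_cluster_size=50).fit_predict(embedding)

raw["cluster_un50"] = labels
raw[["embedding_un50_x", "embedding_un50_y"]] = embedding
```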
- 2_rabidosa_LSTM.py
- train and save an LSTM model of the dynamic visual display of male R. rabida (a hedged sketch follows this entry)
- cluster visual displays using UMAP and HDBSCAN
- require the following packages
- umap, hdbscan, pickle, pandas, numpy, tensorflow, seaborn, matplotlib, tsaug, sklearn
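The model in 2_rabidosa_LSTM.py is not reproduced here; the sketch below shows one plausible shape of an LSTM sequence autoencoder that compresses fixed-length pose sequences into embeddings for downstream UMAP + HDBSCAN clustering. All layer sizes, sequence lengths, and hyperparameters are placeholders.

```python
import numpy as np
from tensorflow.keras import layers, Model

# Placeholder dimensions: sequences of 100 frames x 6 segment angles.
timesteps, n_features, latent_dim = 100, 6, 8

inputs = layers.Input(shape=(timesteps, n_features))
encoded = layers.LSTM(32)(inputs)                       # sequence -> vector
latent = layers.Dense(latent_dim, name="latent")(encoded)
repeated = layers.RepeatVector(timesteps)(latent)       # vector -> sequence
decoded = layers.LSTM(32, return_sequences=True)(repeated)
outputs = layers.TimeDistributed(layers.Dense(n_features))(decoded)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# X: (n_displays, timesteps, n_features) array of pose sequences.
X = np.random.rand(20, timesteps, n_features)           # placeholder data
autoencoder.fit(X, X, epochs=10, batch_size=4, verbose=0)

# Fixed-length embeddings for UMAP + HDBSCAN clustering of whole displays.
encoder = Model(inputs, latent)
embeddings = encoder.predict(X)
autoencoder.save("rabidosa_lstm_model.keras")
```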
- 3_trad_clustering.py
- cluster visual displays using traditional features with UMAP and HDBSCAN (a hedged sketch follows this entry)
- require the following packages
- umap, hdbscan, pickle, pandas, numpy, tensorflow, seaborn, matplotlib
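The "traditional" features are not enumerated in this README; the sketch below assumes simple per-male summary statistics and only illustrates the shape of the UMAP + HDBSCAN step in 3_trad_clustering.py.

```python
import pandas as pd
import umap
import hdbscan

raw = pd.read_csv("total_raw.csv")

# Hypothetical traditional features: per-male mean, maximum, and standard
# deviation of the leg-tip position and selected segment angles.
summary_cols = ["y_8", "d_8", "ang_fp", "ang_pt", "ang_tm", "ang_mt"]
features = raw.groupby("id")[summary_cols].agg(["mean", "max", "std"])
features.columns = ["_".join(col) for col in features.columns]

embedding = umap.UMAP(n_neighbors=15, random_state=42).fit_transform(features)
labels = hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(embedding)

result = features.assign(cluster_un15=labels).reset_index()
```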
- rabidosa_analysis_final.R
- R script for statistical analysis
- total_raw.csv
- dataset created by 0_raw_data_process.py
- total_raw.csv contains 41 columns
- id: id of males
- diet: diet treatment (H: High-diet, L: Low-diet)
- frame: frame number (each row corresponds to one video frame)
- x_1 ~ x_8: x coordinates of each feature
- y_1 ~ y_8: y coordinates of each feature
- Feature id
- 1 (InsertLeg)
- 2 (Pedi1)
- 3 (Pedi2)
- 4 (FemPatJoint)
- 5 (PatTibJoint)
- 6 (TibMetJoint)
- 7 (MetTarJoint)
- 8 (LegTip)
- d_1 ~ d_8: the distance from the origin to each feature
- angle_1 ~ angle_8 (degree): the angle between the x-axis and each feature (the sketch after this file description shows how d_* and angle_* relate to x_* and y_*)
- ang_fp ~ ang_p2 (degree): angle between leg segments
- fp: femur (coxa-femur) and patella
- pt: patella and tibia
- tm: tibia and metatarsus
- mt: metatarsus and tarsus
- p1: pedipalp 1 (closer to the observer)
- p2: pedipalp 2 (further from the observer)
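The polar columns can be re-derived from the Cartesian ones; the check below assumes angles are measured from the positive x-axis in degrees (the exact sign convention is an assumption).

```python
import numpy as np
import pandas as pd

raw = pd.read_csv("total_raw.csv")

# Re-derive the polar columns for feature 8 (LegTip) from its coordinates.
d_8 = np.sqrt(raw["x_8"] ** 2 + raw["y_8"] ** 2)
angle_8 = np.degrees(np.arctan2(raw["y_8"], raw["x_8"]))

print(np.allclose(d_8, raw["d_8"], atol=1e-3))
print(np.allclose(angle_8, raw["angle_8"], atol=1e-3))
```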
- pose_cluster.csv
- dataset created by 1_rabidosa_pose_cluster.py
- pose_cluster.csv contains 62 columns
- id: id of males
- diet: diet treatment (H: High-diet, L: Low-diet)
- frame: frame number (each row corresponds to one video frame)
- x_1 ~ x_8: x coordinates of each feature
- y_1 ~ y_8: y coordinates of each feature
- Feature id
- 1 (InsertLeg)
- 2 (Pedi1)
- 3 (Pedi2)
- 4 (FemPatJoint)
- 5 (PatTibJoint)
- 6 (TibMetJoint)
- 7 (MetTarJoint)
- 8 (LegTip)
- d_1 ~ d_8: the distance from the origin to each feature
- angle_1 ~ angle_8 (degree): the angle between the x-axis and each feature
- ang_fp ~ ang_p2 (degree): angle between leg segments
- fp: femur (coxa-femur) and patella
- pt: patella and tibia
- tm: tibia and metatarsus
- mt: metatarsus and tarsus
- p1: pedipalp 1 (closer to the observer)
- p2: pedipalp 2 (further from the observer)
- cluster_un{int}: predicted label from HDBSCAN with UMAP embedding with n_neighbor == int
- embedding_un{int}_x: x coordinate of the pose in the 2D UMAP embedding with n_neighbor == int
- embedding_un{int}_y: y coordinate of the pose in the 2D UMAP embedding with n_neighbor == int
- n_neighbor = [15, 25, 50, 100, 200, 400, 500] (a plotting sketch for one of these settings follows this file description)
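A small plotting sketch for inspecting one clustering solution in pose_cluster.csv; n_neighbor = 50 is just one of the settings listed above.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

pose = pd.read_csv("pose_cluster.csv")

# Scatter the 2D UMAP embedding for one n_neighbor setting, coloured by the
# corresponding HDBSCAN label (-1 marks points HDBSCAN treats as noise).
sns.scatterplot(data=pose, x="embedding_un50_x", y="embedding_un50_y",
                hue="cluster_un50", s=5, linewidth=0)
plt.title("Per-frame foreleg postures (n_neighbor = 50)")
plt.show()
```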
- vae_hdbscan_result.csv
- dataset created by 2_rabidosa_LSTM.py
- vae_hdbscan_result.csv contains 11 columns
- id: id of males
- diet: diet treatment (H: High-diet, L: Low-diet)
- mated: whether the male had previous mating experience (y/n)
- cluster_un{int}: predicted label from HDBSCAN with UMAP embedding with n_neighbor == int
- n_neighbor = [5, 10, 15, 20, 25]
- m_weight (g): body weight of the focal male
- f_weight (g): body weight of the visual-stimulus female
- age: days since the maturation date (a short exploratory sketch for this table follows below)
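A brief exploratory sketch for this table, cross-tabulating one clustering solution against the diet treatment and summarizing female weight per cluster; the n_neighbor choice is arbitrary.

```python
import pandas as pd

vae = pd.read_csv("vae_hdbscan_result.csv")

# Contingency table of display clusters against the diet treatment for one
# n_neighbor setting (-1 labels are HDBSCAN noise points).
print(pd.crosstab(vae["cluster_un15"], vae["diet"]))

# Body weight of the stimulus female, summarized per display cluster.
print(vae.groupby("cluster_un15")["f_weight"].describe())
```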
- trad_hdbscan_result.csv
- dataset created by 3_trad_clustering.py
- trad_hdbscan_result.csv contains 10 columns
- id: id of males
- m_weight (g): body weight of the focal male
- f_weight (g): body weight of the visual-stimulus female
- mated: whether the male had previous mating experience (y/n)
- diet: diet treatment (H: High-diet, L: Low-diet)
- cluster_un{int}: predicted label from HDBSCAN with UMAP embedding with n_neighbor == int
- n_neighbor = [5, 10, 15, 20, 25] (a sketch comparing these labels with those in vae_hdbscan_result.csv follows below)
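One way to compare the traditional and LSTM-based labelings is an adjusted Rand index over the shared males; merging on id and using n_neighbor = 15 are assumptions for illustration.

```python
import pandas as pd
from sklearn.metrics import adjusted_rand_score

vae = pd.read_csv("vae_hdbscan_result.csv")
trad = pd.read_csv("trad_hdbscan_result.csv")

# Align the two result tables on male id and compare the cluster labels
# produced by the LSTM-based and traditional pipelines.
merged = vae.merge(trad, on="id", suffixes=("_vae", "_trad"))
print(adjusted_rand_score(merged["cluster_un15_vae"],
                          merged["cluster_un15_trad"]))
```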
Raw data - We recorded male courtship with a Photron Fastcam 1024 PCI 100k high-speed camera (Photron USA, San Diego, CA, USA) and a Sony DCR-HC65 NTSC Handycam (Sony Electronics Inc., USA). We then analyzed the movement of the foreleg and pedipalps during the selected courtship bouts using ProAnalyst Lite software (Xcitex Inc., Woburn, Massachusetts, USA). At the beginning of each courtship bout we set the coordinate axes so that the point where the pedipalp tip contacted the substrate defined y = 0 and the most posterior point of the abdomen defined x = 0. When the foreleg or pedipalps did not move during the courtship bout, the locations of the joints were recorded as their positions in the cocked posture. When the image was blurred, the locations of blurred points were estimated from the previous or subsequent frames, or from other body parts in the same frame.