Mentalising mechanisms underly strategic coordination in Guinea baboons (Papio papio)
Data files (Apr 10, 2024 version, 873.98 KB):
- passedindivstestdata_curated.csv
- README.md
Abstract
It remains controversial whether the ability to mentalise is confined to humans. To address this question, Guinea baboons living in a social colony were free to play a two-player coordination game with any other baboon, or alone (social vs. solo conditions). In fact, in both conditions they interacted with an identical Artificial Agent. Their choice behaviour depended on the social context and on their relative dominance rank. In the social condition, a mentalising computational model accounted for the baboons' behaviour better than simpler models without mentalising components, whereas the same baboons used a simpler strategy when they played alone. Together, these findings indicate that the computations required for mentalising and used for coordination learning may have evolved in the common ancestor of Old World monkeys and apes.
README: Mentalising mechanisms underly strategic coordination in Guinea baboons (Papio papio)
https://doi.org/10.5061/dryad.qjq2bvqpv
Description of the data and file structure
Variable definitions:
- Programme: condition (Test=social, ghost=solo) x session (1 or 2)
- Nom: tested baboon's name (2nd chooser)
- Box: ID of the box of the tested baboon
- Sexe: sex
- Famille: Family ID
- Age: tested baboon's age in months
- Score: 1=success (rewarded), 0=failure (not rewarded)
- remoteName: other baboon's name
- TrialNumber: trial number over the whole experiment
- FollowerResponse: tested baboon's choice. It has the form "test-nb1-nb2-nb3", where nb1, nb2, and nb3 are integers that are either equal to 1 or 2. nb1 represents the ID of the pair of visual cues that were used and that are randomized across subjects. nb2 represents the session number. nb3 represents the ID of the cue within the pair of visual cues. The sequence nb1-nb2-nb3 uniquely maps to a shape and color that were used during the experiment.
- FollowTarget: rewarding choice [choice of Artificial Agent (AA)]
- FollowDist: non-rewarding choice ("distractor")
- FollowTargetPos: position of the rewarding choice (choice of AA)
- prevstim1: choice (target ID) at n-1
- prevscore1: outcome (success=1, failure=0) at n-1
- prevstim2: choice (target ID) at n-2
- prevscore2: outcome (success=1, failure=0) at n-2
- nbocc: number of occurrences of this sequence of events earlier in the history
- nbstim1: number of times target ID 1 was chosen after such an event
- prob1: probability that the AA chooses target ID 1
- randNumb: random number drawn by the AA to compare to prob1
- EloScore: Elo rating of the tested baboon
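The variable definitions above can be turned into small parsing helpers. The sketch below is a minimal, hypothetical illustration (not the authors' analysis code): it decodes a `FollowerResponse` string of the form `test-nb1-nb2-nb3`, and reconstructs a plausible AA decision rule in which `randNumb` is compared against `prob1`. The direction of that comparison (choose target 1 when `randNumb < prob1`) is an assumption, since the README does not document it.

```python
def parse_follower_response(code: str) -> dict:
    """Split a FollowerResponse string 'test-nb1-nb2-nb3' into its parts.

    nb1 = ID of the visual-cue pair (randomised across subjects),
    nb2 = session number,
    nb3 = cue ID within the pair.
    """
    prefix, nb1, nb2, nb3 = code.split("-")
    if prefix != "test":
        raise ValueError(f"unexpected prefix: {prefix!r}")
    return {"cue_pair": int(nb1), "session": int(nb2), "cue": int(nb3)}


def aa_choice(prob1: float, rand_numb: float) -> int:
    """Hypothetical reconstruction of the AA's probabilistic choice:
    pick target ID 1 when the drawn random number falls below prob1,
    otherwise pick target ID 2. The comparison direction is an assumption.
    """
    return 1 if rand_numb < prob1 else 2


print(parse_follower_response("test-1-2-1"))
print(aa_choice(0.75, 0.40))
```

For example, `prob1` would typically be estimated from the counts in the table above as `nbstim1 / nbocc`, i.e. the empirical frequency with which target ID 1 followed the current two-trial history.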
Sharing/Access information
OSF: link in Related Works section
Code/Software
OSF: link in Related Works section
Methods
Data were collected automatically by the S-ALDM system at the Rousset Primates Station.