Joint commitment in human cooperative hunting through an “Imagined We”
Data files (Aug 01, 2025 version, 1.07 GB total)
- imaginedWeCodeRelease.zip (1.07 GB)
- README.md (10.64 KB)
Abstract
Cooperation involves the challenge of jointly selecting one from multiple goals while maintaining the team’s joint commitment to it. We test joint commitment in a multi-player hunting game, combining psychophysics and computational modeling. Joint commitment is modeled through an "Imagined We" (IW) approach, where each agent uses Bayesian inference to infer the intention of “We”, an imagined supraindividual agent controlling all agents as its body parts. This is compared against a Reward Sharing (RS) model, which frames cooperation through reward sharing via multi-agent reinforcement learning (MARL). Both humans and IW, but not RS, maintained high performance by jointly committing to a single prey, regardless of prey quantity or speed. Human observers rated all hunters in both human and IW teams as making high contributions to the catch, regardless of their proximity to the prey, suggesting that high-quality hunting stemmed from sophisticated cooperation rather than individual strategies. Unlike RS hunters, IW hunters are capable of cooperating not only with one another, but also with human participants actively engaged in the same hunting game. In conclusion, this study demonstrates that humans achieve cooperation through joint commitment that enforces a single goal, rather than simply motivating members through reward sharing.
https://doi.org/10.5061/dryad.brv15dvjn
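For readers who want a concrete sense of the IW idea before opening the code, the following is a minimal sketch of the kind of Bayesian goal inference described in the abstract: candidate prey are scored by how well the hunters' observed movements look like joint progress toward them. The function, the noisy-rational likelihood, and all names are illustrative assumptions, not the released implementation.

```python
import numpy as np

def iw_goal_posterior(hunter_positions, hunter_moves, prey_positions, beta=2.0):
    """Illustrative posterior over which prey the imagined 'We' pursues.

    Noisy-rational assumption: a move that brings a hunter closer to a
    prey raises the likelihood that this prey is the team's joint goal.
    """
    n_prey = len(prey_positions)
    log_post = np.log(np.full(n_prey, 1.0 / n_prey))  # uniform prior
    for pos, move in zip(hunter_positions, hunter_moves):
        pos, move = np.asarray(pos), np.asarray(move)
        for g, prey in enumerate(prey_positions):
            prey = np.asarray(prey)
            progress = np.linalg.norm(prey - pos) - np.linalg.norm(prey - (pos + move))
            log_post[g] += beta * progress  # reward progress toward prey g
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# Two hunters both stepping toward the first of two prey:
print(iw_goal_posterior(
    hunter_positions=[[0.0, 0.0], [1.0, 0.0]],
    hunter_moves=[[0.1, 0.1], [-0.1, 0.1]],
    prey_positions=[[0.5, 1.0], [5.0, -5.0]],
))  # posterior mass concentrates on prey 0
```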
Location of the Data and Code
The data and code are compressed in the file ‘imaginedWeCodeRelease.zip’, which can be downloaded from the ‘Files’ tab.
Description of the data and file structure
Data Collection:
The dataset was collected through offline laboratory experiments involving human subjects, machine simulations, and human-machine collaboration. The machine simulations and collaborative tasks were based on models implemented in Python, including reinforcement learning, neural networks, and Bayesian inference. Participants engaged in tasks designed to assess cognitive processes, with data recorded under controlled laboratory conditions.
Data Processing:
The collected data were processed using Python's pandas library. This involved data cleaning, transformation, and preparation for further analysis. For statistical analysis, we used the Jeffreys's Amazing Statistics Program (JASP) software to perform various statistical tests, ensuring the robustness and validity of our findings.
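As a minimal illustration of this pipeline (the file names and CSV format below are assumptions for the sketch; the actual raw files live under the results folder):

```python
import pandas as pd

# Load one raw trial-level file (hypothetical name).
raw = pd.read_csv("results/example_raw_trials.csv")

# Cleaning and transformation: drop incomplete trials, enforce types.
clean = raw.dropna(subset=["trialScore"]).copy()
clean["numHunters"] = clean["numHunters"].astype(int)

# Export a tidy table that JASP can open directly for statistical tests.
clean.to_csv("results/example_trials_toStat.csv", index=False)
```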
Data Description:
This dataset contains trial-level behavioral data from participants engaging in a simulated cooperative hunting task. Each row corresponds to one trial.
All variables represent values derived from program-defined rules within the simulation. These values reflect agent behaviors and task outcomes, but do not carry physical-world units (e.g., seconds, meters). For example, a trial score of 30 represents in-game performance, not a physical quantity.
The table below defines all variables:
| Variable Name | Description | Example |
|---|---|---|
| name | Participant identifier (based on participation time) | 20221221-1550 |
| numHunters | Number of hunter agents in the trial | 3 |
| targetColorIndexUnshuffled | Unshuffled index values for the target color(s); here, all targets are orange, coded as 0 | [0, 0, 0, 0] |
| blockPositions | 2D coordinates of block objects in the environment | [[0, 0], [0, 0]] |
| targetMaxSpeed | Maximum movement speed of target agents | 0.7 |
| targetNums | Number of target agents | 4 |
| blockSize | Size of each block object (unitless, abstract scale) | 0 |
| targetConcern | Target behavior mode; e.g., self indicates individual-based movement | self |
| targetColorIndexes | Actual index values for the target colors (after shuffling) | [0, 0, 0, 0] |
| trialTime | Timestamp of trial initiation | 26948 |
| hunterFlag | Total number of times any hunter touched any target in this trial | 78 |
| targetTouchCounts | Number of times each target was touched by any hunter in this trial | [118, 3, 5, 2] |
| targetCatchCounts | Number of times each target was caught by any hunter (see article for the definition of a “catch”) | [15, 0, 0, 0] |
| caughtTimes | Total number of target catches in the trial | 15 |
| trialScore | Score earned by the participant in this trial | 60.8 |
| totalScore | Accumulated score up to and including this trial | 60.8 |
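Several of these variables hold lists. When read with pandas, such columns typically come back as strings and must be parsed before use; the sketch below shows one way to do that and to derive a simple single-prey commitment measure (the file path and the string serialization are assumptions):

```python
import ast
import pandas as pd

df = pd.read_csv("results/example_trials.csv")  # hypothetical path

# List-valued columns are assumed to be serialized as strings such as
# "[118, 3, 5, 2]"; ast.literal_eval restores them as Python lists.
list_cols = ["targetColorIndexUnshuffled", "blockPositions",
             "targetColorIndexes", "targetTouchCounts", "targetCatchCounts"]
for col in list_cols:
    df[col] = df[col].apply(ast.literal_eval)

# Example derived measure: the share of catches on the most-caught target,
# a rough index of how strongly the team committed to a single prey.
df["commitmentIndex"] = df["targetCatchCounts"].apply(
    lambda c: max(c) / sum(c) if sum(c) else float("nan")
)
```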
Files and variables
Folder: env
Description: This folder stores the environment settings for the game developed in this project.
Folder: exec
Description: This directory includes executable files or scripts for running the main programs related to the project.
- Files ending in Analysis_Exp.py in this folder perform the data analysis for the various experiments.
- The file runExpWithDiffBlockSizeWithDiffTargetColorFromPolicyPool(Newest, Human&Model Merged in one).py in this folder is the experimental program for Experiment 1.
  - When controlType = 'human', the program collects human data.
  - When controlType = 'model', the program runs the experiment using the RS model.
- The file runExpSharedAgencyWithDiffBlockSizeWithDiffTargetColor.py in this folder runs Experiment 1 using the IW model in batch mode.
- The file runExpSharedRewardWithDiffBlockSizeWithDiffTargetColor.py in this folder runs Experiment 1 using the RS model in batch mode. It is functionally equivalent to repeatedly running runExpWithDiffBlockSizeWithDiffTargetColorFromPolicyPool(Newest, Human&Model Merged in one).py with controlType = 'model'.
- The file contributionRating.py in this folder is the experiment program for Experiment 2.
- The files server.py and client.py in this folder are the experiment programs for Experiment 3.
Folder: model
Description: This folder contains the model files used in the project. It includes pre-trained models, model checkpoints, configuration files, and other assets required for model training or inference.
Folder: results
Description: This folder stores the output or results generated by the humans or models. This includes processed data, log files, model performance metrics, figures, and any other outputs created during the execution of the project.
- “Expt 1 Human/IW Model/RS Model Data” is the folder containing the raw data files of Experiment 1.
- “ContributionRawData_Expt2” is the folder containing the raw data files of Experiment 2.
- “partnerSelectionRawData_Expt3” is the folder containing the raw data files of Experiment 3.
- “shareRewardBaseResult_wolfFlex_sheep6_No5Folder_120000eps_(ToStat/toPlot)” stores the data files of Experiments 1 and 2 that have been analyzed from the raw data and are prepared for subsequent statistical analysis and visualization.
- “partnerSelectionResult_(ToStat/toPlot)” stores the data files of Experiment 3 that have been analyzed from the raw data and are prepared for subsequent statistical analysis and visualization.
Folder: src
Description: This is the source code directory where the main logic of the project resides. It includes scripts or other programming files that contain the core functionalities, classes, and methods needed to implement the project's features.
Folder: pictures
Description: This directory contains the images used by the experimental code, including break screens and the final end-of-experiment screen.
Folder: paper figures
Description: This directory contains the 7 figures used in the paper.
Current Platform Support and Future Plans
This project has been tested and runs successfully on both Windows and Linux systems. However, it is currently not compatible with macOS machines using Apple Silicon (M-series chips) due to architectural conflicts with TensorFlow 1.13.1, which is required by this project. (Even when using Rosetta to emulate x64 architecture, the project does not run reliably.)
Therefore, we recommend running the project on Windows or Linux platforms. We are actively working on a new version based on TensorFlow 2, which will support Apple Silicon devices, and we plan to release it on this website as soon as possible.
Library Requirements
This project requires the following environment:
Python 3.6 (e.g., 3.6.8)
pip 18.1
protobuf 3.9.2
tensorflow 1.13.1
pygame 2.0.0
pandas 1.1.5
numpy 1.18.4
matplotlib 3.1.3
seaborn 0.11.2
scipy 1.3.1
statsmodels 0.11.1
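For reference, the same pins in requirements.txt form (the archive ships its own requirements.txt, which takes precedence if the two ever differ):

```text
protobuf==3.9.2
tensorflow==1.13.1
pygame==2.0.0
pandas==1.1.5
numpy==1.18.4
matplotlib==3.1.3
seaborn==0.11.2
scipy==1.3.1
statsmodels==0.11.1
```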
All required packages can be installed via Docker (using the provided requirements.txt) or via pip. For example, on Windows:
py -3.6 -m pip install protobuf==3.9.2 tensorflow==1.13.1 pygame==2.0.0 pandas==1.1.5 numpy==1.18.4 matplotlib==3.1.3 seaborn==0.11.2 scipy==1.3.1 statsmodels==0.11.1
or, on Linux:
python3 -m pip install protobuf==3.9.2 tensorflow==1.13.1 pygame==2.0.0 pandas==1.1.5 numpy==1.18.4 matplotlib==3.1.3 seaborn==0.11.2 scipy==1.3.1 statsmodels==0.11.1
For users experiencing issues with pip, Python 3.6.8 can be downloaded from the official website at https://www.python.org/downloads/release/python-368/, and TensorFlow 1.13.1 is available at https://pypi.org/project/tensorflow/1.13.1/.
It is recommended to set up a virtual environment based on Python 3.6 prior to installation.
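For example, a virtual environment can be created with the standard venv module (the environment name iwEnv is arbitrary):

```
# Windows (using the py launcher)
py -3.6 -m venv iwEnv
iwEnv\Scripts\activate

# Linux (assuming a python3.6 interpreter is on the PATH)
python3.6 -m venv iwEnv
source iwEnv/bin/activate
```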
Code Execution
To run the game and analyze the results, execute the appropriate Python scripts in the exec folder using the environment specified above (e.g., python3 xxxx.py). For guidance on which script corresponds to which experiment's game and data analysis, refer to the subsection "Folder: exec" under the "Files and variables" section of this README.
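For example, assuming the commands are issued from inside the exec folder:

```
python3 contributionRating.py                                       # Experiment 2
python3 runExpSharedAgencyWithDiffBlockSizeWithDiffTargetColor.py   # Experiment 1, IW model (batch mode)
```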
Access information
Other publicly accessible locations of the data:
Publicly accessible locations of the demos of the experiments:
Human subjects data
All human subjects data included in this dataset were collected with the explicit informed consent of participants, including their consent to publish the de-identified data in the public domain.
To ensure anonymity, participants are identified only by the time of their participation (e.g., "20221221-1550"). The data content and file names do not contain any personally identifiable information or demographic details such as names, gender, age, or other identifying characteristics.
