Dryad

The AI Economist: Taxation policy design via two-level deep reinforcement learning

Data files

Dec 02, 2021 version files 3.09 GB
Dec 09, 2021 version files 3.09 GB

Abstract

This dataset contains all raw experimental data for the paper "The AI Economist: Taxation Policy Design via Two-level Deep Multi-Agent Reinforcement Learning". 

The accompanying simulation, reinforcement learning, and data visualization code can be found at https://github.com/salesforce/ai-economist.

For the one-step economy experiments, we provide:

  • training histories,

  • configuration files (these experiments do not use phases), and

  • final agent and planner models.

For the Gather-Trade-Build scenario, the data covers 6 spatial layouts: two Open-Quadrant maps (with 4 and 10 agents), and four Split-World maps with different configurations of high-skilled and low-skilled agents. It also covers 4 tax policies (the AI Economist, Saez, free-market, and US federal). In addition, the AI Economist has been optimized for two social welfare functions: the product of equality and productivity, and inverse-income weighted utility. The Saez tax policy uses estimated elasticities.
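The two social welfare functions mentioned above can be sketched as follows. This is an illustrative implementation using common definitions (equality as 1 − N/(N−1) · Gini, and inverse-income weights normalized to sum to 1); the paper's exact normalizations may differ, and the function names are our own:

```python
def gini(incomes):
    """Gini coefficient via the mean-absolute-difference formula."""
    n = len(incomes)
    total = sum(abs(xi - xj) for xi in incomes for xj in incomes)
    return total / (2 * n * sum(incomes))

def equality_times_productivity(incomes):
    """Product of (normalized) equality and total productivity."""
    n = len(incomes)
    equality = 1 - (n / (n - 1)) * gini(incomes)  # in [0, 1]
    productivity = sum(incomes)
    return equality * productivity

def inverse_income_weighted_utility(incomes, utilities):
    """Utilities averaged with weights proportional to 1 / income."""
    weights = [1 / z for z in incomes]
    total_w = sum(weights)
    return sum((w / total_w) * u for w, u in zip(weights, utilities))
```

With identical incomes, equality is 1 and the first welfare function reduces to total productivity, while the second reduces to the plain average utility.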

Each experiment was repeated with different random seeds: 10 seeds for the Open-Quadrant scenarios, and 5 seeds for the Split-World scenarios. For each individual experiment, we provide: 

  • training histories (e.g. equality and productivity throughout training),

  • the phase 1 and phase 2 configuration files, 

  • 40 episode dense logs (the final 10 simulation logs from each of 4 environment replicas),

  • phase 1 final agent models, and

  • phase 2 final agent and planner models.
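The seed counts above imply a sizeable grid of runs. A hypothetical enumeration of that grid follows; the layout names are placeholders, the dataset's actual directory structure may differ, and the count ignores the two AI-Economist welfare-function variants, which would expand the grid further:

```python
# Seeds per spatial layout, as described above (names are hypothetical).
layouts = {
    "open_quadrant_4_agents": 10,   # 10 seeds for Open-Quadrant
    "open_quadrant_10_agents": 10,
    "split_world_a": 5,             # 5 seeds for Split-World
    "split_world_b": 5,
    "split_world_c": 5,
    "split_world_d": 5,
}
tax_policies = ["ai_economist", "saez", "free_market", "us_federal"]

# One entry per (layout, tax policy, seed) experiment.
experiments = [
    (layout, policy, seed)
    for layout, n_seeds in layouts.items()
    for policy in tax_policies
    for seed in range(n_seeds)
]
print(len(experiments))  # 160 runs: (2*10 + 4*5) seeds x 4 policies
```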

Finally, we include all data and results used to calibrate the Saez elasticity estimates and to estimate elasticity directly from a sweep over flat-rate tax policies:

  • training histories,

  • the phase 1 and phase 2 configuration files, 

  • phase 1 final agent models, and

  • phase 2 final agent and planner models.
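In the Saez framework, the elasticity of taxable income can be estimated from a flat-rate sweep by regressing log income on the log net-of-tax rate, log(1 − τ). The sketch below illustrates that standard approach on synthetic data; it is not the paper's exact calibration procedure:

```python
import math

# Synthetic sweep: incomes generated with a known true elasticity of 0.5.
flat_rates = [0.0, 0.1, 0.2, 0.3]
true_elasticity = 0.5
incomes = [100.0 * (1 - t) ** true_elasticity for t in flat_rates]

# Fit log(income) = const + e * log(1 - tax_rate) by least squares.
xs = [math.log(1 - t) for t in flat_rates]
ys = [math.log(z) for z in incomes]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
elasticity = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    / sum((x - mean_x) ** 2 for x in xs)
)
print(round(elasticity, 3))  # recovers the true elasticity, 0.5
```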