On collaborative reinforcement learning to optimize the redistribution of critical medical supplies throughout the COVID-19 pandemic
Data files
May 20, 2021 version (25 files, 12.40 MB)
- fullsim_states20_iter1000_LSTM_max.ods (465.23 KB)
- fullsim_states20_iter1000_LSTM_min.ods (606.92 KB)
- fullsim_states20_iter1000_LSTM_qlrn.ods (443.90 KB)
- fullsim_states20_iter1000_LSTM_rand.ods (536.31 KB)
- fullsim_states20_iter1000_LSTM_val.ods (451.61 KB)
- fullsim_states35_iter1000_LSTM_max.ods (743.89 KB)
- fullsim_states35_iter1000_LSTM_min.ods (939.35 KB)
- fullsim_states35_iter1000_LSTM_qlrn.ods (730.27 KB)
- fullsim_states35_iter1000_LSTM_rand.ods (845.51 KB)
- fullsim_states35_iter1000_LSTM_val.ods (598.59 KB)
- fullsim_states5_iter1000_LSTM_max.ods (212.97 KB)
- fullsim_states5_iter1000_LSTM_min.ods (245.59 KB)
- fullsim_states5_iter1000_LSTM_qlrn.ods (195.53 KB)
- fullsim_states5_iter1000_LSTM_rand.ods (225.31 KB)
- fullsim_states5_iter1000_LSTM_val.ods (216.46 KB)
- fullsim_states50_iter1000_LSTM_max.ods (1.03 MB)
- fullsim_states50_iter1000_LSTM_min.ods (1.23 MB)
- fullsim_states50_iter1000_LSTM_qlrn.ods (876.16 KB)
- fullsim_states50_iter1000_LSTM_rand.ods (1.17 MB)
- fullsim_states50_iter1000_LSTM_val.ods (628.70 KB)
- max_summary_1000.csv (462 B)
- min_summary_1000.csv (457 B)
- qlrn_summary_1000.csv (460 B)
- rand_summary_1000.csv (457 B)
- val_summary_1000.csv (459 B)
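The simulation files follow a consistent naming convention, fullsim_states{N}_iter{I}_LSTM_{policy}.ods, where N appears to be the number of participating states, I the number of iterations, and policy one of max, min, qlrn, rand, or val. A minimal sketch for parsing these filenames into their components (the interpretation of each field is an assumption based on the names alone):

```python
import re

# Pattern matching the simulation output files listed above:
# fullsim_states{N}_iter{I}_LSTM_{policy}.ods
PATTERN = re.compile(r"fullsim_states(\d+)_iter(\d+)_LSTM_(max|min|qlrn|rand|val)\.ods")

def parse_filename(name):
    """Return (num_states, iterations, policy) for a matching file, else None."""
    m = PATTERN.fullmatch(name)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2)), m.group(3)

print(parse_filename("fullsim_states20_iter1000_LSTM_qlrn.ods"))  # (20, 1000, 'qlrn')
```

This makes it easy to group the .ods files by state count or by policy when comparing runs.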
Abstract
Objective: This work investigates how reinforcement learning and deep learning models can facilitate near-optimal redistribution of medical equipment, bolstering public health responses to future crises similar to the COVID-19 pandemic.
Materials and Methods: The system presented is simulated with disease impact statistics from the Institute for Health Metrics and Evaluation (IHME), the Centers for Disease Control and Prevention, and the Census Bureau [1, 2, 3]. We present a robust pipeline for data preprocessing, future demand inference, and a redistribution algorithm that can be adopted across broad scales and applications.
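The redistribution algorithm is reinforcement learning based (the qlrn files above suggest Q-learning). As a generic illustration of the technique, not the authors' implementation, a single tabular Q-learning update over a state/action value table looks like:

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q[s][a] += alpha * (reward + gamma * max_a' Q[s'][a'] - Q[s][a]).
    Q is a dict mapping state -> {action: value}; states and actions are
    illustrative placeholders (e.g. supply levels, shipment decisions).
    """
    # Value of the best action available from the next state (0 if unseen).
    best_next = max(Q[next_state].values()) if Q.get(next_state) else 0.0
    # Ensure the (state, action) entry exists before updating it.
    Q.setdefault(state, {}).setdefault(action, 0.0)
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    return Q[state][action]

Q = {}
q_update(Q, "surplus", "ship_supplies", reward=1.0, next_state="balanced")
print(Q["surplus"]["ship_supplies"])  # 0.1
```

In a redistribution setting, the reward would reflect shortages averted by a transfer; the exact state encoding and reward design used in the paper are not specified here.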
Results: The reinforcement learning redistribution algorithm demonstrates performance optimality ranging from 93% to 95%. Performance improves consistently with the number of random states participating in exchange, with average shortage reductions rising from 78.74% (±30.8) in simulations with 5 states to 93.50% (±0.003) with 50 states.
Conclusion: These findings bolster confidence that reinforcement learning techniques can reliably guide resource allocation for future public health emergencies.
Summary statistics for our study are attached here for reference.