Dataset from: In-situ bidirectional human-robot value alignment
Yuan, Luyao et al. (2022), Dataset from: In-situ bidirectional human-robot value alignment, Dryad, Dataset, https://doi.org/10.5068/D1XT3V
A prerequisite for social coordination is bidirectional communication between teammates, each playing two roles simultaneously: as receptive listeners and as expressive speakers. For robots working with humans in complex situations with multiple goals that differ in importance, failure to fulfill the expectations of either role can undermine group performance due to misaligned values between humans and robots. Specifically, a robot needs to serve as an effective listener, inferring human users' intents from instructions and feedback, and as an expressive speaker, explaining its decision processes to users. In this paper, we investigate how to foster effective bidirectional human-robot communication in the context of value alignment: collaborative robots and users form an aligned understanding of the importance of possible task goals. We propose an explainable artificial intelligence (XAI) system in which a group of robots predicts users' values by incorporating in-situ feedback, while communicating their decision processes to users through explanations. To learn from human feedback, our XAI system integrates a cooperative communication model for inferring human values associated with multiple desirable goals. To be interpretable to humans, the system simulates human mental dynamics and predicts optimal explanations using graphical models. We conducted psychological experiments to examine the core components of the proposed computational framework. Our results show that real-time human-robot mutual understanding in complex cooperative tasks is achievable with a learning model based on bidirectional communication. We believe this interaction framework can shed light on bidirectional value alignment in communicative XAI systems, and more broadly, in future human-machine teaming systems.
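The value-inference step described above — updating a belief over the user's goal-importance weights from in-situ accept/reject feedback — can be sketched as a simple Bayesian filter. This is a minimal illustration, not the authors' implementation: the hypothesis space of weight vectors, the linear utility model, and the Boltzmann-rational feedback likelihood are all illustrative assumptions.

```python
import itertools
import math

# Hypothesis space: candidate importance-weight vectors over three task goals.
candidates = list(itertools.product([0.1, 0.5, 0.9], repeat=3))
belief = {w: 1.0 / len(candidates) for w in candidates}  # uniform prior

def utility(weights, proposal):
    """Utility of a proposed plan: weighted sum of how much it serves each goal."""
    return sum(wi * pi for wi, pi in zip(weights, proposal))

def feedback_likelihood(weights, proposal, accepted, beta=3.0):
    """Boltzmann-rational user: accepts with probability sigmoid(beta * (utility - 0.5))."""
    p_accept = 1.0 / (1.0 + math.exp(-beta * (utility(weights, proposal) - 0.5)))
    return p_accept if accepted else 1.0 - p_accept

def update(belief, proposal, accepted):
    """Bayes rule: posterior over weight vectors ∝ feedback likelihood × prior."""
    posterior = {w: feedback_likelihood(w, proposal, accepted) * p
                 for w, p in belief.items()}
    z = sum(posterior.values())
    return {w: p / z for w, p in posterior.items()}

# In-situ feedback: the user accepts a plan serving goal 0 and rejects one serving goal 2.
belief = update(belief, proposal=(1.0, 0.0, 0.0), accepted=True)
belief = update(belief, proposal=(0.0, 0.0, 1.0), accepted=False)

# Posterior mean weight per goal: goal 0 should now look more important than goal 2.
mean_w = [sum(w[i] * p for w, p in belief.items()) for i in range(3)]
print(mean_w)
```

After the two feedback events, the posterior mean weight for goal 0 exceeds that for goal 2, illustrating how bidirectional feedback drives value alignment over time.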
Defense Advanced Research Projects Agency, Award: N66001-17-2-4029