Abstract
The Shapley value is one of the most important normative division schemes in cooperative game theory, satisfying basic fairness axioms. However, some allocations according to the Shapley value may seem unfair to humans. In this paper, we develop an automatic method that generates intuitive explanations for a Shapley-based payoff allocation, building on these basic axioms. Given any coalitional game, our method decomposes it into sub-games for which it is easy to generate verbal explanations, and shows that the given game is composed of these sub-games. Since the payoff allocation for each sub-game is perceived as fair, the Shapley-based payoff allocation for the given game should seem fair as well. We run an experiment with 630 human participants and show that, when applying our method, humans perceive the Shapley-based payoff allocation as fairer than the same allocation presented without any explanation or with explanations generated by other methods.
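The decomposition idea rests on the additivity axiom: the Shapley value of a sum of games equals the sum of the Shapley values of the component games. The following is a minimal sketch in Python, not the authors' implementation; the three-player game and the two sub-games are hypothetical, chosen only to illustrate how explaining each simple sub-game accounts for the composed allocation.

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values by averaging each player's marginal
    contribution over all orderings (exponential; fine for small games)."""
    values = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            values[p] += v(with_p) - v(coalition)
            coalition = with_p
    return {p: values[p] / len(orders) for p in players}

# Hypothetical game v = v1 + v2, where each sub-game admits an
# easy verbal explanation.
players = ["a", "b", "c"]
v1 = lambda S: 10.0 if "a" in S else 0.0          # only 'a' contributes
v2 = lambda S: 6.0 if {"b", "c"} <= S else 0.0    # 'b' and 'c' contribute jointly
v  = lambda S: v1(S) + v2(S)

# Additivity axiom: Shapley(v1 + v2) == Shapley(v1) + Shapley(v2),
# so fair-seeming sub-game allocations compose into the full allocation.
phi1 = shapley_values(players, v1)
phi2 = shapley_values(players, v2)
phi  = shapley_values(players, v)
assert all(abs(phi[p] - (phi1[p] + phi2[p])) < 1e-9 for p in players)
print(phi)  # {'a': 10.0, 'b': 3.0, 'c': 3.0}
```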
Original language | English
---|---
Pages | 2285-2291
Number of pages | 7
State | Published - 2022
Event | 44th Annual Meeting of the Cognitive Science Society: Cognitive Diversity, CogSci 2022, Toronto, Canada, 27 Jul 2022 → 30 Jul 2022
Conference
Conference | 44th Annual Meeting of the Cognitive Science Society: Cognitive Diversity, CogSci 2022
---|---
Country/Territory | Canada
City | Toronto
Period | 27/07/22 → 30/07/22