Abstract
The Shapley value is one of the most important normative division schemes in cooperative game theory, satisfying basic axioms. However, some allocations according to the Shapley value may seem unfair to humans. In this paper, we develop an automatic method that generates intuitive explanations for a Shapley-based payoff allocation, building on these basic axioms. Given any coalitional game, our method decomposes it into sub-games for which verbal explanations are easy to generate, and shows that the given game is composed of these sub-games. Since the payoff allocation for each sub-game is perceived as fair, the Shapley-based payoff allocation for the given game should seem fair as well. We run an experiment with 630 human participants and show that, when applying our method, humans perceive the Shapley-based payoff allocation as fairer than the same allocation presented without any explanation or with explanations generated by other methods.
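The decomposition argument rests on the additivity axiom of the Shapley value. As a minimal illustration (not the authors' implementation; the sub-games `v1` and `v2` and their payoffs are hypothetical), the following Python sketch computes exact Shapley values and checks that the value of a composed game equals the sum of its sub-games' values:

```python
import math
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values by averaging each player's marginal
    contribution over all orderings (fine for small games)."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition |= {p}
    n_orders = math.factorial(len(players))
    return {p: t / n_orders for p, t in totals.items()}

players = ("A", "B", "C")
# Hypothetical sub-games for illustration: v1 pays 10 whenever A is
# present; v2 pays 6 whenever B and C cooperate.
v1 = lambda S: 10.0 if "A" in S else 0.0
v2 = lambda S: 6.0 if {"B", "C"} <= S else 0.0
v = lambda S: v1(S) + v2(S)  # the full game is the sum of the sub-games

phi1, phi2, phi = (shapley_values(players, g) for g in (v1, v2, v))
# Additivity: the Shapley value of the composed game is the sum of the
# sub-games' Shapley values, so a fair-seeming explanation of each
# sub-game carries over to the full allocation.
for p in players:
    assert abs(phi[p] - (phi1[p] + phi2[p])) < 1e-9
print(phi)  # {'A': 10.0, 'B': 3.0, 'C': 3.0}
```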
Original language | English |
---|---|
Pages | 2285-2291 |
Number of pages | 7 |
Publication status | Published - 2022 |
Event | 44th Annual Meeting of the Cognitive Science Society: Cognitive Diversity, CogSci 2022 - Toronto, Canada. Duration: 27 Jul 2022 → 30 Jul 2022 |
Conference
Conference | 44th Annual Meeting of the Cognitive Science Society: Cognitive Diversity, CogSci 2022 |
---|---|
Country/Territory | Canada |
City | Toronto |
Period | 27/07/22 → 30/07/22 |