Improving the Perception of Fairness in Shapley-Based Allocations

Meir Nizry, Noam Hazon, Amos Azaria

Research output: Contribution to conference › Paper › peer-review

Abstract

The Shapley value is one of the most important normative division schemes in cooperative game theory, as it satisfies several basic axioms. However, an allocation that follows the Shapley value may still seem unfair to humans. In this paper, we develop an automatic method that generates intuitive explanations for a Shapley-based payoff allocation by utilizing these basic axioms. Given any coalitional game, our method decomposes it into sub-games for which verbal explanations are easy to generate, and shows that the given game is composed of these sub-games. Since the payoff allocation for each sub-game is perceived as fair, the Shapley-based payoff allocation for the given game should be perceived as fair as well. We ran an experiment with 630 human participants and show that, when our method is applied, humans perceive the Shapley-based payoff allocation as fairer than the same allocation presented without any explanation or with explanations generated by other methods.
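The decomposition argument described in the abstract rests on the additivity (linearity) axiom of the Shapley value: the allocation for a game that is the sum of sub-games equals the sum of the sub-games' allocations. The following minimal Python sketch is not taken from the paper; it computes Shapley values for a small three-player game and checks this property for two hypothetical sub-games (v1 and v2 below, chosen purely for illustration).

    # Minimal sketch: Shapley values and the additivity axiom (illustrative only).
    from itertools import combinations
    from math import factorial

    def shapley(players, v):
        """Shapley value of each player; v maps a frozenset of players to its worth."""
        n = len(players)
        phi = {}
        for i in players:
            others = [p for p in players if p != i]
            total = 0.0
            for size in range(n):
                for S in combinations(others, size):
                    S = frozenset(S)
                    # Weight |S|! (n - |S| - 1)! / n! times player i's marginal contribution.
                    weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                    total += weight * (v(S | {i}) - v(S))
            phi[i] = total
        return phi

    players = ["A", "B", "C"]

    # Hypothetical sub-games with allocations that are easy to explain verbally.
    def v1(S):  # A and B generate 60 only when both are present.
        return 60 if {"A", "B"} <= S else 0

    def v2(S):  # C generates 30 on their own.
        return 30 if "C" in S else 0

    def v(S):   # The given game is the sum of the sub-games.
        return v1(S) + v2(S)

    phi_v = shapley(players, v)
    phi_sum = {i: shapley(players, v1)[i] + shapley(players, v2)[i] for i in players}
    print(phi_v)    # {'A': 30.0, 'B': 30.0, 'C': 30.0}
    print(phi_sum)  # identical, by additivity

Because each sub-game's allocation (A and B splitting 60, C keeping their own 30) is easy to justify verbally, additivity lets the combined allocation inherit those justifications, which is the intuition behind the paper's explanation method.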

Original language: English
Pages: 2285-2291
Number of pages: 7
State: Published - 2022
Event: 44th Annual Meeting of the Cognitive Science Society: Cognitive Diversity, CogSci 2022 - Toronto, Canada
Duration: 27 Jul 2022 - 30 Jul 2022

Conference

Conference: 44th Annual Meeting of the Cognitive Science Society: Cognitive Diversity, CogSci 2022
Country/Territory: Canada
City: Toronto
Period: 27/07/22 - 30/07/22
