TY - GEN
T1 - Strategic Information Disclosure to People with Multiple Alternatives
AU - Azaria, Amos
AU - Rabinovich, Zinovi
AU - Kraus, Sarit
AU - Goldman, Claudia V.
N1 - Publisher Copyright:
Copyright © 2011, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2011/8/11
Y1 - 2011/8/11
N2 - This paper studies how automated agents can persuade humans to behave in certain ways. The motivation behind such an agent's behavior resides in the utility function that the agent's designer wants to maximize, which may differ from the user's utility function. Specifically, in the strategic settings studied, the agent provides correct yet partial information about a state of the world that is unknown to the user but relevant to the user's decision. Persuasion games were designed to study interactions between automated players in which one player sends state information to the other to persuade it to behave in a certain way. We show that this game-theory-based model is not sufficient to model human-agent interactions, since people tend to deviate from the rational choice. We use machine learning to model people's deviations from this game-theory-based model. The agent generates a probabilistic description of the world state that maximizes its benefit and presents it to the users. The proposed model was evaluated in an extensive empirical study involving road selection tasks that differ in length, cost, and congestion. Results showed that people's behavior indeed deviated significantly from the behavior predicted by the game-theory-based model. Moreover, the agent developed in our model performed better than an agent that followed the behavior dictated by the game-theoretic model.
AB - This paper studies how automated agents can persuade humans to behave in certain ways. The motivation behind such an agent's behavior resides in the utility function that the agent's designer wants to maximize, which may differ from the user's utility function. Specifically, in the strategic settings studied, the agent provides correct yet partial information about a state of the world that is unknown to the user but relevant to the user's decision. Persuasion games were designed to study interactions between automated players in which one player sends state information to the other to persuade it to behave in a certain way. We show that this game-theory-based model is not sufficient to model human-agent interactions, since people tend to deviate from the rational choice. We use machine learning to model people's deviations from this game-theory-based model. The agent generates a probabilistic description of the world state that maximizes its benefit and presents it to the users. The proposed model was evaluated in an extensive empirical study involving road selection tasks that differ in length, cost, and congestion. Results showed that people's behavior indeed deviated significantly from the behavior predicted by the game-theory-based model. Moreover, the agent developed in our model performed better than an agent that followed the behavior dictated by the game-theoretic model.
UR - http://www.scopus.com/inward/record.url?scp=84878790934&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84878790934
T3 - Proceedings of the 25th AAAI Conference on Artificial Intelligence, AAAI 2011
SP - 594
EP - 600
BT - Proceedings of the 25th AAAI Conference on Artificial Intelligence, AAAI 2011
T2 - 25th AAAI Conference on Artificial Intelligence, AAAI 2011
Y2 - 7 August 2011 through 11 August 2011
ER -