TY - GEN
T1 - Irrational, but Adaptive and Goal Oriented
T2 - 31st International Joint Conference on Artificial Intelligence, IJCAI 2022
AU - Azaria, Amos
N1 - Publisher Copyright:
© 2022 International Joint Conferences on Artificial Intelligence. All rights reserved.
PY - 2022
Y1 - 2022
AB - Autonomous agents that interact with humans are becoming increasingly prominent. Currently, such agents usually take one of the following approaches to considering human behavior. Some methods assume either a fully cooperative or a zero-sum setting; these assumptions entail that the human's goals are either identical to those of the agent or directly opposed to them. In both cases, the agent is not required to explicitly model the human's goals or to account for the human's adaptive nature. Other methods first compose a model of human behavior based on observed human actions, and then optimize the agent's actions based on this model. Such methods do not account for how the human will react to the agent's actions and thus suffer from an overestimation bias. Finally, other methods, such as model-free reinforcement learning, merely learn which actions the agent should take in which states. While such methods can, in theory, account for the human's adaptive nature, they require extensive interaction with humans and are therefore usually run in simulation. By not considering the human's goals, autonomous agents act selfishly, lack generalization, require vast amounts of data, and cannot account for humans' strategic behavior. Therefore, we call for pursuing solution concepts for autonomous agents interacting with humans that consider the human's goals and adaptive nature.
UR - http://www.scopus.com/inward/record.url?scp=85137876709&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85137876709
T3 - IJCAI International Joint Conference on Artificial Intelligence
SP - 5798
EP - 5802
BT - Proceedings of the 31st International Joint Conference on Artificial Intelligence, IJCAI 2022
A2 - De Raedt, Luc
Y2 - 23 July 2022 through 29 July 2022
ER -