TY - JOUR
T1 - Automated agents for reward determination for human work in crowdsourcing applications
AU - Azaria, Amos
AU - Aumann, Yonatan
AU - Kraus, Sarit
N1 - Funding Information:
Acknowledgments This paper has evolved from a paper presented at the AAAI-2012 conference [27]. We thank Avi Rosenfeld and Shira Abuhatzera for their helpful comments. This work is supported in part by the
PY - 2014/11
Y1 - 2014/11
N2 - Crowdsourcing applications frequently employ many individual workers, each performing a small amount of work. In such settings, individually determining the reward for each assignment and worker may seem economically beneficial, but is infeasible if performed manually. We thus consider the problem of designing automated agents for automatic reward determination and negotiation in such settings. We formally describe this problem and show that it is NP-hard. We therefore present two automated agents for the problem, based on two different models of human behavior. The first, the Reservation Price Based Agent (RPBA), is based on the concept of a reservation price (RP); the second, the No Bargaining Agent (NBA), tries to avoid any negotiation. The performance of the agents is tested in extensive experiments with real human subjects, in which both NBA and RPBA outperform strategies developed by human experts.
AB - Crowdsourcing applications frequently employ many individual workers, each performing a small amount of work. In such settings, individually determining the reward for each assignment and worker may seem economically beneficial, but is infeasible if performed manually. We thus consider the problem of designing automated agents for automatic reward determination and negotiation in such settings. We formally describe this problem and show that it is NP-hard. We therefore present two automated agents for the problem, based on two different models of human behavior. The first, the Reservation Price Based Agent (RPBA), is based on the concept of a reservation price (RP); the second, the No Bargaining Agent (NBA), tries to avoid any negotiation. The performance of the agents is tested in extensive experiments with real human subjects, in which both NBA and RPBA outperform strategies developed by human experts.
KW - Crowdsourcing
KW - Human-computer interaction
KW - Negotiation
UR - http://www.scopus.com/inward/record.url?scp=84906056375&partnerID=8YFLogxK
U2 - 10.1007/s10458-013-9244-y
DO - 10.1007/s10458-013-9244-y
M3 - Article
AN - SCOPUS:84906056375
SN - 1387-2532
VL - 28
SP - 934
EP - 955
JO - Autonomous Agents and Multi-Agent Systems
JF - Autonomous Agents and Multi-Agent Systems
IS - 6
ER -