TY - GEN
T1 - "Is There Anything Else I Can Help You With?" Challenges in Deploying an On-Demand Crowd-Powered Conversational Agent
AU - Huang, Ting Hao
AU - Lasecki, Walter S.
AU - Azaria, Amos
AU - Bigham, Jeffrey P.
N1 - Publisher Copyright:
Copyright © 2016 Association for the Advancement of Artificial Intelligence.
PY - 2016/11/3
Y1 - 2016/11/3
AB - Intelligent conversational assistants, such as Apple's Siri, Microsoft's Cortana, and Amazon's Echo, have quickly become a part of our digital life. However, these assistants have major limitations, which prevent users from conversing with them as they would with human dialog partners. This limits our ability to observe how users really want to interact with the underlying system. To address this problem, we developed a crowd-powered conversational assistant, Chorus, and deployed it to see how users and workers would interact when mediated by the system. Chorus holds sophisticated conversations with end users over time by recruiting workers on demand, who in turn decide what might be the best response for each user sentence. During the first month of our deployment, 59 users held conversations with Chorus across 320 conversational sessions. In this paper, we present an account of Chorus' deployment, with a focus on four challenges: (i) identifying when conversations are over, (ii) malicious users and workers, (iii) on-demand recruiting, and (iv) settings in which consensus is not enough. Our observations could assist the deployment of crowd-powered conversation systems and crowd-powered systems in general.
UR - http://www.scopus.com/inward/record.url?scp=85068068220&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85068068220
T3 - Proceedings of the 4th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2016
SP - 79
EP - 88
BT - Proceedings of the 4th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2016
A2 - Ghosh, Arpita
A2 - Lease, Matthew
T2 - 4th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2016
Y2 - 30 October 2016 through 3 November 2016
ER -