Enhancing User Acceptance of an AI Agent’s Recommendation in Information-Sharing Environments

Research output: Contribution to journal › Article › peer-review

Abstract

Information sharing (IS) is part of almost every daily online action. IS benefits its users, but it is also a source of privacy violations and costs, and human users struggle to balance this trade-off. This reality calls for assistance from Artificial Intelligence (AI)-based agents, which previous research has shown can surpass humans’ bottom-line utility. However, convincing an individual to follow an AI agent’s recommendation is not trivial; therefore, this research aims to establish trust in machines. Based on the Design of Experiments (DOE) approach, we developed a methodology that optimizes the user interface (UI) with the target function of maximizing acceptance of the AI agent’s recommendations. To demonstrate the methodology empirically, we conducted an experiment with eight UI factors and n = 64 human participants acting in a Facebook simulator environment accompanied by an AI agent assistant. We show how the methodology can be applied to enhance user acceptance of AI agents on IS platforms by selecting the proper UI. Moreover, owing to its versatility, the approach has the potential to optimize user acceptance in other domains as well.
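The abstract does not detail the experimental design, but eight two-level UI factors studied with 64 participants is consistent with a 2^(8−2) fractional factorial DOE. The sketch below is a minimal illustration of how such a coded design matrix could be constructed; the factor names, the choice of generators, and the mapping of one run per participant are assumptions for illustration only, not the authors’ actual design.

```python
import itertools
import numpy as np

# Hypothetical 2^(8-2) fractional factorial: 8 two-level UI factors in 64 runs.
# Base factors span a full 2^6 factorial; the remaining two factors are
# generated as products (aliases) of base-factor columns.
BASE_FACTORS = [
    "color_scheme", "icon_style", "message_tone",
    "explanation_detail", "agent_avatar", "placement",
]  # 6 base factors -> 2^6 = 64 runs
GENERATED = {
    "font_size": (0, 1, 2),            # aliased with a 3-way interaction of base factors
    "notification_timing": (3, 4, 5),  # aliased with another 3-way interaction
}

# Full factorial over the six base factors, coded -1 / +1
runs = np.array(list(itertools.product([-1, 1], repeat=len(BASE_FACTORS))))

# Each generated factor is the elementwise product of its chosen base columns
design_cols = [runs]
for cols in GENERATED.values():
    design_cols.append(np.prod(runs[:, cols], axis=1, keepdims=True))

design = np.hstack(design_cols)                 # shape (64, 8)
factor_names = BASE_FACTORS + list(GENERATED)

print(design.shape)  # (64, 8): one coded UI configuration per participant
```

Under this kind of design, acceptance of the agent’s recommendation would be measured for each run and the UI factor levels maximizing acceptance estimated from the fitted factor effects.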

Original language: English
Article number: 7874
Journal: Applied Sciences (Switzerland)
Volume: 14
Issue number: 17
DOIs
State: Published - Sep 2024

Keywords

  • AI agent acceptance
  • human–computer trust
  • information sharing
  • user interface design
