The key role of design and transparency in enhancing trust in AI-powered digital agents

Research output: Contribution to journal › Article › peer-review

Abstract

This study examines the factors that influence user trust in AI-powered digital agents within organizational contexts. Employing an integrated framework that combines the Expectation Confirmation Model and the Technology Acceptance Model, we investigated how user expectations, attitudes toward AI, information transparency, visual design, and exposure duration contribute to trust formation. A sample of 118 organizational participants in Israel interacted with ChatGPT across simulated workplace scenarios. The findings indicate that while expectations alone do not significantly predict trust, transparency and effective visual design serve as crucial mediators. Attitudes toward AI were also positively associated with trust. The experimental manipulation of expectations did not yield a significant effect, suggesting that pre-existing user perceptions may override brief interventions. These results highlight actionable design and implementation strategies for organizations aiming to foster user trust and promote AI adoption. Limitations include sample bias, constraints in subgroup analysis, and the need for longitudinal research.

Original language: English
Article number: 100770
Journal: Journal of Innovation and Knowledge
Volume: 10
Issue number: 5
State: Published - 1 Sep 2025

Keywords

  • Artificial intelligence
  • ChatGPT
  • Digital agents
  • Expectations
  • Organizational adoption
  • Trust

