TY - JOUR
T1 - The key role of design and transparency in enhancing trust in AI-powered digital agents
AU - Glassberg, Iris
AU - Brender-Ilan, Yael
AU - Zwilling, Moti
N1 - Publisher Copyright:
© 2025 The Author(s)
PY - 2025/9/1
Y1 - 2025/9/1
N2 - This study examines the factors that influence user trust in AI-powered digital agents within organizational contexts. Employing an integrated framework that combines the Expectation Confirmation Model and the Technology Acceptance Model, we investigated how user expectations, attitudes toward AI, information transparency, visual design, and exposure duration contribute to trust formation. A sample of 118 organizational participants in Israel interacted with ChatGPT across simulated workplace scenarios. The findings indicate that while expectations alone do not significantly predict trust, transparency and effective visual design serve as crucial mediators. Attitudes toward AI were also positively associated with trust. The experimental manipulation of expectations did not yield a significant effect, suggesting that pre-existing user perceptions may override brief interventions. These results highlight actionable design and implementation strategies for organizations aiming to foster user trust and promote AI adoption. Limitations include sample bias, constraints in subgroup analysis, and the need for longitudinal research.
KW - Artificial intelligence
KW - ChatGPT
KW - Digital agents
KW - Expectations
KW - Organizational adoption
KW - Trust
UR - https://www.scopus.com/pages/publications/105011839986
U2 - 10.1016/j.jik.2025.100770
DO - 10.1016/j.jik.2025.100770
M3 - Article
AN - SCOPUS:105011839986
SN - 2530-7614
VL - 10
JO - Journal of Innovation & Knowledge
JF - Journal of Innovation & Knowledge
IS - 5
M1 - 100770
ER -