Fool Me, Fool Me: User Attitudes Toward LLM Falsehoods

Research output: Contribution to journal › Article › peer-review

Abstract

While Large Language Models (LLMs) have become central tools in many fields, they often provide inaccurate or false information. This study examines user preferences regarding falsehoods in LLM responses. Specifically, we evaluate preferences for responses in which false statements are explicitly marked versus unmarked, preferences for confident falsehoods versus LLM disclaimers acknowledging a lack of knowledge, and a direct comparison of preferences between true responses and falsehoods. We also investigate how requiring users to assess the truthfulness of statements influences these preferences. Surprisingly, 61% of users prefer unmarked responses over marked ones, and 69% prefer confident falsehoods over an LLM admitting a lack of knowledge. When users are required to evaluate the truthfulness of statements, preferences for unmarked and false responses decrease slightly but remain high. As expected, however, users prefer blunt truths over blunt falsehoods. Over 400 users participated across all our experiments. Our findings suggest that user preferences, which shape LLM training via feedback mechanisms, may inadvertently encourage the generation of confident falsehoods. Future research should address the ethical and practical implications of aligning LLM behavior with such preferences.

Original language: English
Journal: IEEE Access
State: Accepted/In press - 2025

Keywords

  • Falsehoods
  • Hallucination
  • Large Language Models
  • RLHF
  • User Preferences
