TY - JOUR
T1 - Fool Me, Fool Me
T2 - User Attitudes Toward LLM Falsehoods
AU - Nirman, Diana Bar Or
AU - Weizman, Ariel
AU - Azaria, Amos
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
AB - While Large Language Models (LLMs) have become central tools in various fields, they often provide inaccurate or false information. This study examines user preferences regarding falsehood responses from LLMs. Specifically, we evaluate preferences for LLM responses where false statements are explicitly marked versus unmarked responses, preferences for confident falsehoods compared to LLM disclaimers acknowledging a lack of knowledge, and an upfront comparison of user preference between true responses and falsehoods. Additionally, we investigate how requiring users to assess the truthfulness of statements influences these preferences. Surprisingly, 61% of users prefer unmarked responses over marked ones, and 69% prefer confident falsehoods over LLMs admitting lack of knowledge. When users are required to evaluate the truthfulness of statements, preferences for unmarked and falsehood responses decrease slightly but remain high. However, as expected, users prefer blunt truth over blunt falsehoods. In all our experiments, a total of over 400 users participated. Our findings suggest that user preferences, which influence LLM training via feedback mechanisms, may inadvertently encourage the generation of confident falsehoods. Future research should address the ethical and practical implications of aligning LLM behavior with such preferences.
KW - Falsehoods
KW - Hallucination
KW - Large Language Models
KW - RLHF
KW - User Preferences
UR - https://www.scopus.com/pages/publications/105021870655
U2 - 10.1109/ACCESS.2025.3632748
DO - 10.1109/ACCESS.2025.3632748
M3 - Article
AN - SCOPUS:105021870655
SN - 2169-3536
JO - IEEE Access
JF - IEEE Access
ER -