The Internal State of an LLM Knows When It's Lying

Amos Azaria, Tom Mitchell

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    61 Citations (Scopus)

    Abstract

    While Large Language Models (LLMs) have shown exceptional performance in various tasks, one of their most prominent drawbacks is generating inaccurate or false information with a confident tone. In this paper, we provide evidence that the LLM's internal state can be used to reveal the truthfulness of statements. This includes both statements provided to the LLM, and statements that the LLM itself generates. Our approach is to train a classifier that outputs the probability that a statement is truthful, based on the hidden layer activations of the LLM as it reads or generates the statement. Experiments demonstrate that given a set of test sentences, of which half are true and half false, our trained classifier achieves an average of 71% to 83% accuracy labeling which sentences are true versus false, depending on the LLM base model. Furthermore, we explore the relationship between our classifier's performance and approaches based on the probability assigned to the sentence by the LLM. We show that while LLM-assigned sentence probability is related to sentence truthfulness, this probability is also dependent on sentence length and the frequencies of words in the sentence, resulting in our trained classifier providing a more reliable approach to detecting truthfulness, highlighting its potential to enhance the reliability of LLM-generated content and its practical applicability in real-world scenarios.
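The abstract describes training a classifier that maps an LLM's hidden-layer activations to the probability that a statement is true. The sketch below illustrates that probing setup under stated assumptions: it uses synthetic activation vectors in place of real LLM hidden states (which would require running a base model), and a simple logistic-regression probe rather than the paper's exact classifier architecture. The helper names (`make_synthetic_activations`, `train_probe`) are illustrative, not from the authors' code.

```python
# Sketch of an activation-probing classifier for statement truthfulness.
# Assumption: real usage would replace make_synthetic_activations with
# hidden-layer activations extracted from an LLM reading each statement.
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic_activations(n, dim=64):
    """Stand-in for hidden-state activations of true/false statements.
    'True' statements are drawn from a slightly shifted distribution so
    that a linear probe has signal to find."""
    labels = rng.integers(0, 2, size=n)
    acts = rng.normal(size=(n, dim)) + 0.5 * labels[:, None]
    return acts, labels

def train_probe(acts, labels, lr=0.1, epochs=200):
    """Logistic-regression probe: outputs P(statement is true).
    Trained with full-batch gradient descent on the cross-entropy loss."""
    w = np.zeros(acts.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(acts @ w + b)))  # sigmoid of the logit
        grad = p - labels                          # dLoss/dlogit per sample
        w -= lr * acts.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

X_train, y_train = make_synthetic_activations(2000)
X_test, y_test = make_synthetic_activations(500)
w, b = train_probe(X_train, y_train)
pred = (X_test @ w + b) > 0           # threshold the logit at 0.5 probability
accuracy = (pred == y_test).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

The key design point mirrored from the paper is that the probe never sees the statement text, only the model's internal representation of it; whatever accuracy it achieves comes entirely from information encoded in the activations.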

    Original language: English
    Title of host publication: Findings of the Association for Computational Linguistics
    Subtitle of host publication: EMNLP 2023
    Publisher: Association for Computational Linguistics (ACL)
    Pages: 967-976
    Number of pages: 10
    ISBN (electronic): 9798891760615
    DOIs
    Publication status: Published - 2023
    Event: 2023 Findings of the Association for Computational Linguistics: EMNLP 2023 - Singapore, Singapore
    Duration: 6 Dec 2023 → 10 Dec 2023

    Publication series

    Name: Findings of the Association for Computational Linguistics: EMNLP 2023

    Conference

    Conference: 2023 Findings of the Association for Computational Linguistics: EMNLP 2023
    Country/Territory: Singapore
    City: Singapore
    Period: 6/12/23 → 10/12/23
