TY - JOUR
T1 - Interpretation of cardiopulmonary exercise test by GPT - promising tool as a first step to identify normal results
AU - Kleinhendler, Eyal
AU - Pinkhasov, Avital
AU - Hayek, Samah
AU - Man, Avraham
AU - Freund, Ophir
AU - Perluk, Tal Moshe
AU - Gershman, Evgeni
AU - Unterman, Avraham
AU - Fire, Gil
AU - Bar-Shai, Amir
N1 - Publisher Copyright:
© 2025 Informa UK Limited, trading as Taylor & Francis Group.
PY - 2025
Y1 - 2025
AB - Background: Cardiopulmonary exercise testing (CPET) is used in the evaluation of unexplained dyspnea. However, its interpretation requires expertise that is often unavailable. We aimed to evaluate the utility of ChatGPT (GPT) in interpreting CPET results. Research Design and Methods: This cross-sectional study included 150 patients who underwent CPET. Two expert pulmonologists categorized the results as normal or abnormal (cardiovascular, pulmonary, or other exercise limitation), serving as the gold standard. GPT versions 3.5 (GPT-3.5) and 4 (GPT-4) analyzed the same data using pre-defined structured inputs. Results: GPT-3.5 correctly interpreted 67% of the cases. It achieved a sensitivity of 75% and a specificity of 98% in identifying normal CPET results. GPT-3.5 had varying results for abnormal CPET tests, depending on the limiting etiology. In contrast, GPT-4 demonstrated improved interpretation of abnormal tests, with sensitivities of 83% and 92% for respiratory and cardiovascular limitations, respectively. Combining the normal CPET interpretations of both AI models yielded 91% sensitivity and 98% specificity. Low work rate and low peak oxygen consumption were independent predictors of inaccurate interpretation. Conclusions: Both GPT-3.5 and GPT-4 succeeded in ruling out abnormal CPET results. This tool could be used to differentiate between normal and abnormal results.
KW - Artificial Intelligence
KW - ChatGPT
KW - Generative AI
KW - cardio-pulmonary exercise test (CPET)
KW - large language model
KW - pulmonary function test
UR - http://www.scopus.com/inward/record.url?scp=86000237375&partnerID=8YFLogxK
DO - 10.1080/17476348.2025.2474138
M3 - Article
C2 - 40012496
AN - SCOPUS:86000237375
SN - 1747-6348
VL - 19
SP - 371
EP - 378
JO - Expert Review of Respiratory Medicine
JF - Expert Review of Respiratory Medicine
IS - 4
ER -