ChatGPT shows promise in responding to urology patient in-basket messages

"Generative AI technologies may play a valuable role in providing prompt, accurate responses to routine patient questions––potentially alleviating patients' concerns while freeing up clinic time and resources to address other complex tasks," says Michael Scott, MD.

ChatGPT was able to generate acceptable responses to nearly half of patient in-basket messages, indicating a potential for generative AI tools to decrease the time burden of electronic health record use for urologists, according to new data published in Urology Practice.1

"Generative AI technologies may play a valuable role in providing prompt, accurate responses to routine patient questions––potentially alleviating patients' concerns while freeing up clinic time and resources to address other complex tasks," said lead author Michael Scott, MD, in a news release on the findings.2 Scott is a urology resident at Stanford University School of Medicine in California.

For the study, the investigators collected 100 electronic patient messages from a men’s health clinic and entered them individually into ChatGPT 3.5 to generate responses. Questions included queries on clinical decision-making, health and treatment plans, postoperative concerns, symptoms, and test results.
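The paper does not detail the mechanics of message entry, but a workflow like the one described above could be scripted. The sketch below is a minimal illustration, not the study’s method: the use of the OpenAI Python client, the "gpt-3.5-turbo" model name, and the bare prompt (the patient message with no added instructions) are all assumptions.

```python
# Illustrative sketch only: submit one patient message to ChatGPT 3.5 and
# collect the drafted reply. The client usage, model name, and prompt format
# are assumptions; the study does not specify how messages were entered.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_reply(patient_message: str) -> str:
    """Return a ChatGPT 3.5 draft response to a single in-basket message."""
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": patient_message}],
    )
    return completion.choices[0].message.content

# The 100 collected messages would be submitted one at a time:
# drafts = [draft_reply(message) for message in patient_messages]
```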

ChatGPT’s responses were then independently evaluated by 5 urologists, who each indicated whether they would send the generated response to a patient. A response was deemed acceptable to send if the majority of reviewers answered yes. The responses were also graded on a 5-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree) for accuracy, completeness, harmfulness, helpfulness, and intelligibility.
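In concrete terms, the 2 scoring rules reduce to a majority vote and a per-dimension average. The sketch below assumes hypothetical data structures; only the majority rule and the 1-to-5 scale come from the study’s description.

```python
# Minimal sketch of the scoring rules described above. The data structures
# are hypothetical; only the majority-vote rule and the 1-5 Likert averaging
# reflect the study's description.
from statistics import mean

def acceptable_to_send(votes: list[bool]) -> bool:
    """Majority rule: acceptable if more than half of reviewers vote yes."""
    return sum(votes) > len(votes) / 2

def mean_likert(ratings: list[int]) -> float:
    """Average one 1-5 Likert dimension (e.g., accuracy) across reviewers."""
    return mean(ratings)

# Example: 3 of 5 reviewers would send the response, so it counts as acceptable.
print(acceptable_to_send([True, True, True, False, False]))  # True
print(mean_likert([4, 4, 5, 3, 4]))                          # 4.0
```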

Overall, data showed that 47% of ChatGPT’s responses were deemed acceptable to send to patients. Responses across all question types were generally accurate (average, 4.0) and intelligible (average, 4.7). The completeness (average, 3.9) and helpfulness (average, 3.5) of responses tended to be lower. Importantly, little harm was detected across all question types (average, 1.4).

The chatbot performed better with questions graded as easy: 56% of responses to easy questions were deemed acceptable to send to patients vs 34% of responses to difficult questions (P = .03). Responses to easy questions were also shown to be more accurate, complete, helpful, and intelligible than responses to difficult questions. In total, 59 questions were graded as easy and 41 were graded as difficult.
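For readers who want to re-derive that comparison, the reported percentages imply roughly 33 of 59 easy responses and 14 of 41 difficult responses were acceptable. The sketch below re-runs the comparison with Fisher’s exact test; the study does not name the statistical test it used, so both the reconstructed counts and the choice of test are assumptions.

```python
# Hedged re-check of the easy-vs-difficult comparison. Counts are
# reconstructed from the reported percentages (56% of 59, 34% of 41), and
# Fisher's exact test is an assumption; the study does not name its test.
from scipy.stats import fisher_exact

easy_acceptable, easy_total = 33, 59  # 33/59 ~ 56%
hard_acceptable, hard_total = 14, 41  # 14/41 ~ 34%

table = [
    [easy_acceptable, easy_total - easy_acceptable],
    [hard_acceptable, hard_total - hard_acceptable],
]
odds_ratio, p_value = fisher_exact(table)  # two-sided by default
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.3f}")  # compare with the reported P = .03
```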

The investigators observed no significant difference in response quality based on question content. Questions regarding patient symptoms generally scored higher than others, but no consistent ranking emerged among the question categories.

According to the authors, these findings indicate a potential for generative AI to be used in clinical practice.

They wrote, “These results show promise for the utilization of generative AI technology to help improve clinical efficiency. A likely application of this technology is to integrate this technology into an electronic medical record to automatically generate a response to all patient messages.”1

However, they also cautioned that further research is warranted before use of large language models (LLMs) in this manner becomes widespread.

Scott added in the news release, “While our study provides an interesting starting point, more research will be needed to validate the use of LLMs to respond to patient questions in urology as well as other specialties. This will be a potentially valuable health care application, particularly with continued advances in AI technology."2

References

1. Scott M, Muncey W, Seranio N, et al. Assessing artificial intelligence–generated responses to urology patient in-basket messages. Urol Pract. 2024;11(5):793-798. doi:10.1097/UPJ.0000000000000637

2. ChatGPT shows promise in answering patients' questions to urologists. News release. Wolters Kluwer Health: Lippincott. August 22, 2024. Accessed August 26, 2024. https://www.newswise.com/articles/chatgpt-shows-promise-in-answering-patients-questions-to-urologists
