ChatGPT Responses to Glaucoma Questions Based on Patient Health Literacy Levels

  • Additional Information
    • Contributors:
      Patel, Monica; Suresh, Sruthi; Saleh, Ibrahim; Kooner, Karanjit
    • Publication Date:
      2024
    • Collection:
      UT Southwestern Medical Center Institutional Repository (University of Texas)
    • Abstract:
      The 62nd Annual Medical Student Research Forum at UT Southwestern Medical Center (Tuesday, January 30, 2024, 3-6 p.m., D1.700 Lecture Hall)

      BACKGROUND: Glaucoma is a complex, progressive neurodegenerative disease of the optic nerve, commonly found in the elderly. Patients usually do not understand the complexities of the disease and struggle to find answers across glaucoma sources and sites, which may be difficult to understand. AI chatbots such as ChatGPT® have recently emerged as useful tools for gathering information on medical questions. However, the role of ChatGPT in generating answers to glaucoma treatment questions is not well documented. Health literacy is defined as the basic reading and mathematical skills required to find, understand, and use health-related information. The average reading level among US adults is 7th-8th grade; however, most medical information is written at a higher reading level. The purpose of this study was to determine whether ChatGPT can tailor responses to glaucoma treatment questions based on patient health literacy levels.

      HYPOTHESIS: We hypothesize that ChatGPT may satisfactorily tailor answers to glaucoma questions based on patient health literacy level.

      METHODS: We selected 27 common questions relating to glaucoma medications, lasers, and surgical treatments. The questions were entered into ChatGPT, first without instructions. Then, ChatGPT was instructed to tailor responses to 4 health literacy levels based on the US National Assessment of Health Literacy: below basic (BB), basic (B), intermediate (I), and proficient (P). Responses were analyzed by Flesch-Kincaid (FKC) grade level [0-18+], corresponding to years of education, word count, and syllable count. Kruskal-Wallis rank sum tests were used to analyze the data. (A minimal sketch of this analysis appears below the record.)

      RESULTS: The mean FKC grade level of ChatGPT responses without any instructions about health literacy levels was 12.83, corresponding to a 12th-grade or "fairly difficult to read" level. When instructed to tailor responses, the mean FKC grade ...
    • File Description:
      application/pdf
    • Relation:
      62nd Annual Medical Student Research Forum; Mekala, P., Patel, M., Suresh, S., Saleh, I., & Kooner, K. S. (2024, January 30). ChatGPT responses to glaucoma questions based on patient health literacy levels [Poster session]. 62nd Annual Medical Student Research Forum, Dallas, Texas. https://hdl.handle.net/2152.5/10251
    • Online Access:
      https://hdl.handle.net/2152.5/10251
    • Accession Number:
      edsbas.1282270B
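
The METHODS paragraph above scores responses with the Flesch-Kincaid grade level and compares the four literacy groups with Kruskal-Wallis rank sum tests. Below is a minimal Python sketch of that kind of analysis, not the study's actual pipeline: the syllable counter is a rough heuristic and the group scores are hypothetical stand-ins for the study's data. The grade-level formula is the standard Flesch-Kincaid one, 0.39 x (words/sentence) + 11.8 x (syllables/word) - 15.59.

    # Sketch of the readability analysis described in the abstract.
    # Sample scores below are hypothetical, not the study's data.
    import re
    from scipy.stats import kruskal

    def count_syllables(word: str) -> int:
        """Rough syllable count: vowel groups, with a silent-'e' adjustment."""
        word = word.lower()
        count = len(re.findall(r"[aeiouy]+", word))
        if word.endswith("e") and count > 1:
            count -= 1
        return max(count, 1)

    def fk_grade(text: str) -> float:
        """Flesch-Kincaid grade level:
        0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59."""
        sentences = max(len(re.findall(r"[.!?]+", text)), 1)
        words = re.findall(r"[A-Za-z']+", text)
        n_words = max(len(words), 1)
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * n_words / sentences + 11.8 * syllables / n_words - 15.59

    print(fk_grade("Glaucoma damages the optic nerve. Eye drops lower pressure."))

    # Hypothetical grade-level scores for responses at the four literacy levels.
    below_basic  = [5.1, 6.0, 5.5]
    basic        = [7.2, 8.1, 7.8]
    intermediate = [10.4, 11.2, 10.9]
    proficient   = [13.5, 14.1, 12.8]

    # Kruskal-Wallis rank sum test across the four groups, as in the abstract.
    stat, p = kruskal(below_basic, basic, intermediate, proficient)
    print(f"H = {stat:.2f}, p = {p:.4f}")

Kruskal-Wallis is the natural choice here because it compares more than two groups without assuming the grade-level scores are normally distributed.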