
Shifts in emergency physicians' attitudes toward large language model-based documentation: a pre- and post-implementation study.

  • Additional Information
    • Source:
Publisher: Nature Publishing Group; Country of Publication: England; NLM ID: 101563288; Publication Model: Electronic; Cited Medium: Internet; ISSN: 2045-2322 (Electronic); Linking ISSN: 20452322; NLM ISO Abbreviation: Sci Rep; Subsets: MEDLINE
    • Publication Information:
      Original Publication: London : Nature Publishing Group, copyright 2011-
    • Abstract:
Competing Interests: S.C.Y. reports grants from Daiichi Sankyo. He is a coinventor of granted Korea Patents DP-2023-1223 and DP-2023-0920, and of pending patent applications DP-2024-0909, DP-2024-0908, DP-2022-1658, DP-2022-1478, and DP-2022-1365, all unrelated to the current work. S.C.Y. is the chief executive officer of PHI Digital Healthcare. The other authors have no potential conflicts of interest to disclose.
Large language models (LLMs) can assist physicians in writing medical notes more efficiently. This study evaluates whether using an LLM assistant to write emergency department discharge notes can reduce doctors' workload, and it addresses concerns regarding the incorporation of AI into medical practice. Eight emergency doctors with an average of 12 years of experience participated in our study. We surveyed them before they began using the LLM, and again 3 days and 5 weeks into its use. The results showed that doctors' concerns about using LLMs decreased significantly and remained low throughout the study period. Moreover, LLM usage considerably reduced the perceived workload, with the time required to write each discharge note reduced by one-third. These findings demonstrate that doctors readily accept and benefit from LLM assistants in their daily practice. Our study provides the first real-world evidence of how doctors' attitudes toward AI assistants change over time in clinical settings, offering valuable insights into the future implementation of LLM-based documentation tools in healthcare.
      (© 2025. The Author(s).)
    • Grant Information:
RS-2023-KH135326, Korea Health Industry Development Institute, Republic of Korea; MD-PhD/Medical Scientist Training Program, Korea Health Industry Development Institute, Republic of Korea
    • Contributed Indexing:
      Keywords: Emergency service, hospital; Longitudinal studies; Natural language processing; Surveys and questionnaires; Workload
    • Publication Date:
Date Created: 20251125; Date Completed: 20251125; Latest Revision: 20251127
    • Publication Date:
      20251127
  • Accession Number (PMCID):
      PMC12644546
    • DOI:
      10.1038/s41598-025-24659-4
    • Accession Number (PMID):
      41285993