References
Boussina, A. et al. Large language models for more efficient reporting of hospital quality measures. NEJM AI 1, AIcs2400420 (2024). https://doi.org/10.1056/AIcs2400420
Ong, J. C. L. et al. Medical ethics of large language models in medicine. NEJM AI, AIra2400038 (2024).
Gallifant, J. et al. The TRIPOD-LLM reporting guideline for studies using large language models. Nat. Med. 1–10 (2025).
Spotnitz, M. et al. A survey of clinicians’ views of the utility of large language models. Appl. Clin. Inform. 15, 306–312 (2024). https://doi.org/10.1055/a-2281-7092
Tripathi, S., Sukumaran, R. & Cook, T. S. Efficient healthcare with large language models: optimizing clinical workflow and enhancing patient care. J. Am. Med. Inform. Assoc. 31, 1436–1440 (2024). https://doi.org/10.1093/jamia/ocad258
Hartman, V. et al. Developing and evaluating large language model–generated emergency medicine handoff notes. JAMA Netw. Open 7, e2448723 (2024). https://doi.org/10.1001/jamanetworkopen.2024.48723
Hartman, V. C. et al. A method to automate the discharge summary hospital course for neurology patients. J. Am. Med. Inform. Assoc. 30, 1995–2003 (2023). https://doi.org/10.1093/jamia/ocad177
Chua, C. E. et al. Integration of customised LLM for discharge summary generation in real-world clinical settings: a pilot study on RUSSELL GPT. The Lancet Reg. Health–Western Pacific 51 (2024).
Heilmeyer, F. et al. Viability of open large language models for clinical documentation in German health care: real-world model evaluation study. JMIR Med. Inform. 12, e59617 (2024). https://doi.org/10.2196/59617
American Medical Association. AMA Augmented Intelligence Research: Physician Sentiments Around the Use of AI in Health Care: Motivations, Opportunities, Risks, and Use Cases – Shifts from 2023 to 2024 (2025).
Park, S. H. & Langlotz, C. P. Crucial role of understanding in human-artificial intelligence interaction for successful clinical adoption. Korean J. Radiol. 26, 287–290 (2025). https://doi.org/10.3348/kjr.2025.0071
Kim, H. et al. A bilingual on-premise AI agent for clinical drafting: seamless EHR integration in the Y-KNOT project. medRxiv 2025.04.03.25325003 (2025).
Lambert, S. I. et al. An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals. NPJ Digit. Med. 6, 111 (2023). https://doi.org/10.1038/s41746-023-00852-5
Jindal, J. A., Lungren, M. P. & Shah, N. H. Ensuring useful adoption of generative artificial intelligence in healthcare. J. Am. Med. Inform. Assoc. 31, 1441–1444 (2024). https://doi.org/10.1093/jamia/ocae043
Gandhi, T. K. et al. How can artificial intelligence decrease cognitive and work burden for front line practitioners? JAMIA Open 6, ooad079 (2023).
Tierney, A. A. et al. Ambient artificial intelligence scribes to alleviate the burden of clinical documentation. NEJM Catalyst Innovations in Care Delivery 5, CAT.23.0404 (2024).
Sellergren, A. et al. MedGemma technical report. arXiv preprint arXiv:2507.05201 (2025).
OpenAI. Introducing gpt-oss. https://openai.com/ko-KR/index/introducing-gpt-oss/ (2025).
McCurry, J. South Korean doctors threaten mass resignation. Lancet 403, 1124 (2024). https://doi.org/10.1016/S0140-6736(24)00578-6
Moon, J. & Lee, J. Y. Why I decide to leave South Korea healthcare system. The Lancet Reg. Health–Western Pacific 52 (2024).
Murad, M. H. et al. Measuring documentation burden in healthcare. J. Gen. Intern. Med. 1–12 (2024).
Meta. Introducing Meta Llama 3: the most capable openly available LLM to date. https://ai.meta.com/blog/meta-llama-3/ (2024).
Hart, S. G. NASA-Task Load Index (NASA-TLX); 20 years later. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting 904–908 (Sage Publications, Los Angeles, CA, 2006).
Venkatesh, V. & Davis, F. D. A theoretical extension of the technology acceptance model: four longitudinal field studies. Manage. Sci. 46, 186–204 (2000). https://doi.org/10.1287/mnsc.46.2.186.11926
Davis, F. D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13, 319–340 (1989). https://doi.org/10.2307/249008
Ajzen, I., Fishbein, M., Lohmann, S. & Albarracín, D. The influence of attitudes on behavior. In The Handbook of Attitudes, Volume 1: Basic Principles 197–255 (2018).
Stout, N., Dennis, A. R. & Wells, T. M. The buck stops there: the impact of perceived accountability and control on the intention to delegate to software agents. AIS Trans. Hum. Comput. Interact. 6, 1–15 (2014). https://doi.org/10.17705/1thci.00058