Patient insights into empathy, compassion and self-disclosure in medical large language models: results from the IPALLM III study
Saved in:
| Main authors: | Nicolas Carl, Sarah Haggenmüller, Jana Theres Winterstein, Lisa Nguyen, Christoph Wies, Martin Joachim Hetz, Maurin Helen Mangold, Britta Grüne, Maurice Stephan Michel, Titus Josef Brinker, Frederik Wessels |
|---|---|
| Document type: | Article (Journal) |
| Language: | English |
| Published: | December 2025 |
| In: | World journal of urology, Year: 2025, Volume: 43, Issue: 1, Pages: 1-9 |
| ISSN: | 1433-8726 |
| DOI: | 10.1007/s00345-025-05872-2 |
| Online access: | Publisher, subject to licensing, full text: https://doi.org/10.1007/s00345-025-05872-2 |
| Author details: | Nicolas Carl, Sarah Haggenmüller, Jana Theres Winterstein, Lisa Nguyen, Christoph Wies, Martin Joachim Hetz, Maurin Helen Mangold, Britta Grüne, Maurice Stephan Michel, Titus Josef Brinker, Frederik Wessels |
| Abstract: | Large language models (LLMs) offer promising applications in healthcare communication, including the provision of medical information and simulation of empathetic responses. However, the extent to which patients perceive such interactions as empathetic remains unclear. This study explores urological patients’ perceptions of empathy and compassion in interactions with an LLM-powered chatbot, as well as their willingness to disclose personal information. |
|---|---|
| Description: | Published online: 14 August 2025. Viewed on: 06.11.2025 |
| Description: | Online Resource |