Chanda, Tirtha and Hauser, Katja and Hobelsberger, Sarah and Bucher, Tabea-Clara and Garcia, Carina Nogueira and Wies, Christoph and Kittler, Harald and Tschandl, Philipp and Navarrete-Dechent, Cristian and Podlipnik, Sebastian and Chousakos, Emmanouil and Crnaric, Iva and Majstorovic, Jovana and Alhajwan, Linda and Foreman, Tanya and Peternel, Sandra and Sarap, Sergei and Ozdemir, Irem and Barnhill, Raymond L. and Llamas-Velasco, Mar and Poch, Gabriela and Korsing, Soeren and Sondermann, Wiebke and Gellrich, Frank Friedrich and Heppt, Markus V. and Erdmann, Michael and Haferkamp, Sebastian and Drexler, Konstantin and Goebeler, Matthias and Schilling, Bastian and Utikal, Jochen S. and Ghoreschi, Kamran and Froehling, Stefan and Krieghoff-Henning, Eva and Brinker, Titus J. (2024) Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma. NATURE COMMUNICATIONS, 15 (1): 524. ISSN 2041-1723
Full text not available from this repository.

Abstract
Artificial intelligence (AI) systems have been shown to help dermatologists diagnose melanoma more accurately; however, they lack transparency, hindering user acceptance. Explainable AI (XAI) methods can help to increase transparency, yet often lack precise, domain-specific explanations. Moreover, the impact of XAI methods on dermatologists' decisions has not yet been evaluated. Building upon previous research, we introduce an XAI system that provides precise and domain-specific explanations alongside its differential diagnoses of melanomas and nevi. Through a three-phase study, we assess its impact on dermatologists' diagnostic accuracy, diagnostic confidence, and trust in the XAI support. Our results show strong alignment between XAI and dermatologist explanations. We also show that dermatologists' confidence in their diagnoses and their trust in the support system increase significantly with XAI compared to conventional AI. This study highlights dermatologists' willingness to adopt such XAI systems, promoting future use in the clinic.

Artificial intelligence has become popular as a cancer classification tool, but such systems are distrusted due to their lack of transparency. Here, the authors develop an explainable AI system that produces text- and region-based explanations alongside its classifications, which was assessed using clinicians' diagnostic accuracy, diagnostic confidence, and their trust in the system.
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | ARTIFICIAL-INTELLIGENCE; NETWORKS |
| Subjects: | 600 Technology > 610 Medical sciences Medicine |
| Divisions: | Medicine > Lehrstuhl für Dermatologie und Venerologie |
| Depositing User: | Dr. Gernot Deinzer |
| Date Deposited: | 20 Aug 2025 07:10 |
| Last Modified: | 20 Aug 2025 07:10 |
| URI: | https://pred.uni-regensburg.de/id/eprint/65598 |