A machine learning framework for interpretable predictions in patient pathways: The case of predicting ICU admission for patients with symptoms of sepsis

Zilker, Sandra and Weinzierl, Sven and Kraus, Mathias and Zschech, Patrick and Matzner, Martin (2024) A machine learning framework for interpretable predictions in patient pathways: The case of predicting ICU admission for patients with symptoms of sepsis. Health Care Management Science, 27 (2). pp. 136-167. ISSN 1386-9620, 1572-9389

Full text not available from this repository.

Abstract

Proactive analysis of patient pathways helps healthcare providers anticipate treatment-related risks, identify outcomes, and allocate resources. Machine learning (ML) can leverage a patient's complete health history to make informed decisions about future events. However, previous work has mostly relied on so-called black-box models, which are unintelligible to humans, making it difficult for clinicians to apply such models. Our work introduces PatWay-Net, an ML framework designed for interpretable predictions of admission to the intensive care unit (ICU) for patients with symptoms of sepsis. We propose a novel type of recurrent neural network and combine it with multi-layer perceptrons to process the patient pathways and produce predictive yet interpretable results. We demonstrate its utility through a comprehensive dashboard that visualizes patient health trajectories, predictive outcomes, and associated risks. Our evaluation includes both predictive performance - where PatWay-Net outperforms standard models such as decision trees, random forests, and gradient-boosted decision trees - and clinical utility, validated through structured interviews with clinicians. By providing improved predictive accuracy along with interpretable and actionable insights, PatWay-Net serves as a valuable tool for healthcare decision support in the critical case of patients with symptoms of sepsis.
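The abstract describes combining a recurrent neural network over the sequential patient pathway with multi-layer perceptrons to produce interpretable ICU-admission predictions. The sketch below illustrates one way such a combined architecture can be wired up; it is a minimal, hypothetical example, not the authors' PatWay-Net implementation: a standard LSTM stands in for the paper's novel interpretable recurrent unit, and all feature names and dimensions are assumptions.

```python
# Minimal sketch (assumptions throughout): an LSTM stands in for PatWay-Net's
# novel recurrent unit; an MLP handles static patient attributes; a joint head
# outputs the ICU-admission probability. Feature names/sizes are illustrative.
import torch
import torch.nn as nn

class PathwayClassifier(nn.Module):
    def __init__(self, seq_dim=6, static_dim=4, hidden_dim=32):
        super().__init__()
        # Recurrent branch over time-ordered pathway events (e.g. vitals, labs)
        self.rnn = nn.LSTM(seq_dim, hidden_dim, batch_first=True)
        # MLP branch over static attributes (e.g. age, sex)
        self.static_mlp = nn.Sequential(
            nn.Linear(static_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Joint head producing the ICU-admission logit
        self.head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, seq, static):
        _, (h_n, _) = self.rnn(seq)           # h_n: (1, batch, hidden_dim)
        combined = torch.cat([h_n[-1], self.static_mlp(static)], dim=-1)
        return torch.sigmoid(self.head(combined))  # admission probability

# Usage with random stand-in data: 8 pathways, 10 events, 6 features per event
model = PathwayClassifier()
prob = model(torch.randn(8, 10, 6), torch.randn(8, 4))
print(prob.shape)  # torch.Size([8, 1])
```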

Item Type: Article
Uncontrolled Keywords: Intensive care; Model; Patient pathway; Process prediction; Sepsis; Interpretability; Interpretable machine learning; Interpretation plots; Deep learning
Subjects: 000 Computer science, information & general works > 004 Computer science
600 Technology > 610 Medical sciences; Medicine
Divisions: Informatics and Data Science > Department Information Systems > Chair of Explainable Artificial Intelligence for Business Value Creation (Prof. Dr. Mathias Kraus)
Depositing User: Dr. Gernot Deinzer
Date Deposited: 14 Jan 2026 07:49
Last Modified: 14 Jan 2026 07:49
URI: https://pred.uni-regensburg.de/id/eprint/65138
