
Explainable Artificial Intelligence in Healthcare: A Citation and Co-Citation Analysis

Information on the Thesis

Type of Final Thesis: Bachelor
Supervisor: Philipp Toussaint
Research Group: Critical Information Infrastructures

Archive Number: 4.730
Status of Thesis: In Progress
Date of start: 2021-05-03

Further Information


With the ever-growing amounts of generated biomedical data (e.g., CT scans, X-ray images, omics data), it is becoming increasingly difficult to analyze this data fully with conventional analysis methods. Therefore, researchers and practitioners are turning to artificial intelligence (AI) approaches (e.g., machine learning (ML), deep learning (DL)) for analysis. Although AI approaches for biomedical data promise improved performance and accuracy, they are opaque. Extant AI methods are inaccessible and non-transparent to humans (black boxes), which prevents us from fully understanding, and therefore trusting, the outputs they produce.
To address this opacity of contemporary AI approaches, a recent trend in AI research, namely Explainable AI (XAI), focuses on producing (more) interpretable AI models whilst maintaining high levels of performance and accuracy. To this end, XAI not only aims to make extant ML and DL models more explainable post hoc, but also strives to produce new, inherently interpretable ML and DL models. However, research concerned with XAI for non-image biomedical data (e.g., omics data) is scarce, even though it promises to help with knowledge discovery (i.e., an enhanced understanding of biological processes), with improving the performance of and identifying errors in extant ML and DL models, as well as with justifying the application of AI approaches in personalized medicine.
Therefore, an investigation of current medical use cases for AI approaches is needed for two reasons. First, current use cases with accurate AI approaches may be identified for future implementation of post-hoc explainability. Second, the investigation may unveil potential medical use cases for novel XAI approaches that improve performance while achieving explainability of results.

Objective(s): The aim of this project is to create an overview of current medical use cases for AI/ML approaches in extant literature.

Method(s): Literature review

Introductory Literature:
Arrieta AB, et al. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020; 58: 82–115.

Levy Y, Ellis TJ. A Systems Approach to Conduct an Effective Literature Review in Support of Information Systems Research. Informing Science Journal 2006; 9.

Quinn TP, Senadeera M, Jacobs S, Coghlan S, Le V. Trust and medical AI: the challenges we face and the expertise needed to overcome them. Journal of the American Medical Informatics Association 2020.