


Explainable Artificial Intelligence in Healthcare: A Citation and Co-Citation Analysis


Marie Morgenroth



Information about the Thesis

Thesis type: Bachelor
Supervisor: Philipp Toussaint
Research group: Critical Information Infrastructures

Archive number: 4730
Thesis status: Completed
Start: 03 May 2021
Submission: 15 September 2021

Further Information

Background: With the ever-growing amounts of generated biomedical data (e.g., CT scans, X-ray images, omics data), it is becoming increasingly difficult to analyze this data wholly with conventional analysis methods. Therefore, researchers and practitioners are turning to artificial intelligence (AI) approaches (e.g., machine learning (ML), deep learning (DL)) for analysis. Although AI approaches for biomedical data promise improved performance and accuracy, they are opaque: extant AI methods are inaccessible and non-transparent to humans (black boxes), limiting our ability to fully understand, and therefore trust, the produced outputs.
To address this opacity of contemporary AI approaches, a recent trend in AI research, namely Explainable AI (XAI), focuses on producing (more) interpretable AI models whilst maintaining high levels of performance and accuracy. To this end, XAI not only aims to make extant ML and DL models more explainable post hoc, but also strives to produce new, interpretable ML and DL models. However, research concerned with XAI for non-image biomedical data (e.g., omics data) is scarce, yet it promises to help with knowledge discovery (i.e., an enhanced understanding of biological processes), with improving the performance of and identifying errors in extant ML and DL models, and with justifying the application of AI approaches in personalized medicine.
Therefore, an investigation of current medical use cases for AI approaches is needed for two reasons. First, current use cases with accurate AI approaches may be identified for a future implementation of post-hoc explainability. Second, the investigation may unveil potential medical use cases for novel XAI approaches to improve performance and achieve explainability of results.
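To illustrate what post-hoc explainability of an opaque model can look like in practice, the following is a minimal sketch, not part of the thesis itself: it uses scikit-learn's permutation importance on the library's bundled breast-cancer dataset. The dataset, the random-forest model, and all parameters are illustrative assumptions.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque ("black box") model on a biomedical-style tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explanation: permutation importance measures how much the test
# score drops when the values of a single feature are randomly shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, drop in top:
    print(f"{name}: mean score drop {drop:.3f}")

Features whose shuffling degrades the score most are those the model relies on, giving a first, model-agnostic handle on an otherwise opaque classifier.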

Objective(s): The aim of this project is to create an overview of current medical use cases for AI/ML approaches in extant literature.



Method(s): Literature review
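Since the thesis title announces a citation and co-citation analysis, the following minimal sketch (with made-up input data) shows the core counting step of such an analysis: two works are co-cited whenever a single paper references both.

from collections import Counter
from itertools import combinations

# Hypothetical input: each citing paper mapped to the works it references.
citing_papers = {
    "paper_A": ["Arrieta2020", "Quinn2020", "Levy2006"],
    "paper_B": ["Arrieta2020", "Quinn2020"],
    "paper_C": ["Arrieta2020", "Levy2006"],
}

# Count how many papers cite each unordered pair of references together.
co_citations = Counter()
for refs in citing_papers.values():
    for pair in combinations(sorted(set(refs)), 2):
        co_citations[pair] += 1

for (a, b), count in co_citations.most_common():
    print(f"{a} <-> {b}: co-cited by {count} paper(s)")

The resulting pair counts can be read as edge weights of a co-citation network, whose clusters are what a co-citation analysis typically inspects.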



Introductory Literature:
Arrieta AB, et al. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020; 58: 82–115.

Levy Y, Ellis TJ. A Systems Approach to Conduct an Effective Literature Review in Support of Information Systems Research. Informing Science Journal 2006; 9.

Quinn TP, Senadeera M, Jacobs S, Coghlan S, Le V. Trust and medical AI: the challenges we face and the expertise needed to overcome them. Journal of the American Medical Informatics Association 2020.