Thema4730




Explainable Artificial Intelligence in Healthcare: A Citation and Co-Citation Analysis




Information on the thesis

Thesis type: Bachelor
Student: Marie Morgenroth
Supervisor: Philipp Toussaint
Research group: Critical Information Infrastructures

Archive number: 4730
Thesis status: In progress
Start: 3 May 2021
Submission: unknown

Further information

Background: With the ever-growing amounts of biomedical data being generated (e.g., CT scans, X-ray images, omics data), it is becoming increasingly difficult to analyze these data comprehensively with conventional analysis methods. Researchers and practitioners are therefore turning to artificial intelligence (AI) approaches (e.g., machine learning (ML), deep learning (DL)) for analysis. Although AI approaches for biomedical data promise improved performance and accuracy, they are opaque: extant AI methods are inaccessible and non-transparent to humans (black boxes), which limits our ability to fully understand, and therefore trust, the produced outputs.
To address this opacity of contemporary AI approaches, a recent trend in AI research, namely Explainable AI (XAI), focuses on producing (more) interpretable AI models while maintaining high levels of performance and accuracy. To this end, XAI not only aims to make extant ML and DL models explainable post hoc, but also strives to produce new, inherently interpretable ML and DL models. However, research concerned with XAI for non-image biomedical data (e.g., omics data) is scarce, even though it promises to help with knowledge discovery (i.e., an enhanced understanding of biological processes), with improving the performance of and identifying errors in extant ML and DL models, and with justifying the application of AI approaches in personalized medicine.
Therefore, an investigation of current medical use cases for AI approaches is needed for two reasons. First, current use cases with accurate AI approaches may be identified for the future implementation of post-hoc explainability. Second, the investigation may unveil potential medical use cases for novel XAI approaches that improve performance while achieving explainability of results.
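To make the idea of post-hoc explainability mentioned above more concrete, the following is a minimal sketch of one common technique, permutation feature importance, applied to an opaque classifier on a tabular (non-image) toy dataset. The dataset, model, and use of scikit-learn are illustrative assumptions and are not prescribed by the thesis.

# Minimal sketch of post-hoc explainability: permutation feature importance
# applied to an otherwise opaque ("black box") classifier. Dataset and model
# are placeholders chosen for illustration only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Small tabular biomedical toy dataset (non-image).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque model whose individual predictions are hard to trace by hand.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post-hoc explanation: shuffle each feature and measure the drop in accuracy;
# features whose permutation hurts performance most are the most influential.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} importance = {result.importances_mean[idx]:.3f}")

In the non-image setting targeted by the thesis, the same post-hoc idea would be applied to, for example, omics features rather than the toy variables used here.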

Objective(s): The aim of this project is to create an overview of current medical use cases for AI/ML approaches in extant literature.



Method(s): Literature review
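Since the thesis title refers to a citation and co-citation analysis, the sketch below gives a rough, hypothetical illustration of the underlying counting step: two references are treated as co-cited whenever a third paper cites both of them. The paper and reference identifiers are made up; a real analysis would draw on records exported from a bibliographic database.

# Illustrative sketch of co-citation counting; all identifiers are fictitious.
from collections import Counter
from itertools import combinations

# Mapping "citing paper -> set of cited references", as it might be derived
# from a bibliographic export.
citations = {
    "paper_A": {"ref_1", "ref_2", "ref_3"},
    "paper_B": {"ref_2", "ref_3"},
    "paper_C": {"ref_1", "ref_3", "ref_4"},
}

# Co-citation strength of a reference pair = number of papers citing both.
co_citations = Counter()
for refs in citations.values():
    for pair in combinations(sorted(refs), 2):
        co_citations[pair] += 1

for (ref_a, ref_b), count in co_citations.most_common(3):
    print(f"{ref_a} and {ref_b}: co-cited {count} time(s)")

In practice, such counts are typically normalized into a similarity matrix and then clustered or mapped to reveal the intellectual structure of a research field.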



Introductory Literature:
Arrieta AB, et al. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 2020; 58: 82–115.

Levy Y, Ellis TJ. A Systems Approach to Conduct an Effective Literature Review in Support of Information Systems Research. Informing Science Journal 2006; 9.

Quinn TP, Senadeera M, Jacobs S, Coghlan S, Le V. Trust and medical AI: the challenges we face and the expertise needed to overcome them. Journal of the American Medical Informatics Association 2020.