
Explainable Artificial Intelligence in Life Science: An Application to Omics Data


Contact: Ali Sunyaev

http://www.hidss4health.de


Project Status: active


Description

As it becomes progressively more challenging to fully analyse the ever-increasing amounts of generated biomedical data (e.g., CT scans, X-ray images, omics data) with conventional analysis techniques, researchers and practitioners are turning to artificial intelligence (AI) approaches (e.g., deep learning) to analyse their data. Although applying AI to biomedical data often promises improved performance and accuracy, extant AI approaches frequently suffer from opacity: their sub-symbolic representation of state is often inaccessible and non-transparent to humans, limiting our ability to fully understand, and therefore trust, the produced outputs. Explainable AI (XAI) describes a recent trend in AI research that aims to address the opacity of contemporary AI approaches by producing (more) interpretable AI models whilst maintaining high levels of performance and accuracy.

The objective of the XAIOmics research project is to design, develop, and evaluate an XAI approach for biomedical (i.e., omics) data. In particular, we will identify biomedical use cases and current, viable approaches in the domain of XAI, and apply and adapt these approaches to the identified use cases. Given the highly interdisciplinary nature of the field, a central research hurdle will be developing an understanding of the different kinds of biomedical data and of the feature engineering required when designing the AI algorithms. In doing so, this project will not only aid researchers and physicians in better understanding the outputs of contemporary AI approaches for biomedical data but also create more transparency, supporting the building of trust in AI-based treatment and diagnosis decisions in personalized medicine.
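The project description does not commit to a specific XAI technique. As one illustration of the general idea of post-hoc explanation, the sketch below implements permutation feature importance: the drop in a model's accuracy when a single feature is shuffled indicates how much the model relies on that feature. All names and the toy data are hypothetical; this is a minimal sketch, not the project's method.

```python
import numpy as np

def permutation_importance(model_predict, X, y, n_repeats=5, seed=0):
    """Post-hoc explanation: mean drop in accuracy when one feature is shuffled.

    model_predict: callable mapping an (n_samples, n_features) array to labels.
    Returns one importance score per feature (higher = more relied upon).
    """
    rng = np.random.default_rng(seed)
    baseline = np.mean(model_predict(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-label link
            drops.append(baseline - np.mean(model_predict(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

# Toy "omics-like" example: feature 0 determines the label, feature 1 is noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
model = lambda A: (A[:, 0] > 0).astype(int)  # stand-in for a trained classifier
imp = permutation_importance(model, X, y)
```

On the toy data, shuffling the informative feature degrades accuracy substantially, while shuffling the noise feature changes nothing, so `imp[0]` clearly exceeds `imp[1]`. The same model-agnostic scheme applies to any black-box classifier over omics features.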


Involved Persons
Ali Sunyaev, Scott Thiebes, Philipp Toussaint


Information

from: July 2020
until: June 2023
Funding: Helmholtz-Gemeinschaft


Partners

German Cancer Research Center (DKFZ)


Research Group

Critical Information Infrastructures

