Trust-Building Factors of Artificial Intelligence Applications in Life Science: A Literature Review





Information about the thesis

Thesis type: Bachelor
Supervisors: Maximilian Renner, Philipp Toussaint, Sebastian Lins
Research group: Critical Information Infrastructures
Partner: Yale University
Archive number: 4657
Thesis status: Completed
Start: 1 November 2020
Submission: unknown

Further information

This Bachelor's thesis is part of the DAAD RISE exchange program and includes the opportunity to complete an internship of several months at Yale University.

Deadline for application: until 1 November 2020 (the sooner, the better)
Application documents (in English): letter of motivation, CV, and current transcript of records
E-Mail to sebastian lins∂kit edu



Background


As it is becoming increasingly challenging to analyze the ever-growing amounts of generated biomedical data (e.g., CT scans, X-ray images, omics data) using conventional analysis techniques, researchers and practitioners are turning to artificial intelligence (AI) approaches (e.g., deep learning) to analyze their data. Yet, although applying AI to biomedical data promises improved performance and accuracy in many cases, extant AI approaches often suffer from opacity: their sub-symbolic representation of state is often inaccessible and non-transparent to humans, limiting researchers' and practitioners' ability to fully understand, and therefore trust, the produced outputs.

Trustworthy AI (TAI) builds on the idea that trust provides the foundation of societies, economies, and sustainable development. To trust, individuals must willingly put themselves at risk or in vulnerable positions by delegating responsibility for actions to another (i.e., trusted) party (e.g., trusting AI to reliably and rapidly detect cancer on X-rays). However, various perspectives on trust exist in the literature, comprising different dimensions and partially opposing interpretations. Individuals, organizations, and societies will therefore only realize the full potential of AI if trust can be established in its development, deployment, and use.

In contrast to conventional medical AI approaches, TAI has the potential to maximize the benefits of AI for life science applications (e.g., humans trusting AI-based medical decisions) while at the same time mitigating or even preventing its risks and dangers (e.g., misjudgment, non-compliance with legal requirements, or misuse of personal data).

Recent trends in AI research focus on the important TAI principle of explainability. Explainability enables trustworthiness by producing (more) interpretable AI models, which may maintain high levels of performance and accuracy while simultaneously justifying AI-based treatment and diagnosis decisions in personalized medicine and other applications. Nonetheless, since explainability is only one key aspect of TAI, other principles such as justice and beneficence cannot be neglected. Thus, hand in hand with the growing demand for AI-based life science applications, the requirements for enabling TAI in life science are yet to be determined. To address this pressing gap in research, this project seeks to answer the following research question (RQ): What are the key characteristics of trustworthiness in AI-based life science applications?
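
Purely as an illustration of the explainability principle mentioned above (not part of the thesis description itself), the short Python sketch below computes a gradient-based saliency map, one common building block of explainable AI; the toy model, input size, and class index are assumptions made only for this example.

# Minimal sketch: gradient-based saliency for a hypothetical image classifier.
# All names (toy_model, xray, target_class) are illustrative assumptions.
import torch
import torch.nn as nn

def saliency_map(model: nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Per-pixel importance: |d score(target_class) / d input|, collapsed over channels."""
    model.eval()
    image = image.clone().requires_grad_(True)        # track gradients w.r.t. the input pixels
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()                                  # gradient of the class score w.r.t. the input
    return image.grad.abs().max(dim=0).values         # (H, W) heat map

# Illustrative usage with a toy CNN standing in for a medical image classifier.
toy_model = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(4 * 64 * 64, 2),
)
xray = torch.rand(1, 64, 64)                          # stand-in for a 64x64 grayscale X-ray
heatmap = saliency_map(toy_model, xray, target_class=1)
print(heatmap.shape)                                  # torch.Size([64, 64])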


Introductory literature


Lee, J. D.; & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80.


Independent High-Level Expert Group on Artificial Intelligence. (2019). Ethics Guidelines for Trustworthy AI. Brussels: European Commission. Retrieved from https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419


Holzinger, A.; Biemann, C.; Pattichis, C. S.; & Kell, D. B. (2017). What do we need to build explainable AI systems for the medical domain? arXiv preprint at https://arxiv.org/abs/1712.09923.


Samek, W.; Wiegand, T.; & Müller, K. R. (2017). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint at https://arxiv.org/abs/1708.08296.