Explainability of AI

Type

Master's thesis / Bachelor's thesis / supervised research

Prerequisites

  • Knowledge of deep learning with image data, natural language data or graph data
  • Proficiency in Python and deep learning frameworks (either PyTorch or TensorFlow)

Description

Over the last decade, deep learning methods have been deployed in numerous real-world, often safety-critical, applications. However, a major and growing concern remains the explainability of neural network decisions. A neural network operates as a black box: a priori, one can observe only the input and output of the network, not the reasoning that leads from one to the other. The field of explainable AI (XAI) aims to develop explanation methods that “open the black box” and shed light on the reasoning behind neural network decisions.
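As a minimal illustration of what such an explanation method can look like, the sketch below computes a vanilla gradient saliency map for an image classifier in PyTorch: the gradient of the predicted class score with respect to the input pixels indicates which pixels most influence the decision. The choice of model (a torchvision ResNet-18) and the random input are placeholders for illustration only, not part of the topic description.

```python
import torch
import torchvision.models as models

# Placeholder classifier; any differentiable image model works.
# (weights=None avoids a download; use pretrained weights in practice.)
model = models.resnet18(weights=None)
model.eval()

# Placeholder input: batch of one 3x224x224 image.
# Gradients w.r.t. the input are needed for the saliency map.
x = torch.randn(1, 3, 224, 224, requires_grad=True)

# Forward pass: take the score of the predicted class.
scores = model(x)
score, pred_class = scores.max(dim=1)

# Backward pass: gradient of that class score w.r.t. the input pixels.
score.backward()

# Saliency map: per-pixel importance as the maximum absolute gradient
# over the colour channels, giving a 224x224 heatmap.
saliency = x.grad.abs().max(dim=1)[0].squeeze()
print(saliency.shape)  # torch.Size([224, 224])
```

Gradient saliency is only one of many explanation techniques; perturbation-based, attention-based, and concept-based methods follow the same goal of attributing a decision to parts of the input.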