
Self-explaining AI as an alternative to interpretable AI

EasyChair Preprint no. 3336

12 pages
Date: May 5, 2020

Abstract

The ability to explain decisions made by AI systems is highly sought after, especially in domains where human lives are at stake, such as medicine or autonomous vehicles. While it is often possible to approximate the input-output relations of deep neural networks with a few human-understandable rules, the discovery of the double descent phenomenon suggests that such approximations do not accurately capture the mechanism by which deep neural networks work. Double descent indicates that deep neural networks typically operate by smoothly interpolating between data points rather than by extracting a few high-level rules. As a result, neural networks trained on complex real-world data are inherently hard to interpret and prone to failure if asked to extrapolate. To show how we might be able to trust AI despite these problems, we introduce the concept of self-explaining AI. Self-explaining AIs are capable of providing a human-understandable explanation of each decision along with confidence levels for both the decision and the explanation. We sketch some difficulties with this approach along with possible solutions. Finally, we argue it is important that deep-learning-based systems include a "warning light" based on techniques from applicability domain analysis to warn the user if a model is asked to extrapolate outside its training distribution. [ Please cite the AGI 2020 conference paper or arXiv version: https://arxiv.org/abs/2002.05149 ]
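The "warning light" mentioned in the abstract can be made concrete with a simple applicability domain check. Below is a minimal Python sketch, not taken from the paper (which discusses the idea only conceptually): a query is flagged as out-of-domain when its mean distance to its k nearest training points in feature space exceeds a threshold calibrated on the training set itself. The class name ApplicabilityDomainChecker and all parameter choices here are hypothetical illustrations.

import numpy as np

class ApplicabilityDomainChecker:
    # Hypothetical sketch: flags inputs whose mean k-nearest-neighbor
    # distance to the training set exceeds a calibrated threshold.
    def __init__(self, train_features, k=5, quantile=0.95):
        self.train = np.asarray(train_features, dtype=float)
        self.k = k
        # Calibrate the threshold from within-training-set distances,
        # skipping each point's zero distance to itself.
        calib = self._knn_distance(self.train, skip_self=True)
        self.threshold = np.quantile(calib, quantile)

    def _knn_distance(self, x, skip_self=False):
        # Mean Euclidean distance from each row of x to its k nearest
        # training points (brute force; fine for modest training sets).
        d = np.linalg.norm(x[:, None, :] - self.train[None, :, :], axis=-1)
        d.sort(axis=1)
        start = 1 if skip_self else 0
        return d[:, start:start + self.k].mean(axis=1)

    def in_domain(self, x):
        # True where the model would interpolate between training points,
        # False where it would have to extrapolate.
        return self._knn_distance(np.asarray(x, dtype=float)) <= self.threshold

In use, any False entry acts as the warning light: the model's prediction, and any explanation accompanying it, should not be trusted for that input.

checker = ApplicabilityDomainChecker(train_feats, k=5)
flags = checker.in_domain(query_feats)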

Keyphrases: applicability domain, applicability domain analysis, artificial intelligence, black box, computer vision, deep learning, deep neural network, double descent, explainability, explainable artificial intelligence, human-understandable, interpretability, interpretable model, machine learning, machine learning research, mechanistic explanation, neural network, robustness, self-awareness, self-explaining, self-explaining AI, transparency, trust, trustworthiness, XAI

BibTeX entry
BibTeX does not have a suitable entry type for preprints; the following is a workaround that produces a correct reference:
@Booklet{EasyChair:3336,
  author = {Daniel Elton},
  title = {Self-explaining AI as an alternative to interpretable AI},
  howpublished = {EasyChair Preprint no. 3336},
  year = {EasyChair, 2020}
}