
Deep Learning: Unraveling the Black Box of Neural Networks

EasyChair Preprint no. 12535

8 pages · Date: March 18, 2024


Deep learning has revolutionized various fields by enabling the development of complex models capable of learning from vast amounts of data. However, the inner workings of deep neural networks often remain opaque, leading to the metaphorical characterization of these models as "black boxes." This paper aims to unravel the black box of neural networks by exploring methods and techniques for understanding and interpreting their decisions. Through a comprehensive review of existing literature, we examine approaches such as visualization, feature attribution, and model distillation, which shed light on the mechanisms underlying neural network predictions. By gaining insights into the inner workings of deep learning models, researchers and practitioners can improve model transparency, interpretability, and trustworthiness, ultimately advancing the broader adoption and impact of deep learning technology.
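To make one of the surveyed approaches concrete, the sketch below illustrates gradient-based feature attribution (saliency) on a hypothetical one-neuron "network". The weights, input, and function names are illustrative assumptions, not taken from the paper; real attribution methods apply the same idea (gradient of the output with respect to each input feature) to full deep networks.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    # A minimal one-neuron "network": y = sigmoid(w . x + b)
    return sigmoid(np.dot(w, x) + b)

def saliency(x, w, b):
    # Attribution = gradient of the output w.r.t. each input feature:
    # d sigmoid(z) / dx_i = sigmoid(z) * (1 - sigmoid(z)) * w_i
    y = predict(x, w, b)
    return y * (1.0 - y) * w

# Hypothetical weights and input chosen for illustration only.
w = np.array([2.0, -1.0, 0.0])
b = 0.0
x = np.array([1.0, 1.0, 1.0])

attr = saliency(x, w, b)
print(attr)  # the third feature has zero weight, so its attribution is 0
```

Features the model ignores receive zero attribution, while the sign and magnitude of each remaining score indicate how that feature pushes the prediction, which is the basic intuition behind saliency maps for deep networks.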

Keyphrases: artificial intelligence, black box, deep learning, feature attribution, interpretability, machine learning, model distillation, neural networks, transparency, visualization

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:

@booklet{EasyChair:12535,
  author = {Muhmmad Asif},
  title = {Deep Learning: Unraveling the Black Box of Neural Networks},
  howpublished = {EasyChair Preprint no. 12535},
  year = {EasyChair, 2024}}