
On the Robustness of Active Learning

11 pages. Published: December 10, 2019


Active Learning addresses the question of how to identify the most useful samples with which to train a Machine Learning algorithm. Applied correctly, it can be a powerful tool to counteract the immense data requirements of Artificial Neural Networks. However, we find that it is often applied without sufficient care and domain knowledge. As a consequence, unrealistic hopes are raised, and transferring experimental results from one dataset to another becomes unnecessarily hard.
In this work we analyse the robustness of different Active Learning methods with respect to classifier capacity, exchangeability and type, as well as hyperparameters and falsely labelled data. Experiments reveal possible biases towards the architecture used for sample selection, resulting in suboptimal performance for other classifiers. We further propose the new "Sum of Squared Logits" method based on the Simpson diversity index, and investigate the effect of using the confusion matrix for balancing in sample selection.
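The abstract names a "Sum of Squared Logits" score based on the Simpson diversity index, which for a probability distribution is the sum of squared class probabilities: a uniform (maximally uncertain) prediction minimises it, a one-hot prediction maximises it. As a rough, hedged illustration only (the paper's exact definition may differ, e.g. in whether it operates on raw logits or softmax outputs; the function names here are hypothetical), such an acquisition score could be sketched as:

```python
import numpy as np

def sum_of_squared_probs(logits):
    """Simpson-index-style score per sample: sum_i p_i^2 over the
    softmax probabilities. Low score = near-uniform prediction =
    high uncertainty. Assumed interpretation, not the paper's
    verbatim definition."""
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    p = e / e.sum(axis=1, keepdims=True)
    return (p ** 2).sum(axis=1)

def select_for_labelling(logits, k):
    # pick the k samples the classifier is least certain about
    return np.argsort(sum_of_squared_probs(logits))[:k]
```

Under this reading, an active-learning loop would score the unlabelled pool with the current classifier's outputs and send the lowest-scoring samples to an oracle for labelling.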

Keyphrases: active learning, computer vision, data analytics, hierarchical networks, image classification

In: Diego Calvanese and Luca Iocchi (editors). GCAI 2019. Proceedings of the 5th Global Conference on Artificial Intelligence, vol 65, pages 152--162

BibTeX entry
@inproceedings{Hahn_GCAI2019,
  author    = {Lukas Hahn and Lutz Roese-Koerner and Peet Cremer and Urs Zimmermann and Ori Maoz and Anton Kummert},
  title     = {On the Robustness of Active Learning},
  booktitle = {GCAI 2019. Proceedings of the 5th Global Conference on Artificial Intelligence},
  editor    = {Diego Calvanese and Luca Iocchi},
  series    = {EPiC Series in Computing},
  volume    = {65},
  pages     = {152--162},
  year      = {2019},
  publisher = {EasyChair},
  bibsource = {EasyChair},
  issn      = {2398-7340},
  url       = {},
  doi       = {10.29007/thws}}