
Evaluating Simple and Complex Models’ Performance When Predicting Accepted Answers on Stack Overflow

EasyChair Preprint no. 8591

10 pages · Date: August 3, 2022

Abstract

Stack Overflow is used to solve programming issues during software development. Research efforts have sought to identify relevant content on this platform; in particular, researchers have proposed various modelling techniques to predict acceptable Stack Overflow answers. Less attention, however, has been paid to examining the performance and quality of commonly used modelling methods with respect to model and feature complexity. Such insights could be of practical significance to the many practitioners who develop models for Stack Overflow. This study examines the performance and quality of two modelling methods, of varying degrees of complexity, used for predicting acceptable Java and JavaScript answers on Stack Overflow. Our dataset comprised 249,588 posts drawn from 2014 to 2016. Outcomes reveal significant differences in the models' performance and quality depending on the type of features and the complexity of the models used. Researchers examining model performance, model quality, and feature complexity may leverage these findings in selecting suitable modelling approaches for Q&A prediction.

Keyphrases: feature selection, modelling and prediction, neural network, random forest, Stack Overflow
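The comparison the abstract describes, a simpler model against a more complex one for predicting accepted answers, can be sketched as follows. This is an illustrative sketch only, not the authors' code: the features and labels below are synthetic stand-ins, and the feature names are hypothetical.

```python
# Illustrative sketch (not the paper's pipeline): compare a simpler model
# (random forest) with a more complex one (a small neural network) on
# synthetic stand-ins for Stack Overflow answer features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features, e.g. answer length, answerer reputation,
# number of code snippets (all synthetic here).
X = rng.normal(size=(n, 3))
# Synthetic "accepted" label loosely tied to the features plus noise.
y = (X @ np.array([0.8, 1.2, 0.5]) + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Simpler model: random forest.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
# More complex model: a small feed-forward neural network.
nn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                   random_state=0).fit(X_tr, y_tr)

print("random forest accuracy:", accuracy_score(y_te, rf.predict(X_te)))
print("neural network accuracy:", accuracy_score(y_te, nn.predict(X_te)))
```

In a real setting the synthetic arrays would be replaced by features engineered from Stack Overflow posts, and accuracy would be complemented by measures that account for class imbalance between accepted and non-accepted answers.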

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:8591,
  author = {Osayande P. Omondiagbe and Sherlock A. Licorish and Stephen G. Macdonell},
  title = {Evaluating Simple and Complex Models’ Performance When Predicting Accepted Answers on Stack Overflow},
  howpublished = {EasyChair Preprint no. 8591},
  year = {EasyChair, 2022}}