
ARCH-COMP19 Repeatability Evaluation Report

8 pages. Published: May 25, 2019


This report presents the results of the repeatability evaluation for the 3rd International Competition on Verifying Continuous and Hybrid Systems (ARCH-COMP'19). The competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in 2019, affiliated with Cyber-Physical Systems and Internet of Things Week (CPS-IoT Week'19). In its third edition, the developers of twenty-five tools submitted artifacts for the repeatability evaluation through a Git repository, covering benchmark problems from eight competition categories. The majority of participants adhered to a new requirement for this year's repeatability evaluation: submitting scripts that automatically install and execute their tools in containerized virtual environments (specifically, Dockerfiles executed within Docker). The repeatability results represent a snapshot of the current landscape of tools and the types of benchmarks for which they are particularly suited, and for which others may repeat their analyses. Given the diversity of problems in verification of continuous and hybrid systems, and following standard practice in repeatability evaluations, each tool is judged as either passing or failing to be repeatable.
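The containerized-submission requirement described above can be illustrated with a minimal Dockerfile sketch. This is not taken from any actual submission; the base image, package names, and script path below are assumptions for illustration only:

```dockerfile
# Hypothetical repeatability-package Dockerfile (illustrative sketch only).
# Base image, dependencies, and paths are assumptions, not from a real submission.
FROM ubuntu:18.04

# Install the tool's dependencies non-interactively so the build needs no input.
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# Copy the tool and the competition benchmark scripts into the image.
COPY . /repeatability
WORKDIR /repeatability

# Running the container executes all benchmarks and prints the results, so an
# evaluator only needs `docker build` followed by `docker run` to repeat them.
CMD ["./run_all_benchmarks.sh"]
```

With a package structured this way, an evaluator could repeat the analysis with, e.g., `docker build -t tool-rep . && docker run tool-rep`.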

In: Goran Frehse and Matthias Althoff (editors). ARCH19. 6th International Workshop on Applied Verification of Continuous and Hybrid Systems, vol 61, pages 162--169

BibTeX entry

@inproceedings{ARCHCOMP19RepeatabilityEvaluationReport,
  author    = {Taylor T. Johnson},
  title     = {ARCH-COMP19 Repeatability Evaluation Report},
  booktitle = {ARCH19. 6th International Workshop on Applied Verification of Continuous and Hybrid Systems},
  editor    = {Goran Frehse and Matthias Althoff},
  series    = {EPiC Series in Computing},
  volume    = {61},
  pages     = {162--169},
  year      = {2019},
  publisher = {EasyChair},
  bibsource = {EasyChair},
  issn      = {2398-7340},
  doi       = {10.29007/wbl3}}