The RERS Challenge 2015
Rigorous Examination of Reactive Systems (RERS)

The RERS Challenge 2015 is the 5th International Challenge on the Rigorous Examination of Reactive Systems and is co-located with the 15th International Conference on Runtime Verification. The event will be held in September 2015 in Vienna, Austria. RERS is designed to encourage software developers and researchers to apply and combine their tools and approaches in a free-style manner to answer evaluation questions for specifically designed benchmark suites.

The benchmarks are automatically synthesized to exhibit chosen properties and then enhanced to include dedicated dimensions of difficulty, ranging from the conceptual complexity of the properties (e.g. reachability, full safety, liveness), over the size of the reactive systems (from a few hundred lines to tens of thousands), to the exploited language features (arrays and index arithmetic). They are therefore especially suited for community-overlapping tool comparisons. This year's challenge additionally features problems that can be solved via monitoring. What distinguishes RERS from other challenges is that the challenge problems can be approached in a free-style manner: participants are highly encouraged to combine and exploit all known (even unusual) approaches to software verification. In particular, participants are not constrained to their own tools.

The main aims of RERS 2015 are to:

  • encourage the combination of usually separate research fields for better software verification results
  • provide a comparison foundation, based on differently tailored benchmarks, that reveals the strengths and weaknesses of specific approaches
  • initiate a discussion on better benchmark generation that reaches across the usual community barriers, providing benchmarks useful for testing and comparing a wide variety of tools

Please direct any enquiries to the competition organizers.

RERS 2015 Committee

Call for Participation

The goal for RERS 2015 is to compare all sorts of software verification tools. Participants are welcome to test their tools on the provided benchmarks and even propose certain programming languages and features that should be supported by the benchmarks.

Two main tracks are planned, each at three levels of difficulty:

  • verification of safety and liveness properties (50 LTL properties and 50 reachability labels)
  • monitoring of 50 properties (30 easy properties and 20 hard properties for each problem)

Both tracks are run on three benchmark problems, which will be available in Java and C.

The competition will proceed as follows:

  • Training phase: participants are invited to test their tools on a provided training problem. Its solutions will be released so that participants can validate their approach.
  • Challenge phase: the participants can test their tools on the provided challenge problems and generate their solutions.
  • Evaluation phase: all submitted solutions will be evaluated by the percentage of correct answers. Certificates will be awarded to all participants who reach a certain threshold of correct answers for each problem.

The detailed description of each phase will be available on this website.

Expected Important Dates

  • April 15, 2015: Release of training problem
  • May 1, 2015: Release of properties and (modified) benchmark problems
  • August 31, 2015: Submission of solutions for LTL and reachability properties
  • September 1, 2015: Release of proper problems for the monitoring challenge
  • September 7, 2015: Submission of solutions to monitoring properties
  • At RV 2015: Presentation of results