20 Years of Dynamic Software Model Checking
Patrice Godefroid, Microsoft Research, USA
Dynamic software model checking adapts model checking into a form of systematic testing that is applicable to industrial-size software. Over the last two decades, dozens of tools following this paradigm have been developed for checking concurrent and data-driven software.
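The core idea behind this paradigm can be illustrated with a minimal sketch (not the API of any particular tool): instead of leaving thread scheduling to chance, the checker runs the actual code while systematically enumerating the possible interleavings of its operations and checking a property in each one. The shared-counter example below is a standard illustration, not one drawn from the talk.

```python
# Minimal sketch of systematic interleaving exploration: two threads each
# perform "load counter; store loaded+1", and we enumerate every ordering
# of the four steps that respects per-thread program order.
from itertools import permutations

def check_increment_race():
    """Return the interleavings (and final counter values) that violate
    the property counter == 2 after both increments complete."""
    steps = ["t1_load", "t1_store", "t2_load", "t2_store"]
    violations = []
    for order in sorted(set(permutations(steps))):
        # Program order: each thread's load must precede its store.
        if order.index("t1_load") > order.index("t1_store"):
            continue
        if order.index("t2_load") > order.index("t2_store"):
            continue
        counter, regs = 0, {}
        for step in order:
            tid, op = step.split("_")
            if op == "load":
                regs[tid] = counter          # read shared counter
            else:
                counter = regs[tid] + 1      # write back incremented value
        if counter != 2:                     # the property being checked
            violations.append((order, counter))
    return violations

violations = check_increment_race()
print(len(violations))  # 4 of the 6 valid interleavings lose an update
```

Of the six interleavings consistent with program order, the four in which both loads occur before one of the stores lose an update, which is exactly the kind of schedule-dependent bug that random testing can easily miss.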
This talk will review 20 years of research on dynamic software model checking. It will highlight some key milestones, applications, and successes. It will also discuss limitations, disappointments, and future work.
Short Bio: Patrice Godefroid is a Principal Researcher at Microsoft Research. He received a B.S. degree in Electrical Engineering (Computer Science elective) and a Ph.D. degree in Computer Science from the University of Liège, Belgium, in 1989 and 1994, respectively. From 1994 to 2006, he worked at Bell Laboratories (part of Lucent Technologies), where he was promoted to “distinguished member of technical staff” in 2001. His research interests include program (mostly software) specification, analysis, testing and verification.
Simulation-Based Falsification of Cyber-Physical Systems
Sriram Sankaranarayanan, University of Colorado Boulder, USA
In this talk, we present recent progress on the use of numerical simulations to falsify properties of Cyber-Physical Systems (CPS), which involve the closed-loop interactions between discrete-time control systems and continuous-time environments. Such falsification seeks to automatically discover violations of important safety and performance properties of CPS. We present a progression of ideas on the automatic discovery of falsifying traces in models of CPS through repeated simulations guided by a robustness metric that measures distances between signals and properties. These ideas form part of tools such as S-TaLiRo and Breach, which perform simulation-based falsification of temporal logic specifications of CPS designed using Simulink/Stateflow™ in Matlab™.
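The robustness-guided search can be sketched in a few lines. The toy closed-loop model, the property, the parameter range, and the random search below are illustrative assumptions, not the actual S-TaLiRo or Breach APIs; the point is only the shape of the loop: simulate, score the trace with a robustness metric, and steer the search toward lower robustness, where a negative value means a violation.

```python
# Minimal sketch of robustness-guided falsification (illustrative model
# and search, not any real tool's interface).
import random

def simulate(gain, steps=50, dt=0.1):
    """Toy discrete-time closed loop: a controller drives state x toward
    the setpoint 1.0; an overly aggressive gain overshoots. Returns the
    trace of x over time."""
    x, trace = 0.0, []
    for _ in range(steps):
        x += dt * gain * (1.0 - x)   # one control step
        trace.append(x)
    return trace

def robustness(trace, limit=1.2):
    """Robustness of the safety property 'always x < limit': the smallest
    margin limit - x(t) over the trace. Negative means the trace
    violates the property."""
    return min(limit - x for x in trace)

def falsify(trials=200, seed=0):
    """Sample the controller gain and keep the lowest-robustness trace.
    Real tools replace blind sampling with guided stochastic
    optimization over the robustness metric."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        gain = rng.uniform(0.5, 25.0)
        rob = robustness(simulate(gain))
        if best is None or rob < best[1]:
            best = (gain, rob)
    return best

gain, rob = falsify()
print(rob < 0)  # True: a falsifying trace (negative robustness) was found
```

The robustness metric turns a yes/no temporal property into a continuous cost, which is what lets off-the-shelf optimizers drive the search for counterexamples.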
We show the application of our approach to Artificial Pancreas controllers that automate the delivery of insulin to patients with Type-1 Diabetes, helping to ease the burden of controlling their blood glucose levels while avoiding dangerous complications due to extremely low or high glucose levels.
Finally, stepping beyond offline simulation-based verification, we consider some of the challenges and opportunities for online, real-time monitoring of CPS in the context of the artificial pancreas project. This includes the notion of a “provable monitor” as a mitigation for some of the extreme situations that can be caused by device malfunctions. We consider the challenges of specifying and automatically synthesizing such monitors. Simultaneously, we also speculate on possibilities such as performance improvement through “runtime synthesis” enabled by long-term monitoring of such systems.
Short Bio: Sriram Sankaranarayanan is an assistant professor of Computer Science at the University of Colorado, Boulder. His research interests include automatic techniques for reasoning about the behavior of computer and cyber-physical systems. Sriram obtained a PhD in 2005 from Stanford University, where he was advised by Zohar Manna and Henny Sipma. Subsequently, he worked as a research staff member at NEC research labs in Princeton, NJ. He has been on the faculty at CU Boulder since 2009. Sriram has received awards including the President’s Gold Medal from IIT Kharagpur (2000), the Siebel Scholarship (2005), the NSF CAREER award (2009), the Dean’s awards for outstanding junior faculty (2012) and outstanding teaching (2014), and the Provost’s faculty achievement award (2014).
Heisenbugs – When Programs Fail
Georg Weissenbacher, Vienna University of Technology, Austria
Heisenbugs are complex software bugs that alter their behaviour when attempts are made to isolate them. A pun on the name of the physicist Werner Heisenberg, the term Heisenbug refers to an obscure bug whose analysis is complicated by the probe effect: an unintended alteration of system behaviour caused by an observer.
Heisenbugs are most prevalent in concurrent systems, where the interplay of threads on contemporary multi-core processors leads to intricate effects not anticipated by the programmer. It is exactly in these situations that automated analyses are most desirable. After addressing the common properties and causes of Heisenbugs, I will present different approaches my group developed to generate explanations of concurrency bugs. Explanations help the programmer focus on the gist of the bug rather than the specifics of the failed execution. Attacking the problem from two very different angles, we use data mining as well as static analysis and automated reasoning to extract explanations from failed program executions. In this talk, I will discuss the advantages as well as the shortcomings of both approaches.
Short Bio: Georg Weissenbacher is an assistant professor at Vienna University of Technology. He holds a doctorate from the University of Oxford, where he worked on software model checking, and spent two years at Princeton working on SAT-based hardware fault localisation. His current research on Heisenbugs is funded by a WWTF Vienna Research Groups for Young Investigators grant.