Generating effective tests for concurrent programs via AI automated planning techniques.
- Author
- Razavi, Niloofar; Farzan, Azadeh; McIlraith, Sheila
- Subjects
- AUTOMATED planning & scheduling; COMPUTER multitasking; CODING theory; ALGORITHMS; ARTIFICIAL intelligence; FEASIBILITY studies
- Abstract
Testing concurrent programs is a challenging problem due to interleaving explosion: even for a fixed set of inputs, there is a huge number of concurrent runs that must be tested to account for scheduler behavior. Testing all possible schedules is not practical; consequently, most effective testing algorithms test only a select subset of runs. For example, limiting testing to runs that contain data races or atomicity violations has been shown to capture a large proportion of concurrency bugs. In this paper we present a general approach to concurrent program testing that is based on techniques from artificial intelligence (AI) automated planning. We propose a framework for predicting concurrent program runs that violate a collection of generic correctness specifications for concurrent programs, namely runs that contain data races, atomicity violations, or null-pointer dereferences. Our prediction is based on observing an arbitrary run of the program, using the information collected from this run to model the behavior of the program, and predicting new runs that contain bugs matching one of the above-noted violation patterns. We characterize the problem of predicting such new runs as an AI sequential planning problem with the temporally extended goal of achieving a particular violation pattern. In contrast to many state-of-the-art approaches, the feasibility of the predicted runs is guaranteed in our approach, so all generated runs are fully usable for testing. Moreover, our planning-based approach can easily accommodate a variety of violation patterns, which serve as the selection criteria for guiding search in the state space of concurrent runs; this is achieved simply by modifying the planning goal. We have implemented our approach using state-of-the-art AI planning techniques and tested it within the Penelope concurrent program testing framework []. Nevertheless, the approach is general and is amenable to a variety of program testing frameworks. Our experiments with a benchmark suite showed that our approach is very fast and highly effective, finding all known bugs. [ABSTRACT FROM AUTHOR]
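As a rough illustration of the idea described in the abstract, the sketch below treats the events observed in one program run as planning actions and searches for a feasible reordering that reaches a "data race" goal pattern. The `Event` fields, the per-thread trace format, and the brute-force depth-first search are assumptions made for exposition only; the paper itself encodes the prediction problem for a state-of-the-art AI planner with a temporally extended goal, and supports further patterns (atomicity violations, null-pointer dereferences) by changing that goal.

```python
from dataclasses import dataclass
from typing import FrozenSet, List, Optional, Tuple


@dataclass(frozen=True)
class Event:
    thread: int              # thread that performed the access
    kind: str                # "read" or "write"
    var: str                 # shared variable accessed
    locks: FrozenSet[str]    # locks held by the thread at this event


def races(a: Event, b: Event) -> bool:
    """Two events race if different threads touch the same variable,
    at least one writes, and no common lock protects both accesses."""
    return (a.thread != b.thread
            and a.var == b.var
            and "write" in (a.kind, b.kind)
            and not (a.locks & b.locks))


def predict_racy_schedule(trace: List[List[Event]]) -> Optional[List[Event]]:
    """Depth-first search over interleavings of the per-thread event lists.
    Program order inside each thread is preserved (a crude stand-in for the
    paper's feasibility guarantee); the 'goal' is a schedule whose last two
    events form a data race."""
    def dfs(pos: Tuple[int, ...], schedule: List[Event]) -> Optional[List[Event]]:
        if len(schedule) >= 2 and races(schedule[-2], schedule[-1]):
            return list(schedule)                       # goal pattern reached
        for t in range(len(trace)):
            if pos[t] < len(trace[t]):                  # thread t can still step
                schedule.append(trace[t][pos[t]])
                found = dfs(pos[:t] + (pos[t] + 1,) + pos[t + 1:], schedule)
                if found is not None:
                    return found
                schedule.pop()                          # backtrack
        return None

    return dfs(tuple(0 for _ in trace), [])


if __name__ == "__main__":
    # Per-thread projections of one observed run: thread 0 writes x while
    # holding lock L, thread 1 reads x without L -- a predictable data race.
    t0 = [Event(0, "write", "x", frozenset({"L"})),
          Event(0, "write", "y", frozenset())]
    t1 = [Event(1, "read", "x", frozenset())]
    print(predict_racy_schedule([t0, t1]))
```

Running the example prints a two-event interleaving in which thread 0's write to x and thread 1's unprotected read of x race, mirroring how a violation pattern acts as the planning goal that steers the search through the space of concurrent runs.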
- Published
- 2014