Dynamic Testing via Automata Learning
This paper presents dynamic testing, a method that exploits automata learning to systematically test black-box systems with almost no prerequisites. Based on interface descriptions and optional sample test cases, our method successively explores the system under test (SUT) in order to extrapolate a behavioural model, which is in turn used to steer the further exploration process. Due to the applied learning technique, our method is optimal in the sense that the extrapolated models are most concise (i.e. state-minimal) while consistently representing all the information gathered during the exploration. Using LearnLib, our framework for automata learning, our method can be elegantly combined with numerous optimisations of the learning procedure, with various choices of model structure, and with the option of dynamically/interactively enlarging the alphabet underlying the learning process. The latter is important in the Web context, where entirely new situations may arise when following links. All these features are illustrated using the web application Mantis, a bug-tracking system widely used in practice, as a case study; a second case study demonstrates the scalability of the approach. We show how the dynamic testing procedure works and how behavioural models arise that concisely summarise the current testing effort. It turns out that these models reveal the system structure from a user perspective; besides steering the automatic exploration process, they are therefore ideal for user guidance and for analyses that improve system understanding.
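The abstract describes a learn-test-refine loop: a learner extrapolates a hypothesis model from observations, a model-based tester searches for discrepancies between hypothesis and SUT, counterexamples refine the model, and newly discovered inputs enlarge the alphabet on the fly. The following Java sketch illustrates what such a loop could look like. The interfaces SystemUnderTest, ActiveLearner, and ModelBasedTester are hypothetical placeholders introduced here for illustration; they are not the paper's or LearnLib's actual API.

    import java.util.List;
    import java.util.Set;

    // Hypothetical SUT adapter; all names are illustrative, not the paper's API.
    interface SystemUnderTest {
        void reset();                    // bring the SUT back to its initial state
        String step(String input);       // execute one input, observe the output
        Set<String> freshInputs();       // inputs discovered since the last call,
                                         // e.g. new links found while exploring
    }

    // Hypothetical active learner (e.g. an L*-style algorithm as in LearnLib).
    interface ActiveLearner {
        void initialize(Set<String> alphabet);
        Object hypothesis();                   // current state-minimal model
        void addAlphabetSymbol(String input);  // dynamic alphabet enlargement
        void refine(List<String> counterexample);
    }

    // Hypothetical test-case generator steered by the current hypothesis.
    interface ModelBasedTester {
        // Returns an input trace on which SUT and model disagree, or null.
        List<String> findDiscrepancy(Object hypothesis, SystemUnderTest sut);
    }

    public final class DynamicTesting {
        // Outer loop: learn a model, test against it, refine, repeat.
        public static Object explore(SystemUnderTest sut, ActiveLearner learner,
                                     ModelBasedTester tester,
                                     Set<String> initialAlphabet) {
            learner.initialize(initialAlphabet);
            while (true) {
                Object model = learner.hypothesis();
                // Fold in inputs discovered during exploration (Web context:
                // following a link may expose entirely new actions).
                for (String input : sut.freshInputs()) {
                    learner.addAlphabetSymbol(input);
                }
                List<String> counterexample = tester.findDiscrepancy(model, sut);
                if (counterexample == null) {
                    // Model is consistent with all observations gathered so far.
                    return model;
                }
                // Refinement triggers further membership queries on the SUT.
                learner.refine(counterexample);
            }
        }
    }

In LearnLib terms, the roles of ActiveLearner and ModelBasedTester roughly correspond to the learning algorithm and an equivalence oracle, and the addAlphabetSymbol step corresponds to the interactive alphabet enlargement mentioned in the abstract.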