Dharma Ganesan, Fraunhofer CESE
In recent years, several software test case execution frameworks (e.g., JUnit, CppUnit, Selenium) have been developed and adopted by many organizations. Such frameworks are very helpful for automatically running test cases during nightly builds, and they help programmers verify their source code modifications (a.k.a. regression testing). However, designing the test cases is outside the scope of these test execution frameworks: programmers (or testers) still have to construct test cases manually. To overcome this limitation, model-based testing (MBT), a technique that derives test cases from an explicit behavioral model, has been proposed. In this presentation, we share our experiences and lessons learned from applying an advanced MBT tool, Spec Explorer, to automatically generate and execute a very large number of test cases for NASA's GMSEC API, which implements a software bus in several programming languages and supports multiple middleware technologies. We explain how we generated tests for several languages from one common model, and how we detected previously unknown behavioral errors as well as requirements-level issues such as contradictions and incompleteness.
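To illustrate the core idea behind MBT, the following minimal sketch (in Python, for illustration only) encodes a hypothetical behavioral model of a tiny message-bus-style API as a finite state machine and enumerates every allowed action sequence up to a bounded length; each sequence is an abstract test case that a per-language adapter could replay against a concrete implementation. The states and actions here are invented for the example and do not reflect the actual GMSEC model or Spec Explorer's notation.

```python
# Hypothetical behavioral model of a tiny message-bus API.
# States, actions, and transitions are illustrative only.
TRANSITIONS = {
    "Disconnected": {"connect": "Connected"},
    "Connected": {"publish": "Connected",
                  "subscribe": "Subscribed",
                  "disconnect": "Disconnected"},
    "Subscribed": {"receive": "Subscribed",
                   "unsubscribe": "Connected",
                   "disconnect": "Disconnected"},
}

def generate_tests(start="Disconnected", max_len=3):
    """Enumerate every action sequence up to max_len that the model allows.

    Each returned sequence is one abstract test case; a language-specific
    adapter would map each action name to a concrete API call.
    """
    tests, frontier = [], [(start, [])]
    for _ in range(max_len):
        next_frontier = []
        for state, seq in frontier:
            for action, target in TRANSITIONS[state].items():
                new_seq = seq + [action]
                tests.append(new_seq)          # every valid prefix is a test
                next_frontier.append((target, new_seq))
        frontier = next_frontier
    return tests

tests = generate_tests()
# Yields sequences such as ["connect"], ["connect", "publish"],
# and ["connect", "subscribe", "receive"].
```

Because the test cases are derived from the model rather than written by hand, one common model can drive test generation for every language binding of the API, which is the property exploited in the work described above.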