Why hasn't MBT been adopted by the SW industry? Notes from MoTiP 2012

Andres Kull

The 4th Workshop on Model-based Testing in Practice (MoTiP) took place in Dallas, Texas, on November 27, 2012. It was a half-day event with 6 presentations and a keynote.

Despite the glowing promises of increasing software quality and decreasing testing costs, MBT has still not been endorsed by the industry on a wide scale. Where are the obstacles?

Several workshop presentations tackled this issue.

Dr. Wolfgang Grieskamp from Google's Cloud division (formerly Principal Architect at Microsoft and, before that, Senior Researcher at Microsoft Research) pointed out in his keynote that there is high interest in MBT in the industry, the MBT theory is mostly consolidated, and many tools are available today.

So why hasn't MBT taken off?

According to Wolfgang's statistics from Microsoft, 7 teams out of 10 seemed to drop MBT after adopting it. Why is that?

Some answers from Wolfgang's keynote:

1) Adoption is often bound to individuals. When those people leave the team, MBT dies with them. We can conclude that something about MBT is hard enough that it survives in an organization only as long as there are MBT enthusiasts driving it.

2) A paradigm shift does not stick when people rotate between shipping cycles.

3) Maintenance problems. For example, at some point people may start making changes to the generated test cases instead of to the models...
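To make the maintenance trap concrete, here is a toy sketch of what model-based test generation looks like (purely illustrative; the model, states, and actions are invented and have nothing to do with Spec Explorer's actual notation). Tests are derived by exploring a small state machine, so any hand edits to the generated cases are lost on the next regeneration:

```python
# Toy model-based test generation sketch (illustrative only).
# The "model" is a tiny state machine of a login session; test cases are
# action sequences found by exploring its state space. Because the cases
# are regenerated from the model, hand-editing them (instead of the model)
# is exactly the maintenance trap described above.

# Model: state -> {action: next state}
MODEL = {
    "logged_out": {"login": "logged_in"},
    "logged_in": {"view_profile": "logged_in", "logout": "logged_out"},
}

def generate_tests(start, max_depth):
    """Enumerate all action sequences up to max_depth via DFS over the model."""
    tests = []

    def explore(state, path):
        if path:                      # every non-empty prefix is a test case
            tests.append(list(path))
        if len(path) == max_depth:    # bound the exploration depth
            return
        for action, nxt in MODEL[state].items():
            explore(nxt, path + [action])

    explore(start, [])
    return tests

if __name__ == "__main__":
    for t in generate_tests("logged_out", 3):
        print(" -> ".join(t))
```

Real MBT tools add test selection strategies and symbolic exploration on top of this basic idea, but the workflow implication is the same: the model is the single source of truth, and the generated scripts are disposable.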

Are the benefits of MBT really as great as many tool vendors present them? The biggest model-based testing project to date was executed by Microsoft (2007-2011): testing 300 of their protocols against the documentation. The overall effort of the project exceeded 250 person-years. The Microsoft Spec Explorer tool was used, and the work was mostly carried out in China and India. Not all tests in the project were generated from models: 31% of them were automated by the same people writing the test scripts manually. Time spent was recorded in both cases, covering the whole testing workflow from understanding the requirements through designing, executing, and analyzing the test cases. According to Wolfgang, model-based testing gave only a 34-42% effort reduction compared to traditional test automation. This is very different from the claims of MBT vendors, who promise a 5-20x effort reduction. What is the reason for such big differences in efficiency? The skills of the people involved might explain part of it.
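It is worth translating the two figures into the same units to see how far apart they really are. A sketch of the arithmetic: an effort reduction of X (as a fraction) corresponds to a speedup factor of 1 / (1 - X), so Microsoft's measured 34-42% reduction is roughly a 1.5x-1.7x gain, while the advertised 5-20x speedup would correspond to an 80-95% reduction:

```python
# Convert a fractional effort reduction into the implied speedup factor.
def speedup(reduction):
    """Speedup factor implied by a fractional effort reduction (0 <= r < 1)."""
    return 1.0 / (1.0 - reduction)

print(round(speedup(0.34), 2))  # measured low end  -> ~1.52x
print(round(speedup(0.42), 2))  # measured high end -> ~1.72x
print(1.0 - 1.0 / 5.0)          # a 5x claim implies an 80% reduction
print(1.0 - 1.0 / 20.0)         # a 20x claim implies a 95% reduction
```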

Wolfgang pointed out that tooling is the most serious issue in MBT adoption. The tools tend to be horribly complex. He listed the requirements a tool must meet to succeed at industrial scale:

- test selection via symbolic state space exploration

- ways for engineers to control and visualise exploration and test selection - no push-button technology will do

- a modeling notation which meets requirements of modern programming languages (expressiveness, modularisation, etc.)

- seamless integration into an IDE, code assistance, intellisense, etc.

- being rocket-stable and reliable

All of these are mostly engineering problems, not research problems. According to Wolfgang's prediction, it will take another 5-10 years until we are there.

Dusica Marijan from Simula highlighted social aspects of MBT adoption in industry in her presentation:

- testing is a socio-technical rather than a technical process

- limited resources and commercial pressure for deliveries lower the priority of activities for adopting new technology

- modelling is not as widely accepted by the industry as some might think

Marijan also listed practical guidelines for successful application of MBT in practice:

- the MBT approach must be sound, well suited to the SUT, and scalable to its complexity

- the tools used must be efficient and user-friendly, and must not add to the complexity of the MBT approach itself

- the team adopting MBT must invest in learning the new approach and make a firm commitment to it