Acceptance testing at synyx – Part 1
Overview – Why and how we do web-testing
In my team at synyx we wrote a lot of tests in 2012. Most of them were unit tests (a consequence of TDD), and some things are also covered by integration tests (sometimes because they were hard to test as unit tests, sometimes in addition to them, to verify that the interactions of components work properly). I can tell that TDD and the special focus on tests changed the way we work quite a bit and of course boosted the quality of our applications even further. It's not that we did not write tests before, but once you develop test-driven you can start to trust your code, which makes refactorings (evolution) easy.
But there are always components that are hard to test. This includes code related to user interfaces, complete workflows and -sigh- Internet Explorer. So, at the end of 2012 we decided to give automated browser tests another chance (we did evaluate and try this yeeeears ago but – for several reasons – we did not have good experiences with it).
Arguments to do it
Testing backend components has become easy once you are practiced in writing tests and follow some design principles like dependency injection. But usually, easy testing stops as soon as you enter the web layer. Yes, I know it's possible to write tests for Spring MVC controllers, but going down this road always felt a bit weird. And even if you have these tests, you still want to test the whole thing (controllers, JSPs, filters, interceptors and whatnot) in an integrative way. So the best solution is running automated tests against the deployed application using a real browser.
In fact, since the browsers that display our applications differ in some details, we even have to test the apps in many of them – or at least in those we want to ensure compatibility with. For example, some of the bugs that were reported for our last application only affect one of the browsers out there (mostly a particular version of Internet Explorer). These bugs were not detected early because developers/QA tend not to test everything in every browser – especially if they have to log on to one or more remote Windows machines in order to do so. Lately, the amount of JavaScript used within our software has been increasing, which makes this even more important.
The last and one of the most important arguments for webtests is that they are acceptance tests and live in a different scope. In contrast, unit and integration tests are more like whitebox tests: I tend to say that the latter are for us developers. They give us confidence and the freedom to safely extend and change our application. These tests test from the inside and have knowledge of the system. They do not really concern the business people (apart from some strange cases where they request a certain amount of test coverage).
Acceptance tests, on the other hand, really focus on the business value of the application. They usually test complete workflows or “features” of an application. The product owner's user stories should have acceptance criteria that can be expressed as acceptance tests. The tests should not care about how these criteria are met, but whether they are met. So acceptance tests test from the “outside”, as a complete blackbox, without knowledge of the internals of the application.
Of course these tests can be executed continuously, which ensures that the user story or feature works as expected – and always will. So these tests are not only for us developers, they are for our clients. By the way, this also makes good and colourful reporting even more important.
How we do it – Overview
This post is the beginning of a whole series that describes how we do web-testing at synyx. After this quick overview of why we do it, let me give you a high-level overview of how we do it. Follow-up posts will describe the important aspects in more detail.
- Tests are written in Java/JUnit using Selenium WebDriver (a minimal sketch follows below this list)
- Selenium's RemoteWebDriver allows the browser to run on a different host than the test
- The grid functionality of selenium-server is used to request a wide variety of browsers and versions with the same initialization strategy and – of course – to scale up
- The tests are executed automatically several times – once for each browser we want to ensure compatibility with
- Tests are written in BDD style and use abstractions for actions (Steps) and pages
- Tests are reported in a nice “manager-friendly” way including pie charts and screenshots
- Jenkins executes these tests and generates the report continuously against a system that is automatically deployed (continuous deployment)
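To give a first impression of how these pieces fit together, here is a minimal sketch of such a test. It is not our actual code – the hub URL, application URL, the webtest.browser system property, the selectors and the LoginPage class are made-up examples – but it shows the idea of requesting a browser from the grid via RemoteWebDriver and hiding page details behind a small abstraction. The follow-up posts will show how we really do it.

```java
import java.net.URL;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

import static org.junit.Assert.assertTrue;

public class LoginAcceptanceTest {

    private WebDriver driver;

    @Before
    public void setUp() throws Exception {
        // Ask the grid hub for a browser. The browser name comes from a system
        // property, so the same test can be executed once per target browser.
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setBrowserName(System.getProperty("webtest.browser", "firefox"));

        // Example hub URL – point this at your selenium-server grid hub.
        driver = new RemoteWebDriver(new URL("http://hub.example.com:4444/wd/hub"), capabilities);
    }

    @Test
    public void userCanLogIn() {
        // The page object hides selectors and navigation details from the test itself.
        LoginPage loginPage = new LoginPage(driver);
        loginPage.open("http://demo.example.com/app/login");
        loginPage.loginAs("demo", "secret");

        assertTrue("user should see the dashboard after logging in",
                driver.getTitle().contains("Dashboard"));
    }

    @After
    public void tearDown() {
        driver.quit();
    }

    /** Minimal page abstraction; real pages and steps get their own classes. */
    private static class LoginPage {

        private final WebDriver driver;

        LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        void open(String url) {
            driver.get(url);
        }

        void loginAs(String username, String password) {
            driver.findElement(By.name("username")).sendKeys(username);
            driver.findElement(By.name("password")).sendKeys(password);
            driver.findElement(By.id("login-button")).click();
        }
    }
}
```

Because the browser is only a capability requested from the grid, running the same test against Firefox, Chrome or a particular Internet Explorer version is just a matter of changing that one property, and the page abstraction keeps the test readable when the markup changes.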
So stay tuned for detailed information about ATDD / webtests at synyx during the next weeks.
Max Ivanov
Nice description. We are building a new project because our 15-year-old project does not work properly; I have to do a little research and decide which tools and methods will be used in the project.
Marc Kannegiesser
Hi. Thanks for your comment. Especially for legacy projects, webtests might be a good place to start, because you do not have to refactor so much in order to run the tests, and because of the "testing from the outside" thing.