Acceptance Tests as Specifications

December 18th, 2013

In my previous article about acceptance testing I wrote about how tools like FitNesse and Cucumber focus on collaboration between developers and testers. Of course that's not all these tools help us with. In this post we'll take a deeper look at acceptance testing as a practice and see what we gain from the tools.

Acceptance Testing is an agile practice that refers to the functional testing of a user story. It is sometimes also called Agile Acceptance Testing or Specification by Example. Acceptance testing addresses one of the most significant problems in software development: the requirements.

Requirements

Often requirements don't come across to the team members the way business people intended them. There are many reasons for this, but they often boil down to the team having a different context than the business people. Because the team usually has only a basic understanding of the business world, it easily misinterprets ambiguous requirements.

This problem isn't limited to agile methodologies; it has plagued software development for decades. Remember those classical waterfall projects? Their requirements certainly weren't communicated as briefly as user stories. They were, however, written from the perspective of business people, and often got misinterpreted further down the production chain. By the time business people could see the system and verify that it didn't match what they had in mind, it was too late. Many of these projects were canceled as a result, wasting enormous amounts of money. Some even caused bankruptcies.

The agile planning game (nowadays best known in the form of Scrum) helped reduce the impact of this problem. Its short feedback loops ensure that business people become aware of discrepancies between their world and the system at an early stage, as it gets built. If the team got it wrong, only one sprint of work is wasted. Now that's still ridiculously expensive: nobody wants to throw away an entire sprint's work just because the requirements were poorly communicated. But on the plus side, most projects will survive one such sprint.

Another approach to fixing the requirements problem is to reduce the level of ambiguity in the requirements themselves. We could use a formal language to write a specification of the requirements. Before you tell me that won't work or is simply not practical, let me explain. We don't need mathematical precision; we need to reduce the ambiguity just enough for humans to understand each other. And conveniently enough, on most projects such a formal specification is already being created. It's called the test plan.

Test Plan

The test plan defines the inputs to the system, the actions on the system, and the expected outputs in a rather formal manner, thereby reducing the level of ambiguity enough for humans to understand.

By executing the test plan against the system, QA gets feedback about any gaps between the behavior of the system (the developers' interpretation) and their own interpretation. Agile reduces the number of differences by putting developers and testers on the same team, but QA still writes the test plan while the developers implement the feature. At best, the team finds some of these differences in interpretation while implementing.

Compared to business people discovering after a sprint that the team didn't implement 'the right thing', having the team find this out itself within the sprint is much, much better. But it causes rework nonetheless, and rework takes up valuable time.

What if we could learn about these gaps before implementing a feature? That would reduce the rework even further and get the feature even closer to right from the start. Note that I am definitely not claiming the team won't make mistakes; it will. But given an opportunity to prevent certain mistakes by learning early, we should take it.

Specification by Example

So business people write a bunch of stories. The team estimates each story's size in some form and takes some stories into the sprint. At this point the team doesn't know all the details yet. To find them out, the team can organize a meeting with a business expert and ask her questions to learn these details. This is where the team discusses, for example, a complex business algorithm or details of the UI. Together with the business expert, the team creates a set of examples and acceptance criteria.

[Note that it is helpful if all team members attend these meetings, to gain a shared understanding of the feature. If that's not possible, make sure to send a delegation of at least one developer and one tester.]

The team records the examples and acceptance criteria as test scenarios. Preferably the business expert reviews the scenarios afterwards, to confirm they capture the outcome of the meeting. This is the point where the tools pop back in. Tools like FitNesse and Cucumber use a simple yet formal language for defining test scenarios and example tables. At this stage the benefit of using a tool is that you don't have to invent such a language yourself.
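
To give an impression of what that looks like, here is a hypothetical scenario in Cucumber's Gherkin language. The feature, the wording of the steps, and the numbers are all invented for illustration; your business rules will obviously differ:

    Feature: Volume discount

      Scenario Outline: The discount grows with the order total
        Given a new order
        When the customer adds products worth <total> euro
        Then the order gets a discount of <discount> percent

        Examples:
          | total | discount |
          |    50 |        0 |
          |   100 |        5 |
          |   500 |       10 |

Business experts can read and review a table like this without any programming knowledge, which is exactly what makes it a useful shared artifact.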

The entire team can now read what the feature is supposed to do, and everyone is on the same page. The first thing the developers (or sometimes the testers) do after the scenarios are written is create some glue code. This code binds (glues) the scenarios to the system, so that the test tool can run the scenarios against the system and verify the outputs.
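
As a minimal sketch of what such glue code might look like for the discount scenario above, here is a Cucumber-JVM step definition class in Java. The Order class is a made-up stand-in for the real system under test; in a real project the steps would call into your actual application code:

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.When;
    import io.cucumber.java.en.Then;
    import static org.junit.Assert.assertEquals;

    public class DiscountSteps {

        // Hypothetical stand-in for the real system under test.
        static class Order {
            private int total;
            void addProducts(int euro) { total += euro; }
            int discountPercentage() {
                return total >= 500 ? 10 : total >= 100 ? 5 : 0;
            }
        }

        private Order order;

        @Given("a new order")
        public void aNewOrder() {
            order = new Order();
        }

        @When("the customer adds products worth {int} euro")
        public void addProducts(int euro) {
            order.addProducts(euro);
        }

        @Then("the order gets a discount of {int} percent")
        public void checkDiscount(int expected) {
            assertEquals(expected, order.discountPercentage());
        }
    }

Each annotated method matches one step in the scenario; when the tool runs the scenario, it calls these methods in order and fails the test if an assertion doesn't hold.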

Power of Tools

This is what makes these test tools really powerful: the test scenarios can now be executed any time we wish, at high speed. Writing test scenarios and glue code are typically tasks for humans, but humans are terrible at consistently repeating the same tasks, which makes them extremely inefficient test execution platforms. Computers, on the other hand, are very effective at consistently repeating tasks, which makes them well suited as an execution platform for our test scenarios.

The test scenarios do more than help build a shared understanding of a feature before the implementation. Every team member can (and should) read the scenarios, and so can the business stakeholders. And the team is not done implementing until all scenarios pass.

Because the scenarios can be executed by computers, they make for an excellent regression suite. We can have the computer run our scenarios on every build, or at least a few times a day. Since the team gets feedback from the test suite quickly after each change, regression bugs are caught at an early stage.
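
One common way to achieve this (a sketch, assuming Cucumber-JVM with the JUnit 4 runner; the features path is just an example) is to add a small suite class, so the scenarios execute as part of the regular test run that every CI build already performs:

    import org.junit.runner.RunWith;
    import io.cucumber.junit.Cucumber;
    import io.cucumber.junit.CucumberOptions;

    // Runs every .feature file found under the given path as part of
    // the regular JUnit test run, so any CI build executes the
    // acceptance tests automatically.
    @RunWith(Cucumber.class)
    @CucumberOptions(features = "src/test/resources/features")
    public class AcceptanceTestSuite {
    }

With this in place, an ordinary Maven or Gradle test run executes the acceptance tests along with the unit tests.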

This may all sound a bit too theoretical. Therefore I will continue on acceptance testing with a more practical post about using these practices in our development process.