This means understanding what problem the customer is trying to solve with each new feature. One practice I have been coaching teams to try is defining acceptance tests at the feature level (or theme / epic if you are practicing Scrum). ATDD (Acceptance Test-Driven Development), BDD (Behaviour-Driven Development), and Specification by Example all talk about defining tests at the story level. We often forget about the feature level – the bigger picture. How do we know when the feature is “DONE”?
In specification workshops, we start with a feature, decompose it into stories, and often lose sight of that feature, or forget to keep it in mind, when testing. Now, if we create acceptance tests – for both desired and undesired behaviour – at the feature level, then we have something to test.
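As a minimal sketch of what feature-level acceptance tests might look like, here is a hypothetical “bulk discount” feature with one test for desired behaviour and one guarding against undesired behaviour. The feature, the `discount` function, and the thresholds are all invented for illustration, not taken from any real system:

```python
# Hypothetical feature under test: orders of 10 or more items get 10% off.
def discount(quantity: int, unit_price: float) -> float:
    total = quantity * unit_price
    return total * 0.9 if quantity >= 10 else total

# Desired behaviour: a bulk order receives the discount.
def test_bulk_order_gets_discount():
    assert discount(10, 5.0) == 45.0

# Undesired behaviour we guard against: small orders must NOT be discounted.
def test_small_order_is_not_discounted():
    assert discount(9, 5.0) == 45.0

if __name__ == "__main__":
    test_bulk_order_gets_discount()
    test_small_order_is_not_discounted()
    print("feature acceptance tests passed")
```

The point is not the arithmetic but the pairing: each feature-level test states an expectation in the customer’s terms, and the undesired-behaviour test makes the feature’s boundary explicit.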
When we decompose the feature into stories, I suggest creating a story to “Test Feature A”, with the corresponding acceptance tests. I have found the idea of creating a “Feature DONE” definition to be very powerful. There are many tests that do not make sense at the story level but do apply to the feature. Many of the Quadrant 4 tests (technology-facing tests that critique the product) fall into this category. Some examples are load testing, usability testing, and browser compatibility.
The ‘Test Feature’ story is prioritized after all the individual stories are DONE. Tasks for this story might include: Test browser compatibility, Perform load test, Automate GUI tests, Create user documentation, and Perform UAT (User Acceptance Test). Testing at the feature level enables us to run these tests when it makes the most sense. End users can be given the chance to test the complete feature and give feedback as soon as it is complete.
Adding acceptance tests to the feature, and defining Feature DONE to include the tasks and tests that make sense at this level, helps teams consider the big picture and the business problem they are trying to solve.