A few weeks ago, Katja Obring wrote a blog post which I encourage everyone to read. It drew quite a few comments – some agreeing, some not, which is fair. Everyone is entitled to their opinion. This post is to share mine.
One of my favourite sayings is: “automated tests are a change detector,” and I’m a firm believer in that sentiment. However, I really mean “automated regression tests are a change detector.” This slight change gives a different perspective – that the frequently run tests catch any changes (perhaps defects) that teams didn’t think of when developing new stories.
If a team automates a test after the code is written, running it for the first time may catch a defect. I believe those are the first two examples that Katja uses. Every run after that – assuming the defect is fixed – should pass cleanly. The test will fail if, and only if, someone changed that behaviour. That change may come from a new story or arrive accidentally as a side-effect of something else. The test fails, and the team can then decide whether the test detected a failure, or the test is now wrong and should be corrected.
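As a minimal sketch of that "change detector" idea (the function, values and behaviour here are invented for illustration):

```python
# Hypothetical production code: a simple discount calculation.
def apply_discount(price, percent):
    """Return price reduced by the given percentage, to 2 decimal places."""
    return round(price * (1 - percent / 100), 2)

# Regression test written after the code: it pins down today's behaviour.
# It passes for as long as the behaviour is unchanged, and fails the moment
# anyone alters apply_discount – deliberately or as a side-effect.
def test_apply_discount():
    assert apply_discount(100.00, 10) == 90.00
    assert apply_discount(19.99, 0) == 19.99

test_apply_discount()
```

If a new story changes the rounding rule, this test fails, and the team decides whether that failure is a defect or whether the test's expectations now need updating.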
When a test fails, someone looks at it to determine the cause and what to do with that information. In some cases, AI/ML can diagnose what the problem is and give the team options. As of today, I think we’re some way away from AI/ML automatically changing the code or test to fix the problem accurately.
There are a few automation strategies – such as chaos engineering, property-based testing and model-based testing – that will find new defects, because they generate many kinds of scenarios that were not thought of previously, and that a human could not possibly cover in the time they have.
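As one illustration of that idea, here is a hand-rolled property-based check (the encoder and property are invented for this example; dedicated libraries such as Hypothesis do this far more thoroughly, including shrinking failing inputs to a minimal case):

```python
import random

def rle_encode(s):
    """Run-length encode a string into (char, count) pairs."""
    pairs = []
    for ch in s:
        if pairs and pairs[-1][0] == ch:
            pairs[-1] = (ch, pairs[-1][1] + 1)
        else:
            pairs.append((ch, 1))
    return pairs

def rle_decode(pairs):
    """Expand (char, count) pairs back into a string."""
    return "".join(ch * count for ch, count in pairs)

def check_round_trip(trials=1000, seed=42):
    """Property: decoding an encoding must return the original string –
    checked against far more random inputs than a human would write by hand."""
    rng = random.Random(seed)
    for _ in range(trials):
        s = "".join(rng.choice("ab ") for _ in range(rng.randint(0, 30)))
        assert rle_decode(rle_encode(s)) == s
    return trials

check_round_trip()
```

Rather than asserting one known output, the test asserts a relationship that must hold for *any* input – which is exactly how these strategies stumble onto scenarios nobody listed in a story.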
When teams practice writing their tests before writing code, the intention is to help write more testable code and ensure the code behaves as expected. Test-driven development (TDD) is an example of this at the low level and programmers can run the red-green-refactor cycle to help design their code.
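A minimal red-green sketch of that cycle (the `leap_year` function is invented for illustration): the test is written first and fails because no production code exists; the simplest implementation is then written to make it pass, after which refactoring can happen safely under the test's protection.

```python
# Red: this test is written before any production code exists, so it fails.
def test_leap_year():
    assert leap_year(2024) is True
    assert leap_year(1900) is False   # century years are not leap years...
    assert leap_year(2000) is True    # ...unless divisible by 400
    assert leap_year(2023) is False

# Green: the simplest implementation that makes the test pass.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

test_leap_year()
```

The refactor step then improves the design – rename, extract, simplify – while the passing test confirms the behaviour has not changed.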
Acceptance test-driven development (ATDD) guides development from a higher-level viewpoint – from the perspective of the business. Those tests are written from a behaviour perspective – what does the customer expect to happen? In both these cases, the tests will fail first because there is no code written. They will pass once the code is written because the programmer codes to the test. Those tests that guide development may or may not become automated regression tests.
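A sketch of what such a behaviour-level test might look like in plain Python (the shop domain, names and £50 threshold are all invented for this example; many teams express these in a Given/When/Then tool instead):

```python
# Acceptance test, phrased from the customer's perspective and written
# before the feature exists: "orders of £50 or more ship for free."
FREE_SHIPPING_THRESHOLD = 50.00
FLAT_SHIPPING_FEE = 4.99

def shipping_cost(order_total):
    """Hypothetical implementation, coded afterwards to satisfy the test."""
    return 0.00 if order_total >= FREE_SHIPPING_THRESHOLD else FLAT_SHIPPING_FEE

def test_free_shipping_for_large_orders():
    # Given a customer whose order totals £60
    # When shipping is calculated
    # Then they pay nothing for delivery
    assert shipping_cost(60.00) == 0.00
    # And smaller orders still pay the flat fee
    assert shipping_cost(49.99) == FLAT_SHIPPING_FEE

test_free_shipping_for_large_orders()
```

The assertions describe what the customer expects to happen, not how the code achieves it – which is what lets these tests guide development from the business's viewpoint.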
Katja mentions the idea of continual test improvement, and I’m a huge fan of refactoring tests – just like programmers do with code. With every new story, we should encourage looking at the existing automated tests and deciding which need to change, which should be deleted, and whether more should be added.
Focus on value
Automation is a tool – to be used as such. It does not replace human-centric testing but should complement it. I deliberately stayed away from naming roles about who should be writing the tests or looking at the test failures, because every team does it differently.
To quote Dan Ashby, “Focus on the bug and how to resolve it.” The conversation should NOT be about whether an automated test or a human found the defect. Much of the value a tester adds lies in determining what to test, helping to decide which tests should be automated, and which should be explored by a human being. A tester also adds value by articulating why we need a test in the first place – what risk it might be mitigating.