How Does QA Fit within Agile?

At an open space at Agilepalooza today, we were discussing the role of QA in Agile. The level of interest, as well as the sheer number of QAs attending, was surprising.

One frequent theme was that QA folks have to become a lot more proactive, getting involved at the beginning of each iteration and planning how to test each feature from the start. They also need to be much more integrated with the team, testing things as they are developed rather than waiting, as in the more traditional approach, until everything is supposedly done with development. The specific approaches varied – in some cases the QA folks were fully integrated into the team, and in other cases they remained in a distinct QA department but worked closely with the developers. Some wrote “QA Acceptance Criteria” for each story up front at the planning meeting, while others developed test plans while the stories were in development. But overall the approaches were very similar.

I got very interested when the discussion turned to automated testing. Unit testing in general, and TDD specifically, are very valuable tools for delivering quality code, and pair programming does a lot for quality as well. But frankly these seem insufficient. I may be biased, but I think you also need a good automated testing tool to create and execute functional integration tests, as well as performance/load tests. And although 100% automated testing is definitely the right goal, there are some cases where a manual test makes more sense.
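As a quick illustration of the TDD side of this, here’s a minimal sketch (OrderCalculator is a hypothetical class, not from our codebase): the test is written first, fails, and the code exists only to make it pass.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical TDD example: this test is written before the class
    // it exercises, and fails until totalWithTax() is implemented.
    public class OrderCalculatorTest {

        @Test
        public void totalIncludesSalesTax() {
            OrderCalculator calc = new OrderCalculator(0.08); // 8% tax rate
            assertEquals(108.00, calc.totalWithTax(100.00), 0.001);
        }
    }

    // The simplest implementation that makes the test pass.
    class OrderCalculator {
        private final double taxRate;

        OrderCalculator(double taxRate) {
            this.taxRate = taxRate;
        }

        double totalWithTax(double subtotal) {
            return subtotal * (1 + taxRate);
        }
    }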

Our definition of done includes creating automated tests as far as is reasonable. In most cases, we create a LISA test case (which may execute against a LISA virtual service). Sometimes we rely on JUnit tests. And, when absolutely necessary, we write manual test cases which are maintained in an internal system.
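To make that concrete, here’s a sketch of the kind of functional test this produces, written as plain JUnit since a LISA test case can’t really be reproduced in text. The endpoint, URL, and payload are all hypothetical; the point is that the test doesn’t care whether it’s talking to the real dependency or a virtual service that emulates it.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical functional test: the endpoint could be the real
    // service or a virtual service standing in for it.
    public class AccountServiceIT {

        private static final String BASE_URL = "http://localhost:8080"; // assumed test host

        @Test
        public void lookupReturnsKnownAccount() throws Exception {
            URL url = new URL(BASE_URL + "/accounts/12345");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");

            assertEquals(200, conn.getResponseCode());

            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                // Expected payload is whatever the (real or virtual) service serves up.
                assertEquals("{\"id\":12345,\"status\":\"ACTIVE\"}", in.readLine());
            }
        }
    }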

This is all the team’s responsibility; it typically falls to the coder to write his or her own tests. The keys are that we’re not “throwing it over the wall” to another team, and that we don’t get “credit” for it if the testing isn’t done.

Of course, the usual objection to having the developer write the tests is that developers don’t have the same “How can I break this?” focus that a specialized quality analyst does. (Seriously, who would type letters into a text field that clearly calls for numbers? What is wrong with those people?) So our teams include a full-time quality analyst. His job is not to write or execute the tests; instead he advises and assists us: he specifies what sort of testing is required, helps come up with test scenarios, helps us create the automated tests, advises us on existing automated or manual tests, helps set up the testing environments, and reviews/approves our test plans.
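For instance, here’s the kind of hostile-input test a quality analyst pushes us to write, as a hypothetical sketch (QuantityField is an invented example, not actual product code):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.fail;

    // Hypothetical negative test: the "letters in a numeric field" case
    // a developer is tempted to skip and a quality analyst insists on.
    public class QuantityFieldTest {

        @Test
        public void rejectsNonNumericInput() {
            try {
                QuantityField.parse("abc");
                fail("Expected non-numeric input to be rejected");
            } catch (IllegalArgumentException expected) {
                // the field should refuse letters, not crash or coerce them
            }
        }

        @Test
        public void acceptsOrdinaryNumbers() {
            assertEquals(7, QuantityField.parse(" 7 "));
        }
    }

    // Minimal implementation so the sketch is self-contained.
    class QuantityField {
        static int parse(String raw) {
            try {
                return Integer.parseInt(raw.trim());
            } catch (NumberFormatException e) {
                throw new IllegalArgumentException("Quantity must be a number: " + raw);
            }
        }
    }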

Running in parallel, we have a separate QA group. Their primary focus is on automating as many of our manual tests as they can. Essentially they work off a “Test Plan” backlog, working iteratively just like the product development teams. This team is also responsible for executing all the manual tests when we do a regression test for each release. We’re still struggling with what to do about bugs found during that regression test – you don’t want to keep an iteration open (or re-open it after you’ve started the next one), but critical bugs must be fixed before the software can be commercially released. So we’re considering a “hardening sprint,” which is admittedly not consistent with the ideas of quality and “done” that are deeply embedded in Agile, but it seems to be an expedient temporary solution.

This is by no means a perfect approach – we still have a distinct QA department, and perhaps more distinct roles on our team than we should. Without doing a full regression test within each iteration, we are delivering code of admittedly unknown quality. But it’s still a lot better than the “throw it over the wall” process, and it seems to be a reasonable solution for us right now.