How Does QA Fit within Agile?

At an open space at Agilepalooza today, we were discussing the role of QA in Agile. The level of interest, as well as the sheer number of QAs attending, was surprising.

One frequent theme was that the QA folks have to become much more proactive, getting involved at the beginning of each iteration and planning how to test each feature. They also need to be much more integrated with the team and test things as they are developed, rather than the more traditional approach of waiting until everything is hypothetically done with development. The specific approaches varied – in some cases the QA folks were fully integrated into the team, and in other cases they remained in a distinct QA department but worked closely with the developers. Some wrote “QA Acceptance Criteria” for each story up front at the planning meeting, while others developed test plans while the stories were in development. But the overall approach was very similar.

I got very interested when the discussion turned to automated testing. Unit testing in general, and TDD specifically, are very valuable tools to help deliver quality code. Pair programming does a lot for quality as well. But frankly these seem insufficient. I may be biased, but I think you need a good automated testing tool to create and execute functional integration tests, as well as performance/load testing. And, although the goal of 100% automated testing is definitely the right goal, there are some cases where a manual test makes more sense.

Our definition of done includes creating automated tests as far as is reasonable. In most cases, we create a LISA test case (which may execute against a LISA virtual service). Sometimes we rely on JUnit tests. And, when absolutely necessary, we write manual test cases which are maintained in an internal system.

This is all the team’s responsibility; it typically falls to the coder to write his or her own tests. The keys are that we’re not “throwing it over the wall” to another team, and that we don’t get “credit” for it if the testing isn’t done.

Of course, the usual objection to having the developer write the tests is that developers don’t have the same focus on “How can I break this?” that a specialized quality analyst does. (Seriously, who would type letters into a text field that clearly calls for numbers? What is wrong with those people?) So our teams include a full time quality analyst. His job is not to write or execute the tests; instead he acts to advise and assist us. He specifies what sort of testing is required, helps come up with test scenarios, helps us create the automated tests, advises us on existing automated or manual tests, helps set up the testing environments, and reviews/approves our test plans.

Running in parallel, we have a separate QA group. Their primary focus is on automating as many of our manual tests as they can. Essentially they work off a “Test Plan” backlog, working iteratively just like the product development teams. This team is also responsible for executing all the manual tests when we do a regression test for each release. We’re still struggling with what to do about bugs found during that regression test – you don’t want to keep an iteration open (or re-open it after you’ve started the next one), but critical bugs must be fixed before the software can be commercially released. So we’re considering a “hardening sprint,” which is admittedly not consistent with the ideas of quality and “done” that are deeply embedded in Agile, but it seems to be an expedient temporary solution.

This is by no means a perfect approach – we still have a distinct QA department and perhaps more distinct roles on our team than we should have. Without doing a full regression test within each iteration we are delivering code of admittedly unknown quality. But it’s still a lot better than the “throw it over the wall” process, and seems to be a reasonable solution for us right now.

Killing Zombies

Around our office, the product owner and I have started to use the metaphor of “killing zombies” to discuss moving tickets through our issue tracking system.  It makes the discussion a lot more fun, and it turns out to be a very good metaphor.  Sometimes you think a zombie is dead and it isn’t.  Sometimes they’re really hard to kill.  Sometimes they just wander off into the woods.  And it’s just more fun to ask “Which zombie should we kill first?” than “What’s the relative priority of these two stories?”

I found a zombie wandering through the woods today, and it turns out it’s been there quite a while – 2 1/2 years in fact.  The initial case was to address an inconsistent spelling of something in one menu item in our application, and an incidental question about the tooltip and whether the item was in the right spot anyway.  It was classified as high priority.

This case has bounced around since then between no less than 5 managers and 4 developers, for 2 1/2 years.

The irony is that I renamed the menu item, fixed the tooltip, and moved it to the right location over a year ago without even knowing about this ticket.  The last year’s worth of comments and activity all note this fact.  But still the stubborn zombie just won’t die.

I am not sure, but I think it might be a decent idea to just close any case that’s over a year old, as it’s obviously low priority.  Carpet bomb the woods to kill any lingering zombies.  In any case, this demonstrates how important it is to have a streamlined process and a well groomed backlog so that zombies get dispatched fairly quickly.  Otherwise, like this one, they’ll come back over and over to eat your brains.
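If the tracker exposed its data programmatically, the carpet bombing could even be automated. Here's a minimal sketch – the `Ticket` shape and the ticket IDs are made up for illustration, not taken from any real tracker's API:

```java
import java.time.LocalDate;
import java.util.List;

public class ZombieSweep {
    // Hypothetical ticket shape; a real tracker would expose something similar.
    record Ticket(String id, LocalDate lastActivity, boolean open) {}

    // A zombie: still open, but untouched for over a year.
    static List<Ticket> findZombies(List<Ticket> tickets, LocalDate today) {
        LocalDate cutoff = today.minusYears(1);
        return tickets.stream()
                .filter(Ticket::open)
                .filter(t -> t.lastActivity().isBefore(cutoff))
                .toList();
    }

    public static void main(String[] args) {
        List<Ticket> backlog = List.of(
                new Ticket("MENU-42", LocalDate.of(2006, 10, 1), true),  // 2 1/2 years stale
                new Ticket("API-7", LocalDate.of(2009, 2, 1), true));    // recently active
        // Anything returned here is a candidate for closing as obviously low priority.
        findZombies(backlog, LocalDate.of(2009, 4, 1))
                .forEach(t -> System.out.println("close " + t.id()));
    }
}
```

In practice you'd probably close each one with a comment inviting anyone who still cares to reopen it – if nobody does, the zombie really was dead.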

Talk Amongst Yourselves

I just finished attending an excellent Certified ScrumMaster training course by Mike Cohn. I knew most of the material already but it was an excellent refresher course and it was also great to hear Mike’s experience and opinion on several topics, not to mention that of the other students.

When Mike covered the topic of using cross-functional or feature-oriented teams, rather than component-oriented teams, one fairly predictable question came up: “But how do you ensure that what team X does is compatible with what team Y does?” I know I’ve heard this question before, usually from someone who is an architect, DBA, or in a similar role. For instance, if you have one DBA on each of three teams, how can you then prevent the database designs from wildly diverging? This is particularly a concern in large enterprises where many applications will share some architectural component, such as a database or a public API.

Of course there is a good answer to this question, which is to organize communities of practice. It’s helpful that there is a name for this practice, and I’m glad that it’s included in the Scrum training.

But what I found interesting is that the question was asked at all. The answer seemed obvious to me. If you have some architectural component that is shared by many teams, why wouldn’t the interested parties simply talk to each other? Why wouldn’t it occur to people to talk about what they’re doing with others who are doing similar things? It just seems natural that the testers would get together and chat about their various test automation tools and maybe (hopefully) agree on one to use across teams. It seems like the programmers would just naturally get together across teams and chat about what the public API is going to look like.

It’s almost as though when you draw boxes around a group of names and call it an “org chart” we see those boxes as big walls, which you aren’t supposed to go outside of. What kind of bizarre behavior is that?

A good development practice – whether it’s termed agile or not – ought to be a way to help you do your work, not to limit you. Having a cross-functional team, daily standups, and user stories doesn’t mean that’s all the communication you are allowed to have. These are not limits.

Make friends. Put information on a wiki. Send out broadcast email. Organize informal meetings. Talk amongst yourselves. It’s really OK.

Why Aren’t We a Profession?

I saw an article in the New York Times that the lawyers who provided the legal rationale for waterboarding may be subject to professional discipline, but probably not criminal charges.

Without getting into any political issues, or whether these interrogation techniques constituted torture, or even if torture was justifiable, I’d like to talk about what this situation tells us about the legal profession.

In short, the operative principle is that lawyers are expected to adhere to a certain standard, regardless of any pressure from outside sources or circumstances, including what their bosses want them to do. From the article:

Among the questions it is expected to consider is whether the memos reflected the lawyers’ independent judgments of the limits of the federal anti-torture statute or were skewed deliberately to justify what the C.I.A. proposed. … Several legal scholars have remarked that in approving waterboarding the Justice Department lawyers did not cite cases in which the United States government had prosecuted American law enforcement officials and Japanese interrogators in World War II for using the procedure.

In other words, the Justice Department is trying to decide if these lawyers caved to pressure, or came to their decisions independently. The fact that they didn’t refer to cases that wouldn’t support their opinions indicates it was the former.

Consider which other professions are expected to adhere to certain standards that rise above concerns for any outside circumstances or pressure. Doctors are a great example. Accountants have international standards they must adhere to. Architects and engineers come to mind immediately. Do you want to live in a house that you suspect was approved by an architect because his boss was breathing down his neck?

My question is, why don’t developers have the same sort of standards?

Our world runs on software. If a doctor makes a mistake, maybe a patient dies. I suspect one day in the near future a software bug is going to kill an awful lot of people (ever heard of Therac-25?). Scenarios in which bad code costs a lot of people a lot of money are not exactly hard to conceive. And it’s only going to get worse as time goes by.

But even if we weren’t building systems that can seriously disrupt people’s lives, we still should hold ourselves to a higher standard. The code we write every day is essential to the businesses that pay us. If that inventory management application has an obscure bug in the spaghetti logic, one day it’s going to cause some serious issues for the company that trusts it – the company that trusted us to build that system.

Finally, developers are very well paid – about double the US median income – for our services. We should be counted on, as professionals, to do the best job possible. We should be trusted not to just slap something together to hit an arbitrary deadline to make us, our team, or our boss, look good. When we write code that’s fragile and hard to maintain, that costs our employer money – and not just a little bit.

The next time someone comes back to that code to fix a bug or add a feature, and they are afraid to change it due to a lack of automated tests, or they can’t quite figure out where to make the needed change, or they miss one of the seventeen places that logic is duplicated – that’s as much a loss to the company as if we just flat out stole the equivalent of the next programmer’s salary as he struggles with our sloppy code.

When we lie to our bosses about how close we are to finishing a project, so that management continues the project rather than redirect us to something with a higher ROI, we are responsible for wasting our employer’s money.

When we don’t bother validating a feature request, don’t get feedback on what’s really needed, and don’t exercise our own judgment, but instead just do the quickest implementation of the narrowest and most simplistic interpretation of the requirement/request, we have not earned our pay.

When we don’t thoroughly test the code we write (using automated or manual techniques), and just toss it over the wall for QA to deal with, we are responsible for wasting their time, as well as the ramifications if our buggy code makes it into the wild.

When, due to our own ignorance and laziness, we choose an approach, framework, design, or language that is not the most appropriate for the situation at hand, we are responsible for the results. If there’s a different approach that we didn’t evaluate but would have made the development faster, the code clearer and simpler, the maintenance easier, or the deployment and operation smoother, well, there’s nobody to blame but us, is there?

Software developers need to take responsibility for the decisions we make, or fail to make, and we need to do it collectively. We need to move toward becoming a profession.

You are Responsible for your Own Career

Bob Martin recently ranted

YOU, and NO ONE ELSE, is responsible for your career. Your employer is not responsible for it. You should not depend on your employer to advance your career. You should not depend on your employer to buy you books… If they won’t buy them, YOU buy them! It’s not your employers responsibility to teach you a new language. It’s great if they send you to a training course, but if they don’t YOU teach the language to your self!

Uncle Bob’s rant was provoked by folks complaining about Michael Feathers’ list of papers every software developer should read. Feathers had the audacity to actually link to articles hosted at a (gasp!) paid site.

I feel roughly the same way about developers who maybe vaguely wish they had a better career, but do approximately nothing about it. Hope may be a good campaign slogan, but it is not a strategy. You’re the only one who actually cares about your own career.

Here are a handful of ideas on how to take charge of your career:

If you believe a given tool will make you more effective at your current job, buy your own tools. There are a few cases where this is not possible (e.g., expensive testing or modelling tools), but seriously, how much is a copy of Visual Studio, IntelliJ, or MyEclipse? Need a better machine, more memory, another monitor? Unless you have a very restrictive IS department, I bet you can get away with it.

Uncle Bob mentions purchasing books. Most of the good programming books run about $40. Even if you read one every other month, that’s less than $300 for the entire year. If we actually take the time to read them well and learn, these books will be invaluable at advancing our careers.

You should learn a new language or framework each year. Get on the job search websites and figure out which ones are hot, then get busy learning. Most of the languages, tools, frameworks, etc., are either free or very inexpensive.

Even better than going it alone, you could start a discussion group at your company. If you’re unemployed right now, you can probably find plenty of other unemployed developers who are interested in the same things as you. Pick a book and read through it together in a couple of months. Learn a framework or language together.

There are enough user groups in the DFW metroplex that I could probably go to 1-2 per week if I wanted to. Many of these groups are free; some have minimal costs or meet for lunch or happy hour. If you tried hard, you might spend $20 or so each month. For that trivial price, you get both education and networking.

With slightly more effort, you could probably become a speaker at one of these groups. But why stop there? You could surely put together a 30 minute talk on something useful you know. At that point it’s just a matter of finding places that will let you talk to them. Practice at your own company. Then call your friends at other companies and see if you can come do a “lunch and learn” for a team. If it helps, offer to bring the lunch. You can feed a team of 10-12 for less than $100. For that price, you gain credibility and expand your network. Quite an ROI, isn’t it?

Start a project that matters to you. You surely have a hobby, or belong to an organization, or care about a charity, that would provide an opportunity to develop something. Use a language or framework that you want to learn. Practice the development skills you’ve been learning from all the great books you’re reading. Build the entire application with BDD and TDD. Work iteratively. Get some of the friends you’ve made through the user groups you’re attending to help you out.

If you don’t have any good ideas for projects to start, then you can always freelance/moonlight or contribute to an open source project. Freelancing/moonlighting has the nice benefit of getting paid, as well as building up a network and client base that could turn into a lucrative career going solo. Working on open source software helps build your reputation, as well as hone your development and collaboration skills.

Your career can be something that happens to you, or something you take charge of. Which will it be? What can you start doing right now to take charge?

Prefer Delegation to Inheritance

Benjamin Nortier at 21st Century Code Works has a great post exploring the Liskov Substitution Principle in which he tries to determine if Square “is-a” Rectangle. He rightly concludes that a Square is not a Rectangle as far as development is concerned, whatever may be true in mathematics. (HT: Bob Koss)

I think the improper subclassing that Benjamin discusses usually stems from a commendable effort to reuse code, rather than abstract ontological considerations. Square and Rectangle share a lot of code, so it’s pretty tempting to extend Rectangle. A better way to do this is through delegation.

Let’s assume you have this class:

public class Rectangle {
  private int width;
  private int height;
  public void setWidth(int width) { this.width = width; }
  public void setHeight(int height) { this.height = height; }
  public int getArea() { return width * height; }
  public int getPerimeter() { return (width + height) * 2; }
}

Now you need to implement class Square, and it’s pretty tempting to simply extend Rectangle and change the definition of setWidth() and setHeight(). But there is a better way.

public class Square {
  private final Rectangle r = new Rectangle();
  public void setSide(int side) { r.setWidth(side); r.setHeight(side); }
  public int getArea() { return r.getArea(); }
  public int getPerimeter() { return r.getPerimeter(); }
}

This approach is superior because it does not violate the Liskov Substitution Principle, yet it still achieves about the same level of code reuse. But that reuse no longer comes at the cost of violating encapsulation.
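To make the violation concrete, here's a small illustration of my own (not from Benjamin's post) of what goes wrong if Square instead extends Rectangle and overrides the setters to keep its sides equal:

```java
public class LspDemo {
    static class Rectangle {
        protected int width, height;
        public void setWidth(int width) { this.width = width; }
        public void setHeight(int height) { this.height = height; }
        public int getArea() { return width * height; }
    }

    // The tempting-but-wrong subclass: it preserves the "all sides equal"
    // invariant by silently changing the setters' behavior.
    static class Square extends Rectangle {
        @Override public void setWidth(int side)  { width = height = side; }
        @Override public void setHeight(int side) { width = height = side; }
    }

    // Written against Rectangle's contract: after setting width 5 and
    // height 4, any caller reasonably expects an area of 20.
    static int areaAfterResize(Rectangle r) {
        r.setWidth(5);
        r.setHeight(4);
        return r.getArea();
    }

    public static void main(String[] args) {
        System.out.println(areaAfterResize(new Rectangle())); // prints 20
        System.out.println(areaAfterResize(new Square()));    // prints 16
    }
}
```

The 16 is exactly the kind of surprise the delegation version avoids: a Square passed where a Rectangle is expected quietly breaks code that was perfectly correct for every real Rectangle.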

5 Reasons Standardizing Your Process is a Great Idea

I recently wrote about some reasons that standardizing your development process is a terrible idea. But I’ll admit there are a few reasons that it might be a great idea instead.

Depending on the company you work for, you might need to comply with certain external regulations. At a minimum, you’re going to have to write down what your process is and adhere to that process, which will require some level of standardization. This doesn’t mean you need to document and control every part of your development process; less is probably better in these cases.

I’m sure we’ve all seen the teams where the “Scrum Master” dishes out assignments and treats the daily standups like a status meeting. Whatever that is, it ain’t agile. But as long as they’re using the right words, and in the absence of any standard, what’s to stop them? A standardized process is objective and can be assessed. Standardization will force an organization to actually flesh out what kind of environment and processes the people will use. With some level of standardization, at least there are some rules that teams can go back to when faced with various situations – the temptation to blow off unit tests, pressure to work at an unsustainable pace, etc.

Along these lines, a standardized process can help keep a team from slouching toward mediocrity. It’s nice to think that all developers are highly disciplined and will stick to the best practices they have mastered. But it doesn’t tend to work out that way, and before too long you’ve blown off pair programming, the daily standup takes half an hour, the end of the iteration slips now and then, and nothing is ever “done done”. Woops. A standardized process shows us what the standard actually is, and can be enforced when needed to keep us from mediocrity.

A standardized process also allows certain organizational efficiencies that would be hard to achieve with multiple, constantly varying, processes. For instance, if managers can count on getting the same metrics from each team, each iteration, it will help them be able to recognize problems and respond appropriately. When the system administrators know just what to expect from the development teams, and when, it will help deployments go more smoothly. If IPMs, retros, and demos can be coordinated, it can help stakeholders get to those meetings and avoid scheduling conflicts.

Ultimately, though, the best reason to standardize your process – and the reason most people want to do it anyway – is that it allows the organization to grow and learn. Each team doesn’t have to re-learn each lesson. Newbies can come up to speed on the practices before they’ve really mastered the philosophy. Understanding is important, but it’s not everything. It’s OK to learn by seeing what was successful for others. The danger here, of course, is that the growth and learning never come, and good practices simply become tradition with no underlying comprehension of the motivations or forces involved.

In conclusion, I think some minimal standardization is probably a net good, although I’m not sure it needs to be formalized. Certainly you want a default process for teams to start out with. Any level of standardization must leave room for experimentation and innovation within that process, and can never be a substitute for learning and understanding.

Manifesto for Software Craftsmanship

The Manifesto for Software Craftsmanship has been published. Go read it and sign it!

I’m thrilled to see something like this come out. I know from my own experience the tendency to just hack it and go on, and then the abuse/distortion of agile ideas to justify such sloppiness. Or the attitude that “I’m just a code monkey” that disavows any ultimate responsibility for the overall success of the project.

I look forward to the time when professionalism and craftsmanship are the norm in our profession, just as they are among lawyers, doctors, architects, engineers, musicians, etc.

Five Reasons Standardizing Your Process is a Terrible Idea

Soon after an organization adopts agile software development methods, it seems that managers inevitably talk about the need to “mature” and “standardize” their process. Here are a few reasons standardizing your agile process is a terrible idea.

Standardizing on a process, across teams and projects, prevents innovation and progress. As the Agile Manifesto urges, “At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.” If a process is standardized, then it will be either impossible or very difficult to adjust your practices. One thing I really love about iterative development is that you can run experiments on your development process itself. You can modify your process for an iteration or two and see how it works. If it’s an improvement, you can continue. If it’s detrimental, then at least you learned something. Rigid standardization prevents this.

In the worst scenario, standardization can force teams to knowingly do the wrong things. No two projects are alike, and no two teams are alike. The context always varies, even perhaps from iteration to iteration. The fact is that a “best practice” for one team may not be a good practice for a team in a different context. There are few, if any, absolutely best practices that are applicable to all situations. A standard process can force a team to follow a rule that it intuitively knows to be inappropriate.

Standardization will require some type of process or oversight, which will exist solely to enforce the standardized process. There will be some amount of work involved to make sure each team is following the rules. Otherwise how do you know that team down the hall is actually doing TDD instead of just writing tests after the fact? Somebody has to own, and enforce, the process. This flies in the face of simplicity – maximizing the amount of work not done.

A standardized process ruins the outcome-oriented spirit of agile software development. If a rigid process that I can’t change is being enforced, and I follow the process and work diligently, I can’t be held accountable for failure. After all, I followed the rules. Instead, development teams should stick to the motto “Get ‘r done!”

Related to that, standardizing a process removes the need to actually understand why certain practices exist. When you don’t have to derive, or at least continually justify, your practices from first principles, then it’s fairly easy to not bother understanding what those first principles are, and how they correlate to your practices. Allowing teams to vary their process as needed will help ensure that those processes flow from actual common values.

I’m not against all forms of standardization, and I think an organization – even the entire craft of software development – should have some “default” practices. Stay tuned for a future post on why standardizing your process is a wonderful idea!

Name your unit tests clearly

I advocate unit testing, and particularly test driven development, for several reasons. The most obvious is that unit testing helps prevent bugs. Good unit tests also help you design better – if it’s hard to test your class, you’ll refactor it so it is easier to test, and consequently wind up with a more loosely coupled design.

In addition, good unit tests can serve as excellent documentation of what the code is supposed to do. This is actually kind of hard to do well. One technique that helps me with this is to name my unit tests clearly. Each test correlates to one behavior of a class, and the name of the test reflects that behavior.

For instance, say you have an application that is responsible for order fulfillment and billing. An order has a purchaser, and optionally a shipping address. If the shipping address is null, you want to use the purchaser’s billing address. The code might look like this:

public class Order {
  private Address shippingAddress;
  private Purchaser purchaser;
  // ...
  public Address getShippingAddress() {
    if (shippingAddress != null) {
      return shippingAddress;
    } else {
      return purchaser.getBillingAddress();
    }
  }
}
A typical way of writing unit tests (which I blame on Eclipse’s JUnit plugin) would be to write a single testGetShippingAddress test, and test both cases inside it:

public void testGetShippingAddress() {
  Address billing = new Address(...);
  Address shipping = new Address(...);
  Order o = new Order();
  o.setPurchaser(new Purchaser(billing, ...));
  o.setShippingAddress(shipping);
  assertEquals(shipping, o.getShippingAddress());

  // now test what happens if shipping not explicitly set
  o.setShippingAddress(null);
  assertEquals(billing, o.getShippingAddress());
}

This does effectively test the class, but it isn’t very clear about the behavior if the shipping address isn’t set – particularly if you don’t notice that the shipping address was being set back to null. Instead, we could write it like this:

public void testCanExplicitlySetShippingAddress() {
  Address billing = new Address(...);
  Address shipping = new Address(...);
  Order o = new Order();
  o.setPurchaser(new Purchaser(billing, ...));
  o.setShippingAddress(shipping);
  assertEquals(shipping, o.getShippingAddress());
}

public void testUsePurchasersBillingAddressIfShippingAddressNotSet() {
  Address billing = new Address(...);
  Order o = new Order();
  o.setPurchaser(new Purchaser(billing, ...));
  assertEquals(billing, o.getShippingAddress());
}

It’s true that there’s now some duplication between these methods, but that would be removed fairly easily and I only left it in for clarity. What we’re left with is two simple, clear tests that leave no doubt about what this class is supposed to do. If this class’s behavior changes in the future, seeing that testUsePurchasersBillingAddressIfShippingAddressNotSet() failed is going to make a lot more sense than seeing that testGetShippingAddress() failed somewhere, and digging through that test to figure out just what I broke. You can just drop the word “test” from the front of each test method and it reads like a simple spec.

This approach also works well with TDD’s “write a test, make it pass, refactor, check in” approach. Each test represents exactly one behavior and supports the iterative approach of building a class one failing test at a time. In fact, I suspect that using fewer, coarser-grained tests (as in my first example) is pretty good evidence that the coder didn’t do TDD.