How the QA team DOES own Quality

I’ve spent the last four blog posts discussing how everyone except the QA team owns quality, so, to cap this series off, let’s discuss what role the QA team has in the delivery of a quality product… because it’s an important one.


The QA Team

As I stated in the opening of this series, you are here to accurately measure the status of customer quality regularly, in a repeatable fashion, throughout the development cycle. Each word in that sentence is critically important.

  • Accurately – You are responsible for designing the series of tests that the product must pass in order to be released. You won’t be able to run every configuration and stress test possible before the product is released, so you have to make sure you are testing what’s important. Just ask the person who had to test the iPhone 4 antenna… and left out actually holding the phone the way most users do. I’m sure thousands of hours of testing went into that release… they just missed what was important.
  • Measure – There are lots of ways to measure quality, but the most important include:
    • Test plan pass/fail reports – When tests are executed, do they pass or fail? If they fail, why do they fail? (See the report-summary sketch after this list.)
    • Known bug counts by severity – All products ship with bugs, so it’s not just important to know how many there are… but what severity the known issues are throughout the cycle, so that the team can fix what’s most important first. The point is to make sure people are aware of the bugs. (See the severity-tally sketch after this list.)
    • Code complexity – If you are shipping software, you need to be able to communicate where it’s complex, and look to reduce complexity BEFORE it’s developed in the first place by reviewing development plans, tickets, or user stories. You should push back on requirements that would make the product untestable or unnecessarily complex. (See the complexity-flagging sketch after this list.)
    • Test counts vs. tests remaining – Everything needs to have coverage… this really measures your work backlog.
  • Regularly – It’s meaningless to stand up at the end of development and say 50% of our tests fail. It’s your job to put quality in everyone else’s face so they can make informed decisions about adding more feature work, the time required to lock down the release, and the work that must be done. The more frequently you can run tests, the faster you will find issues and the less they will cost to fix.
  • Repeatable – The test plan and test cases must be structured so that they can be run the same way consistently. This means the more automated and structured they are, the better. You should be able to hand a test plan and automation over to any new person on the team, and they should be able to repeat your results. Even the automated tests should be spot-checked manually, but having a nightly build report from all the unit and integration tests is critical to finding issues fast.
  • Customer – You should be reading every customer-reported issue. Those are the ones that snuck out. Each needs to be fixed and to have a test case added for it. Customers are doing you a favor by telling you about a problem, and it’s your job to stand up and make sure those issues are addressed.
  • As a bonus – nothing beats a few good rounds of ad-hoc testing by creative people trying to break things.
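
To make the pass/fail reporting above concrete, here is a minimal sketch that summarizes a JUnit-style XML results file, the format most test runners can emit (for example, `pytest --junitxml=results.xml`). The file name `results.xml` is an assumption; point it at whatever your runner actually produces.

```python
# Minimal sketch: summarize a JUnit-style XML test report.
# Assumes "results.xml" was produced by your test runner.
import xml.etree.ElementTree as ET

def summarize(path: str) -> None:
    root = ET.parse(path).getroot()
    passed = failed = 0
    reasons = []
    # iter() matches the root element too, so this handles both a
    # bare <testsuite> root and a <testsuites> wrapper.
    for suite in root.iter("testsuite"):
        for case in suite.iter("testcase"):
            if case.find("skipped") is not None:
                continue  # don't count skipped tests either way
            problem = case.find("failure")
            if problem is None:
                problem = case.find("error")
            if problem is None:
                passed += 1
            else:
                failed += 1
                reasons.append((case.get("classname", ""),
                                case.get("name", ""),
                                problem.get("message", "")))
    total = passed + failed
    print(f"{passed}/{total} passed"
          f" ({100 * passed / max(total, 1):.1f}%)")
    for classname, name, message in reasons:
        print(f"FAIL {classname}.{name}: {message}")

if __name__ == "__main__":
    summarize("results.xml")
```

The point isn’t the script; it’s that a summary like this can land in everyone’s inbox after every nightly run.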
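For the known-bug measure, even a trivial tally keeps severity in front of the team. This sketch assumes a CSV export from your bug tracker with `severity` and `status` columns; the file name `bugs.csv` and both column names are assumptions, so map them to whatever your tracker actually exports.

```python
# Minimal sketch: tally open bugs by severity from a tracker export.
# "bugs.csv" and its "severity"/"status" columns are assumptions.
import csv
from collections import Counter

def open_bugs_by_severity(path: str) -> Counter:
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Count everything that isn't closed as a known open bug.
            if row["status"].strip().lower() != "closed":
                counts[row["severity"].strip()] += 1
    return counts

if __name__ == "__main__":
    for severity, count in open_bugs_by_severity("bugs.csv").most_common():
        print(f"{severity}: {count}")
```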
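On the complexity point, you don’t need heavyweight tooling to start the conversation. The sketch below walks Python source with the standard-library `ast` module and counts branch points per function as a rough proxy for cyclomatic complexity. The threshold of 10 is an arbitrary assumption, and dedicated tools (radon and the like) do this far more thoroughly.

```python
# Minimal sketch: flag functions whose branch count suggests high
# complexity. Counting decision points roughly approximates
# cyclomatic complexity; the threshold of 10 is an assumption.
import ast
import sys

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                ast.With, ast.BoolOp, ast.IfExp)

def flag_complex_functions(source: str, threshold: int = 10):
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # 1 for the straight-line path, plus one per decision
            # point (branches in nested functions count toward the
            # enclosing function too -- fine for a rough flag).
            score = 1 + sum(isinstance(child, BRANCH_NODES)
                            for child in ast.walk(node))
            if score >= threshold:
                yield node.name, node.lineno, score

if __name__ == "__main__":
    path = sys.argv[1]
    with open(path) as f:
        for name, lineno, score in flag_complex_functions(f.read()):
            print(f"{path}:{lineno} {name}() complexity ~ {score}")
```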

OK, this concludes this five-part series on product quality. It’s obviously something I’m pretty passionate about, and I hope you enjoyed it. If you are digging back for the first four posts in this series to see how other roles own quality… here they are:


      2 responses to “How the QA team DOES own Quality”

      1. Matt LeClair says:

        Nice post Josh!

        Regarding measuring quality, I think this is far tougher than it appears to be, because for the most part, there aren’t *any* ways to directly measure quality (perhaps beyond direct customer feedback). What we *do* have are a million different metrics that we like to think describe the quality of our product, and half a dozen different stakeholders who all have their own particular “pet metrics” that they think “measure quality the best”.

        I think that QA needs to move away from the tired old metrics of Code Coverage, Pass/Fail reports, etc etc, and start thinking outside the box again.

        Code Coverage doesn’t tell you anything other than how much of the shipping code your (good or bad) tests exercised. Badly written tests that cover 100% of the product don’t determine quality.

        Pass/Fail reports are a nice barometer to show trends and such, but in and of themselves they also don’t help to measure quality directly. Two consecutive reports might show a pass rate of 90%. YAY! WE’RE ABOVE THE BAR! Except on day 1, the failing 10% are minor issues that the team would be fine with shipping, and on day 2, the failing 10% are massive priority-one features that are ship-stoppers. Yet both reports say 90% Pass Rate!!

        Test needs to get into a mode of finding ways to measure quality at the feature level. How many bugs have been found in Feature X? How many were “bad”? Are there any trends I can spot? If I find a trend, did a single developer introduce those bugs? Perhaps that dev worked on other code and introduced similar flaws in other features? How can I break down my test coverage to ensure that I have solid coverage of the highest-priority features, while still providing enough coverage of lower-priority areas to generate confidence that we won’t have a ship-stopping bug in those areas?

        Test is getting stagnant, imo, and it’s time we stepped up and started changing the tired old saws of what test is supposed to do.


        • Anonymous says:

          I agree that you can get much deeper. We do code analysis, for example, that shows us what areas have the most churn and therefore the most risk. We also look at developer stats. I simply assert that you have to at least be able to measure the basics.
