Thursday, May 13, 2010

Help - Bug Fix Rates

Bug fix rate for the Test Organization is defined as:

   (Number of Bugs Fixed that were Found by the Test Org / Number of Bugs Found by the Test Org) x 100
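
For concreteness, here is a minimal sketch of the calculation; the counts are hypothetical, purely for illustration:

      def bug_fix_rate(bugs_fixed, bugs_found):
          # bugs_found: bugs found by the Test Org
          # bugs_fixed: of those, how many were fixed
          if bugs_found == 0:
              return 0.0
          return 100.0 * bugs_fixed / bugs_found

      # hypothetical counts, for illustration only
      print(bug_fix_rate(bugs_fixed=140, bugs_found=200))  # -> 70.0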

A higher rate implies greater alignment with development and its priorities, i.e. test is not testing features where bugs won't get fixed.

However, what's a realistic bug fix rate goal?

   70%? i.e. 70% of bugs found by Test are fixed.

Does anyone know of any numbers out there I can compare against?

Thank you!

7 comments:

  1. This really puzzles me. How are you planning to use this metric? What behavior are you trying to motivate or discourage?

    I would never have a Goal that motivated testers to avoid reporting bugs that they encounter, as it seems this will.

  2. Hi Joe, this metric is one piece of the jigsaw puzzle of evaluating the effectiveness of testing.

    The more effective testing is, the more of the bugs we file get fixed.

    We can improve our effectiveness by focusing testing in the right areas, to the right depth and in alignment with business priorities.

    Bugs in areas of the software that are high business priorities are more likely to be fixed - as a test organization, we add value when we find bugs that get fixed.

    Finding bugs (and doing testing) that do not result in fixes does not add value.

    Tracking the Bug Fix Rate of testing will allow us to critique the effectiveness of test process improvement (TPI) efforts, so that we do not spend time on TPI that does not yield results.

    I'm not sure how looking at the CR fix rate motivates testers not to report bugs - it motivates testers to test in the right areas and report bugs in the right areas.

    This metric is most certainly not the full picture when reviewing test effectiveness, but it is a useful data point.

  3. "Tracking the Bug Fix Rate of testing will allow us to critique the effectiveness of test process improvement efforts."

    I guess we'll need to agree to disagree on that one.

    The decision to fix or not fix a bug is a business decision. Penalizing testers for reporting a bug that the business decided shouldn't be fixed seems odd to me.

    If you use your Bug Fix Rate as a measure of your effectiveness, I would expect to see bug reporting skewed away from what is actually found during testing, to what could be used to improve the metric.

    I can (consciously or not) choose to hold back on reporting bugs that I suspect might not be fixed. I could even wait until my friend the developer agrees to fix the bug, before I bother reporting it.

    If I choose wisely, I get a better metric, presumably happier management, and nobody is the wiser. If I choose poorly, customers will report bugs that I have already found, but chose not to report.

    I don't understand the point about "testers testing in the right area". Is that a big problem in your shop? Do testers often wander off their job and find real bugs in areas that you'd rather they avoid?

  4. Anyway, to answer your question, "what's a realistic bug fix rate goal?"

    Based on what you are saying, it seems like anything less than 100% would indicate a problem in your shop.

  5. This is one of those typical dangerous measurements.

    I see value in it if it is used as part of an analysis with knowledge of the details, e.g. "let's have a look at which bugs get fixed and see what we can learn from it".

    But, as Joe points out, it is probably counterproductive if used as a target: "testers must be above a 75% fix rate".

    /Rikard

  6. I agree that a metric taken alone can be dangerous, have undesired side-effects, and does not reflect on the entire system of testing.

    This is just one metric in a suite of both qualitative and quantitative metrics.

    The main objective of this metric is to put a number on test effectiveness (as in, how test influences final software quality). It will be used as part of an analysis to determine what gets fixed and what doesn't get fixed, and to use this information to direct testing toward the areas where bugs do get fixed.
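
    As a rough sketch, a per-area breakdown of this kind could look like the following (the bug records and area names are hypothetical, purely illustrative):

        from collections import defaultdict

        # hypothetical bug records: (area, was_fixed)
        bugs = [
            ("checkout", True), ("checkout", True), ("checkout", False),
            ("reporting", False), ("reporting", False), ("search", True),
        ]

        found = defaultdict(int)
        fixed = defaultdict(int)
        for area, was_fixed in bugs:
            found[area] += 1
            if was_fixed:
                fixed[area] += 1

        # fix rate per area, to see where found bugs actually get fixed
        for area in sorted(found):
            print("%s: %.0f%% fix rate" % (area, 100.0 * fixed[area] / found[area]))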

    I don't believe that 100% is a realistic goal for this metric - perfection is not possible. And testers do not have control over what is fixed.

    I was hoping someone would have some real data where this has been measured elsewhere.

  7. As a neophyte engineering manager years ago, I needed to learn about this area. First I tracked the bug find rate of a group of 10 test engineers, and found that they reported bugs at a rate of 11 per workday. I was amazed that this was nearly constant, with a correlation coefficient of 0.99 over 2 months.

    The bug fix rate was about the same, but more variable; developers tended to fix the easy bugs first, and then the harder bugs slowed them down.

    With better planning for new functionality, we were able to find about 32 bugs/workday. The developers were also more focused, and could fix more per day, but the overall trend of fixing easy bugs first then harder ones slowed them down over time.

    The groups found and fixed over 20,000 bugs over the years, so that is one data point for this particular environment and set of tools.

    Another interesting thing was the likelihood that a bug report was a bug. Over many software releases, involving many different subsystems (drivers, compilers, a file system, utilities, applications, etc.), basically the developer agreed with the test engineer 78% .. 80% of the time that the issue was a bug because they changed either code or documentation to make the problem report go away. About 10% of the time, the report could fairly be called a duplicate of another problem. Another 10% of the time, there might not be enough info to resolve the issue. About 1% of the time an issue was identified that was escalated and we decided "NO FIX", which was documented in the release notes.

    This intro caused me to pay attention to this area over my career. Believing that "I can do better than this", I've tracked my own bug effectiveness rate, and was surprised that it is still in the 78% .. 80% area on many other projects and using other development tools.

    The one thing I will note is that it is much easier to find bugs in the user interface than in other areas.

    Another note: the past predicts the future... if an area has a lot (or only a few) bugs in one release, when the next release comes around, it will again have a lot (or just a few) bugs.
