Monday, May 31, 2010

SQS Test Manager Forum

SQS hosted a Test Manager Forum at Croke Park on May 20th.

The event featured a number of talks along with a demo of Microsoft's Visual Studio 2010. The Test Manager element, with its seamless SharePoint integration, is very impressive and I'm itching to get my hands on the software to try it out myself.

Fortunately, luck smiled on me and I won a book on the tool. So watch this space for follow-ups.

The day was great; the talks gave a foundation for additional discussion during the numerous coffee breaks, which was ideal. All too often, a conference is so crammed full of presentations that the attendees don't have the opportunity to chat and discuss.

Hopefully, the forum will run again next year! It was great to chat with fellow test managers and listen to their thoughts and opinions.

Thursday, May 13, 2010

Help - Bug Fix Rates

Bug fix rate for the Test Organization is defined as:

   (Number of Bugs Fixed that were Found by the Test Org / Number of Bugs Found by the Test Org) x 100

A higher rate implies greater alignment with development and its priorities, i.e. test is not testing features where bugs won't get fixed.
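As a quick sanity check, the metric can be computed directly. This is just a sketch; the function name and the numbers are illustrative, not from any real project:

```python
def bug_fix_rate(bugs_fixed, bugs_found):
    """Bug fix rate for the test organization, as a percentage.

    bugs_fixed: bugs found by the test org that were subsequently fixed
    bugs_found: total number of bugs found by the test org
    """
    if bugs_found == 0:
        return 0.0  # no bugs found yet, so no meaningful rate
    return (bugs_fixed / bugs_found) * 100

# Illustrative example: 140 of 200 bugs found by Test were fixed
print(bug_fix_rate(140, 200))  # 70.0
```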

However, what's a realistic bug fix rate goal?

   70%? i.e. 70% of bugs found by Test are fixed.

Does anyone know of any numbers out there I can compare against?

Thank you!

Monday, May 10, 2010

Eliminate Waste – Key to Effective Testing

"Eliminate Waste" is the fundamental principle of Lean.

Waste is defined as anything that does not create value for a customer.

It's essential to learn to identify waste if you are to eliminate it.

If there is a way to do without it, it is waste!

The 7 wastes of software development are:

  1. Partially Done Work
  2. Extra Processes
  3. Extra Features
  4. Task Switching
  5. Waiting
  6. Motion
  7. Defects

My translation to testing:

  1. Partially Done Work
  2. Extra Processes
  3. Unneeded Test Infrastructure
  4. Task Switching
  5. Waiting
  6. Motion
  7. Passing Tests

Unneeded test infrastructure encompasses extra test features/tools that are nice to have but not actually used in the test effort (the testing equivalent of extra features). As in software development, it is best not to commit to extra test infrastructure until it is actually needed.

Passing tests do not add value to testing when the main objective of testing is to find defects. Passing tests do not find defects.

Eliminating waste increases the time available for activities that do provide value and allows testing to be as effective as it can be.

Thursday, May 6, 2010

Am I Creating Value With My Testing?

Jonathan Kohl wrote a great article for Star Tester called "Am I Creating Value With My Testing?".

He makes a great point. As testers we can easily get consumed with the techniques, the status reports, the analysis, test process improvement, maturity models, open source tools, etc, etc, etc. But we need to regularly take our heads out of the sand and ask "Am I Creating Value with My Testing?".

Test provides a service - and while we can be extremely busy working, we MUST take the time to check that the work we are so busy doing does in fact provide value. Otherwise, what's the point?

Wednesday, May 5, 2010

The Method Behind My Testing Madness

If you had to describe how you find bugs, would you be able to clearly and succinctly answer? I'm not sure I could.

For a new software product, new to me or brand new to the market, one of the first things I will do is sit down with the documentation and a highlighter pen. Using the highlighter, I will mark the claims made in the documentation. Not the bits where it tells you how to do this or that, or how it's the best thing since sliced bread. I'll just mark the text that claims the product can achieve something. These claims then become the first things I will test in the software.

These claims are the main drivers as to why someone will part with money to purchase this software product, and above all these features must work. Not only must they work, but they should be designed in a manner that allows a novice user, me in this case, to easily figure out how to use the software without hours or even minutes of studying the manual.

So, when first using a new piece of software, you have an opportunity to truly affect the quality of the user experience. It is your first experience of the software, and you can make suggestions about how the tool's ease of use can be improved. Developers will appreciate this input: by the time they use the software themselves, they know it inside and out and work around usability issues without even realizing it.

These claims garnered from the documentation will feed my testing charters for exploratory testing sessions. I will allow myself to divert from the charter when I wonder what will happen if I wander over into another area, or double-click on an icon the documentation hasn't directed me to.

Each claim will be a separate testing charter and will kick-start my exploratory testing of the software.
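The claims-to-charters bookkeeping above can be sketched very simply. Everything here is invented for illustration (the claim texts, the field names, the status values); it just shows one claim becoming one charter:

```python
# A minimal sketch of turning highlighted documentation claims into
# exploratory test charters. All names and values are made up.

def charters_from_claims(claims):
    """Create one exploratory testing charter per documentation claim."""
    return [
        {
            "charter": "Explore: " + claim,
            "status": "not started",   # not started / in progress / done
            "bugs_found": [],          # bug report IDs filed during the session
        }
        for claim in claims
    ]

# Illustrative claims a product manual might make
claims = [
    "imports CSV files up to 1 GB",
    "syncs project data to SharePoint",
]
for charter in charters_from_claims(claims):
    print(charter["charter"])
```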

You cannot overestimate the power of exploratory testing. It feeds your knowledge of the software and focuses your mind down the path of most effective destruction. Your goal is to break the software in as many different and interesting ways as possible. Success lies in writing up each bug report to be:
  • Clear
  • Concise
  • As much root cause analysis as possible/required
  • Steps to reproduce
  • Why you consider it a defect
  • Or, why you consider it a worthy enhancement
Remember, developers will judge you on your bug reports. Making their lives as easy as possible when triaging and debugging a failure will put you in their good books. This means you will get more of your bugs fixed than the next person. When it comes down to it, what's the point in all the time and effort spent testing the software and finding defects if they don't get fixed?
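The bug report checklist above can be captured as a simple template. This is only one possible layout, and the field names and the example report are invented:

```python
# Illustrative bug report template covering the checklist fields:
# a clear title, steps to reproduce, root cause analysis, and the
# rationale for treating it as a defect (or enhancement).
BUG_REPORT_TEMPLATE = """\
Title: {title}
Steps to reproduce:
{steps}
Root cause analysis (as far as known): {analysis}
Why this is a defect (or a worthy enhancement): {rationale}
"""

def format_bug_report(title, steps, analysis, rationale):
    """Render a clear, concise bug report from the checklist fields."""
    numbered = "\n".join(
        "  %d. %s" % (i, step) for i, step in enumerate(steps, 1)
    )
    return BUG_REPORT_TEMPLATE.format(
        title=title, steps=numbered, analysis=analysis, rationale=rationale
    )

print(format_bug_report(
    "Crash on rapid double-click of export icon",
    ["Open any project", "Double-click the export icon twice quickly"],
    "Event handler appears to be re-entered before the first call completes",
    "Unsaved project data is lost when the application crashes",
))
```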

Exploratory testing is intellectually and creatively taxing, so when I start to lag, I'll move my attention to easier defect-finding practices. For me, these include:
  • Negative testing – does the software give appropriate and helpful error messages?
  • Load testing – what happens when I load a very large file into the software?
  • Confusion testing – can I confuse the software? For example, double-clicking in numerous different locations in quick succession.
  • Comparative testing – how does the software compare against other tools in the same tool suite? Do they have the same look and feel? What are the differences?
  • OS testing – if the software is supported on different operating systems, does it behave in the same way irrespective of which OS it is executing on? Does it have a similar look and feel across OSs?
  • Competitor testing – how does the software compare against rival products?
For me, to remain alert, I need to switch my focus regularly. Spending too long on one testing methodology will cause me to overlook defects and usability issues. By switching methodologies, I don't give my brain any opportunity to go into automaton mode. Testing is an intellectual and complex task. If your brain is asleep during it, you won't find the cool bugs!

Finally, the most important question: does the software provide the functionality that the customer requires to complete their work?

Remember, quality is not just about lack of defects, it's also about providing the functionality that the customer needs.