Monday, May 31, 2010
SQS Test Manager Forum
The event hosted a number of talks along with a demo of Microsoft's Visual Studio 2010. The Test Manager element, with its seamless integration with SharePoint, is very impressive, and I'm itching to get my hands on the software to try it out for myself.
Fortunately, luck smiled on me and I won a book on the tool. So watch this space for follow ups.
The day was great; the talks gave a foundation for additional discussion during the numerous coffee breaks, which was ideal. All too often, a conference is so crammed full of presentations that the attendees don't have the opportunity to chat and discuss.
Hopefully, the forum will run again next year! It was great to chat with fellow test managers and listen to their thoughts and opinions.
Thursday, May 13, 2010
Help - Bug Fix Rates
I'm trying to work out a sensible target for our bug fix rate, which I calculate as:
(Number of Bugs Fixed that were Found by the Test Org / Number of Bugs Found by the Test Org) x 100
The higher the rate, the greater the inferred alignment with development and its priorities, i.e. test is not spending time testing features where bugs won't get fixed.
However, what's a realistic bug fix rate goal?
70%? i.e. 70% of bugs found by Test are fixed.
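To make the arithmetic concrete, here is a minimal sketch in Python; the function name and the numbers are made up purely for illustration:

    def bug_fix_rate(bugs_fixed, bugs_found):
        # Percentage of the bugs found by the test org that were subsequently fixed.
        return (bugs_fixed / float(bugs_found)) * 100

    # Illustrative numbers only: Test found 200 bugs and 140 of them were fixed.
    print(bug_fix_rate(140, 200))  # 70.0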
Does anyone know of any numbers out there I can compare against?
Thank you!
Monday, May 10, 2010
Eliminate Waste – Key to Effective Testing
"Eliminate Waste" is the fundamental principle of LEAN.
Waste is defined as anything that does not create value for a customer.
It's essential to learn to identify waste if you are to eliminate waste.
If there is a way to do without it, it is waste!
The 7 wastes of software development are:
- Partially Done Work
- Extra Processes
- Extra Features
- Task Switching
- Waiting
- Motion
- Defects
My translation to testing:
- Partially Done Work
- Extra Processes
- Unneeded Test Infrastructure
- Task Switching
- Waiting
- Motion
- Passing Tests
Unneeded test infrastructure encompasses extra test features/tools that are nice to have but are not actually used in the test effort, the testing equivalent of the extra features waste above. As in software development, it is best not to commit to extra test infrastructure until it is actually needed.
Passing tests do not add value to testing when the main objective of testing is to find defects. Passing tests do not find defects.
Eliminating waste increases the time available for activities that do provide value and allows testing to be as effective as it can be.
Thursday, May 6, 2010
Am I Creating Value With My Testing?
I recently read an article called "Am I Creating Value With My Testing?":
http://qualtech.newsweaver.ie/startester/bjvul98tll6-a0tqjjw4f4
The author makes a great point. As testers we can easily get consumed with the techniques, the status reports, the analysis, test process improvement, maturity models, open source tools, etc, etc, etc. But we need to regularly take our heads out of the sand and ask "Am I Creating Value with My Testing?".
Test provides a service, and while we can be extremely busy working, we MUST take the time to check that the work we are so busy doing does in fact provide value. Otherwise, what's the point?
Wednesday, May 5, 2010
The Method Behind My Testing Madness
If you had to describe how you find bugs, would you be able to clearly and succinctly answer? I'm not sure I could.
For a new software product, new to me or brand new to the market, one of the first things I will do is sit down with the documentation and a highlighter pen. Using the highlighter pen, I will mark the claims made in the documentation. Not the bits where it tells you how to do this or how to do that, or how it's the best thing since sliced bread. I'll just mark the text that claims the product can achieve something. These claims then become the first things I will test in the software.
These claims are the main drivers as to why someone will part with money to purchase this software product, and above all these features must work. Not only must they work, but they should be designed in a manner that allows a novice user, me in this case, to easily figure out how to use the software without hours or even minutes of studying the manual.
So, when first using a new piece of software, you have an opportunity to truly affect the quality of the user experience. It is your first experience of the software, and you can make suggestions about how the ease of use of the tool can be improved. Developers will appreciate this input; by the time they themselves use the software, they know it inside and out and work around usability issues without even realizing it.
The testing of these claims, garnered from the documentation, will feed my test charters for exploratory testing sessions. I will allow myself to divert from a charter when I wonder what will happen if I move off into another area, or double-click on an icon when the documentation hasn't directed me to do so.
Each claim will be a separate testing charter and will kick-start my exploratory testing of the software.
You should not underestimate the power of exploratory testing. It feeds your knowledge of the software and focuses your mind down the path of most effective destruction. Your goal is to break the software in as many different and interesting ways as possible. Success lies in each bug report being:
- Clear
- Concise
- Backed by as much root cause analysis as possible/required
- Complete with steps to reproduce
- Explicit about why you consider it a defect
- Or why you consider it a worthy enhancement
Exploratory testing is intellectually and creatively taxing, so when I start to lag I'll move my attention to easier defect-finding practices. For me, these include:
- Negative testing – does the software give appropriate and helpful error messages? (A small sketch of this idea follows after this list.)
- Load testing – what happens when I load a very large file into the software?
- Confusion testing – can I confuse the software? For example, double-clicking in numerous different locations in quick succession.
- Comparative testing – comparing against other software tools in the same suite. Do they have the same look and feel? What are the differences?
- OS testing – if the software is supported on multiple operating systems, does it behave in the same way irrespective of which OS it is running on? Does it have a similar look and feel across the different OSs?
- Competitor testing – how does the software compare against rival products?
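As a concrete illustration of the negative-testing idea above, here is a minimal sketch in Python. The load_settings function and its error message are entirely hypothetical stand-ins for whatever the product under test does with bad input; the point is simply that invalid input should produce a clear, helpful error rather than a crash or a cryptic code.

    # Hypothetical function under test: it should reject a negative timeout with a helpful message.
    def load_settings(timeout_seconds):
        if timeout_seconds < 0:
            raise ValueError("timeout_seconds must be zero or greater, got %d" % timeout_seconds)
        return {"timeout_seconds": timeout_seconds}

    def test_negative_timeout_gives_helpful_error():
        try:
            load_settings(-5)
        except ValueError as error:
            # A helpful message names the offending setting and the value supplied.
            assert "timeout_seconds" in str(error)
            assert "-5" in str(error)
        else:
            assert False, "invalid input was silently accepted"

    test_negative_timeout_gives_helpful_error()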
Finally, the most important question: does the software provide the functionality that the customer requires to complete their work?
Remember, quality is not just about lack of defects, it's also about providing the functionality that the customer needs.