Monday, November 22, 2010

My TPI Advice

Can be summed up as follows:

  • Research all models.
  • Use models as an "a la carte" menu.
  • Create a model that suits your context.
  • Continuously improve the process.
  • Don't try to do everything at once.
  • Establish a cadence of continuous improvement.
  • Lean principles break down barriers, & thus work best when a senior leader champions them.

And don't forget:

  • Measure where you are before you start.
  • Decide where to go.
  • Communicate at all times.
  • People make it all work!

Tuesday, October 26, 2010

My Task Management System

This is an ode to Outlook 2007 and David Allen's "Getting Things Done". Without either, my life would be an unstructured, disorganized mess!

I use Outlook 2007 for absolutely everything, including:

  • When to nag
  • When to test
  • What to test
  • Action Items from meetings
  • Books to Read
  • Blogs to Post
  • When to follow up with someone
  • Status Reports
  • Time Cards
  • Mgmt Stuff
  • Emails to reply to
  • Reminder to update my goals
  • Etc.

Pretty much everything in my work life goes into Outlook. I know you're probably wondering about my actual outside-work life? Yeah, that's a different story. Put it this way: I had to run down to the shop last night because I forgot to buy food. Boyfriend and dog were not impressed, especially the boyfriend, since I only returned with the aforementioned dog food. Oops!

For testing and management, I find that having a reliable and trustworthy task management system is essential so that I can prioritize, keep track of where projects are, and not "forget" something I was supposed to do. It protects my credibility and helps me professionally.

So, if I need to test a feature, what do I do? Well, first, I'll look up when the feature is scheduled to be available to test and create a task to represent this. I'll also create a separate task to contact the developer for background information, and another for adding automated test cases to our regression test suite. So, I'll have something akin to the following in the Outlook Task Manager:

  1. Seek information on software feature by X date
  2. Exploratory test software feature by X date
  3. Thoroughly test software feature by X date
  4. Feed results back to developer and stakeholders by X date
  5. Add test cases to automation test suite by X date

But what happens if, when I first start testing, I find a bug so severe that further tests are blocked? I find out when it'll be fixed and update the due dates. That way I can forget about testing the feature until the developer pings me or my task pops up to remind me. I've found this extremely useful when the developer has not fixed the issue: the task pops up and prompts me to follow up.

I also use Outlook to block-book testing time. I don't believe in multi-tasking when I'm testing. If I'm planning to exploratory test, then I need to be focused on the task at hand. There are too many distractions in day-to-day office life, so I schedule an X-hour meeting in my calendar whose sole purpose is to test. I find this an extremely effective way to test and to uncover bugs.

It doesn't matter whether you use Outlook or some other tool, but use something. The brain cannot be trusted. I can testify to that: my brain waits until I'm driving home to remind me to add notes to a bug report. Not exactly the best time. Why didn't it remind me 5 minutes before I left the office? Because you can't trust it!

Key take home message: Don't trust your brain! Have a system!

Monday, October 18, 2010

General Systems Thinking

I've always had a knack for taking what I learn in one subject area, generalizing it, and applying it to another. I never really thought about it, and it came in extremely useful when I embarked on a testing career: I was easily able to take the expertise I'd developed in one piece of software under test and apply it whenever I moved on to another project with new software to come up to speed on. I thought everyone did this and it was just how people think and learn.

Then James Bach introduced me to "An Introduction to General Systems Thinking" by Gerald M. Weinberg. Reading this book was a revelation. Suddenly I was able to put a name to what I had been doing (albeit with limited capability).

At the core of General Systems Thinking is the modeling of systems. It suggests that everything around us in the world, and everything that we learn, can be represented as a model. Strip out the details that are unique to one system and you have a model that will help you get up to speed quickly in another area. Gerry Weinberg does a far better job of explaining this!

I believe it's a vital element of exploratory testing. As you learn, you are building up a model of the software under test, and from that model you will take various routes through your exploratory testing session. You even build up models of testing over time; which model (or heuristic) you choose will depend on the exploratory testing session.

From a testing perspective, if you haven't read this book, I would highly recommend it. It will challenge you to improve how you approach a problem and how you test.

If you have any examples of how creating and applying a model in your testing has been effective, I'd love to hear about it.

Wednesday, October 13, 2010

Management Versus Leadership

I'm very excited by the movement away from traditional management, the "because I told you so" school, and towards a more inclusive leadership style, the "let's look at this together and understand why it's needed" school.

Management is concerned about achieving results.

Leadership is the process of developing and communicating a vision for the future, motivating people and gaining their commitment and engagement.

Managers have a defined role in an organization. It is formal and has authority by virtue of the position.

Leaders, by contrast, exert an informal influence: they are role models who have no positional authority but are respected by their peers and thereby have "followers".

A manager can be a leader but a leader does not need to be a manager.

Employing leadership techniques in your management career will aid you in building a team that you can trust and that executes on what matters.

Issues are resolved faster and stay resolved longer when those affected play an active part in reaching the solution.

According to Terry Gillen in "Leadership Skills for Boosting Performance", the attributes of leadership are:

  • Vision
    • Managers maintain things, leaders change things.
    • Leaders have to be able to see in their mind's eye how they want things to be.
  • Integrity
    • Consistency, openness, honesty and respect for people are essential if you want them to follow you.
  • Determination
    • Leaders encounter more obstacles changing things than managers encounter maintaining things, but they do not give up.
    • They are so determined they persevere.
  • Believability
    • Integrity and determination combine with interpersonal skills to add to a leader's believability.
    • Another ingredient is enthusiasm. If you are not enthusiastic about your vision and your people, no one will believe in you.
  • Technical and Managerial Competence
    • You need to be good enough to avoid making a fool of yourself, without losing too much time doing things and managing processes.
    • You need to spend a good chunk of your time taking action to support your vision and values.
  • Interpersonal Skills
    • You will have to build rapport with people, influence them and sometimes be assertive with them.
  • People-Oriented
    • Effective leaders enjoy being with people, working with them and developing them.
  • Positive Thinking
    • Leaders encounter obstacles. Positive thinking is the starting point of the determination that carries leaders through, over and around them.
    • Attitudes are contagious.
  • Walking the Talk
    • Walking the talk is an important part of integrity.
    • People can see you mean what you say and that you apply it to yourself.

So what does this have to do with testing? Everything!

Whether you are a manager or not, there is a great opportunity to become a leader in your team and in your organization.

Do at least one thing each day to give someone a feeling of uplift and confidence. Support those around you.

More importantly, being a leader means being engaged in the work, enjoying the work, and making a positive contribution. That's a much nicer place to be than just doing what you're told and waiting for the clock to strike home-time!


Wednesday, October 6, 2010

LEAN: Simple Rules

The LEAN simple rules:

  1. Spend time only on what adds real customer value.
  2. When you have tough problems, increase feedback.
  3. Keep your options open as long as practical, but no longer.
  4. Deliver value to customers as soon as they ask for it.
  5. Let the people who add value use their full potential.
  6. Don't try to tack on quality after the fact – build it in.
  7. Beware of the temptation to optimize parts at the expense of the whole.

I love these rules. They apply to testing just as much as they apply to software development.

Monday, September 27, 2010

Testers Should Drive Software Testability


Software testability is a key enabler of the detection of difficult-to-uncover defects in software. It supports the testing process and facilitates the creation of better quality software.


Software testability can be described as the probability that a piece of software will fail on its next execution during testing if the software includes a fault.


Testing and testability are complementary: testing can reveal faults (testability cannot), while testability can suggest locations where faults may hide from testing (something testing cannot do alone).


Software testability must be designed into the software as it is developed. Therefore, it is an attribute of the software that requires close development cooperation with test.


Designing for testability requires that software is designed with a greater ability to fail when faults do exist.

James Bach proposes a set of Heuristics of software testability:

  • Controllability
    The better we can control it, the more the testing can be automated and optimized. 

  • Visibility
    What you see is what can be tested.

  • Availability
    To test it, we have to get at it.

  • Simplicity
    The simpler it is, the less there is to test.

  • Stability
    The fewer the changes, the fewer the disruptions to testing.

  • Information
    The more information we have, the smarter we will test.

A given piece of software will or will not hide a given fault from testing. Its testability determines whether that fault is easily detectable by testing or not.


The more controllable the software, the more we can automate. The more we can automate, the less likely that human error will allow a defect to escape into the customer's hands.


Software testability is extremely valuable where functionality cannot be tested using a black box methodology. Software testability is tightly aligned with white box testing. Software testability must be designed into the software, so tester knowledge of any incorporated testability is required.


Including and improving the testability of algorithms in a software product allows the testing organization to add automated, optimized white box tests to the automated test suite, tests which will immediately fail and highlight a newly introduced bug.

For example, a particular algorithm, when triggered in debug mode, could print a specific message to STDOUT communicating that it is executing. If automation does not detect that STDOUT message, the test fails, highlighting that the algorithm is no longer executing.
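
To make that concrete, here is a minimal Perl sketch of such an automated check. The tool name, the --debug flag, and the marker text are assumptions made purely for illustration; the real hook would be whatever debug output your developers agree to provide.

  #!/usr/bin/perl
  # Minimal sketch of an automated testability check (tool name, flag and marker are hypothetical).
  use strict;
  use warnings;

  my $marker = 'ALGORITHM_X: executing';                    # debug message the algorithm prints
  my $output = `./our_tool --debug input_design.dat 2>&1`;  # run the build in debug mode, capture STDOUT/STDERR

  if ($output =~ /\Q$marker\E/) {
      print "PASS: algorithm X is still being executed\n";
  } else {
      print "FAIL: debug marker not found - algorithm X may no longer be running\n";
      exit 1;                                               # non-zero exit flags the regression to the test harness
  }

Run against every build, a check like this turns a silent regression into an immediate, visible failure.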


In black box and customer focused software testing, only those aspects of the software that are observable can be tested by the testing organization. The inclusion of debug information on specific software features will increase the ability and ease with which to test those software features which are not observable via black box techniques.


The simple inclusion of debug information will facilitate automated tests being developed to check whether an algorithm is executing or not. These automated tests will execute on each and every build, so if a regression is introduced it is detected immediately, a bug is filed, and development is made aware of the bug sooner rather than later.


The automation of these types of tests will also free up manual testing time so that a tester's time is spent on testing that cannot be carried out in an automated fashion, thus increasing test coverage of the software product.


Setting these tests up in automation also excludes the possibility of a human error causing a bug to go undetected.


Software is said to have high testability if it tends to expose faults during exploratory black box testing, producing failures for most of the inputs that execute a fault. Defects can be identified and fixed quickly.


Software has low testability if it tends to protect faults from detection during exploratory black box testing, producing correct output for most inputs that execute a fault. These defects are extremely worrisome: customers using the software believe that everything is going as expected until, down the line, the defect is suddenly exposed. This category of error can adversely affect the customer's business and thereby undermines the software's credibility and trustworthiness.


The goal of increasing the testability of software is not just to detect defects but, more importantly, to detect defects as soon as they are introduced, thus reducing the cost and time to fix each bug and producing higher quality software with each build of the release.


Reference: www.satisfice.com

Wednesday, September 22, 2010

Brain Rules

The best book I've read this year is "Brain Rules: 12 Principles for Surviving and Thriving at Work, Home, and School" by John Medina.

It is very enjoyable to read, not one of those books where you bribe yourself with a cup of tea and a biscuit if you just finish the chapter you're currently reading. It's easy to get engrossed in the text and not realize that hours have gone by.

The 12 principles are:

  1. Exercise
  2. Survival
  3. Wiring
  4. Attention
  5. Short Term Memory
  6. Long Term Memory
  7. Sleep
  8. Stress
  9. Sensory Integration
  10. Vision
  11. Gender
  12. Exploration

I'm not giving any more away. A lot of material is available for free at www.brainrules.net. However, you'll get a lot more out of it if you read the book too.

So the question remains: What does this have to do with testing?

Well, testing is an intellectually demanding task where concentration, focus, and alertness are essential if you are to be effective. Based on what I learnt in this book, I made some changes.

For example, I am a lark! I am at my most productive in the morning. Why fight that? So I get into work before 8am every morning to absolute, joyous silence (apparently larks are rare in engineering and testing – who knew?) and I get a good 3 or 4 hours of productive work done. If I test during this period, I am far more likely to uncover high priority bugs. To allow for this uninterrupted time, I've rescheduled all my manager-type meetings to the afternoon.

Another example is how I've integrated little walks into my day, even if it's just going all the way to the canteen (a good 10 minute return journey) to get coffee instead of making it on my floor. These little walks recharge my brain and boost my brain power. I come back to my desk and easily refocus and apply myself to my work.

Etc, etc, etc.

Read the book – it really will open your eyes to the physiology and evolutionary biology behind how we think and work.

Thursday, July 15, 2010

Books on Testing - Recommendations?

I've read the following books dedicated to the subject of testing:
  • Testing Computer Software (Kaner, Falk, Nguyen)
  • Lessons Learned in Software Testing (Kaner, Bach, Pettichord)
  • Agile Testing: A Practical Guide for Testers and Agile Teams (Crispin, Gregory)
  • Exploratory Software Testing: Tips, Tricks, Tours, and Techniques to Guide Test Design (Whittaker)
  • Foundations of Software Testing (Graham, Van Veenendaal, Evans, Black)
  • Software Testing (BCS)
  • Automated Software Testing (Dustin, Rashka, Paul)
  • Effective Software Testing: 50 Ways to Improve Your Testing (Dustin)
  • How to Break Software: A Practical Guide to Testing (Whittaker)
  • How to Break Web Software: Functional and Security Testing of Web Applications and Web Services (Andrews, Whittaker)
  • The Art of Software Testing (Myers)
  • Managing the Test People (McKay)
  • Managing the Test Process (Black)
  • TMAP Next for Result-Driven Testing (Koomen, Aalst, Broekman, Vroon)
  • TPI NEXT, Business Driven Test Process Improvement (Sogeti)

Obviously, if I search for "software testing" on Amazon, quite a number of books are returned.

Any recommendations?

Friday, July 9, 2010

Why Invest In Test Process Improvement?

Test Process Improvement is the continuous improvement of the quality and the efficiency of the testing process, in the context of the whole software development life cycle.

But test process improvement is not testing. Why should you invest in it? Aren't you wasting time that you could be using to find bugs? Here's what it can deliver:

  • Reduce overhead
  • Increase test efficiency & effectiveness
  • Allow test to embrace change (key to agile testing)
  • Focus is on delivering results
  • Improve Test's influence in order to deliver better quality
  • Participating in a TPI program motivates & empowers test engineers

My organization invested in continuous TPI.

One major outcome was an increase in the efficiency of the automation execution and analysis process. In many projects, the cycle decreased from a 5-day duration to 1 day! This has freed up time for more manual and customer-like testing.

So yes, time was invested in TPI, but at the end of the pipeline, we now have more time to invest in more effective testing.

TPI is a cost but spending that cost in the right area will produce a return. The key is to pick the right test area to improve!

Monday, July 5, 2010

Rapid Software Testing, Dublin, September 2010 - places limited!

Rapid Software Testing is a three-day, hands-on class that teaches testing as a sophisticated thinking art.

The philosophy presented in this class is not like traditional approaches to testing, which ignore the thinking part of testing and instead advocate never-ending paperwork. Products have become too complex for that, time is too short, and testers are too expensive. Rapid testing uses a cyclic approach and heuristic methods to constantly re-optimize testing to fit the needs of your clients.

The Rapid approach isn't just testing with a speed or sense of urgency; it's mission-focused testing that eliminates unnecessary work, assures that everything necessary gets done, and constantly asks what testing can do to speed the project as a whole. This class presents an approach to testing that begins with developing personal skills and extends to the ultimate mission of software testing: lighting the way of the project by evaluating the product. One important tool of rapid testing is the discipline of exploratory testing, essentially a testing martial art. Exploratory testing combines test design, test execution, test result interpretation, and learning into a seamless process that finds a lot of problems quickly.

If you are an experienced tester, you'll find out how to articulate those intellectual processes of testing that you already practice intuitively. If you're a new tester, hands-on testing exercises help you gain critical experience.
 
MICHAEL BOLTON
Michael Bolton is an established and recognised leading-edge expert in the area of Software Testing. Michael has extensive experience in delivering workshops, tutorials, and conference presentations on the topic of Rapid Software Testing. He has over 20 years of experience in the computer industry testing, developing, managing, and writing about software.

Contact susan@isa-skillnet.com to book your place

Monday, June 28, 2010

Don’t Forget The Retrospective!

A retrospective is a valuable tool in any organization which values and expects continuous improvement.


A retrospective gives the opportunity to:

• look back and assess;

• consider;

• take everything into account;

• identify lessons learned;

• acknowledge success;

• try to set a better course for the upcoming project, or the continuation of the current project.

A retrospective is a chance for the team to improve what they are doing and how they feel about what they are doing. By not holding regular retrospectives, teams are missing out on a valuable opportunity to grow and succeed.

Retrospectives enable whole-team learning, act as catalysts for change, and generate action.

Retrospectives empower team members and allow each and every member of a team to contribute and feel heard. Team members can see how they actively contribute to the increased success of the team.

Retrospectives focus on real problems that affect teams. During retrospectives, teams discover real solutions that they can implement immediately. Note that it is the team who discovers the real solutions. The solutions are not dictated to them from outside the team. This is where empowerment kicks in!

Organization is the key to an effective retrospective. Everyone must be made aware of the manner in which the retrospective will operate and what is expected of each and every participant.

Here are some suggested rules of engagement:

• We will try not to interrupt. (If the meeting is held over multiple sites, please verbally indicate when you are finished speaking – this will aid everyone, as sometimes we think the other person has finished speaking when they have not because we don't have access to the visual cues.)

• We will accept everyone’s opinion without judgement.

• A reason should accompany each opinion.

• We will accept everyone’s supporting reason without judgement.

• We will talk from our own perspective. We do not try to imagine the perspective of others who may not be in attendance.

Set out the main objective of the meeting, for example, “The main objective of the retrospective is to seek out and understand the perspective and feedback from each member of the team. We will collate all feedback into positive and negative lists.”

Every opinion and perspective is valued!

The collated feedback from a retrospective can then be used to feed process improvement through the next stages of the project or into the next project.

Don’t forget: a retrospective is an opportunity! Don’t miss it!

Tuesday, June 22, 2010

Freakonomics

Freakonomics is a book by Steven D. Levitt & Stephen J. Dubner.

It has nothing to do with testing.

However, it’s a great read (very easy to pick up, read a chapter, get on with your life, pick up, read another chapter, and so on).

Plus, it makes you think! I love this, I love that feeling you get in your head when your pre-existing ideas and concepts are intellectually challenged and suddenly you can see in your mind all these doors opening, each leading to new ideas and new challenging concepts.

Via real world examples, it illustrates the difference between causation and correlation.

This does have an impact on testing, albeit subtle and indirect.

When I root cause analyze a bug I’ve found, I need to be able to differentiate between what actually caused the bug and what just happens to exist alongside the bug.

Quick example: drowning and ice cream sales are positively correlated, but an increase in ice cream sales does not cause an increase in the number of people who drown. Both simply rise in hot weather, the common cause.

I believe that critical thinking skills are essential if you are to become a really good tester – this book helps to develop those skills. And it’s actually pleasant to read!

Wednesday, June 16, 2010

New To Testing?

Recently, two new engineers joined my team. They were straight out of university and had no formal testing experience (other than an hour of making sure the code does what the professor asked – not really testing).

This afforded me a great opportunity to stop and think about how best to develop people new to testing into truly stellar testers. So what was my training plan?

  1. Read Testing Computer Software by Cem Kaner, Jack Falk, and Hung Q. Nguyen.
  2. In parallel, run through the software under test tutorials which we release to our customers.
  3. In parallel, learn PERL (our scripting language of preference), coding specific scripts to solve a detailed specification. The scripts would invariably support testing (see the sketch after this list).
  4. In parallel, study ISTQB Foundation and pass foundation exam. Note1
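
To give a flavour of those scripting exercises, here is a minimal Perl sketch in the same spirit. The specification (summarise the ERROR lines in a tool log) and the log format are assumptions for illustration only, not one of our actual specs.

  #!/usr/bin/perl
  # Sketch of a typical beginner exercise: report the ERROR lines in a tool log (format assumed).
  use strict;
  use warnings;

  my $logfile = shift @ARGV or die "Usage: $0 <logfile>\n";
  open my $fh, '<', $logfile or die "Cannot open $logfile: $!\n";

  my @errors;
  while (my $line = <$fh>) {
      push @errors, $line if $line =~ /\bERROR\b/;    # collect every line flagged as an error
  }
  close $fh;

  print scalar(@errors), " error line(s) found in $logfile\n";
  print @errors;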

This was their first month.

Then they had to use what they learnt.

  1. Participate in paired testing sessions with more experienced testers (mentors).
  2. Report any bugs they uncovered while being guided by their test mentor, thereby learning how to write effective bug reports.

Then, it was time to walk on their own.

  1. Participate in test bashes where they are given a small, bounded area of the software that is well described and has low complexity, with support from peers who have high expertise in the software under test.
  2. File bugs where appropriate.
  3. Create summary reports for development and stakeholders on test coverage, test results and quality opinion.

Within 2 months I was receiving feedback from testers and developers at other sites around the globe praising the contribution of the new testers. Now, 6 months on? I have two great testers who I trust. They find good bugs, high priority bugs. They are great at critically analyzing the software under test and I continue to get regular feedback praising them.

At the time, it was quite a turnaround to invest so much time and energy in their development. Previously, most people joined and started bashing away at the tool within hours of setting foot on company soil.

The investment has paid off and we continue to prioritize their learning in parallel to their project work today. These two engineers have a very bright future in testing and will easily surpass me and their more "experienced" peers in a very short time! Success!

How did you learn to test? How do you train testers in your organization? Please let me know, I'd love to have some external input.

Note1: The ISTQB Foundation Certification by no means equals testing proficiency. Frankly, in my opinion, the foundation exam is more about terminology than anything else. However, it does give a sense of achievement and is something to work towards. Also, I found that for engineers who are completely new to test, it was an effective discussion trigger. They would frequently have discussions between themselves regarding the topics covered in ISTQB, as well as asking me questions regarding the actual practicality of testing versus the theory. From that aspect, I found it very valuable. The cost? Book + Exam Fee – not bad, and the engineers felt a sense of achievement and career advancement.

Tuesday, June 15, 2010

Michael Bolton's Rapid Software Testing Course, Dublin September 13th

Rapid Software Testing 
  • Date: Monday 13th to Wednesday 15th September 2010
  • Venue: Xilinx, Citywest Business Park
  • In association with Testing Times and Xilinx Ireland
  • Duration: 3 day course (9.00am to 5.30pm)
  • Cost to non-members: €1,700 per person
  • Cost to Software Skillnet Members* after Grant aid: €770 per person
*Membership to Skillnet is Free
The course cost is all-inclusive for the 3 day course and covers all materials and refreshments   
This event is being grant aided by the Software Skillnet
Contact susan@isa-skillnet.com to book your place
COURSE DESCRIPTION 
Rapid Software Testing is a three-day, hands-on class that teaches testing as a sophisticated thinking art. The philosophy presented in this class is not like traditional approaches to testing, which ignore the thinking part of testing and instead advocate never-ending paperwork. Products have become too complex for that, time is too short, and testers are too expensive. Rapid testing uses a cyclic approach and heuristic methods to constantly re-optimize testing to fit the needs of your clients. 

The Rapid approach isn't just testing with a speed or sense of urgency; it's mission-focused testing that eliminates unnecessary work, assures that everything necessary gets done, and constantly asks what testing can do to speed the project as a whole. This class presents an approach to testing that begins with developing personal skills and extends to the ultimate mission of software testing: lighting the way of the project by evaluating the product.

One important tool of rapid testing is the discipline of exploratory testing, essentially a testing martial art. Exploratory testing combines test design, test execution, test result interpretation, and learning into a seamless process that finds a lot of problems quickly. If you are an experienced tester, you'll find out how to articulate those intellectual processes of testing that you already practice intuitively. If you're a new tester, hands-on testing exercises help you gain critical experience.



ABOUT MICHAEL BOLTON
Michael Bolton is an established and recognised leading-edge expert in the area of Software Testing. Michael has extensive experience in delivering workshops, tutorials, and conference presentations on the topic of Rapid Software Testing. He has over 20 years of experience in the computer industry testing, developing, managing, and writing about software.

Contact susan@isa-skillnet.com to book your place

http://www.isa-skillnet.com/Training_Courses/88

Software Skillnet
c/o Irish Software Association
Confederation House
84-86 Lr Baggot Street
Dublin 2
Phone: 086 8067200
Email: susan@isa-skillnet.com

Monday, June 14, 2010

Being an Effective Test Lead

The ultimate expectation of a test lead is to:

   "always do what most needs to be done without waiting to be asked"

A test lead position has both technical and managerial elements.

A test lead:

- is a domain expert of the software under test
- is a testing expert
- has testing experience to draw upon
- has project planning capabilities
- has excellent communication skills
- actively manages all stakeholders

However, the test lead also has responsibilities towards his/her team to:

- energize
- empower
- support
- communicate
- coach & mentor

The most important test lead function is to get his/her people excited and inspired - to energize them!

The second most important test lead function is to develop the skills and capabilities in his/her team so that they can get the job done efficiently and effectively!


A test lead is only ever as good as his/her team and it is the test lead's responsibility to get the most from the team.

A successful team = a successful team lead!

Test leads:

- get things done individually and through others
- use scarce resources to the best advantage
- cope with change and uncertainty
- achieve and deliver results individually and through others
- assess and proactively manage risks
- allocate testing resources where they are most effective and needed
- gather evidence to support quality judgments
- coach and mentor
- delegate

Delegation is a vital tool in a test lead's arsenal. A test lead cannot do everything themselves. They must trust their team and delegate tasks to them. Delegation is extremely effective when executed well.

Delegation Hints & Tips
 
For delegation to be effective, test leads must also give authority to their team members and ensure that those team members have the resources necessary to complete tasks effectively.

The seven steps to effective delegation are as follows:

1. Communicate the task.

2. Set the task in context.

3. Determine standards.

4. Grant authority.

5. Provide support.

6. Get commitment.

7. Keep in touch.

Visit your team members and check their progress on a regular basis.

Use a formal system to track assignments and due dates.

Motivation Hints & Tips

Motivated team members are vital to the success of the project and the team.

To motivate your team:

- Personally thank people for doing a good job. Do it timely, often, and sincerely.

- Take the time to meet with and listen to people - as much as they need or want.

- Provide people with specific and frequent feedback about their performance. Support them in improving performance.

- Recognize, reward, and support the promotion of high performers.

- Provide information on how their work affects the greater scheme of things in the company. Show them that they make a difference!

- Involve people in decisions, especially those decisions that affect them. Involvement equals commitment.

- Give people a chance to grow and develop new skills, encourage them to be their best.

- Provide people with a sense of ownership in their work and their work environment.

- Strive to create a team that is open, trusting, and fun. Encourage new ideas, suggestions, and initiative.

- Learn from rather than punish mistakes.

- Celebrate success!

Remember: For the most part, YOU determine how motivated and de-motivated your team is!

Thursday, June 3, 2010

Principles for Testers

Lisa Crispin & Janet Gregory propose the following ten principles for agile testers:

  1. Provide continuous feedback
  2. Deliver value to the customer
  3. Enable face-to-face communication
  4. Have courage
  5. Keep it simple
  6. Practice continuous improvement
  7. Respond to change
  8. Self-organize
  9. Focus on people
  10. Enjoy

I believe that these principles are relevant to all testers, not just agile testers.

Lisa & Janet define an agile tester as:

"a professional tester who embraces change, collaborates well with both technical and business people, and understands the concept of using tests to document requirements and drive development."

Again, this isn't unique to agile testing. In my mind, all the high expectations we have of agile testers should also be applied to any tester.

Why have lower standards for testers not involved in agile software development?

Reference: Agile Testing – A Practical Guide by Lisa Crispin & Janet Gregory

Monday, May 31, 2010

SQS Test Manager Forum

SQS hosted a Test Manager Forum at Croke Park on May 20th.

The event hosted a number of talks along with a demo of Microsoft's Visual Studio 2010. The Test Manager element, with its seamless integration with SharePoint, is very impressive and I'm itching to get my hands on the software to try it out myself.

Fortunately, luck smiled on me and I won a book on the tool. So watch this space for follow ups.

The day was great, the talks gave a foundation for additional discussion during the numerous coffee breaks which was ideal. All too often, a conference is so crammed full of presentations, that the attendees don't have the opportunity to chat and discuss.

Hopefully, the forum will run again next year! It was great to chat with fellow test managers and listen to their thoughts and opinions.

Thursday, May 13, 2010

Help - Bug Fix Rates

Bug fix rate for the Test Organization is defined as:

   (Number of Bugs Fixed that were Found by the Test Org / Number of Bugs Found by the Test Org) x 100

A higher rate implies greater alignment with development and its priorities, i.e. test is not testing features where bugs won't get fixed.

However, what's a realistic bug fix rate goal?

   70%? i.e. 70% of bugs found by Test are fixed.
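
To make the arithmetic concrete, here's a tiny Perl sketch of the formula with invented counts; the 120 and 84 are made up purely to illustrate what a 70% rate looks like.

  #!/usr/bin/perl
  # Worked example of the bug fix rate formula above (counts are invented).
  use strict;
  use warnings;

  my $bugs_found_by_test = 120;   # bugs found by the test org
  my $bugs_fixed         = 84;    # of those, the number fixed

  my $fix_rate = ($bugs_fixed / $bugs_found_by_test) * 100;
  printf "Bug fix rate: %.0f%%\n", $fix_rate;   # prints: Bug fix rate: 70%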

Does anyone know of any numbers out there I can compare against?

Thank you!

Monday, May 10, 2010

Eliminate Waste – Key to Effective Testing

"Eliminate Waste" is the fundamental principle of LEAN.

Waste is defined as anything that does not create value for a customer.

It's essential to learn to identify waste if you are to eliminate waste.

If there is a way to do without it, it is waste!

The 7 wastes of software development are:

  1. Partially Done Work
  2. Extra Processes
  3. Extra Features
  4. Task Switching
  5. Waiting
  6. Motion
  7. Defects

My translation to testing:

  1. Partially Done Work
  2. Extra Processes
  3. Unneeded Test Infrastructure
  4. Task Switching
  5. Waiting
  6. Motion
  7. Passing Tests

Unneeded test infrastructure encompasses extra test features and tools that are not utilized in the test effort but are nice to have (much like extra product features: nice to have, but not used). As in software development, it is best not to commit to extra test infrastructure until it is actually needed.

Passing tests do not add value to testing when the main objective of testing is to find defects. Passing tests do not find defects.

Eliminating waste increases the time available for activities that do provide value and allows testing to be as effective as it can be.

Thursday, May 6, 2010

Am I Creating Value With My Testing?

Jonathan Kohl wrote a great article for Star Tester:

http://qualtech.newsweaver.ie/startester/bjvul98tll6-a0tqjjw4f4

called "Am I Creating Value With My Testing?".

He makes a great point. As testers we can easily get consumed with the techniques, the status reports, the analysis, test process improvement, maturity models, open source tools, etc, etc, etc. But we need to, regularly, take our heads out of the sand and ask "Am I Creating Value with My Testing?".

Test provides a service - and while we can be extremely busy working, we MUST take the time to check that the work we are so busy doing does in fact provide value. Otherwise, what's the point?

Wednesday, May 5, 2010

The Method Behind My Testing Madness


If you had to describe how you find bugs, would you be able to clearly and succinctly answer? I'm not sure I could.
 

For a new software product, whether new to me or brand new to the market, one of the first things I will do is sit down with the documentation and a highlighter pen. I'll highlight the claims made in the documentation. Not the bits where it tells you how to do this or that, or how it's the best product out there since sliced bread; just the text that claims it can achieve something. These then become the first things I will test in the software.



These claims are the main drivers as to why someone will part with money to purchase this software product, and above all these features must work. Not only must they work, but they should be designed in a manner that allows a novice user, me in this case, to easily figure out how to use the software without hours or even minutes of studying the manual.



So, when first using a new piece of software, you have an opportunity to truly affect the quality of the user experience. It is your first user experience of the software, and you can make suggestions about how the ease of use of the tool can be improved. Developers will appreciate this input: by the time they themselves use the software, they know it inside and out and work around usability issues without even realizing.



The testing of these claims garnered from the documentation will feed my testing charters for exploratory testing sessions. I will allow myself to divert from the charter when I wonder what will happen if I move over into this other area, or double-click on an icon when the documentation hasn't directed me to do so.



Each claim will be a separate testing charter and will kick-start my exploratory testing of the software.



You should not underestimate the power of exploratory testing. It feeds your knowledge of the software and focuses your mind down the path of most effective destruction. Your goal is to break the software in as many different and interesting ways as possible. Success lies in writing up each bug report to be:
  • Clear
  • Concise
  • As much root cause analysis as possible/required
  • Steps to reproduce
  • Why you consider it a defect
  • Or, why you consider it a worthy enhancement
Remember, developers will judge you on your bug reports. Making their life as easy as possible when triaging and debugging a failure will put you in their good books. This means you will get more of your bugs fixed than the next person. When it comes down to it, what's the point in all the time and effort testing the software and finding defects if they don't get fixed?



Exploratory testing is intellectually and creatively taxing so when I start to lag, I'll move my attention to easier defect finding practices. For me, these include:
  • Negative testing – does the software give appropriate and helpful error messages?
  • Load testing – what happens when I load a very large file into the software?
  • Confusion testing – can I confuse the software? For example, double-clicking in numerous different locations in quick succession.
  • Comparative testing – comparing against other software tools in the tool suite. Do they have the same look and feel? What are the differences?
  • OS testing – if the software is supported on different software OS's, does the software behave in the same way irrespective of which OS it is executing on? Does the software have a similar look and feel across different OS's?
  • Competitor testing – how does the software compare against rival products?
For me, to remain alert, I need to switch my focus regularly. Too long using one testing methodology will cause me to overlook defects and usability issues. By switching methodologies I do not give my brain any opportunity to go into automaton mode. Testing is an intellectual and complex task. If your brain is asleep during it, you won't find the cool bugs!



Finally, the most important question: does the software provide the functionality that the customer requires to complete their work?



Remember, quality is not just about lack of defects, it's also about providing the functionality that the customer needs.

Thursday, April 29, 2010

Top 100 Software Testing Blogs

Testingminded.com have put together a list of the Top 100 Software Testing Blogs:

http://www.testingminded.com/2010/04/top-100-software-testing-blogs.html

Unfortunately, I didn't make it - that'll teach me to update my blog more regularly and have something worthwhile to read on it! :-)

Tuesday, April 27, 2010

Michael Bolton's "Burning Issues of the Day"

A mini track session presentation at EuroSTAR 2009 in Stockholm, Sweden:

http://www.developsense.com/presentations/2009-12-EuroSTAR-BurningIssues.pps

This is a must read! It takes 5 minutes and I actually laughed out loud. Some of the slides are extremely close to the truth.

One of my favourites:

Bad metrics are not “better than nothing”. Friendly fire is not better than not shooting.

And another goody:

The Agilistas did not discover pairing, or test-first programming. They’re like teenagers who’ve just discovered sex. It IS great, but calm down.

Monday, April 26, 2010

Review of Professional Tester Mag

Within a week of hearing that Professional Tester is back on the shelves, I had signed up for a subscription.

A European-based testing magazine with the aim "to provide practical help and inspiration to testers everywhere" – fantastic!

However, maybe it's too early to judge as the magazine is just finding its feet, but I am fighting disappointment.

Number 1 = 15 pages, decreasing to 13 once ads are removed.

Number 2 = 19 pages. Okay, it's meatier. It has a couple more articles.

The articles have interesting elements – but they're not rocking my world!

BUT, it's only number 2, and a magazine is only as good as its contributors, so I'm putting out a call to those who read my blog: let's share our knowledge!

Let's make this magazine something that works for us! There is a large test community in Ireland who are knowledgeable and experienced and this magazine is a great opportunity for you to contribute!

So contribute!

Friday, April 23, 2010

Testing Women

I love this post:

http://ladybug010.wordpress.com/2010/02/23/high-heels-in-high-tech-women-working-in-the-software-field/

As a woman, working in an engineering company, I am definitely in the minority, which frankly I'm used to now. In college I was also in the minority which was definitely a culture shock having come from 14 years of convent schooling (all girls, everywhere!). But I'm over it now, it's the norm for me.

There are difficulties sometimes with perception and opinions: an adjective used to describe a guy is positive, yet when the same adjective is used to describe a woman, it has negative connotations.

I definitely agree, having a mix of genders in a team is beneficial to the team's success. It reduces testosterone levels and, as one man said to me, it has a calming effect. And we all think differently, which is what you want in a testing team.

Yet somehow, I seem to be collecting women! In a department of 16, with two managers, I have all 4 women (including myself) in my team. We actually have a majority of women! How weird is that?

What Makes a Good Exploratory Tester?

Exploratory testing combines learning, test design, and test execution into one test approach.

We apply heuristics and techniques in a disciplined way so that the actual testing reveals more implications than just thinking about a problem.

As you test, you learn more about the system under test and can use that information to help design new tests.

Exploratory testing should start with a charter of what aspects of the functionality will be explored.

It requires critical thinking, interpreting the results, and comparing them to expectations or similar systems.

With exploratory testing, each tester has a different approach to a problem, and has a unique style of working.

However, there are certain attributes that make for a good exploratory tester.

A good tester:

  • Is systematic, but pursues "smells" (anomalies, pieces that aren't consistent).
  • Learns to recognize problems through the use of Oracles.
  • Chooses a theme or role or mission statement to focus testing.
  • Time-boxes sessions and side trips.
  • Thinks about what the expert or novice user would do.
  • Explores together with domain experts.
  • Checks out similar or competitive applications.

Thursday, April 22, 2010

Good Testers Vs Great Testers

Successful projects are a result of good people allowed to do good work.

Good testers are continually looking for ways the team can do a better job of producing high-quality software.

They help the developer and customer teams address any kind of issue that might arise.

Creativity, openness to ideas, willingness to take on any task or role, focus on the customer, and a constant view of the big picture are just some components of an effective testing mind-set.

Good testers have an instinct and understanding for where and how software might fail, and how to track down failures.

In my opinion, the difference between a good tester and a great tester is that a great tester has the soft skills to influence and communicate in a manner that makes them vital to the project, whether they have to fight their way in or are enthusiastically welcomed with an open door.

Let's be honest, things won't always line up wonderfully for us so that we can do the great job that we know we can do. But blaming the project removes our ability to control and influence the situation so that we can bend it to our will.

Great testing requires a toolbox full of soft skills, including:

  • Communication
  • Influencing
  • Negotiating
  • Stakeholder management
  • Emotional Intelligence

We often don't sufficiently prioritize the development of these skills in our testers. It's much easier to make the case for ISTQB training expenditure than for a soft skills class.

However, if we are to develop great testers, we must invest in soft skills as well as technical testing knowledge.

Wednesday, April 21, 2010

Great Quote on Quality

So I have a chest infection and I'm stuck at home sick, and I'm so very bored. Having given up on daytime tv, I'm reading through back issues of Better Software (I never said I wasn't a bit of a nerd!). Anyways, I came across this great quote from James Bach on Quality:

"Quality is not an intrinsic characteristic of software. It's a relationship among the product, the people who have expectations about the product, and the world around them."

Why Should Testers Study Critical Thinking?


Critical thinking is the process of applying reasoned and disciplined thinking to a subject. So why should testers study critical thinking?

Well, duh! You're a tester. Every day you have a piece of software that you must break, that you must critically analyze to determine where to invest your time: where it will pay off in bug reports, but also in areas that the customer will actually need to use. All to achieve the end goal of finding those bugs that would otherwise distract the customer from their own end goal.

The best testers use critical thinking: they find more bugs, they find better bugs, and they use the software under test as a customer would.

Critical thinking facilitates a tester in:
  • understanding the logical connections between software components;
  • identifying assumptions in the design of the software under test;
  • evaluating the correctness of the design of the software;
  • evaluating the ease of use of the software;
  • detecting inconsistencies in the software design;
  • identifying mistakes or bugs in the software;
  • recognizing which features are the highest priority and why;
  • unearthing bias in one's own thinking and that of the developers of the software. 
Furthermore, critical thinking is a necessity for the effective creation of arguments. Daily, testers must argue or advocate for bug fixes, testability features, or enhancements in the software.

Acquiring critical thinking skills helps you to develop more reasoned arguments and to draw out the inferences in others' arguments. With heightened critical thinking to support your advocacy, you will have more of your bugs fixed and gain the credibility that developers respect; in turn, developers will incorporate your testability suggestions and your enhancement ideas into the software.


Critical thinking enhances language and presentation skills. Thinking clearly and systematically can improve the way we express our ideas.


In learning how to analyze the logical structure of texts, critical thinking also improves comprehension abilities. Comprehension abilities are a necessity when reviewing system requirement specifications. Critical thinking will aid you in reviewing these documents, asking clarifying questions, and pinpointing inconsistencies in the design.


Critical thinking promotes creativity. Coming up with a creative (and problem-solving) solution to a problem involves not just having new ideas. The new ideas being generated must also be useful and relevant to the task at hand, as well as effectively communicated to others. Critical thinking plays a crucial role in evaluating the merit of new ideas, selecting the best ones, identifying the holes in an argument, and modifying them if necessary. Where in testing does this play an essential role? In exploratory testing, of course!


Exploratory testing is all about creativity: determining new routes through the software, determining new ways the software can fail, and determining how the customer may use the software. Critical thinking will aid you in deciding which routes to take, and it will focus your mind on the areas of the software that may be bug-heavy. Because it feeds creativity, you may come up with one or two ideas that just "spring to mind". Why? Because you have an improved process of thinking, which causes your mind to open itself up to a whole new set of possibilities.


Critical thinking involves:
  • Analyzing tasks
  • Identifying assumptions
  • Analyzing and classifying
  • Making comparisons
  • Problem solving
  • Questioning and challenging ideas
  • Observing facts versus assumptions & inferences
  • Judging the validity of the source and the worth of the evidence
  • Forming and effectively communicating opinions & arguments
  • Identifying arguments
  • Evaluating the validity of an argument
  • Drawing inferences
  • Making generalizations
All of the above are the meat and bones of testing and are the very things that no testing class can effectively teach you.


The study of critical thinking will empower your testing and will increase your effectiveness.

Tuesday, April 20, 2010

Sogeti Ireland

I've just finished viewing another Sogeti Webinar, "Applying TPI®NEXT to your situation".

Sogeti is a commercial business and I have links to them:

  • I'm on the SoftTest committee, which is sponsored by Sogeti
  • Recently, I got them in-house to deliver "Advanced Test Techniques"

They're not concrete links – I don't benefit in any way financially from them (although I'll never say no to a free book). But they are links nonetheless.

So having "come clean" with above links, I want to take this opportunity to thank Sogeti.

They are doing an incredible job of educating the test community in Ireland with their free webinars!

Every time I attend a webinar, I learn something, my pre-existing opinions may be challenged, or I look at something in a new way that opens up exciting ideas and opportunities to build on.

Each webinar is recorded so if I have a conflict and am unable to watch "live", I can catch up later.

If you're not viewing these webinars, then you are missing out!

Check out: http://www.sogeti.ie/en/News--Events/Events/ for their upcoming scheduled webinars and http://www.sogeti.ie/en/Resources--Downloads/Thought-Leadership-Presentations/ for their archive.

Monday, April 12, 2010

User Profiles in Testing

I just finished reading Joel Montvelisky's blog post called:

5 Ideas on how “User Profiles” can Improve Your Testing

I really like the concept.

What struck me most is I asked myself, "how well do I understand the users of the software that I test?"

Joel proposes that each user profile should contain all the personal and professional traits that are relevant to the work this person does with your software. That includes describing how the user works, what their goals for using your software are, what annoys and frustrates them, what delights them, and so on....

How many of us can sit down and describe our users?

Shouldn't these user profiles, whether we use them or not, be created and validated by others? To ensure that our assumptions are correct?

Shouldn't this be absolutely essential if our testing is to be as effective as it can be?

SoftTest Webinar: Testing a SaaS Platform on an Agile World

SoftTest free webinar Wednesday, April 21, 2010 11:00 AM - 12:00 PM BST

Testing a SaaS (Software as a Service) Platform on an Agile World

SaaS (Software as a Service) products and applications are becoming more common in today’s development Industry, especially as Cloud Computing becomes a household name pushed forward by software players such as Google, Microsoft, Apple, etc.

In contrast to regular Web-based systems, SaaS applications require a different approach to testing than what we are used to from other, more traditional projects. In some cases testing a SaaS system is simpler than testing a regular Web-based platform, but in many others it is a lot more challenging and demanding. It's made even more interesting by the fact that many of the teams developing SaaS Applications are based on Agile Development Methodologies.

In this webinar, Joel Montvelisky will provide an overview of the main areas to cover when testing a SaaS Application or Platform based on his own experience at PractiTest (a SaaS QA Management Platform developed by his company). He will give some insights into the type of approaches and ideas that work best, and will also talk about some of the tools and methodologies his team currently uses while testing PractiTest.

The seminar is aimed at Test Engineers, Test Leaders, QA Managers, Project Managers, Developers and Development Managers.

Please see below a link to the registration page.

https://www2.gotomeeting.com/register/506544250

Webinar features:

All of the content is online. For audio, you can listen with headphones or speakers (VOIP option) or dial-in using a local number.

We can also record the presentation and have it available to view afterwards.

Short Bio
Joel Montvelisky is one of the founders and Product Architect of PractiTest, a company providing an End-to-End Test and QA Management System. He is also a QA Instructor and Consultant for multiple high-tech firms in Israel.

Joel has been part of the QA Industry for over 13 years, having managed the QA in companies ranging from small Internet Start-Ups and all the way to HP/Mercury where he managed the QA for TestDirector/Quality Center, QTP, WR and additional products in their Testing Platforms Family.

A member of the Advisory Board of the Israeli Testing Certification Board (the Israeli chapter of the ISTQB), he publishes articles and a periodic QA blog at http://qablog.practitest.com/, and is an active speaker at local and international conferences. Joel holds a B.Sc in Industrial Engineering and an MBA from Tel Aviv University.


Thanks to our sponsors and our supporters in organising this webinar:
Sogeti Ireland
InterTradeIreland
Software Skillnet
Anne-Marie Charrett of Testing Times

Thursday, February 18, 2010

Professional Tester Mag

FYI: Professional Tester has been relaunched in January/February 2010.

It's available free for electronic download at:

http://www.professionaltester.com/files/PT-issue1.pdf

This month's issue contains:

- TPI Next
- VELT
- Towards Quantitative Governance
- Improving Process Improvement
- Rude Coarse Analysis
- Incident Log

Within Europe, a subscription is €50 per year (6 printed issues). Alternatively, it looks like you can always download the electronic version for FREE.

SoftTest Events February 2010

I just finished my third speaking engagement with SoftTest today.

On Tuesday I was in Belfast, Wednesday in Dublin and Thursday in Cork. My voice no longer works and I'm rather tired now.

I spoke on Lean Test Process Improvement in Agile Testing and I welcome any questions/suggestions on such.

I highly recommend that if you have a topic you could speak on, please volunteer. On a personal front, you gain so much from standing up in front of your peers and speaking.

Don't forget, it also facilitates the sharing of knowledge among the Irish testing community. The Irish testing community is part of the knowledge economy and it is mutually beneficial to help each other out!

I'll be posting my ppt on test-soft.com in the coming days.

If you attended my talk - Thank You! It was an absolute pleasure!

For those of you who didn't - you missed out on a free learning (I hope) event. Keep an eye on softtest.ie for forthcoming events. More will be advertised very soon.