A couple of months ago I was asked to help a development team with their testing approach. The team has a long history, but they're working with new technology and adopting new patterns (like TDD, code review, and automated acceptance tests) in their development approach. They're also helped by a consultant who has a good understanding of modern development practices. But considering the complexity of the system they're building, they felt they needed help diversifying their testing. This is something I've tried to help with.
How to gather test ideas?
As this team has a lot of domain and technical knowledge, I wanted to utilize it by giving them a chance to brainstorm test ideas. Fortunately they're co-located, which gave me more options for how to facilitate a brainstorming session.
I decided to use the Me We Us (https://www.thekua.com/atwork/2011/10/coaching-tool-me-we-us/) facilitation technique for generating the ideas. After that we could organize, prioritize, and further plan them.
One challenge with brainstorming is that you usually need some guidance. I mean, if I try to brainstorm "anything", I'm going to have a hard time coming up with ideas - the scope is too wide. Which is why I personally prefer having some overall ideas that can be used for coming up with more concrete and context-specific ones. For this team I picked some pointers from James Bach's Heuristic Test Strategy Model (http://www.satisfice.com/tools/htsm.pdf) and Elisabeth Hendrickson's Test Heuristics Cheat Sheet (http://testobsessed.com/wp-content/uploads/2011/04/testheuristicscheatsheetv1.pdf). I flavored those with pointers of my own, which I came up with after familiarizing myself with the architecture and use cases. That gave some guidance to people who wanted it, though I emphasized that using the pointers was optional.
Arranging brainstorming sessions
After I figured out the format for the sessions, I booked three 45-minute sessions.
In the first session we wrote test ideas on post-it notes:
- 7 minutes individually
- 7 minutes with a pair
After that, the pairs introduced their ideas and we categorized them as best we could (e.g. security, database, services down, etc.). Then I took pictures and collected the notes.
In the second session we prioritized the ideas into NOW - LATER - EVEN LATER on a whiteboard. Prioritization was driven mainly by how mature the system was for particular tests and how badly the developers wanted to know the results of those tests.
In the third session we created backlog tasks from the ideas in the NOW column. We also wrote some high-level notes on what would actually be done for each idea.
Once we had put all the ideas from the NOW column into JIRA as tasks, we performed the testing as a group, starting with the ideas we had included in the sprint. One person had a computer and acted as the driver for the whole session (120 minutes in our first session; later ones were one hour). The others engaged by commenting and observing how the system behaved. Everyone, except one person, managed to keep themselves (from my perspective) mentally engaged. It was also a good learning experience in how rotation (as in mob programming) might be useful to amplify learning and engagement. That's something worth introducing later, now that they've noticed the downsides of having a single driver for the whole session.
We had three test sessions as a group in total, in which we learned quite a lot about how the system behaved and didn't behave. Through those sessions we've managed to add diversity to the testing that developers do by themselves as part of development.
We still need to figure out how to make these kinds of sessions, when needed, part of the development process.