18 Mar

Why I record all my test sessions

I’ve been working with exploratory and session-based testing for a while now, and I like this form of testing for its flexibility and adaptability. I manage my sessions with session-based test management, as outlined by Jonathan Bach in his “Session-Based Test Management” paper [1], with a small addition: I also record all of my sessions with a screen capture tool. With sessions spanning up to 90 minutes, the screen captures become rather large files, even with compression. Capturing and encoding in real time also takes a fair bit of the computer’s resources, so why do I insist on doing this for every session?

First of all, I mainly test web-based software, where I do not have to be overly concerned about what consequences the extra load on my PC might have for the SUT. If I tested locally installed software, I would have to take this more into consideration, but I’ll cross that bridge when I get to it.

The advantages that I see in capturing each and every session are:

  • Attaching snippets to bug reports
  • Reviewing captures to help reproduce bugs found in sessions
  • Helping me keep focused during the session
  • Allowing me to be the navigator

Attaching snippets to bug reports

From the capture, I can cut out snippets that highlight a certain sequence or error that I experienced during the session. This might be the most obvious of the advantages, but I rarely use it myself. The reason is that I will generally try to reproduce the bug in a simpler and more straightforward manner than how I first saw it in the session. When I succeed, I stick to reporting the steps I’ve taken, possibly with attached screenshots. It’s only when I can’t reproduce the bug that I cut out the snippet from the video and attach it to the bug report. In that case, though, the snippet is very valuable.

Reviewing captures to help reproduce bugs found in sessions

This part ties into what I wrote in the previous section. Personally, I like to stay with the charter as much as possible in a session and avoid things that break my “flow”. One of the things I have found breaks my flow is reproducing and reporting bugs. When I find something in a session that looks like a bug, I like to just make a note of it and return to it after the session. This is just a personal preference, and others might prefer to investigate bugs while they are fresh in memory, but if you work like me and just make a note, it’s crucial to remember what you did to provoke the bug. In most cases I can reconstruct this from my notes, but in a few cases I only think I remember and have actually missed a small but critical step. In those situations, I can go back and review the video to retrace my own steps. Just finding that missing step makes the whole recording worth the effort.

Helping me keep focused during the session

The first two advantages are reasonably objective in what they help with. The last two are much more subjective and directly related to the way I approach sessions. This one is purely psychological and based on the fact that I am very good at tricking myself. Like everyone else, I want my sessions to be uninterrupted by email, IM, phone calls, etc. Unfortunately, I’m really bad at remembering to turn off IM, email notifications, and so on, which means that notifications occasionally pop up on my screen during a session. With these notifications comes an almost irresistible urge to check the message behind them, but when the recording software is running, it’s almost like having someone watching over my shoulder, and I get a very guilty conscience if I actually check my email. I know that I can just pause the video, or cut out any part of it later, but that doesn’t matter; the video still helps me focus on the session.

Allowing me to be the navigator

This is an analogy to the roles in an XP team and ties into how I like to work with sessions. In an XP pair, there is a “driver” who writes the code and focuses on details, and a “navigator” who looks at the bigger picture: design, architecture, etc. In a session, I like to be the “navigator” in the sense that I don’t want to get bogged down in details about a bug; I want to follow the flow of the charter, think of new tests and notice irregularities in the software. Since the video records every step I take, I don’t have to divert attention to this, but can focus on what I want to do in the session.

Those are the advantages I see in recording every session I perform. You might ask how long I store these files, considering the size of each one. As with most other things, the answer is that it depends. I’m likely to keep all recordings for one iteration, if that’s how the development is done, but if some bugs are left open after an iteration, I might keep the video until the bug is closed, just in case the developers need some extra information that I did not think of when reporting the bug. If I have a large enough hard drive, I might even keep all of the videos until the end of the project/release.

In my day-to-day work I use Ubuntu Linux, and I always try to use it for testing whenever the requirements allow. I like Linux for several reasons, one of them being the abundance of free utilities, for example the ones I can use to record and edit videos. To record my sessions on Linux, I use XVidCap [2], and for editing the videos and extracting snippets I use AVIDemux [3]. I know there are several good tools for both Windows and Mac (albeit not for free), but I have no personal experience of using them.
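For those who prefer the command line, the same record-and-trim workflow can be sketched with ffmpeg instead of the GUI tools I use. This is just an illustrative sketch; the file names, resolution and timestamps are made-up examples, not part of my actual setup:

```shell
# Record the X11 desktop at 25 fps into a compressed file.
# Adjust -video_size to match your screen resolution.
ffmpeg -f x11grab -framerate 25 -video_size 1920x1080 -i :0.0 \
       -codec:v libx264 -preset ultrafast session.mkv

# Afterwards, cut out a 45-second snippet starting 12m30s in,
# without re-encoding, to attach to a bug report.
ffmpeg -ss 00:12:30 -t 45 -i session.mkv -c copy snippet.mkv
```

The `-c copy` in the second command copies the streams instead of re-encoding, so extracting a snippet is fast and lossless, at the cost of the cut landing on the nearest keyframe.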

[1] http://www.satisfice.com/articles/sbtm.pdf
[2] http://xvidcap.sourceforge.net/
[3] http://fixounet.free.fr/avidemux/

13 Dec

A great testing experience

I have been working with testing for some years now, and each year I realize how much more there is to learn, but also how important it is to stay inspired.

One of my best testing experiences, which gave me a lot of new knowledge and inspiration, was the Swedish Workshop on Exploratory Testing (SWET). So far three SWETs have been held, and I have had the pleasure of attending SWET2 (1) and SWET3 (2).

First of all, it felt like an honor to attend SWET2, because it is a conference you get invited to. At SWET3, Henrik Andersson and I organized the conference and acted as facilitators.
Each SWET has a theme on which the presentations and discussions focus; for SWET2 the theme was “Test Planning and Status Reporting for Exploratory Testing”, and for SWET3 it was “Teaching Testing”. All conference participants need to send in an abstract on the subject. Besides relating to the conference theme, the abstracts should be based on personal experiences. The organizers then prioritize the abstracts into a list that decides the order in which the presentations will be held.

At SWET, the presentations are not very long, about 20 minutes, but the questions and comments may continue for as long as they are interesting. This format is based on LAWST, http://lawst.com/. Each participant gets a set of four cards to be used for getting attention after the presentations: “New question”, “Comment on a previous question”, “Urgent attention” and “Rat hole”. “Rat hole” means that a discussion is stuck and not moving forward. One participant, normally one of the organizers, acts as facilitator and keeps track of who raises cards and in which order.
Because each presentation with questions and comments may take quite a while, usually only the first three or four presentations on the list are actually held.

The first thing that struck me when I arrived at SWET2 was the relaxed and friendly atmosphere. I was a bit nervous, but that feeling quickly went away. Some of us arrived the night before the conference and had a nice dinner and a great time together. We talked about a lot of things, including testing, of course. At SWET3, it was great to once again meet some of the testers from SWET2 and exciting to make some new acquaintances.

When the actual conference starts, each person in the room briefly introduces themselves and says something about their expectations for the conference. Already at this point, I noticed that our focus and concentration sharpened. We all wanted to get a lot out of the meeting. It was a group of testers who are very serious and enthusiastic about their profession.

All of the presentations were of high quality, and I found it interesting and inspiring to listen to these skilled testers telling us about their experiences. Besides that, I also got a lot of good tips, and I could relate them to my own experiences and to how I can change and improve my way of working. During the presentations, the listeners take notes to remember the questions and comments they come up with.
You might think that focusing on a theme would result in little variation between the presentations, but on the contrary, the variation was great. Each presentation gave rise to so many questions, and each question resulted in comments and new questions, and so on. The queue of questions and comments became very long, so it took some time before one got the chance to say something. Sometimes the comments and questions drifted too far away from the presentation, in which case the facilitator could interrupt and remind the participants to stay focused.
One thing I noticed was that the discussions started at quite a high level, in the sense that some things were already implicitly understood, because all of us had studied Exploratory Testing and we also share many values about testing. The questions and comments took a lot of time, and this is a situation I have rarely experienced before: the possibility to discuss something without time pressure. I would love to do this more in life.
But it was not just getting answers to your questions that gave value; it was also a great forum for practicing the ability to listen, to question, to argue and to receive feedback on my own ideas.

SWET also includes a session called “Lightning Talks”. During this session, any of the attendees may give a very short presentation about any test-related subject they want to share with the others. There is not much time for questions during this session, but I managed to take some interesting notes that I brought with me from the conference.

Well, after dinner, the first “working day” was over, but the rest of the evening was just as rewarding, if a bit less formal: lots of interesting discussions, nice music and a few good beers. When the conference ended after lunch on the second day, I was a bit tired but really happy and inspired. I had learned a lot and made new friends within the Exploratory Testing community.

1. At SWET2, there were 15 of us: Henrik Andersson, Azin Bergman, Sigurdur Birgisson, Rikard Edgren, Henrik Emilsson, Ola Hyltén, Martin Jansson, Johan Jonasson, Saam Koroorian, Simon Morley, Torbjörn Ryber, Fredrik Scheja, Christin Wiedemann, Steve Öberg and myself, Robert Bergqvist.
2. At SWET3, there were 11 of us: Anders Claesson, Henrik Andersson, Johan Jonasson, Maria Kedemo, Ola Hyltén, Oscar Cosmo, Petter Mattsson, Rikard Edgren, Sigurdur Birgisson, Simon Morley and myself, Robert Bergqvist.

5 Dec

BBST Test Design Course

The Association for Software Testing has just launched the third course in the Black Box Software Testing series, developed by Cem Kaner et al. This course is Test Design, and it follows the Foundations and Bug Advocacy courses. I was lucky to be able to join the first pilot version of the Test Design course, which is just about to finish these days.

These are the objectives for the course:
This is an introductory survey of test design. The course introduces students to:

  • many techniques at a superficial level (what the technique is),
  • a few techniques at a practical level (how to do it),
  • ways to mentally organize this collection,
  • using the Heuristic Test Strategy Model for test planning and design, and
  • using concept mapping tools for test planning.

We don’t have time to develop your skills in these techniques. Our next courses will focus on one technique each. THESE will build deeper knowledge and skill, technique by technique.

This looks like a reasonable scoping of the course, but already in lecture one, I went “Wow!”. There are truly many techniques out there that I have not even heard of, never mind given a try. To give you an idea of the amount of material covered in the first lecture: the videos span just under 52 minutes in total and cover 143 slides!

Fortunately, the following lectures started looking into the few selected techniques at a more practical level, slowing down the ferocious flow of new information. With the more practical level came exercises, though. The Test Design course has more exercises than the previous two courses, including two extensive exercises that each span two lectures. The practical aspect certainly has a more prominent role in this course compared to the previous ones.

Most of the exercises in the course do not require peer feedback, which is also a change from the earlier courses. At first it seemed convenient not to have to spend time on peer feedback, but I soon realized that I missed it, both reading through the other students’ work with the aim of giving feedback and receiving feedback myself. In the end I probably read through more of the other students’ work than I would have if I had been assigned one or two to review. It’s hard to take the time to do this when it’s not mandatory, though.

Talking of time: I was fortunate to be able to take the course during a period when I could spend some extra time on it. I spent all the extra time I had available and still felt like I could have done more. I don’t know how the other poor students managed to get all of the work done alongside their full-time jobs. Taking these BBST online courses does require good time management skills of the participants. The Foundations course does teach this. Maybe I should revisit that course again?

Throughout all the BBST courses I have taken, the quiz questions have been a source of frustration, and this course was no exception in the beginning. My main objection to some (far from all) of the questions was that they felt more like traps than confirmations of the content. I would review the feedback for the quiz questions I got wrong and think that this had nothing to do with my understanding of the course material; it came down to how one interpreted the text in the question or in the answer, an interpretation I often did not agree with.

I know that Cem Kaner also wants to use the quizzes to help the students practice precise reading, and he did elaborate with more feedback on one of the questions where I voiced my complaints. I can understand his motives. I still don’t really agree with them, but I have decided to accept it. Now, if I miss an answer and don’t agree with the explanation the quiz gives as feedback, I’ll just ignore it. The other questions are still good opportunities to learn in case I’ve misunderstood something from the lectures.

I also gladly endure the quizzes considering the overall benefit of the course. Test Design is a high-paced course with a ton of information. A lot of it was new to me, and I feel I have learned many new techniques during the last few weeks. I have had the chance to scratch the surface of a few of them. Far from enough to really understand and master any of the techniques, but the course has opened my eyes to many new things, and I now know how to approach them and where to find further information when I need it.

After the course I feel a boost of motivation and have a lot of very exciting ideas; now I just want to find the first opportunity at work to practice and put my newly found knowledge to good use.

Thanks to Cem Kaner, Becky Fiedler, Michael Larsen and all my fellow students that made the Test Design course such a rewarding challenge!

30 Jun

Using an External Test Team

An objection that we have heard to using an external test team, especially in an agile setup, is that since the developers do most or all of the testing now, they would lose valuable experience and a chance to learn if the testing were outsourced to an external team.

The way I see it, there are a couple of fallacies in this concern. First of all, even though there usually is a higher level of developer testing in agile, these tests are not a substitute for manual tests; they are a complement. Second, if the developers learn only from their own testing, they are less likely to find new points of view; they’ll learn, but mainly within their existing box.

Regarding the developer tests: these are to a large extent automated white-box tests that the developer or continuous integration system runs as an integrated part of the check-in and release process. These tests are invaluable for two reasons: they give early feedback to the developers, and they can test things that are not possible with black-box testing. However, they cannot replace the analytical and exploratory skills of a human tester. The developers’ tests are great at weeding out bugs that would otherwise waste the time of a human tester, which allows the tester to focus on much more complicated tests and harder-to-find bugs. A symbiosis of automated developer tests and human analytical and exploratory tests brings the best of both worlds together.

When talking about learning and education, there are a couple of benefits to bringing in external testers. We are all limited to looking for bugs in areas and situations that we can imagine happening. This also applies to developers, and relying only on developers for testing will bias the testing towards the areas and situations they think of. If an external team of testers is brought in, another point of view is added and more areas and situations will be covered. When bugs are found, developers and testers can have a dialog about them, and the developers can learn to see new areas and situations.

Tools can, to some extent, also bring in a new perspective and find bugs in areas where developers are not looking. However, most tools need to be configured, and if this configuration is done by the developers, there is still a risk that the tool will be biased towards the developers’ point of view. Also, a tool cannot have a dialog with the developers about the thought process behind a bug, which limits the understanding and learning that can be gained from the bugs a tool finds.

Finally, by bringing testers into the picture and taking some of the testing off the developers, time will be freed up for the developers. This time can be used for training that helps the developers prevent bugs, not just fix them.
