21 Jan

What a wonderful year it has been!

For once I had some time during the holidays to reflect on the past year, and what a wonderful year it has been.
We have had the opportunity to welcome several new people. Bolette, Daniel, Vivien, Erik, Göran, Maria and Bahader have all moved into the house. This has added a new and very exciting dynamic to our group and has been an energy boost that takes us to a new level. Not only have we grown a lot, but what makes me sleep well at night is that the demand for our services has also grown and that we are, at this moment, sold out.
House of Test has always cared a great deal about the future of testing and how we train new testers. For the past two years we have run our own internal test training program, the incubator. Johanna, Henke and Martin graduated this spring after two years of incredibly hard work. We are very proud of what they have accomplished, and they are ready to face any testing challenge with confidence. Very well done, folks! Henke and Martin are off to new adventures, but we decided to keep Johanna and offer her employment at HoT. So since this fall Johanna has been a full-fledged Hottie, and she is really growing into that suit!
We have now taken the incubator program even further. From September 2014 we are responsible for two programs training over 60 students to become testers. This is a higher education program funded by the government, a full-time study that runs over 1.5 years. We are sure that this will make a great impact on what others can expect from a junior tester. It is all about raising the bar!
We have entered several new relationships, among which I would like to mention our partnership with eBay. Since this summer we have had four consultants working at eBay in London, Berlin and, recently, even Sydney, Australia. We are helping eBay to demonstrate, by doing, what impact skilled context-driven testing can have on an organisation. This is a unique opportunity that we are both proud and grateful to have been given.
During 2014 we have continued to share our experiences all over the world. Some of the conferences we have presented at are Nordic Testing Days, Let's Test Sweden, StarWest, Copenhagen Context, ThinkTest, CAST, Let's Test Oz, Agile Testing Days and Let's Test South Africa.
We feel that by sharing our experiences we learn, get deeper insights and get new ideas. This is a huge part of our development towards being at the forefront of testing. We are extra proud of Martin, Andreas and Lars, who have all taken the step to present at major conferences this year. An extra mention to Bolette, who will make her stage debut at Copenhagen Context 2015.
Finally, looking into the crystal ball, we see a bright year in front of us. We look forward to welcoming amazing people into our house. We are gaining ground and are very excited about the new relationships we will build with clients over the coming year. I believe there will be some interesting opportunities lying at our feet.
We are also very excited about the students who will graduate at the end of 2015. Trust me when I say they are already impressive, and I'm sure they will knock you off your feet.
As usual we will not back down from a debate, and we will continue our quest to call bullshit when we hear it. We need to raise the bar of software testing and get rid of the dehumanizing and commodity thinking around testing.
Testing is a challenging craft that requires sharp skills and that is why we take it so seriously!
Wish you all a wonderful 2015!
14 Sep

Why I signed the STOP ISO 29119 petition

First of all I must say that I'm sad that such an infected debate between various factions continues to rage in the testing community. It seems to entrench the various views and to make it harder to build bridges between different points of view.

I also think it's important to point out that the contents of ISO 29119 are not all bad per se. I have read several comments in discussions where people find support in the material presented in ISO 29119 and rightly claim that we cannot all be thought leaders in the test community coming up with new ideas; finding inspiration and guidelines in the work of other people is essential to becoming a better tester. ISO 29119 does contain material and information that some testers will find valuable in some contexts, no doubt about it.

So far, so good, but I nonetheless signed the petition against ISO 29119. Why did I do that? I'm actually a latecomer to the petition and had the opportunity to read the response to it, published here: http://www.softwaretestingstandard.org/29119petitionresponse.php, before signing up. In it, I think Dr Stuart Reid does post some valid responses to some of the objections put forward in and around the petition; for example, I can live with the material not being free, and I'm certain that it has been applied in several organizations while ISO 29119 was in draft state – probably also successfully.

However, some details in the response started to bug me. The first is that the response is posted on a site that does not seem to allow any comments or open discussion. Nor could I find any direct link to the response in discussion forums on e.g. LinkedIn. This counters the argument that everyone is invited to discuss the material in an open and transparent way.

I’ll start a bit out of order, with the following passage from the response:

According to ISO, standards are “Guideline documentation that reflects agreements on products, practices, or operations by nationally or internationally recognized industrial, professional, trade associations or governmental bodies”.

They are guideline documents therefore they are not compulsory unless mandated by an individual or an organization…

Fair, but why not call them ISO guidelines, then? One of the advantages that Dr Reid mentions with ISO 29119 is that it builds a common vocabulary and that it should not re-invent the wheel. I think standard English already has a perfectly functioning definition of “standard” – and it's not the same as “guideline”. My fear, and I think many share it, is that many stakeholders in companies will not be able to make this very important distinction. They'll just read “standard” and read their own interpretation into that.

Dr Reid also writes [emphasis added by me]:

There are some useful IEEE testing standards (e.g. IEEE 829, IEEE 1028) and national standards (e.g. BS 7925-1/-2) but there were large gaps in the standards relating to software testing, such as organizational-level testing, test management and non-functional testing, where no useful standards existed at all. This means that consumers of software testing services (and testers themselves) had no single source of information on good testing practice.

I believe that the single most important factor for the evolution/revolution in testing has been the fact that there has been a multitude of different sources of good information. Many of them conflict with each other, but they are full of ideas that give me different points of view which I can then bring back to my projects and apply as appropriate. Admittedly, some sources can be hard to find, and if there were a single portal with all testing-related information on the web, then it would probably be helpful, but this is not what ISO is aiming for. As far as I can tell, ISO wants to create a single source of ISO-controlled information, which again makes it look much more like a standard than a guideline.

Moving on to the question about agile and exploratory testing. From Dr. Reid:

The scope of this initial work (ISO/IEC/IEEE 29119 parts 1, 2, 3 & 4) was largely defined by the existing IEEE and BSI standards (which they would replace), although it was clear from the onset that a completely new ‘Test Processes’ standard would be required, in particular to ensure that agile life cycles and exploratory testing were considered, as well as more traditional approaches to software projects and to testing.

Actually, the agile life cycles and exploratory testing are not my main concern; they are already out there and more or less widely accepted in the testing community. What worries me more is what we might miss out on in the future. If ISO 29119 requires “a completely new ‘Test Process'” to accommodate agile and ET, then that also means that agile and ET came about despite material from IEEE, BSI, ISO, ISTQB, etc, not because of it. How does ISO ensure that 29119 does not curtail the next advancement in software testing – something that would require the ISO 29119 test process to be completely rewritten?

The concern about future needs for the test process (or other parts of ISO 29119) ties into the tailoring of ISO 29119. It has been stated in several places that an organization can basically tailor it any way it likes, to fit any context, but if that is so, then it does not seem very helpful. It's like making scaffolding out of play-doh – I can form it into whatever I like, but it won't really support me much. If, on the other hand, the scaffolding is more rigid, so that it supports me, then it will also be less possible to tailor to anything. Which one is ISO 29119 – steel or play-doh?

Also, even if ET is covered in ISO 29119, it might not be on an equal footing. I think it's telling that Dr. Reid chose the following paragraph in his response to the petition:

“When deciding whether to use scripted testing, unscripted testing or a hybrid of both, the primary consideration is the risk profile of the test item. For example, a hybrid practice might use scripted testing to test high risk test items and unscripted testing to test low risk test items on the same project.”

And stepping back a bit to “standards” and “best practices”. Dr Reid writes [emphasis added by me]:

  • Definition of good practice in the testing industry – a guideline for testing professionals, a benchmark for those buying testing services and a basis for future improvements to testing practices. Note that we do not claim that these standards define ‘best practice’, which will change based on the specific context.
  • A baseline for the testing discipline – for instance, the standard on test techniques and measures provides an ideal baseline for comparison of the effectiveness of test design techniques (for instance, by academics performing research) and a means of ensuring consistency of test coverage measurement (which is useful for tool developers).

Isn't “ideal” synonymous with “best”? Maybe I'm splitting hairs a bit, but I find it interesting that the word “ideal” is used in the very next paragraph after stating that ISO 29119 does not define “best practices”.

As stated before, I've not really been involved in this debate before, and I have not tried to get involved in the ISO work on 29119, but I can't help having a few reflections on the following from Dr. Reid:

However, as a Working Group (WG) we can only gain consensus when those with substantial objections raise them via the ISO/IEC or IEEE processes. The petition talks of sustained opposition. A petition initiated a year after the publication of the first three standards (after over 6 years’ development) represents input to the standards after the fact and inputs can now only be included in future maintenance versions of the standards as they evolve

  1. The following quote from “The Hitchhiker’s Guide to the Galaxy” comes to mind:
    • “What do you mean you’ve never been to Alpha Centauri? Oh, for heaven’s sake, mankind! It’s only four light years away, you know! I’m sorry, but if you can’t be bothered to take an interest in local affairs, that’s your own lookout! Energize the demolition beam!”
  2. Does it really matter when feedback comes? It does worry me somewhat that ISO basically says “It doesn’t matter whether we have published something good or bad – since it has been published, then we must stick to our predefined change request process”.

After all of this, I must still say that I kind of wish I could endorse ISO 29119; a lot of talented and experienced people have put a lot of effort and thought into this, and it does contain some valuable information that I very well might end up using in some project some day.

I would not have signed the petition if I thought ISO really sees 29119 as just another asset in the great library of thoughts and ideas out there in the testing community. However, the more I read the discussions around ISO 29119 and the responses from proponents like Dr. Reid, the more I feel that there is a disconnect between what e.g. Dr. Reid writes about standards, guidelines, best practices, tailoring, etc. and what he actually means. For something with the potential impact of ISO 29119, I believe the risk is just too great, and it would need more testing before release – that's why I have signed the petition to stop ISO 29119.

14 Sep

Response to Tour Testing

I came across this article http://konsultbolag1.se/tour-testing (in Swedish) about how Exploratory Testing (ET) and Tour testing can be a good complement to scripted/manual testing. It covers problems and benefits associated with the approach.

When I first skimmed through the article I thought “there are some things I don't really agree with”. After reading it more thoroughly it became more like “there are a lot of things I don't agree with at all”.
So, since there is no way to respond to the article directly, I will do it here.

First we probably need to talk about definitions.

I will assume that when we talk about manual tests we mean tests written down as detailed step-by-step instructions with an expected result, then executed by a human tester – sometimes also called scripted testing or checking (more about checking can be found here: http://www.satisfice.com/blog/archives/856). Which, by the way, is a great way to kill a tester's spirit.

My interpretation of ET is in line with what James Bach says here (http://www.satisfice.com/articles/what_is_et.shtml).

When I talk about session based testing (a way to manage ET) I mean Session Based Test Management (http://www.satisfice.com/sbtm/).

There can of course be other interpretations of ET and session-based testing, but these are the original sources.

 

Ok, so let's get on with the comments (Swedish quote from the article, then translated to English).

“Under tiden dokumenteras testförfarandet. Detta för att det ska gå att återupprepa scenariot vid avvikelser, följar upp testet samt återupprepa testet senare.”
“Meanwhile the test procedure is documented. This is so that the scenario can be reproduced when anomalies are found, so that the test can be followed up, and so that it can be repeated later”

  • How much you should document depends on a number of things, e.g. what kind of product you are testing, how much time you have, what the stakeholders need and so on. The documentation can be rigid step-by-step instructions (e.g. for regulatory or legal conformance), session notes, screen recordings, or simply kept in the tester's mind. You should always think about what the purpose of the documentation is and whether its worth outweighs the cost of documenting.

  • Although if you file a bug, it probably is important to have clear step-by-step instructions so that developers, the future you and possibly other stakeholders can understand what you did, how you did it and what the impact is.

  • If you, during your ET sessions, find a scenario you think is worth repeating, you can of course note down the steps and possibly automate it (see the sketch below). One of the benefits of ET, though, is that you often don't repeat a test, since there is usually more value in doing a new test (which can be rather similar, but not exactly the same) than in repeating an old one.
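
As a hypothetical illustration of the "note down and automate" case (my own sketch, not code from the article; the function name and behaviour are made up for the example): say a session reveals that saving an item with an empty name should be rejected. That exact scenario can be pinned down as a small scripted check, so that later sessions are free to explore new ground instead of re-walking old paths.

    import unittest

    def save_item(name, quantity):
        # Stand-in for the real function under test; in a real project
        # this would live in the application code, not in the test file.
        if not name:
            raise ValueError("item name must not be empty")
        return {"name": name, "quantity": quantity}

    class RegressionFromSession(unittest.TestCase):
        # A scenario found during an exploratory session, captured as a
        # repeatable check.
        def test_saving_item_with_empty_name_is_rejected(self):
            with self.assertRaises(ValueError):
                save_item(name="", quantity=1)

    if __name__ == "__main__":
        unittest.main()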

 

“Exploratory testing eller Utforskande tester (UT) kan vara ett bra komplement till din testning och dessutom kan denna typ av tester vara en mycket effektiv metod för att hitta de där retsamma buggarna som annars brukar slinka igenom – om dessa tester görs på rätt sätt vill säga! Det är en allmän uppfattning att UT hittar fler kritiska buggar än traditionell testfallsbaserad testning eftersom testaren kan reagera och undersöka mjukvaran utifrån dess beteende istället för att strikt följa ett testfall. Frågan är bara hur man ska göra för att lyckas med sina utforskande tester och undvika riskerna?”
“Exploratory testing can be a good complement to your testing, and this type of testing can also be a very efficient method for finding those pesky bugs that otherwise tend to slip through – if these tests are done in the right way, that is! It is a general belief that ET finds more critical bugs than traditional test-case-based testing, since the tester can react to and investigate the software based on its behavior instead of strictly following a test case. The question is just how to succeed with your exploratory tests and avoid the risks?”

  • First the article says that ET can be a good complement, and then that it finds more critical bugs; in that case I would say it should be more than a complement. Also, going by James Bach's definition, “To the extent that the next test we do is influenced by the result of the last test we did, we are doing exploratory testing”, it is hard to understand what the other testing (the testing ET is supposed to complement) actually is.

 

“Det ska sedan en lång tid tillbaka sitta i ryggmärgen hos testaren att kunna använda sig av alla olika teststrategier och tekniker för att garantera mjukvarans kvalitet.”
“It should, since a long time back, be second nature for the tester to be able to use all the different test strategies and techniques in order to guarantee the quality of the software”

  • This needs to be said: we can never guarantee the quality of the software. We can, however, give information about what we have seen and not seen, and our impression of the software.

  • And yes, it takes time to become a great tester. As with everything else, you need to practice and learn continuously to become better and better, but this applies not only to ET but to all testing.

 

“Jag tycker att man med en metod som kallas Tour testing slår hål på den myten till viss del eftersom testerna här kan anpassas till testarens erfarenhet och systemkännedom.”
“I think that with a method called Tour testing you puncture that myth to a certain degree, since the tests can be adapted to the tester's experience and system knowledge”

  • You don't need Tour testing to adapt the testing to the tester's experience and system knowledge; you can do that with ET and the right guidance and education. Tour testing is just one way of approaching the product from different angles. You can also use personas, where you take on the role of a fictional person. Maybe it is someone who is impatient, someone who is lazy and uses all the shortcuts, or someone who wants to exploit the program (think security). You can also use the different-hats technique, and so on.

  • Tour testing isn't a replacement for testing knowledge. Tour testing is a technique, and hence you can, with practice, become better at it. It is not something that magically makes testing simple.

“Min uppfattning är att UT framförallt kräver styrning, mål och dokumentation för att vara effektiv och inte erfarenhet.”
“My view is that ET above all requires control, goals and documentation to be efficient, not experience.”

  • First off I would say that then it is not ET. Of course we might want some kind of control (e.g. by using SBTM), but if control and a clear goal are the important parts, then it sounds more like checking. And that control should be more important than experience strikes me as a lack of trust in the testers. I would translate it to “I'm the only one who knows what to test, so do exactly as I say”. For me as a tester, that is a horrible place to work. As a test lead, trust in your team is very important. Of course, trust needs to be earned, and new testers might need some guidance in the beginning before they have shown that they can be fully trusted. But to rank control above experience and skills is, to me, just plain wrong.

“För att belysa riskerna med UT kan man dra en parallell mellan testaren och en turist som ska åka till London för första gången utan att göra någon som helst research om staden innan. För att upptäcka staden vandrar turisten planlöst omkring på gatorna i hopp om att stöta på roliga och intressanta saker. Troligtvis kommer turisten att stöta på en och annan intressant sak, men utan någon förkunskap blir det svårt att förstå vad det är och betydelsen av upptäckterna som gjorts. Det här gör inte upplevelsen speciellt rik eller kvalitativ och framförallt är risken stor att turisten missar massa intressanta sevärdheter eftersom tiden rinner iväg när denne planlöst går omkring. Turisten vet inte heller hur stor staden är så det blir svårt att veta hur mycket som finns att utforska. Om detta översätts till test av mjukvara så förstår vi snart att stora risker uppstår. Visst kommer turisten ändå uppleva delar av London och troligt är att denne hittar några bakgator och genuina ställen som man annars kanske inte hade stött på.”
“To highlight the risks with ET you can draw a parallel between the tester and a tourist who is going to London for the first time without doing any research at all about the city. In order to discover the city, the tourist wanders aimlessly around the streets hoping to bump into fun and interesting things. The tourist will likely bump into one or another interesting thing, but without any prior knowledge it will be hard to understand what it is and the significance of the discoveries made. This doesn't make the experience very rich or qualitative, and above all the risk is great that the tourist misses a lot of interesting sights, since time flies while he wanders aimlessly. The tourist also doesn't know how big the city is, so it will be hard to know how much there is to explore. If this is translated to testing of software, you quickly understand that big risks appear. Sure, the tourist will still experience parts of London and will probably find some back alleys and genuine places that you otherwise might not have bumped into.”

  • So many things to say about this metaphor. First, very few testers start testing with zero knowledge of the product they are working on and no idea of how it should behave. Even when non-testers are brought in to test the product, they have usually used some similar product that can act as an oracle (a reference that tells you whether the product misbehaves). E.g. if it is a Windows program, there are some parts that all Windows programs share (like how to close it, common shortcuts …), and if it is an app there are other apps out there to compare with. Also, feelings are a good oracle; whether you feel confused, angry, bored, intrigued or excited can tell you a lot about the product.

  • A tourist still knows what preferences she has and what she likes. Is this pretty, was this far from the hotel, did it taste good, and so on. For all those questions we have a gut-feeling answer. She can usually understand some signs (at least those with images) to help her navigate, and if she can't, that is also good information (think usability). She can find interesting areas that she doesn't have time to investigate now, but if she returns she can spend some time on them (future tests). She will understand if she ends up in a dead end and needs to circle back. And day by day she will learn more and more about the product.

  • You can get a lot of information from someone who has no knowledge of the product; everything they find strange is a potential issue from a usability/help-documentation point of view. And if you are a tester hired to test this product, you will have more information than Mr. Random on the street. Pairing a new tester, or Mr. Random, with an experienced tester is a great way to learn about the issues that the more senior tester may have grown blind to after testing the product for a long time.

  • For this to be a real problem we would need to change the human tourist into an alien that has landed in London and is seeing Earth for the first time. It will not know if the air is breathable, won't understand any signs at all, maybe won't realize it has hit a wall, and won't understand whether humans or cars are the dominant species. This, though, would translate to a brain-dead tester, not a new tester.

  • The risk of missing interesting attractions is always there. But we can of course do a risk analysis or something similar to try to find the areas we should visit, and experienced testers need to do that too. When a risk analysis has been done, you can easily assign a new tester to one of those areas instead of letting her loose on the whole product (if that isn't what you want to do).

  • The risk, as I see it here, lies not with the tester doing the ET session but rather with the test lead (or team) planning the sessions and guiding/teaching the new tester.

  • And by the way, it is the back alleys that can be the interesting parts. The famous tourist attractions (Big Ben, the London Eye) can probably be covered by automated checks, since we already know where they are, how they look, and some have been there for a couple of hundred years.

 

“Riskerna med manuella testfallsbaserade tester kan istället likställas med en väl förberedd turist som innan resan till London läser alla möjliga guideböcker och antecknar vad man vill se och göra under sin vistelse. Turisten kollar upp kartan, valuta, restauranger, transportmedel, väder, evenemang, ja allt man kan tänka sig”
“The risks with manual test-case-based tests can instead be likened to a well-prepared tourist who, before the trip to London, reads all kinds of guide books and notes down what to see and do during the stay. The tourist checks the map, currency, restaurants, transportation, weather, events, in short everything you can imagine”

  • This is a really good thing to do as a tester, regardless of whether you are going to create automated checks or run ET. You check old bugs, specifications, similar products, help pages, older versions of the product, expectations and so on, to get a better understanding of what to expect (to get more oracles). This is part of the daily work of learning more about the product so you can make better calls on what and where to test.

  • The weather part here is a good example, since it, like many things in a software project, is impossible to predict. A human running an ET session is probably much better at noticing these differences than an automated check, or than someone running a predefined script (since people tend to become zombies when following detailed instructions).

 

“En stor risk med UT är att fokus endast hamnar på positiva, funktionella tester då testarna inte får någon styrning. Man utgår ifrån det man vet, vilket brukar vara krav eller sina egna erfarenheter om hur mjukvaran ska fungera och vad den ska klara av. Detta är vanligt bland juniora och ovana testare och dessa positiva tester kan i mångt och mycket likställas med ”checking”. Genom Tour testing kan man på ett tydligare sätt få juniora testare att hitta infallsvinklar och angreppssätt samtidigt som man hela tiden bygger på kompetensen och utökar verktygslådan.”
“A big risk with ET is that the focus is put only on positive, functional tests, since the testers get no direction. You start from what you know, which usually is the requirements or your own experience of how the software should work and what it should handle. This is common among junior and novice testers, and these positive tests can in many ways be equated with “checking”. With Tour testing you can, in a clearer way, get junior testers to find angles of approach and ways of attack, while at the same time building competence and expanding the toolbox.”

  • Why should ET lead to a tendency to run only positive tests? If this is the mindset of the tester, how would it differ when creating automated checks?

  • Why shouldn't we be able to control what we test with ET? It is just a matter of deciding and communicating which parts we want to cover and with what focus. This is where the debrief and charter parts of SBTM come into play. In the debrief, the test lead has a chance to hear what has been tested and can give feedback and guide the tester, if she is off track, by changing the charters.
    There are many ways to handle this, and many test conference sessions are about it. You can e.g. use mind maps to plan, heuristics (fallible shortcuts to the solution of a problem) to help remember the different things to test, and dashboards to visualize what has been and is to be tested.

  • Again, there is very little trust in the testers here, and it feels more like an “I know best” mentality. Is this really the case with all junior testers? I would say it depends on the person and on how they are educated and guided, not on the number of years in the testing trade.

  • I don't agree that only running positive tests can be compared to checking. Checking can be done on negative tests as well (here I assume that negative tests are tests where we expect the product to fail), and if you run ET you don't just follow predefined steps; you go where it is important to go (depending on current information and priorities), even if that is only positive flows. This is the core of ET and something a check can't do. And as pointed out, a junior tester doesn't know that much about the product, so if she wants to try something she will not know whether it is a positive or negative test.

 

“En väldigt stor risk med UT som växer med komplexiteten på mjukvaran är att man får en relativt låg test-täckning (coverage) mot vad man borde kunna få med UT. Eftersom människan i sin natur är snäll och gärna vill hitta vägar runt problem så finns risk att testare går runt komplexa delar med knepiga funktioner och testar i de områden/programflöden där de känner sig hemma. Det kan också vara av ren vana som i att ”Jag sparar alltid mina värden genom att använda Arkiv-menyn” istället för att använda de snabbkommandon eller andra mekanismer som finns för spara i applikationen.”
“A very big risk with ET, which increases with the complexity of the software, is that you get relatively low test coverage compared to what you should be able to get with ET. Since man is by nature kind and would rather find ways around a problem, there is a risk that testers avoid complex parts and tricky functions and test the areas/program flows where they feel at home. It can also be pure habit, as in “I always save my values through the File menu” instead of using the shortcuts or other mechanisms that exist for saving in the application.”

  • Again with the condescending tone towards testers. The people I have worked with whom I think of as good testers have never had this approach. This, to me, is comparable with not reporting a critical bug because it is too much “paperwork”. People who think like this are not testers (at least not good ones). If you have them in your organization, you have either made a mistake in your hiring procedure, not guided them properly, or killed their motivation.

  • The issue with defaults and bias is, on the other hand, a real problem. Although in my mind it relates more to testers who have worked with the product for a long while and have developed some habits than to new, hungry testers. But it is a real problem and something to be aware of. Rotating testers between different areas can be a good way to minimize it.

 

“Det kan vara väldigt lätt att grotta ned sig i en viss del av en applikation om man hittar något som verkar avvikande eller helt enkelt drar uppmärksamhet till sig. När man kombinerar sessionsbaserad testning med UT och 80 % av tiden ägnas till denna ”intressanta del” så säger det sig själv att resterande delar inte kommer få lika mycket uppmärksamhet. Hade testledaren avsikten att applikationen skulle testas igenom på en mer övergripande nivå eller med annat fokus kan testet bli missvisande om inte tiden rapporteras på en väldigt låg detaljnivå.”
“It can be very easy to dig yourself into a certain part of the application if you find something that seems divergent or simply draws attention to itself. When you combine session-based testing with ET and 80% of the time is spent on this “interesting part”, it goes without saying that the remaining parts won't get as much attention. If the test lead had intended the application to be tested at a more general level or with a different focus, the testing can be misleading if the time isn't reported at a very low level of detail.”

  • I don't understand this part. One of the reasons you run session-based testing with time-boxed sessions is to have a better view of where you should be spending your time and where you actually are spending it. So what session-based testing can do for you is catch this problem when it occurs, since it will come up in the debriefs, and hence give you a tool to deal with it. Without session-based testing you might end up spending too much time on one area, but if you have a good test lead it shouldn't happen.

  • If the test lead had the intention to test more broadly, the test lead should have said so. I assume the test lead has regular interactions with the testers and doesn't just hand them the test instructions on stone tablets once a month.

 

“Som en lösning på problematiken med riskerna inom UT och manuella testfallsbaserade tester finns metoden Tour testing som utnyttjar fördelarna i de båda metoderna. Med UT hittas generellt fler kritiska avvikelser och lite udda saker, medan man i de manuella testerna minimerar riskerna att missa viktiga områden och centrala funktioner.”
“As a solution to the problems with the risks within ET and manual test-case-based tests, there is the method Tour testing, which uses the benefits of both methods. With ET you generally find more critical anomalies and some odd things, while the manual tests minimize the risk of missing important areas and central functions.”

  • How do the manual tests minimize the risks? The information and risk analysis you base your checks on should also be used when doing ET, to decide where and what to test.

 

The Tour approach is one good way, but don't forget that it is only one way. There are many others, and different approaches suit some products or people better than others.

Drawing the system can be a really good exercise for finding ambiguities, open questions and so on. But don't forget that it is often interesting to find out what happens between the districts/areas, since those boundaries might also be boundaries between different responsible teams.

And just to make it clear, I have nothing against Tour testing. I agree with many of the points made in the article (especially in the benefits and conclusion parts): it is a good heuristic you can use to try to avoid missing parts of the product, and it can be a good way to visualize different areas of the product and create some interest in what the testers do. But it is also important to note that Tour testing is not something that is easy or that anyone can do well. It, too, requires that the tester practices and constantly improves in order to become good at it.

23 Oct

Test vs Vacation

I have started planning my next vacation, and it struck me how similar it is to testing, from planning the trip to the actual travel.
Here are a couple of points that sprung to mind.

  1. Scripted vs exploratory testing – Should you book hotels, excursions and other things from home, or should you wait until you are on site to decide how long to stay at each place, so you can be more flexible when the unforeseen happens (there might be more to see than you first thought, things might be closed …)?
  2. Preparations such as setting up the system, buying HW/SW, finding people with the right competence – Do you need vaccinations, a visa application, your boss's approval of the vacation and so on?
  3. Securing resources, e.g. HW, personnel – Hotels and excursions might be fully booked (especially during holidays)
  4. Learn as much as you can about the system you are testing – Study the culture, fauna, maps, things to do, things/places to avoid…
  5. Learn the system's language – Learn the language (maybe not needed, but it can sometimes simplify things)
  6. Cost vs time – Is it worth paying that bit extra for direct flights, a business class ticket…
  7. Unforeseen things might happen which will affect your time plan – Yep, exactly
  8. Sometimes you have a lot of time to plan the test but not much time to test (e.g. when renting a test lab) – If you are staying one or a couple of weeks you need to plan what to see when, but if you are staying half a year then maybe you can afford (time-wise) to take it as it comes.
  9. Cost vs value (more resources, better tools, hiring test experts…) – Do you want to do day trips, live in fancy hotels and eat good food, or live in a tent and eat fallen fruit?
  10. Outsourcing – Use a travel agency or book it yourself (or stay home and travel through someone's photos and videos, which I don't recommend)
  11. Use of tools – Flight search engines, Tripadvisor, Google Maps, a dictionary…
  12. Late changes can be expensive – Re-booking flights, hotels and excursions is usually costly
  13. Session notes and screenshots help you remember what you did during your tests – A diary is really good for remembering what you did, and of course your photos will help
  14. Blink test – Buy a hop-on hop-off bus ticket to ride around and stop at places that look interesting
  15. Communication is key (talking to developers, stakeholders) – So many hidden treasures can be found by talking to locals, people who have been there…
  16. Lessons learned can be really useful – Updating Tripadvisor and similar sites, or talking to your friends about the trip, can really help someone who is planning to go
  17. You need to consider the risks – Well this is the same
  18. Priority – You usually don’t have time to see and do everything
  19. Ask stakeholders what is important and beware of shallow agreement – It can be bad if you book something your travel companions don’t want to be part of
  20. Budget & deadline – Which you of course also have when traveling
  21. Visualization – I have found that visualizing my time plan and what needs to be done really helps me when planning the trip
  22. Exploratory Testing Tours – Explanation probably redundant
Well, you could probably go on forever with the parallels. Which parallels do you see between testing and travel?
17 Oct

An unscripted unit test tool


Yesterday I ran into a bit of debugger bashing, with a couple of people disowning the debugger with statements like “I noticed as soon as my students got access to a debugger [rather than just ‘print’ statements], their design very quickly deteriorated” and “I often feel that it's much faster to reason around potential causes for bugs than to search for them in a debugger”. Now, these statements are undoubtedly true, but I am a huge fan of the debugger and will use it extensively in just about any programming task that I do. Admittedly, I no longer do programming as my main profession, but I still code small tools in various languages, so this dialogue got me thinking about why I find it useful.

I tried to think about how I use the debugger when I'm coding, and then it struck me: I use it as a tool for unscripted unit testing. I have the feeling that many people, when they think of unit testing, think of small pieces of code that check some other code. I also do this and find it very useful, especially as a way to constantly refactor and improve my design. However, these small snippets of code can be seen as test scripts, and although there is value in these test scripts, there is much more to testing. Also, the code snippets do not really test the code, just check it – I want to test it.

What I do when I code is that, after every few lines of new code, I have a micro test session in the debugger where I explore the new code and can really test it beyond the (simple) checks of my scripted unit tests. For me, this adds invaluable information about the code that I have just implemented, information I feel I could not get otherwise.
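
To make the distinction concrete, here is a minimal sketch in Python (my own illustration, not code from this post; the word_count function is made up for the example). The unittest class is the scripted check: it always asks the same question and only confirms the answer we already expected. The pdb call at the bottom is the unscripted part: it drops me into the debugger, where I can call word_count with whatever inputs come to mind and inspect intermediate values along the way.

    import pdb
    import unittest

    def word_count(text):
        # The code under test (a made-up example).
        return len(text.split())

    class WordCountCheck(unittest.TestCase):
        # A scripted unit test: a check that confirms one expected answer.
        def test_simple_sentence(self):
            self.assertEqual(word_count("one two three"), 3)

    if __name__ == "__main__":
        # An unscripted micro test session: from the (Pdb) prompt I can
        # call word_count("") or word_count("tabs\tand   spaces") and so
        # on, exploring beyond what the scripted check covers.
        pdb.set_trace()
        unittest.main()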

As with most tools, its value depends on how you use it; the debugger might not be a good tool for software design or for hunting down every bug, but as a tool for unscripted unit testing, I find it to be great.
