by Igor Gubaidulin, January 4, 2017
You can’t run away from technical problems in usability testing. UX expert Igor Gubaidulin at Nortal shares his five main rules for keeping calm and carrying on.
A few weeks ago, I carried out three days of usability tests of a mobile application in Lithuania. The testing was going smoothly and easily. However, during a few sessions the participants ran into technical errors – the back-end of the mobile application went down. I later learnt that our partner’s developers had planned the outage, but I hadn’t been warned beforehand.
If you aren’t familiar with the term usability testing, it means taking representative users of a product or service and seeing how they complete tasks while observers watch, listen, take notes, and gather qualitative and quantitative data. The goal is to step out of the bubble that the product’s makers and the clients live in. They can then see which problems users can and will run into, how common those issues are, how users proceed through their journey, and what they like about the product or service in general.
Back to the sudden failure of the app in Lithuania during the testing. This was not the first such ordeal I had come across, so I wasn’t fazed and finished the testing as well as was possible. Still, it was a stressful situation, and for someone with less experience it could have been a real shock.
I have read many books about conducting usability testing, but most of them fail to mention what you should do when something doesn’t go according to the predefined plan. So I decided to write this post and share some insights from my own experience.
First of all, don’t panic. If you’ve already started panicking, stop.
Users depend on you, so if you start to panic, they will follow your example and the situation will become even worse. You might even need to stop testing, which means that all the useful data from the session will be lost, and as your company’s representative you’ll leave a poor impression. Furthermore, if you are giving participants gifts for testing, you’ll be giving one away for nothing.
It’s important to remember that users usually don’t know much about the usability testing scenario or what they should expect. I advise you to simply act as if the whole thing was planned.
In the Lithuanian example, the crash led to one participant not seeing any reason why he should update to the new solution he was testing, since it seemed to work worse than his old solution. I could have stopped then, since the test results were going to be much worse for all the wrong reasons. However, you can always turn a negative situation into a positive learning experience, so I just kept going.
The simple rule is that if the user gets an error, keep asking questions: what do they think happened, how do they feel about it, and what would they do next?
All this information is very useful. It lets you observe how users act in extreme or uncomfortable situations. Think of this sort of event as a gift to you as a tester: take it, investigate it, and get everything out of it that you can.
Such errors also help you check whether the user interface (UI) is ready to deal with failures – for instance, whether it shows a clear error message and a way to recover, or just a blank screen.
Murphy’s law says that anything that can go wrong will go wrong. Always be prepared for the possibility that things won’t go according to plan.
The testing methodology for the Lithuanian mobile app was created by the person responsible for the same testing in Estonia, and unfortunately they had no plan B in case something deviated from the predefined plan. Luckily I had a little something prepared and pulled out 10+ backup testing questions.
In general, you can always have a clickable image prototype prepared, built in InVision, or wireframes built in Balsamiq or Axure. Once, when the tester app crashed, I used a paper prototype in the form of printed screenshots. It doesn’t matter much what your backup is, but you must have one.
And this isn’t limited to back-end errors by any means. Don’t forget to have a plan B for when the internet, software (such as the browser or screen-recording tool) or hardware fails you.
When the developers in Estonia decided to update the app version without notifying me, I was surprised. Perhaps I was naive to think that developers knowing a test is in progress would be enough to keep them from making changes, but as this goes to show, it wasn’t.
Usually you will work on a project with a team and need to rely on other people, so before testing, ask one of the technical people on your team to be on call in case the plan derails. Don’t forget to discuss and set up in advance any activities you might need during the testing, such as maintenance or a server restart.
Before every day of sessions, I strongly recommend informing developers and other technical staff about your plans, asking about theirs, and adjusting accordingly. Communication is key. So don’t just have a plan B – have other people readily available to help you when the time comes.
But don’t let the user realise that something went wrong and that you’re desperately trying to fix it. Ask your questions, listen to the answers, and keep your chat with the person “on duty” in the background.
It might sound like a cliché, but each of the situations mentioned above taught me a lesson. Turning an unforeseen event into a positive, educational experience depends on being ready for anything.
The case of the Lithuanian mobile app was a great reminder that it doesn’t matter who created the testing methodology – I was responsible for the testing’s success, and I should have gone over plan B with the team beforehand so we would all have been on the same page.
So, if and when something goes wrong: don’t panic, continue testing, inform your team, and think afterwards about what you can learn from it. And, of course, always have a plan B.