Last week I did a session for the web teams on usability testing. Unfortunately I only had one laptop and the room was too small to get people sat in groups of 2 or 3 to try out moderating a usability test first hand. So instead I gave an overview of user testing including:

  • Why, when and what to user test;
  • The different types of user testing and their pros and cons;
  • What’s involved in planning and carrying out user testing;
  • Measuring usability;
  • Analysing the data.

I’m particularly interested in what user testing methods we can use to effectively benchmark user tasks on our website, before and after we make improvements (there’s a rough sketch of the kind of comparison I mean after the list below). I’m also interested in what testing methods might be suitable given some of the challenges we face as a local authority web team, for example:

  • Limited resources (both budget and staff);
  • Tight schedules, usually tied in with development sprints;
  • Diverse target audiences with different goals and motives for using the site;
  • Users spread over a reasonably wide geographic area (often rural), making it harder to travel to participants;
  • A very wide variety of information and transactions on the site.
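
To make the benchmarking idea concrete, here’s a minimal sketch of the kind of before-and-after comparison I have in mind: a two-proportion z-test on task completion rates. This is only an illustration – the function is my own and the figures are invented, not output from any testing tool.

```python
# Minimal sketch: comparing task completion rates before and after a
# site improvement with a two-proportion z-test. All figures invented.
from math import sqrt

def completion_rate_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two task completion rates."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# e.g. 12 of 20 participants completed the task before the redesign,
# 18 of 20 after it
z = completion_rate_z(12, 20, 18, 20)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at the 95% level
```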

So I covered off three types of user testing…

Face to face testing

Face to face testing (more commonly known as usability testing or usability evaluation) usually involves a moderator asking a participant to carry out tasks using a prototype or live website, then observing their behaviour. Although usability testing is often conducted in testing ‘labs’, it is possible to do face to face testing very cheaply, in any location. All you need is a laptop or paper prototype, plus pen and paper. But it can be hard to ensure all tests are recorded consistently. There’s nothing worse than trying to moderate and frantically scribble notes at the same time!

You can record sessions, ideally using software like Morae or Silverback to capture the participant’s mouse clicks along with a video of their facial expressions, an audio stream and metrics. In my experience, letting a stakeholder or developer observe the testing, or sending them a link to a video clip of the session, can be a powerful communication tool. I’m hoping we can record our sessions and show clips to colleagues in different Council services to help them gain a better understanding of the problems customers face in finding information relevant to their goals on our website.

Remote testing

There are two approaches to remote testing – moderated and unmoderated. Personally I haven’t carried out any remote testing yet and have been a bit sceptical in the past because:

  • it requires reasonable skill and very careful planning (e.g. coordination of a software tool and facilitation of the session over the phone);
  • you can’t observe the participants’ body language or facial expressions, which can convey loads about their experiences of using your website;
  • it’s harder to put the participant at ease and build up a rapport.

But because of the cost of travelling to our customers and hassle of booking suitable rooms (especially at short notice), extensive face to face user testing is just not feasible on many of our projects. Remote testing offers the opportunity to test with more participants, more cheaply, and it provides metrics. So I do feel it’s time to bite the bullet and give it a go.

Remote, moderated testing

Remote, moderated testing works in a similar way to a web video conference: the moderator can usually see the participant’s screen and facilitates the session over the phone.

Quite often the participant will need to install some software at their end, which can make the process more complex to arrange and therefore potentially unsuitable for the type of participants (i.e. external customers) that we would be recruiting.

There are a number of different software tools that can be used. Some work in a similar way to face to face testing, enabling the moderator and observers to watch the participant’s mouse clicks and cursor movements on the screen.

Remote, unmoderated testing

During unmoderated tests, the participant is required to carry out tasks on a live website or HTML prototype. A web-based tool generates task instructions and sits above or below the website in a separate frame. There are quite a few different tools available. Liz Bacon posted a very useful comparison of remote testing tools on the IxDA site (which I was reminded to dig out again this week thanks to a colleague). The tools can capture valuable metrics to measure efficiency (time on task) and effectiveness (task completion rate and error rate), as well as asking the participant to complete a survey and/or subjective rating questions after each task.
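
As a rough illustration of what can be done with those metrics, here’s a minimal sketch in Python that summarises time on task, completion rate and error rate from a tool’s export – assuming a CSV with one row per participant per task. The column names here are invented; each tool exports something different.

```python
# Minimal sketch: summarising usability metrics from a hypothetical CSV
# export with columns: task, seconds_on_task, completed, errors.
import csv
from collections import defaultdict

def summarise(path):
    # Collect per-task results across all participants
    tasks = defaultdict(lambda: {"times": [], "completed": 0, "errors": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t = tasks[row["task"]]
            t["times"].append(float(row["seconds_on_task"]))  # efficiency
            t["completed"] += row["completed"] == "yes"       # effectiveness
            t["errors"] += int(row["errors"])
    for name, t in tasks.items():
        n = len(t["times"])
        print(f"{name}: mean time {sum(t['times']) / n:.0f}s, "
              f"completion {t['completed'] / n:.0%}, "
              f"errors per participant {t['errors'] / n:.1f}")

summarise("task_results.csv")  # hypothetical export file
```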

As well as identifying some of these tools and learning how to use them to carry out remote tests, we need an easy way to recruit participants. Hopefully by advertising on our website and in libraries, we’ll manage to get a few willing volunteers. I am hoping that remote testing will give us a much broader reach, but I don’t anticipate it will ever remove the need for face to face testing.

Over the next few weeks we’ll be carrying out remote and face to face user testing on our website, so I’ll be able to report back on how it goes and what we learn.

In the meantime here are some good resources on remote testing:

  • User Focus article;
  • Boxes and Arrows article.

There’s also a new book coming out on the subject of remote testing soon.

Pros and cons

Face to face

Pros:
  • Can ask follow-up questions;
  • Facial expressions and reactions convey more to the moderator;
  • Stakeholders can observe.

Cons:
  • Unnatural context – participants may feel uncomfortable;
  • Participants keen to please you;
  • Cost of room and facilities.

Remote – moderated

Pros:
  • Reach geographically dispersed participants;
  • Reduced costs of travel expenses, room booking etc.

Cons:
  • Needs practice to moderate and handle the technology;
  • Harder to establish rapport;
  • Can’t see non-verbal cues;
  • Participant may need to install special software.

Remote – unmoderated

Pros:
  • Reach geographically dispersed participants;
  • Context more natural to the user;
  • Larger sample to measure time on task and task completion rate.

Cons:
  • Needs careful preparation;
  • Can’t see how the participant behaves – do they make a cup of tea or answer the phone in the middle? Realistic, but it skews the data.

This content is published under the Attribution-Noncommercial-Share Alike 3.0 Unported license.
