Usability Testing Basics:
How does a basic user test go down?
Overview: Sure you know that you're supposed to go and test some users. But how? What is the actual flow of events in the user testing process? This document gives you a nice basic outline. Of course, you may want to add or modify steps in your particular scenario, but start with this model as your "default" process.
Phase 0: Preparation:
Before participants arrive, you'll need to have everything prepared and ready to go. Nothing is more embarrassing than having the people there while you're still figuring out why something isn't working right! Here are the basic prep items:
- Get the lab set up: cameras in place, the monitoring station's video and audio tested, camera placement/framing adjusted, and recording and playback functions verified. Make sure you are very familiar with how your usability laboratory is set up and how all of the equipment works!
- Informed Consent: A basic tenet of modern scientific practice is that you are not allowed to do secret, nefarious things to human subjects. And user testing participants are, in at least a peripheral sense, human subjects. Although this isn't a scientific experiment in the traditional sense, it is appropriate to inform subjects of their privacy rights and be explicit about how the data collected will be used; this is standard professional practice in this area. Here is a prototype of an Informed Consent form that you can use. I have marked the sections that you need to edit to the specifics of YOUR test/project in red --- for each such section, I describe in parentheses what kind of info should go there; I have also left in some example text from forms I have used in the past to give you a concrete sample of what you need to fill in. Have these forms all ready to go; you can use one for each participant pair...they each sign on the line.
- Have your lab manuals all printed up, reviewed, and ready to go. Carefully review the guidelines on preparing a good lab manual to keep you on track!
- Recruit your participants. There are several steps here: (1) find potential participants who fall into your target audience and recruit them to help out; (2) gather a bit of relevant background info on each, usually with a short survey form. Exactly what you ask will depend on what is relevant to your particular application. The idea is to collect info that helps you understand what relevant expertise each person brings to the task; this will help you not only pair appropriate people together, but also understand where different performance or breakdowns might be coming from. Common things to ask are contact info (name, cell, email, best times) for scheduling purposes, plus level of experience with technology overall, with similar applications, and with the application you are testing. Add any other background info that you think might be relevant to performance on the app you are testing; e.g., if you're testing Photoshop, you'd ask about previous photography experience. (3) Finally, pair up the participants and schedule them for a time slot in the lab. (A small sketch of one way to keep track of screener responses and pairings follows this list.)
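To make the screener record-keeping concrete, here is a minimal sketch (in Python, purely illustrative) of one way to track participant background info and pair people with similar experience levels. The field names and the 1-5 experience scale are my own assumptions, not part of any prescribed method; a plain spreadsheet works just as well.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    # Hypothetical screener fields; adjust to whatever is relevant to your app.
    name: str
    email: str
    best_times: str          # free-text availability, for scheduling
    tech_experience: int     # 1 (novice) .. 5 (expert): overall comfort with technology
    domain_experience: int   # 1 .. 5: experience with similar applications / the domain

def pair_by_experience(participants):
    """Sort by experience and pair neighbors, so each pair is roughly matched."""
    ranked = sorted(participants, key=lambda p: (p.tech_experience, p.domain_experience))
    # With an odd count, the last person is left over and needs manual placement.
    return [tuple(ranked[i:i + 2]) for i in range(0, len(ranked) - 1, 2)]

roster = [
    Participant("Ana", "ana@example.com", "Mon/Wed afternoons", 4, 2),
    Participant("Ben", "ben@example.com", "Tue mornings", 2, 1),
    Participant("Cho", "cho@example.com", "Fri any time", 5, 4),
    Participant("Dee", "dee@example.com", "Thu evenings", 1, 1),
]
for a, b in pair_by_experience(roster):
    print(f"Pair: {a.name} + {b.name}")
```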
Phase 1: When participants arrive.
Ok, so you have everything prepared and set up, and the participants are knocking at the lab door. Time to rock-n-roll! Here is the basic flow:
- Welcome the participants. Be extra nice!
- Make them each read and sign the Informed Consent form that you have ready.
- Explain a little about how it will go down: we'll put you in there in front of the app we're testing, you'll get a workbook of tasks to work through using the app, and it should take around X minutes...but there is no time limit, it's not a race, just take your time and work through things. It's super-important to emphasize that (1) YOU, the designer, are the dummy: you've made a feeble attempt at designing a great app, but you really need their help to get a real end-user perspective; and (2) there is no "wrong" and no "screwing up". Any difficulty or weirdness that happens is YOUR (the designer's) fault...just by being patient and trying to get through despite the poor interface design, they are really helping you. The idea is to fight the psychopathology users tend to have that they are "just too dumb". Make it clear that you are trying to learn from them.
- Ok, now take them into the lab room, get them seated, attach the lapel mics, make sure that everything (video, audio, your app) is up and working well. Make sure the camera didn't get bumped out of position. Start the recording rolling now, before you leave the room.
- Finally, give them the lab manual and tell them they can start. Make sure you emphasize that you want them to actually write down things in the lab manual as they go. Make sure they have a working pencil. Then leave the room, close the door. You're off and running.
- Now observe the test in progress. Take notes as you go; it helps to jot the minute/second within the recording (look at the recorder software screen) next to each note so you can find the spot later when you go back and do close analysis. You should be quiet and just observe. As much as you'd like to, DO NOT jump up and go in there to help them. There are only a few conditions under which you should enter the room: when the test is over (obviously); when something really bad happens, e.g., something crashes or they knock over the camera; and when they get massively stuck and have been thrashing in a dead end for a long time. In that last case, do not tell them how to do it or give hints...just move them on to the next task in the manual. (A small sketch of a timestamped note log follows this list.)
- When the test ends, get in there FAST, before they stand up. They will forget they are wired to lapel mics, and you want to detach those before they run off and break something. Get them unhooked, take them into the observation room (104 proper), and take care of loose ends. If you are having your users do a post-test interview or post-survey, this is where you'd fit it in. Otherwise, just thank them and send them off.
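Because finding episodes later depends on good timestamps, here is a minimal sketch of a tiny observer's note log that tags each note with the elapsed mm:ss. It assumes you start the log at the same moment you start the recording so the offsets line up with the video; the class and method names are just illustrative.

```python
import time

class ObservationLog:
    """Tiny timestamped note log; start it when you start the recording."""

    def __init__(self):
        self.start = time.monotonic()
        self.entries = []  # list of (offset_in_seconds, note_text)

    def note(self, text):
        # Tag each note with the elapsed time since the log (and recording) started.
        self.entries.append((time.monotonic() - self.start, text))

    def dump(self):
        # Print notes as mm:ss offsets so they map onto the video timeline.
        for offset, text in self.entries:
            minutes, seconds = divmod(int(offset), 60)
            print(f"{minutes:02d}:{seconds:02d}  {text}")

log = ObservationLog()
log.note("Pair hunting through menus for the export command")
log.dump()
```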
Phase 2: Analysis and write-up
This phase involves the deeper analysis of the test, figuring out where the breakdowns were and (!!) what it was about your UI that caused them. Some pointers:
- Pop your recordings on a USB stick or ftp them to your laptop. Emailing them doesn't work well...the files are too big.
- There is no need to wait on this stage until all your pairs have been tested. You can start right away once you have data from the first pair!
- Now do your analysis. Here's the basic process that you repeat for each pair/session:
- Review your written notes. If you made good real-time notes, including the mins:secs times at which events happened in the recording, these can efficiently direct you to specific problem episodes on the video records.
- Once you have briefly catalogued them all, you need to carefully analyze each incident. If you're doing this as a team, you can split this task (several incidents each) across team members. However, I recommend that, instead, you split the work by assigning each team member a full session to review. This helps one person "be in tune" with the dynamics of a particular testing run...and see possible connections spanning multiple breakdowns. Or better yet, just do them all together as a team...which also lets you better recognize issues that appear consistently across sessions/pairs of testers.
- Go through each incident carefully. You'll likely need to rewind and review problematic sections a number of times until you're sure what caused the confusion or interface failure. Write down your analysis of each incident carefully: first summarize what you're working on: note the session number, the minute/second count into the session, and briefly describe the overall problem/breakdown that you observed. Then go into your analysis: explain what you think the confusion/problem is, and what in the interface likely caused it. End by proposing one or more solutions/UI modifications to fix the problem. A good practice here is to use your video editor to snip out a brief clip of interaction that illustrates the problem; this will be useful for team review, as well as handy for an upcoming presentation of results. (A small sketch of such an incident record appears at the end of this document.)
- As you process the individual breakdowns, remain alert for deeper patterns: maybe there are several separate but similar incidents that you suddenly realize are really symptoms of the same underlying problem. For instance, maybe users have several episodes where they wander around, searching through the menus for an appropriate command; maybe it's less that each particular menu item is misplaced, and more that the menu titles and overall organization don't fit the users' mental model of how these controls should be organized. In this way, you can get to broader underlying problems, rather than tweaking away at superficial issues. Whatever you come up with should be well-supported by your evidence -- don't just try desperately to cram things into abstract categories.
- Come together as a team to discuss what each of you has found. Talk through each analysis, review some clips, and try to recognize the same issue occurring in multiple pairs. Consider the potential solutions, and decide on what changes (if any) you are going to make. If there are several, prioritize them based on severity.
- Go off and make the changes. This isn't pure science, it's practical; you don't need to keep "conditions unchanged throughout the experiment"!! So if you see a major problem, especially one that several test pairs have run into, go ahead and change the software midstream...you may as well get good data for upcoming tests!
- Write up your results and prepare your usability presentation.
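As promised above, here is a minimal sketch of the per-incident record described in the analysis steps (session, time offset, observed breakdown, suspected cause, proposed fix), plus a simple severity-based ordering for the team meeting. The 1-3 severity scale and the field names are assumptions for illustration, not part of any prescribed method.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    session: int        # which pair/session the incident came from
    timestamp: str      # "mm:ss" offset into that session's recording
    breakdown: str      # brief description of the observed problem
    cause: str          # your analysis: what in the UI likely caused it
    proposed_fix: str   # suggested UI modification(s)
    severity: int = 2   # 1 = cosmetic, 2 = slows users down, 3 = blocks the task
    clip: str = ""      # optional path to a snipped video clip illustrating it

def prioritize(incidents):
    """Most severe first, so the team discusses blockers before cosmetic tweaks."""
    return sorted(incidents, key=lambda i: i.severity, reverse=True)

incidents = [
    Incident(1, "07:42", "Pair searched three menus for the export command",
             "Export lives under Tools, which clashes with users' mental model",
             "Move Export under the File menu", severity=3),
    Incident(2, "12:05", "Label text truncated in the settings dialog",
             "Fixed-width label field", "Let the label wrap or widen the dialog",
             severity=1),
]
for inc in prioritize(incidents):
    print(f"[severity {inc.severity}] session {inc.session} @ {inc.timestamp}: {inc.breakdown}")
```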