Usability Testing Basics:
Communicating results of an Assessment testing round
Overview: Assessment testing is probably the most common sort of usability testing that you'll do after a project passes the prototype stage, and is also one of the most useful in terms of quickly refining the interface. Like most things, however, it's fairly useless if you can't communicate the results well, i.e., your insights on what went wrong and your suggestions for UI changes to address the problems. Whether you're doing the written report or a presentation, the goals are the same: you need to convince your audience that you did a thorough analysis and found some real issues (vs. just flukes/one-off problems), and then give a compelling analysis leading to potential fixes.
Let's start with a description of what a written report looks like; discussion of live verbal presentation of results is further down.
Written Usability Test Report
In many organizations, you'll be expected to do a written test report for each round/batch of testing that you do. Since not everyone on the project/implementation team will have been present at the tests, you need to get everyone on the same page, then show what you found. Here's an outline of the basic pieces of the story you'll need to tell:
Introduction: Get everyone on the same page
- Project intro. If you're testing some element of a larger project, e.g., one part of a large complex application or one element in a suite of apps, then you'll want to briefly introduce and describe this larger context. So if I'm testing Microsoft Word, I'd talk briefly about the nature and goals of the Office suite. If I'm testing a filtering tool that's part of a larger image editing app, I'd introduce that overall app to help establish the context. If you are just testing one small standalone app, you can maybe say a few words about the overall domain (competing apps, number of users, potential market) to help us understand the audience.
- Intro the element/application that you've been testing. Begin with a sentence or two saying what the app is, who made it, when it was first introduced, and how it has evolved across versions. A well-chosen screen shot of the app in action is critical here to help us understand what we're talking about. Then state its main goal, along with a bullet list of the 4-6 central functionalities from the user standpoint, i.e., the key things users want to do with this app.
Assessment Testing Intro
- Next intro your testing approach. Say some words about user-centered design, i.e., that you are using it and sticking to its key notion of aligning the design closely to end-user needs throughout the process. Then you can intro assessment testing as a strong element of this approach, based on putting real users in front of a prototype and analyzing how they work through real tasks from the usage domain. Introduce constructive interaction as the variant of assessment testing that you've chosen and explain why.
- Goals -- Start with an overall statement of the goals of this testing round. This could be something general like "General testing of the app as a whole for overall user satisfaction and efficient access to key features", or more specific like "Testing of the User Profile module after recent redesign by the implementation team". Then bullet out the 3-5 key "questions" that you hope to answer with this test; these should be aligned with, and clearly answered by, the series of exercises that you had users do in the lab manual. You (hopefully!) didn't randomly choose the tasks presented in the lab manual, so here you illuminate your rationale. Things like "Verify that users are able to use the new login screens effectively", "Verify that users are able to apply the new psychic-aura filter package to images", "Assess whether the new redesign of the reactor pump control interface resolves Issue3.2.2 exposed in the UI2016-01-17 round of testing". Again, these are the same goals that drove development of your lab manual.
- Method -- Describe exactly how you went about the testing: how you recruited potential participants, how you screened them and/or gathered background info, how they were compensated (if applicable)...and then the whole process step-by-step: the circumstances the test took place in, what you told participants in advance of testing, what materials you gave them (allude to the lab manual included as an attachment), the equipment setup, how long the test ran, and how data was recorded.
Testing Outcomes
- Test Outcomes Overview -- Begin with an overview of the testing outcomes: how many subject pairs were tested, in how many blocks/sessions, and over what period of time. Follow this up with a table listing (anonymously) overview info for each test session: date/time, sexes of the participants, how long the session took, and a notes column for any relevant weirdness on particular tests (e.g., "crashed sys twice"). A hypothetical example of such a table appears at the end of this list. End this subsection with a detailed description of the process by which your team analyzed the results.
- Results Overview -- Now you want to get into the real meat. Begin with a summary overview of the main "issues" that you found; these correspond to the "patterns of failure" that you were looking for in your analysis process. So, after a lead-in sentence or two, you could have a table where you briefly describe each problem area, give it a severity rating (minor, irritating, fatal...something like that), note how many total breakdowns you observed in that problem area, and then how many pairs (2/3, whatever) had problems with it. Also give each area/issue a short label for easy reference, e.g., Issue2016-01-15:002, where the 2016-xx-xx part is the label for your testing session. (An example issue summary table appears after this list.)
- Red Meat: Right now readers have an overall idea of what problems there were and in what areas, but no clear idea of the exact issues. In this section, you'll work through each problem area, giving at least one specific example of a representative breakdown you observed. An example means: a screenshot of the problematic screen, then a play-by-play of the breakdown that happened, followed by (if you have a clue) a brief thought on what is causing the breakdown, e.g., "it appears that users recognize the 'rotate' button quickly and understand its purpose, but tend to miss it when clicking, often inadvertently clicking the 'full screen' button right next to it". Note that you are not recommending a solution here, just trying to speculate on the problem.
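For illustration only -- all dates, counts, and labels below are hypothetical -- the session overview and issue summary tables described above might look something like this:

Session overview (example):
Session  Date/Time         Pair (sexes)  Duration  Notes
1        2016-01-15 10:00  F/F           48 min    --
2        2016-01-15 14:00  M/F           55 min    crashed sys twice
3        2016-01-16 09:30  M/M           41 min    --

Issue summary (example):
Label                Problem area                                  Severity    Breakdowns  Pairs affected
Issue2016-01-15:001  Login: password-reset link hard to find       Irritating  5           3/3
Issue2016-01-15:002  'Rotate' button often miss-clicked            Minor       3           2/3
Issue2016-01-15:003  Filter dialog: preview does not match output  Fatal       4           3/3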
Conclusion and Recommendations
- Now you are ready to summarize the results and make your final recommendations. Give a nice little lead-in summarizing your overall thoughts: "Overall testing results indicate that the app/element is functioning quite well; a number of issues were discovered but all were minor". Or maybe your outcomes tended more towards "It appears that the overall metaphor and approach were not problematic, but we documented many moderate to severe usability problems that will significantly impact effectiveness and user satisfaction. Fortunately, our analysis suggests that a few relatively simple UI changes have a good possibility of improving outcomes".
- Next, go through each of the Issue areas one by one. I like to do these as 3-row tables: Issue description (copied from above), Outcomes Overview (copied from above, plus mention of severity/frequency), and then Suggested Improvements; a hypothetical example of such a table follows this list. In the last row you bullet out the change or changes that you feel might improve the situation. If you have several possible ideas, list them all, indicating priority/likelihood of success; the implementors can choose which to try. In some cases, the implementors (also you, for this class) may have already made some of these changes mid-testing. If so, add a Notes row mentioning that, plus whether the change appeared effective in the remaining tests.
- Finally, close the report with overall statements, plus a recommendation for future testing: if everything was good, you could say "proceed to next integration/release stage"; if there were tiny issues, you could say "make minor changes, but further testing can wait until integrated with other elements"; or something like "due to the significant problems found, we recommend significant modification to attempt to address the issues observed, followed by further testing to verify efficacy of changes".
- Attachments: Always attach a (blank) copy of the lab manual you used for testing at the back of the report, for reference.
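As a purely hypothetical sketch (the issue, numbers, and fixes below are invented for illustration, reusing the 'rotate' button example from above), one of these per-issue tables might look like:

Issue: Issue2016-01-15:002 -- Users recognize the 'rotate' button but often miss it when clicking, hitting the adjacent 'full screen' button instead.
Outcomes Overview: Minor severity; 3 breakdowns observed across 2/3 pairs; recovery was quick but visibly irritated users.
Suggested Improvements:
  - Enlarge the 'rotate' button and add spacing between it and 'full screen' (high priority, most likely to help).
  - Move 'full screen' off the editing toolbar entirely, e.g., into a view menu (lower priority, needs design review).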
NOTE: As you might have noticed, much of the intro material is formulaic; this is a report with a fairly standard structure that will apply to any similar testing process. This is why it pays to come up with a good "template" as you do your first report (headers, section headings, tables, formatting)...then you can simply edit it to replace/update "the meat" when writing up reports for future assessment tests!
Verbal Presentation of Usability Testing Results
Creating an effective live presentation of your testing results has essentially the same rhetorical goal as the written report: you have to set the stage, provide clear and convincing evidence of the breakdowns you discovered, and give your recommendations for moving forward. The difference is one of priorities and focus: where the written report needs to very carefully document everything (as the official record), your goal with the verbal presentation is to get to the meat quickly and efficiently. A huge advantage here is that you have much stronger multimedia support: you can easily throw up screenshots and short clips from your tests...pictures that can save 1000 words.
You have a total of 10 minutes or so; you have to be practiced, clear, and efficient. Here is an outline:
- Intro: Cover basically the intro material from the written version, but in abbreviated form. One slide overviewing the project, the app, and market stats. Then a nice screenshot of the app in action showing its key look and feel; here you can tick off the main user-level tasks the app needs to support. Then a slide that overviews your assessment testing.
- The Insights: Ok, the stage is set, let's get to it! Start with the summary table of issues found and walk through it briefly. Then briefly cover each issue (or, if there are tons, a few of the most severe ones): throw up a screenshot of the problematic UI and verbally describe the problem using it. Then play a sample clip of testers that illustrates the problem; we're talking <20-second clips usually, unless it's really necessary to show more. You want to keep the flow going. Then move on to the next one.
- Wrapping up: Bullet out your recommendations. For each issue, have the one-line title of the issue, then a sentence about the proposed fix(es), all of which you walk through quickly. Close with your overall recommendation: looks good, significant problems that need resolving, minor problems but development can continue, etc.