Usability Testing. Oh, The Things You Can Learn
I can't say exactly when it happened. Somewhere along the way, the simple technique of usability testing became a validation process. Users were put in front of the screen, asked to perform tasks, and measured, like rats locating cheese in a maze. The results of such studies were statistics, such as "5 out of 13 subjects completed 75% of the tasks successfully" and "the average task time was 4 minutes, 22 seconds."
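When validation is the whole point, the analysis amounts to little more than this minimal sketch (in Python; the session records and numbers are invented for illustration, not from any real study):

```python
# Invented session data; each record holds tasks completed, tasks
# assigned, and per-task times in seconds. A real study would have
# one entry per participant.
sessions = [
    {"completed": 9, "assigned": 12, "times": [262, 180, 301]},
    {"completed": 6, "assigned": 12, "times": [240, 310, 275]},
]

# A subject "passes" if they completed at least 75% of their tasks.
passed = sum(1 for s in sessions if s["completed"] / s["assigned"] >= 0.75)

# Average task time across every task in every session.
all_times = [t for s in sessions for t in s["times"]]
avg = sum(all_times) / len(all_times)

print(f"{passed} of {len(sessions)} subjects completed 75% of the tasks")
print(f"average task time: {int(avg // 60)}m {int(avg % 60)}s")
```

The numbers come out; what doesn't come out is why anyone failed, or what to change.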
Because such studies required working products to test the subjects against, they were often deferred to the very end of the development cycle. The statistics became pass/fail criteria. However, when users failed to complete the tasks, there was often little the developers could do about it, having already spent their available schedule building the product.
At best, the information would help support reps answer questions and guide training materials. At worst (and all too often), the report detailing the results would be filed, unread. The team would go on to create more designs, without any clear insights to guide them toward improvements.
Preventing Usability Problems in the First Place
If you trace any usability problem to its inception -- the point where the problem was introduced into the design -- you'll find the same underlying cause: someone on the design team didn't have a key piece of information. Had they had that information, they would've made a different design decision. That decision would, in turn, have produced a different design -- one without the usability problem.
The most successful teams have learned that the best way to produce a usable product is to make informed decisions from the outset. They don't look at usability testing as a final validation tool. Instead, they see the technique as a method to learn the necessary information to create great designs in the first place.
What Teams Need To Learn
Usability tests can't tell you everything you need to know to make every decision right. However, if you pay attention to all the clues, you can learn a tremendous amount.
You'll learn about your users: What are their goals and how do they go about achieving them?
You'll also learn about your design: How well does it assist your users as they attempt to achieve their goals? Where does the design get in the way?
Moreover, you'll learn about something we don't see talked about very often -- your team: Which members come to the table with the knowledge and experience necessary to create great designs? What areas do you need to augment with more input from users? You can find out a lot about your team's strengths and weaknesses by looking for clues hidden throughout the testing process.
It Starts With Recruiting
Recruiting testing participants is a time-consuming process, which is why many teams look to hire a professional recruitment firm. However, teams that recruit their own participants find they can gain useful insights they can't get any other way.
To start the recruiting process, we first need to decide the type of user we'd like to see in our tests. Most of the time, the team isn't going to select the first folks who come along. Instead, we'll assemble the criteria for selecting participants.
To select the criteria, we need to describe our perceptions of who our users are and, just as importantly, who they are not. The design team at a computer manufacturer told us they were convinced their web-site users were people less confident in choosing their own computer configuration -- the more experienced shoppers would call the sales center and order directly. To reduce calls to the center, they wanted to focus their testing on the more experienced shoppers.
To recruit participants, the team divided up a list of customers, and each member interviewed prospective shoppers to see whether they fell into the experienced category or the less confident one. To their surprise, they discovered the more experienced shoppers favored the site, while the less confident shoppers preferred the handholding of the sales representatives. Like many such discoveries, this one feels obvious once you say it aloud, but you'd be surprised (or perhaps you wouldn't) how easily we convince ourselves of things that turn out to be just plain false.
Talking to candidates during the recruitment process can help answer important questions, such as: What makes a non-user not use the product? Is it because they don't need it? Or is it because it's missing some key features that would make it accessible to that user? Your design decisions will change based on the answers to those questions.
There are long-term benefits to doing the recruiting yourself. We often recruit participants using key differentiating attributes, such as previous experience (Do they currently use our competitor's product?) or regular activities (Will our product help them with something they do every day?). As we conduct more tests, we can start to map those attributes onto the behaviors we see. For example, do users with a particular experience use our designs differently from users who don't share that same experience?
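As a rough sketch of what that mapping might look like -- the attribute name, the metric, and the numbers here are all assumptions for illustration, not from an actual study -- you can group participants by a screening attribute and compare a behavior between the groups:

```python
from collections import defaultdict

# Hypothetical participant records: one attribute captured during
# screening, plus an observed behavior from the test session.
participants = [
    {"uses_competitor": True,  "tasks_completed": 10, "tasks_assigned": 12},
    {"uses_competitor": True,  "tasks_completed": 7,  "tasks_assigned": 12},
    {"uses_competitor": False, "tasks_completed": 11, "tasks_assigned": 12},
    {"uses_competitor": False, "tasks_completed": 12, "tasks_assigned": 12},
]

# Group task-completion rates by the screening attribute.
by_attribute = defaultdict(list)
for p in participants:
    rate = p["tasks_completed"] / p["tasks_assigned"]
    by_attribute[p["uses_competitor"]].append(rate)

for attr, rates in by_attribute.items():
    label = "competitor users" if attr else "non-users"
    print(f"{label}: mean completion rate {sum(rates) / len(rates):.0%}")
```

Over several rounds of testing, a tally like this starts to show whether a recruiting attribute actually predicts behavior, or whether it's a distinction that doesn't matter.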
Designing Tasks Can Be Informative
In the usability study, we'll need to assign the users tasks to perform. Sometimes, we construct these tasks to ensure the users hit certain features during the session. In more exploratory testing, we'll keep the tasks more "open-ended", to encourage users to show us their overall approach to problem solving.
No matter how we construct the tasks, we have to start the same way: by separating the results from the process. On a travel web site, the user needs to book a flight or check the rates. However, those aren't the user's goals. The user wants to get to a meeting or have a great vacation. Finding an affordable flight is just the process for achieving those results.
Good task design forces us to think about the results separately from the steps. Asking users to plan their dream vacation may take them down paths we don't expect, such as wanting to compare pictures of the beaches near hotels in multiple vacation spots before deciding on a destination. It would be easy for the team to assume users know where they want to go when they sit down at the site, when in fact they may be looking for far more help selecting their trip than the team realizes.
To design the task, the team needs to openly discuss what the user will require to start. In testing a tax preparation application, we needed all sorts of tax-related details. Preparing the tasks required us to talk about how "messy" many users' tax info can be, which helped us understand how we'd deal with incomplete or conflicting information.
Similarly, we'll need to talk about the outcome of the task. What does it mean to be done? This gives us a chance to discuss what users will do after they use our design. For example, when purchasing a computer, users often needed to get spending approval from someone else, such as a manager or a spouse. In talking this through, our team realized we needed the capability to print or email a configuration before purchase, then return to it later for changes and the final sale.
Participating In The Test
In the process of creating a design, we find ourselves staring at it for long periods. This has the undesirable side effect of conditioning us to its elements, making us comfortable with things real users may not find so natural.
The biggest benefit of the usability test is it allows us to see our design through the users' eyes. When they sit in front of the design, they don't benefit from the hours of thought we've already given it. Therefore, our observation of their reaction to the design can tell us where our assumptions have led us down the wrong path.
We also learn where we got things right. While many final reports focus on the problems in the design, knowing all the places the design worked well is extremely important. It tells us when we can trust our gut and our experience.
Recently, we had the opportunity to test a design with 72 participants. It was a large investment, and the study took months to complete. However, when we were done, we saw some fascinating patterns in how people approached complex problems. Those patterns will go a long way toward helping our team innovate new and powerful features in the future.
Learning from Analyzing
25 years ago, if you asked me what the purpose of analyzing test results was, I would've told you it was to take the things we learned during the test and put them into a form the team could easily use. In other words, you'd learned everything you were going to learn by the time you started the analysis.
However, my experience has taught me this couldn't be farther from the truth. In many projects, we start the analysis without having a clue what we've learned thus far. It isn't until we start to list all the little details, often on stickies or index cards, that we really begin to see what we've uncovered.
Preparing the test results is akin to assembling a story. You need to organize the characters, discover the plot, and set the scene. Sometimes, the characters take on a life of their own and take you in a direction you never expected.
The same is true when analyzing the data: sometimes patterns emerge that you couldn't see as the testing progressed. In one study, it wasn't until we had the stickies all over the wall that we noticed every user had used the same term to describe a sub-goal. It was a term we'd never used ourselves, so we noted it every time we heard it. What we hadn't realized was that we'd heard it from every single user. The design subsequently changed to include that term.
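Here's a tiny sketch of how such a pattern can pop out of the notes -- the note format and the term are invented for illustration -- by tallying which participants used each term at least once:

```python
from collections import defaultdict

# Hypothetical session notes: (participant, term heard), one per sticky.
notes = [
    ("p1", "shortlist"), ("p2", "shortlist"), ("p2", "wishlist"),
    ("p3", "shortlist"), ("p4", "shortlist"), ("p5", "shortlist"),
]

# Which participants used each term?
speakers = defaultdict(set)
for participant, term in notes:
    speakers[term].add(participant)

total = len({p for p, _ in notes})
for term, who in speakers.items():
    if len(who) == total:
        print(f'every participant ({total}) used the term "{term}"')
```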
The most successful teams don't see usability testing as a discrete, one-time activity. They treat it as an ongoing one, running through all phases of the project. Seen this way, analysis isn't just about looking at the results from this particular round. It's also about integrating what you learned this time with what you've learned in the past. Treating the testing results as an ever-growing library of knowledge is an extremely effective approach.
via UIE