This blog has been moved to Redwerb.com.

Sunday, November 25, 2007

Qualitative vs Quantitative Results in Usability Testing

As regular readers might already know, I am taking a certificate program in User-Centered Design. This quarter I am taking a class on usability testing.

One of the things I've learned in the course is that there is an ongoing debate over qualitative vs. quantitative results. For the sake of this post, I'll define them as follows:

Qualitative - descriptive, subjective. Not easy to provide objective measurement.

Quantitative - numerical, objective. In sufficient quantity can provide statistically significant facts.

As you can probably imagine, most people would like to see quantitative results. They are easy to understand, they show just how important a particular issue might be, and they make for cool graphs. However, due to budget and time constraints, it is extremely uncommon for a usability test to have enough participants to achieve statistical significance. Without statistical significance, are quantitative results useful?

I have to say emphatically no. Quantitative results are completely useless within a usability test. The goal of usability testing should be to find flaws in the design of the software (from the user's perspective). These flaws should be tracked and prioritized the same as any other defect within the application (hopefully you use issue/bug tracking software).

Usability testing should focus on qualitative feedback. There are many techniques that you can use to elicit feedback from a user (for example, the think-aloud protocol). Post-study questionnaires can also be useful, but the questions should be open-ended. You can ask a participant to rate the software, but only to lead them to the next question, which should ask why they rated it the way they did (the rating itself is useless without statistical significance).

Furthermore, when reporting the results of usability tests, it is important not to imply any kind of statistical significance. For example, don't say that 20% of the participants failed to complete a task when you only had five participants. For all you know, the one participant who failed is the only person on the planet who would fail, or perhaps the other four just got lucky. Even mentioning something like 1 out of 5 can be dangerous. Always make sure that the people reading your report understand the limitations of the data.
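To put a number on just how little "1 out of 5" tells you, here is a quick sketch (not from the original post) that computes a standard 95% Wilson score confidence interval for an observed failure rate of 1 in 5. The `wilson_interval` helper is my own illustrative function, not part of any usability toolkit:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score confidence interval for a proportion.

    successes: number of observed events (here, task failures)
    n: number of trials (here, participants)
    z: z-score for the desired confidence level (1.96 for ~95%)
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# One participant out of five failed the task.
lo, hi = wilson_interval(1, 5)
print(f"Observed 20% failure; plausible true rate: {lo:.0%} to {hi:.0%}")
```

With five participants, the interval runs from roughly 4% to 62%. In other words, the data is consistent with the task being a minor annoyance or a showstopper, which is exactly why reporting "20% failed" is misleading.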
