Measures & Guidance > Usability > Usability Interviews & Task-based Usability Testing
Usability issues are often best understood through a combination of interviews and task-based usability testing.
Critical Incident Technique Interviews
Sometimes, teams may be most interested in learning about usability issues that emerge only in complex, real-world situations and that are hard to reproduce in lab-based usability evaluations or other controlled settings. For this, interviews that elicit details of past events can be most effective, although they are limited by people’s ability to recall information.
Example questions ask respondents to recall a time when they engaged in a certain behavior. For example, “tell me about a time you used an app in your job.” This prompt differs slightly from “tell me about the last time you used an app in your job.” A critical incident variation could be “tell me about a particular time you used an app in your job where it did not help you accomplish your work.” Learn more about Critical Incident Interviewing from the Nielsen Norman Group.
Task-based usability testing
Usability evaluations often involve asking participants to complete one or more tasks using a product or service. This could be using the baseline intervention/implementation strategy/app, using partial or complete prototypes of the redesign, or using the newly redesigned intervention, implementation strategy, or supporting artifacts. After each task, researchers might present participants with a rating scale or ask follow-up questions, though if this interrupts the flow of the session, you may save these questions until all tasks are completed. You may refer to this example of a task-based usability testing protocol from a UWAC project.
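Because post-task ratings are often collected on simple scales, it may help to see how such ratings could be summarized across participants. The sketch below, in Python, assumes a 7-point post-task ease question (in the spirit of the Single Ease Question, which this guidance does not specifically prescribe); the task names and ratings are invented for illustration.

```python
# Hypothetical sketch: summarizing post-task ease ratings.
# Assumes a 7-point ease question asked after each task; the task
# names and ratings below are invented examples, not real data.
from statistics import mean

# ratings[task] = list of 1-7 ease ratings, one per participant
ratings = {
    "Task 1: schedule a session": [6, 5, 7, 4, 6],
    "Task 2: record a homework assignment": [3, 4, 2, 5, 3],
}

for task, scores in ratings.items():
    # A low mean ease rating flags tasks worth probing in the debrief.
    print(f"{task}: mean ease {mean(scores):.1f} (n={len(scores)})")
```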
For tasks that involve collaboration (e.g., a session between a clinician and a patient), it may be necessary to have a researcher take on one of the roles. This increases internal reliability but decreases external validity.
Task design. Designing appropriate tasks requires practice and iteration. If a task is unclear, you may uncover usability issues with your task design rather than with what you are studying! However, if the task mirrors too closely the language of what a participant must do (e.g., if you tell them to click the button labeled “search”), the task is leading, and you may not uncover key usability issues.
Think aloud protocol. As we cannot read people’s minds, participants are often asked to think aloud while working through tasks to help researchers learn as much as possible. This can help researchers learn what a participant is considering doing next and why, better understand their in-the-moment goals, and identify misconceptions. To incorporate think aloud in your interview guide, include instructions for the facilitator to give to the participant about the think aloud process. The facilitator should then demonstrate the technique with an unrelated task so that respondents understand it as well as possible. The participant may still forget (especially when concentrating hard on a task!), and it is often necessary for the facilitator to encouragingly remind participants to think aloud. Even with reminders, some respondents may find it distracting, or it might not be contextually appropriate to speak before fully processing behavior. In these cases, it is not worth pushing the technique; instead, probe respondents on their task experience after they have completed their tasks. For example, you can ask a respondent to walk you through how they accomplished their task.1
Additionally, think aloud protocol is not well suited to tasks that require speaking (e.g., talk therapy or interacting with a voice assistant). In these situations, an alternative is to record the task (e.g., video, screen recording, audio) and then play it back to participants, asking them to describe what they were thinking at the time. This retrospective think aloud is less reliable than in-the-moment think aloud, but sometimes it is the best compromise we can make.
Facilitation. Participants asked to complete tasks may feel like they are being evaluated, especially if those tasks parallel anything they might have to do for certification in a therapy or anything related to their professional expertise. As a result, it is even more important for facilitators to remind participants that the intervention, implementation strategy, or artifact is being evaluated, not them.
When testing new designs (or existing designs with significant usability issues), it is also not uncommon for participants to have interactions that frustrate them. To an extent, it is valuable to allow this frustration to continue so you learn how the participant would navigate the barriers. If participants ask for help, the facilitator might at first turn the question back to them: “what would you do if I were not here?” However, the facilitator should use their discretion in offering assistance that keeps the session moving or prevents frustration from becoming so great that the rest of the session is lost.
Although much task-based usability testing has historically been applied to digital technologies, the approach is quite relevant to complex psychosocial interventions such as client-facing interventions and implementation strategies. As one example, the Usability Evaluation for Evidence-Based Psychosocial Interventions (USE-EBPI) method specifies how “lab-based” user testing (one of an array of sub-methods specified within USE-EBPI) can be completed for interventions such as psychotherapies.
- Introduce purpose of study, what you’re hoping to observe and learn, and obtain consent
- Pre-test interview to ask questions about first impressions, demographics, and experience with similar products
- Describe task 1
- Respondent performs task 1
  - Describe subtask 1a
  - Respondent performs subtask 1a
  - Describe subtask 1b
  - Respondent performs subtask 1b
- Post-task interview to debrief on what was observed during the task and subtasks (and to reduce the cognitive load of recall)
- Describe task 2
- Respondent performs task 2
  - Describe subtask 2a
  - Respondent performs subtask 2a
  - Describe subtask 2b
  - Respondent performs subtask 2b
Box 2. Sample sequence for usability test
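As a rough sketch, the sequence in Box 2 can be written down as a facilitator checklist so that every session follows the same order and no step is skipped. The Python below is one hypothetical encoding; the step wording and data structure are assumptions for illustration, not a prescribed format.

```python
# Hypothetical sketch: the Box 2 sequence as a facilitator checklist.
# Step wording is illustrative, not prescribed by this guidance.
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    subtasks: list[str] = field(default_factory=list)

session_plan = [
    "Introduce purpose of study and obtain consent",
    "Pre-test interview (first impressions, demographics, experience)",
    Task("Task 1", ["Subtask 1a", "Subtask 1b"]),
    "Post-task interview / debrief",
    Task("Task 2", ["Subtask 2a", "Subtask 2b"]),
]

# Print a checkbox list the facilitator can follow during the session.
for step in session_plan:
    if isinstance(step, Task):
        print(f"[ ] {step.description}")
        for sub in step.subtasks:
            print(f"    [ ] {sub}")
    else:
        print(f"[ ] {step}")
```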
Sample Usability Questions
- Following a task: How would you describe your experience completing this task?
- What is one thing you would change about this intervention or product? Why?
- How did your experience compare to (a different intervention or product)?
- What are features that would encourage you to use this intervention or product?
Usability Evaluation Sessions that combine interviews with other methods
Using interviews alone to gather data may be limiting because of issues with recall and challenges with describing behavior. Interviews can be particularly insightful when they incorporate observation or demonstration, since people’s ability to recall and articulate details of their use of a product or system is limited. Observation can involve asking a respondent to complete or demonstrate tasks while you ask questions based on what you see (see Box 1). Observation during interviews focuses on monitoring and recording people, behavior, artifacts, and environments. When environments or behaviors are well defined, structured observation, such as using checklists to record observed behavior (see the sketch after Box 1), is a good option (Hanington & Martin, 2019, p. 158).2 Unstructured observation is more exploratory and leaves the researcher open to seeing what they may not anticipate.
- Introduce purpose of study, what you’re hoping to observe and learn, and obtain consent
- Pre-observation interview to ask questions about first impressions or respondent’s typical day
- Observe respondent and take note of respondent’s behavior
- Post-observation interview to ask questions about what you observed
Box 1. Sample sequence of interview and observation
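To make structured observation concrete, the sketch below shows one hypothetical way to tally a checklist of behavior codes after a session. The codes and the logged data are invented examples; teams should define codes suited to their own intervention and setting before observing.

```python
# Hypothetical sketch: tallying a structured-observation checklist.
# Behavior codes and logged data are invented examples only.
from collections import Counter

behavior_codes = {
    "H": "asked facilitator for help",
    "E": "expressed frustration",
    "W": "navigated to wrong screen",
    "R": "recovered without assistance",
}

# Codes logged in order during one observed session (illustrative data).
observed = ["W", "R", "E", "H", "W", "R"]

tally = Counter(observed)
for code, label in behavior_codes.items():
    print(f"{label}: {tally.get(code, 0)}")
```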
Observation can be similar to a cognitive walkthrough, a usability assessment method in which evaluators systematically walk through the sequential steps of a system or process from a user’s perspective to identify potential usability issues. Cognitive walkthroughs are usually conducted by domain experts, who may be part of the design team, and can be conducted one-on-one or in groups.
1. Rubin, J., & Chisnell, D. (2008). Handbook of usability testing: How to plan, design, and conduct effective tests. John Wiley & Sons.
2. Hanington, B., & Martin, B. (2019). Universal methods of design expanded and revised: 125 ways to research complex problems, develop innovative ideas, and design effective solutions. Rockport Publishers.