User Needs & Experience 


Needs and experiences of direct/primary users (people who directly interact with the intervention and implementation strategy) and indirect/secondary users (people affected by the intervention and implementation strategy) are assessed differently depending on the project. Frequently used methods include interviews, focus groups, observation, comparative testing (e.g., A/B testing), and co-design.

User Interviews: Understanding User Needs 

Interviews are one of the most common data collection methods in HCD and are used to understand respondents’ perspectives and experiences. Well-designed interviews aim to gather data that drives the design process and helps the team make better design decisions (Beyer & Holtzblatt, 1998, 417).11 We provide guidance on conducting interviews, along with specific tips on interviewing as part of assessing usability, accessibility, and appropriateness. An example from UWAC 1.0 is available here.

Interviews may be used at all phases of the DDBT process, with different goals: 

  • Discover (formative): Understand who users are and their current ways of doing things, including things that work well and things that could be improved.
  • Design/build: Show participants prototypes and elicit reactions. It is also common to need additional “discover” work when the team realizes it needs to know more to make an informed design decision.
  • Test (summative): Assess whether the (re)designed intervention and/or implementation strategies are achieving their design goals, and understand people’s lived experiences with them.

Planning 

Aligning Interviews and Other Methods with Research Questions 

Any research method you select should align with your research questions, the data needed, your timeline, and available resources. One challenge researchers familiar with interviews may encounter when first using them to support design processes is ensuring that the information generated will inform the resulting design decisions. Researchers are curious about a great many things, but when interviews are used in a design process you must prioritize the questions that will help the team make decisions. These include: 

  • Who are your users? (This may include primary users, such as people directly interacting with your product, artifact, or service, as well as secondary users who interact with the product, artifact, or service through the actions of another party. For example, when designing a worksheet for an interventionist to use in a session, you may still need to engage with patients as well.)
  • What do your users know?
  • What do they want to do?
  • How do they do things? (And where? In what conditions?)
  • What successes do they experience?
  • What barriers do they face?

Note that interviews may be used fluidly with other methods. You might, for example, administer a scale and then decide where to focus an interview based on responses to individual items or overall scores. Many interviews also intermix other activities, such as asking participants to demo part of their work, give a tour of their workspace, or share some other context relevant to the design challenge. Interviewers might also ask participants to engage with a prototype to complete tasks before and/or after the interview portions of a session. We discuss this in more detail below.  

A caution about focus groups. Many research teams are tempted to use focus groups in place of interviews to reduce time and expense. This may not be a good idea: per-participant expenses such as compensation remain the same, yet you get much less depth from any one participant. Focus groups can also confound results through effects such as groupthink and social desirability bias. We also know that people from marginalized backgrounds are more likely to be further marginalized in focus groups. If you find yourself considering focus groups only for efficiency, we strongly urge you to reconsider and invest in interviews instead.  

Focus groups shine when you want participants to build on one another’s ideas or to elicit, for example, workflow details that live between different roles. When forming groups, pay careful attention to power dynamics among potential participants, and use facilitation techniques that elicit attitudes and experiences from all participants while avoiding groupthink (e.g., having participants write down notes about a prompt individually before sharing).  

Identifying and Prioritizing Participants 

To recruit the right participants for your study, your team must come to a consensus on users and interested parties; this is necessary to determine appropriate recruitment criteria. With your team, brainstorm users and interested parties, using existing data such as the literature and prior surveys as preliminary research. It is often helpful to begin with a broad, overly inclusive preliminary user list and then narrow your focus (see Table 1 in Lyon et al., 2020 for an overview of this process). Once you have identified a set of potential users and interested parties, you can prioritize participants by considering which groups: 

  • Have the most diverse set of tasks?
  • Are the largest?
  • Are most important to help achieve intervention and/or product goals?
  • Have the most needs? Seem to be having the most trouble with the product?
  • Have the most to lose if the intervention and/or product does not work for them?

Another recommendation is to recruit participants based on behavioral criteria first, followed by demographic attributes important to your design (Goodman et al., 2012, 97).12 Behavioral criteria capture people who currently do (or would be interested in doing) what your product or service supports. For example, with a mobile application intended for adolescents, a caretaker who gives permission or phone time for an adolescent to download the application may also be important to include. When planning recruitment, you may also want to segment respondents by traits that could influence their response to a design solution, such as level of experience with competing or similar products or services. Finally, consider characteristics you may want to avoid during recruitment (Goodman et al., 2012, 102).13 This could include people you know well, who may therefore bias the results. Participant eligibility and segmentation can be facilitated with a short screening survey composed exclusively of questions that determine participant eligibility, ask for specific quantities related to behaviors, and are neutral in tone. Including some open-ended questions in screeners may help give a sense of whether an individual will give more detailed feedback during an interview or usability test (Goodman et al., 2012, 108).14 
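As an illustration only, the sketch below encodes hypothetical screener rules in the spirit of this guidance (a behavioral criterion first, an exclusion for people the team knows well, then segmentation by experience). The field names, cut points, and segment labels are invented for the example and are not from the cited sources.

```python
# Hypothetical screener logic; field names, cut points, and segments are
# illustrative assumptions, not criteria recommended by this guidance.

def screen(response: dict) -> dict:
    """Apply behavioral eligibility rules to one screener response, then assign a segment."""
    # Behavioral criterion: the person currently does (or is interested in doing)
    # what the product or service supports.
    eligible = bool(response.get("uses_similar_tools") or response.get("interested_in_tools"))

    # Exclusion: people the team knows well, who may bias results.
    if response.get("known_to_team"):
        eligible = False

    # Segmentation: experience with similar products, using an invented cut point.
    segment = "experienced" if response.get("sessions_per_month", 0) >= 4 else "novice"

    return {"eligible": eligible, "segment": segment}

# Example with a made-up screener response
print(screen({"uses_similar_tools": True, "sessions_per_month": 6, "known_to_team": False}))
# {'eligible': True, 'segment': 'experienced'}
```

In practice a team might apply the same rules in a spreadsheet or survey platform; the point is simply that screener questions map directly onto explicit eligibility and segmentation decisions.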

Interview Types 

Individual vs. contextual: Individual interviews are “traditional” interviews in which an interviewer asks questions and probes a single respondent.15 These interviews are relatively straightforward to administer and can be conducted in person or remotely. Contextual interviews are conducted in a respondent’s own environment and combine observation and interviewing.16 A key advantage of contextual interviews is that they place users in their own environment, which may create a more authentic depiction of their everyday experiences. Conducting interviews in the environment where you foresee a product or service being used can improve both participant recall and the accuracy of relevant details, since individuals aren’t always cognizant of their own behavior (Beyer & Holtzblatt, 1998, 43).17 The interviewer asks questions based on the respondent’s behavior as they complete their own tasks. During usability studies, contextual interviews can be combined with assigned task scenarios. 

Structured vs. unstructured: Interviews can be structured, unstructured, or semi-structured. Structured interviews follow a set script, which can make comparison and analysis easier. Unstructured interviews may be more conversational and increase participants’ comfort, but they must be moderated well so that priority information is collected within the interview time (Hanington & Martin, 2019, 138).18 Semi-structured interviews, with both structured and unstructured sections, give you the opportunity to ask some questions exactly the same way for everyone while leaving space to follow up on topics of interest.  

Interview Guide 

For both structured and unstructured interviews, an interview guide helps ensure that you ask questions that will answer your research questions. Your interview guide should consist primarily of open-ended questions, which give you more opportunity to probe and generate richer data. While you should focus on open-ended questions, avoid questions that are too general or framed around what respondents “usually do” (an alternative is to ask about behavior during a specific reference period). As a starting point, brainstorm interview questions and map each one to the research question it addresses. One resource to help write questions is Nikki Anderson’s Taxonomy of Cognitive Domain chart, which lists question verbs aligned with what you’d like to learn.19  

Iterate on your initial list with an eye for questions that generate duplicate and/or unrelated information, and reword questions that are leading or could be made more open-ended. Your interview guide should start with easier warm-up questions, such as “tell me about yourself,” as you build rapport with the participant; more sensitive questions typically work better later in an interview. Sequence questions in a logical order and share your interview guide with colleagues for feedback. 

Conducting Interviews 

We recommend the following considerations when conducting interviews: 

  • Be prepared.
  • Express gratitude for the respondent’s participation.
  • Remind the participant that the intervention or implementation strategy is being evaluated, not them, and that there are no right or wrong answers.
  • Practice active listening: don’t interrupt the respondent, and use body language or subtle prompts (nodding, taking notes, inviting them to say more).
  • At the end of each interview, ask yourself whether you understand what the respondent shared. Are there things you need clarification on?
  • Following interviews, the interviewer and/or research team should reflect on how each interview went and on additional questions to consider, such as: Is the data collected meeting research goals?
  • Review recordings and/or transcripts as data is collected, and leave enough time to analyze the data.

Cognitive Walkthroughs for Implementation Strategies:  
A hybrid of usability evaluation and interviews  

Lyon et al. developed the Cognitive Walkthrough for Implementation Strategies (CWIS) to assess implementation strategy usability.20 CWIS has six steps; interviews can be conducted during step five as part of task testing (see Figure 2). During task testing, a facilitator presents a scenario and subtasks, respondents are invited to ask clarifying questions, and respondents rate, for each task, the extent to which they personally expect to be successful at: 1) discovering the correct action as an option; 2) performing the correct action or response; and 3) receiving sufficient feedback to understand that they have performed the right action or that the task was successfully completed (see Figure 3). The facilitator then asks respondents to explain their ratings, what might promote success, and what impedes accomplishing the task. This information can then be used to specify usability issues. 

Figure 2. The Cognitive Walkthrough for Implementation Strategies (CWIS): a pragmatic method for assessing implementation strategy usability. The figure is a horizontal flowchart of six sequential steps: (1) determine necessary strategy pre-conditions; (2) hierarchical task analysis; (3) task prioritization ratings; (4) top tasks converted into testing scenarios; (5) group testing with representative users; and (6) problem classification and prioritization. 
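If a team records the task-testing ratings described above, a simple tabulation can help flag subtasks to probe further in debriefs. The sketch below is only an illustration: the 1-5 anticipated-success scale, subtask names, and flag threshold are assumptions for the example and are not prescribed by CWIS.

```python
# Illustrative tabulation of CWIS-style task-testing ratings.
# The 1-5 scale, subtask names, and threshold below are assumptions for this
# example only; they are not part of the published CWIS method.
from statistics import mean

# Each record: (subtask, dimension, anticipated-success rating on a hypothetical 1-5 scale).
# Dimensions mirror the three prompts above: discover, perform, feedback.
ratings = [
    ("Review case presentation materials", "discover", 4),
    ("Review case presentation materials", "perform", 5),
    ("Review case presentation materials", "feedback", 2),
    ("Elicit feedback from the consultant", "discover", 2),
    ("Elicit feedback from the consultant", "perform", 3),
    ("Elicit feedback from the consultant", "feedback", 3),
]

FLAG_THRESHOLD = 3  # subtask/dimension pairs averaging below this are flagged for discussion

# Group ratings by (subtask, dimension) and compute the mean rating for each pair.
grouped = {}
for subtask, dimension, rating in ratings:
    grouped.setdefault((subtask, dimension), []).append(rating)

for (subtask, dimension), values in sorted(grouped.items()):
    avg = mean(values)
    flag = "  <-- probe in debrief" if avg < FLAG_THRESHOLD else ""
    print(f"{subtask} | {dimension}: {avg:.1f}{flag}")
```

In practice this could just as easily live in a spreadsheet; the point is to pair the quantitative ratings with the qualitative explanations collected in the same session when specifying usability issues.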

Following all the steps outlined in CWIS can be challenging for some projects, particularly for interventions and implementation strategies that are complex and require significant training to learn. Participant familiarity with an intervention or implementation strategy can also vary, which can make conducting a CWIS or task-based testing challenging. CWIS combines elements of cognitive walkthroughs and task-based testing.  

Like a traditional cognitive walkthrough, CWIS evaluates the usability of a system by proceeding through the interface in a logical, task-oriented sequence. Unlike traditional cognitive walkthroughs, which engage a team of expert reviewers, CWIS engages real current or potential users.   

As with task-based testing, the focus of CWIS is on understanding how participants do or would interact with a system, service, or product. In task-based testing, researchers can observe whether users successfully complete a task and what successes or barriers they encounter. In CWIS, participants may be unable to actually do the task, or all of the tasks, and so instead respond by critiquing a design.  

In practice, UWAC projects have combined elements of CWIS and task-based testing during the Discover and Design/Build phases. For example, the PST-Aid and TF-CBT teams have each used different aspects of CWIS and task-based testing to explore usability.  

PST-Aid: To assess the usability of PST-Aid at the end of the Design/Build phase, we first created an implementation strategy map that visually depicted ideal use of PST-Aid for PST, the intervention. We created separate maps for clinicians and patients that highlighted those users’ expected use of PST-Aid by therapy session. We prioritized tasks and subtasks based on usability challenges that had come up in previous phases and on functions critical to using PST-Aid. For each task and subtask, we specified goals, a starting state, estimated time, and what would be considered “success.” We additionally drafted interview questions and probes to accompany each task and subtask. We ran test sessions with individuals who had participated in other PST-Aid design activities; as a result, they may have seen the interface before, but in a limited capacity. When we reviewed findings from the usability study, we classified issues based on the 12 categories of usability issues from UWAC 1.0 projects and prioritized usability issues for reporting and for addressing in PST-Aid.  
 

TF-CBT: As part of the Discover phase, we are exploring usability issues with TF-CBT when used in school settings. A TF-CBT consultant identified tasks core to the intervention when implemented as intended. The team conducted a hierarchical task analysis and rated tasks for prioritization, and we developed initial tasks and test scenarios based on this prioritization. We also developed an intervention map illustrating implementation of TF-CBT with fidelity (similar to what was done with PST-Aid). A version of this map will be used for one activity during the test session to get clinicians’ general reactions to TF-CBT, and the map also supported sequencing of tasks for the protocol. After piloting the two protocols (one testing TF-CBT with clinicians, one testing TF-CBT with students), we realized that an understanding of the fundamental skills taught early in TF-CBT is essential for accomplishing the subsequent tasks we had prioritized. Thus, even though we did not quantitatively prioritize these fundamental skills for the test session, we created tasks based on these skills for the protocols. We will run test sessions with clinicians who have varying experience with TF-CBT (some fully trained, others having viewed the introductory TF-CBT training module). 
