Usability Issue Reporting Format
Across the UW ALACRITY Center, we seek to identify recurring usability issues with evidence-based practices and the services and systems designed to support or implement them, and to identify effective patterns for addressing those issues.
The purpose of this guide is to support UWAC researchers in identifying and communicating usability issues for clinical interventions and implementation strategies (CIs/ISs) to the Center. This guide may also be useful to other researchers in systematically characterizing barriers and facilitators to usability of CIs/ISs.
For overall guidance on usability and methods for assessing it, please see our usability guide.
What are examples of usability issues in psychosocial interventions and implementation strategies?
During the first iteration of UWAC, we identified 12 categories of usability issues with mental health CIs/ISs based on a cross-project analysis. The table below lists all 12 usability issue categories with example issues.
These examples are based on the projects affiliated with the center, and the categories may not be exhaustive. In other words, while this set of categories may inform the design of usability studies, do not let it overly narrow the kinds of usability issues project teams look for and report.
| Usability Issue Category | Definition | Example |
|---|---|---|
| Complex and/or cognitively overwhelming | The intervention or implementation strategy is too overwhelming to the user or the interventionist. | In Problem Solving Treatment (PST), if the problem identification step results in problems that are too complex, patients and therapists can find the next steps intractable, causing the session to get stuck or the patient to disengage. |
| Required time exceeds the available time | The intervention or implementation strategy demands more time than is available. | As designed, the shared decision-making protocol takes 30 minutes to complete. Clinicians do not have time in their current diagnostic to do all components of shared decision-making. |
| Incompatibility with interventionist preference or practice | The intervention or implementation strategy is not compatible with how the interventionist prefers—or has been trained—to work and deliver interventions. | In PST and Engage, therapists want to know what the client can accomplish on a weekly basis, which can take them several weeks to understand. Therapists want to know the client’s context, skills, and abilities in order to decide on a problem. Some therapists took several sessions (or a long portion of one session) to build client background, or did not feel that PST focused enough on learning client background. When therapists do not know enough about the client, the goal they set with the client might not be adequately scoped (might not fit the client’s skills and abilities); as a result, clients may not accomplish their goals. |
| Incompatibility with existing workflow | The intervention or implementation strategy is not compatible with the interventionists’ existing workflows. | Within the integrative care model, case managers (CM) have a variety of tasks and cannot focus exclusively on supporting therapists in treatment (because they are also providing health education and resources to patients). As a result, CMs are not always used at the top of their license and therapists tend to take on tasks and cases that the CM could perform instead. This leads to inefficiencies as therapists handle more cases at lower levels of severity. |
| Insufficient customization to clients or recipients | The intervention or implementation strategy cannot be tailored to client/recipient needs or does not provide enough guidance for interventionists and clients/recipients to customize it. | In the original comprehensive self-management intervention, the content is presented as a written workbook and taught during in-person sessions. Because access to skill demonstrations was limited to synchronous delivery by a provider, users were unable to review skill demonstrations at their own pace and were frustrated with the intervention, causing them to disengage. |
| Intervention buy-in (value) | The intervention or implementation strategy does not sufficiently build client/recipient buy-in for its value. | Overall perceptions of the therapy are not positive for some clients and therapists, who do not think the therapy is helpful to clients. Some clients feel PST is childish or not a therapy. Some therapists feel that talk therapy is sometimes more appropriate, or that PST encourages clients to be avoidant. Avoidance can lead to clients not working on important problems, and clients may not return to PST due to unmet expectations of wanting talk therapy. |
| Interventionist buy-in (trust) | The intervention or implementation strategy does not build the client’s/recipient’s trust in the interventionist. | A core component of Promoting First Relationships (PFR) is that mothers are videotaped with their infant and that recordings are reviewed and used to evaluate and provide feedback about their interaction. Patients may refuse video recording for personal reasons, such as trauma or simply feeling uncomfortable. As a result, PFR cannot be completed as designed, and a core element of the intervention is missed. |
| Overreliance on technology | The intervention or implementation strategy relies on technology that creates barriers for some clinicians or recipients or that is not available to all clients or recipients. | The online format of the shared decision-making tool may not work for some patients if they are not comfortable with technology, limiting their ability to participate in the process. |
| Requires unavailable infrastructure | Intervention or implementation strategy requires physical, systemic, or organizational infrastructures that are not available. | RUBIES is designed to be used with Tier 3 students diagnosed with autism in addition to other supports educators are legally required to use as stated in students’ individualized education programs. RUBIES needs to be integrated with other supports. This additive requirement creates burden and leads to evidence-based practice fatigue, which makes it difficult to implement and integrate with Multi-Tiered System of Supports in schools. |
| Inadequate scaffolding for client/recipient | The intervention or implementation strategy lacks the preparation and support the client/recipient needs to understand and succeed in its required activities. | Some of the core concepts of PST–distinguishing between a problem, a goal, and a solution–are unclear to patients. Consequently, they may not feel confident using PST on their own in other areas of life or after treatment ends. |
| Inadequate training and scaffolding for interventionists | The intervention or implementation strategy’s training and scaffolding do not provide enough initial and/or ongoing support to deliver the intervention as designed or to know how to respond to emergent challenges. | If clinicians do not have enough training and practice with shared decision-making, time and other pressures in the clinic cause them to fall back on what they know and omit shared decision-making. |
| Lack of support for necessary communication | The intervention or implementation strategy requires but does not sufficiently facilitate communication between interventionist and client/recipient. | The comprehensive self-management intervention lacks mechanisms for clinicians to be aware of client progress, and so they are unable to notice and adapt when treatment is unsuccessful for a patient. |
Usability issues can be identified across all stages of UWAC’s Discover, Design/Build, and Test (DDBT) framework:
- Discover: issues with unadapted CIs/ISs and mechanisms for delivering them
- Design/Build, Test: new issues occurring with redesigns that are responsive to previously identified issues; new understanding of previously identified issues (e.g., by evaluating design refinements, you learn that the issue is not as originally thought)
For UWAC teams: What usability issues should I report and when?
All UWAC projects are required to identify and address critical usability issues for unadapted and adapted CIs/ISs. Teams can reach out to the Methods Core to request help with deciding which issues to report to the Center.
How do I decide what usability issues to report?
Most teams will identify more issues than they ultimately report to the Center. This section discusses how to decide what to report.
For UWAC projects, teams should report usability issues with CIs/ISs that are inherent to the CI/IS design (rather than minor details in the supporting materials) or that interfere with someone’s ability to complete the intervention. For example, if an issue makes a CI/IS so complicated that some people cannot or will not do it, this should definitely be reported! In contrast, a usability issue of a button being too small, resulting in users failing to notice it, is less important to report to UWAC, unless this issue stops a user from accomplishing a CI/IS task.
It can be more challenging to decide whether to report issues in the design/build and test phases. In the process of brainstorming and developing prototype designs, it is common for many usability issues to be introduced and corrected through the iterative design process. In general, teams do not need to report issues that are addressed. However, if the team believes sharing the issue would be informative for other projects and prevent similar issues, then please report it! Additionally, some issues may not be resolved at the end of design/build or may only be identified in test, and teams should report these issues. Finally, during the design/build or test phases, it is not uncommon to learn that an issue is somewhat different from what was thought during a previous phase: please refine and update the issue based on your evolving understanding.
We recognize the question of “is this usability issue instructive about the kinds of problems that occur with CIs/ISs?” is subjective. We include a few examples below, but – overall – if you are unsure, we would rather that you share the issue so that others may learn from it.
| Issue | Decision | Rationale |
|---|---|---|
| Internet access is necessary to access and interact with the application. In rural areas, internet can be spotty or drop out, which could prevent patients from using the application for therapy. | Report | During the redesign process, the team decided to move from a paper worksheet-based design to an Internet-connected app. This creates the potential for people to be unable to engage with the treatment. Despite this design choice being made with awareness that it might create this issue, the team decided to report it after design/build and to continue to monitor it in the test phase. |
| In the Review section of the worksheet, the second (most recent) action plan was not visible with the current design. The design does not give equal or greater prominence to the most recent action plan, causing the user to want to skip over it. | Do not report | This was an issue introduced, and subsequently addressed, during the redesign process. In discussion, the team decided the issue is not particularly informative regarding issues with clinical interventions and implementation strategies and is instead a fairly low-level issue with the interface. Consequently, we decided not to report it. |
| The app’s scheduling feature does not include an opportunity to schedule action plan list items, because the design team assumed scheduling was only needed for some activities; relying on recall can result in incomplete action plans. | Do not report | We introduced the scheduling feature in the design/build phase, and the first design of it created this issue. We subsequently addressed it. We decided not to report it because it had been addressed and was based on our (incorrect) assumptions about the therapy. |
| When a session worksheet is complete, the redesign team assumed that giving access to the worksheet is enough to support action plan completion and communication between users. However, it was not, which resulted in limited between-session support. | Report | This was an issue introduced by moving from paper worksheets to technology support. With paper worksheets, the client takes the worksheets home with them, which signals that they should continue to use them. However, when co-creating the action plan in technology, the worksheet exists online, and without an explicit handoff of a physical worksheet to the client, they may not realize they should use it between sessions or remember to access it. We decided to report this because other teams creating technologies to support CIs/ISs could similarly, and inadvertently, remove the cues that prompt people to continue using and referring to the completed worksheet. |
| The design of the Patient Activity dashboard assumes the therapist will want to look at any documents that have been modified or added by the patient. However, a therapist may want to view their patient’s documents in a different view or organized differently. | Report | The redesign process added this dashboard view – something that does not exist in the paper worksheet view – to support clinicians. However, while it supported one workflow, the design of the system then constrained clinicians to using that workflow, even though it may not be compatible with how they want to work. This seemed like something that could happen in other redesign projects (adding supports but inadvertently reducing customization in the process), and so we decided to report it. |
We encourage teams to keep track of all usability issues identified, including those not reported to the Center. You can revisit this log throughout the project to track the status of addressing issues and prioritize issues. These notes may also help remind you to monitor issues in the future.
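For teams that prefer a lightweight, structured log, the sketch below shows one possible approach in Python. It assumes a simple CSV file; the column names, status values, and example entries are purely illustrative rather than a Center requirement.

```python
import csv
from datetime import date

# Illustrative columns for a team-internal usability issue log.
# None of these names are required by the Center; adapt them to your project.
FIELDS = ["id", "short_description", "phase_identified", "severity",
          "status", "report_to_center", "last_reviewed"]

# Hypothetical example entries (not real project data).
issues = [
    {
        "id": "UI-001",
        "short_description": "Worksheet prompt wording confuses problem vs. goal",
        "phase_identified": "discover",
        "severity": "2 - significant delay and frustration",
        "status": "redesign planned",
        "report_to_center": "yes",
        "last_reviewed": date.today().isoformat(),
    },
    {
        "id": "UI-002",
        "short_description": "Button on review screen too small to notice",
        "phase_identified": "design/build",
        "severity": "3 - minor effect on usability",
        "status": "fixed in prototype v2",
        "report_to_center": "no",
        "last_reviewed": date.today().isoformat(),
    },
]

# Write (or rewrite) the log so it can be revisited at the end of each DDBT stage.
with open("usability_issue_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(issues)
```

Revisiting a log like this at the end of each DDBT stage makes it easier to see which issues are still open, which have been addressed, and which are worth carrying into the Center report.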
When do I report usability issues?
UWAC teams should report new issues at the end of each DDBT stage in REDCap, as relevant to your project. The UWAC Methods Core will provide feedback on submitted usability issues and work with teams by asking clarifying questions. The Methods Core is also available to consult on plans for addressing usability issues.
We expect that teams will identify most issues in the discover phase. During the design/build or test phases, teams may identify new issues or learn new information that leads to revising a previously reported issue.
Teams that have completed a discover phase before beginning their UWAC-funded project should report issues identified in that work.
How can I describe usability issues?
Description of usability issues commonly includes the following elements: (1) Description, (2) Severity, (3) Scope, (4) Complexity, and (5) Evidence (see the table below). These issues may also be linked to (6) related research (e.g., previous documentation in other studies). Teams may also find it useful to describe next steps (e.g., implementing a redesign if one is known, further research if needed, or developing and evaluating prototypes of potential fixes), though for UWAC teams, we ask about redesigns in a separate questionnaire in the design/build phase.
We use these elements as a template for UWAC teams to report issues, though we recommend them for all projects seeking to identify and address usability issues, as they support description and understanding of an issue as well as prioritization of which issues to address. These elements are presented in the table below and then described in greater detail. Elements are based on common ways of reporting usability issues in human-centered design of software systems, but we have customized them to better reflect the kinds of issues we will identify in UWAC projects.
| Element | Guidance |
|---|---|
| Description | A concise summary of what’s going wrong. Aim for 1-2 complete sentences, using the following structure (explained further below): When [PRECURSOR(S)], the [COMPONENT] is / has / is experienced as / results in / etc. [PROBLEM] which [CONSEQUENCE]. Do not include or imply a proposed solution in the issue description; describe problems in a neutral way that is generative for a full range of potential design solutions. |
| Severity | For each identified usability issue, assign a severity rating using categories adapted from Dumas & Redish 1999: Level 0 (catastrophic or dangerous; causes harm; high risk); Level 1 (prevents completion of a task); Level 2 (creates significant delay and frustration); Level 3 (has a minor effect on usability); Level 4 (subtle problem, points to a future enhancement). We recommend that, when appropriate, multiple team members independently rate each issue using these categories (see below for additional guidance). |
| Scope | Usability issues can be considered on a spectrum from local (i.e., confined to one user group or component of an intervention/strategy) to global (i.e., experienced by most/all users and pervasive across components). For this section, articulate: (1) the prevalence of users encountering the problem (and to which user groups they belong), including whether certain user groups are disproportionately affected; (2) the components affected (including content elements, structures, artifacts, and parameters). |
| Complexity | Complexity refers to how straightforward (or not) it is to address an issue. An issue might have low complexity if you understand the root cause of the problem and solutions are known (e.g., rewording a worksheet prompt avoids a misunderstanding). An issue may have higher complexity if the root cause is not understood (i.e., more research is needed), if addressing the issue is likely to cause other, downstream problems (i.e., there are interaction effects between the component of the intervention with the issue and other components or the health system as a whole), or if the solution is not well understood. We recommend writing this qualitatively, e.g., “This issue has [low/medium/high] complexity, because…”. Often the “because” is more important than the actual rating. |
| Evidence | Describe the qualitative and/or quantitative data that provided evidence for the usability issue and which support–and provide further understanding of–the description, severity, scope, and complexity indicated above. If possible, specify: (1) Whether the usability issue was independently observed (e.g., usability testing) versus reported by a user (e.g., in an interview). (2) Whether the usability issue was experienced by a user versus anticipated based on a hypothetical situation. |
| Related research | If you have seen this kind of issue before in related research, including theoretical frameworks or models that could help us understand what’s going on, we’d appreciate a citation! Similarly, if this is an example of a common heuristic, e.g., Nielsen’s 10, please make that connection. If you don’t see connections, don’t worry — that’s part of our job in the methods core! |
| Recommended next steps or how your team solved the problem | If you have not verified a solution, you might recommend that one or more alternative designs be evaluated, or you might recommend that the team work on redesigns. Sometimes you have to recommend further study/usability testing to better understand the issue. If you have tested and verified that a redesign fixes this problem, you might recommend that it be implemented. It’s possible you aren’t sure what the next steps are, in which case you might seek consultation from the UWAC Methods Core. Alternatively, the issue might be sufficiently minor that you don’t plan to address it. |
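To make these elements concrete, the sketch below shows one hypothetical way a team might hold a single issue as a structured record before entering it into REDCap. The field names, class name, and example values are illustrative assumptions, not a required format.

```python
from dataclasses import dataclass, field

@dataclass
class UsabilityIssue:
    # Fields mirror the reporting elements above; the names are illustrative only.
    description: str        # "When [PRECURSOR], the [COMPONENT] ... [PROBLEM] which [CONSEQUENCE]."
    severity: str           # report the full descriptor, not just a number
    scope: str              # local vs. global: which users and which components are affected
    complexity: str         # low/medium/high, with the "because"
    evidence: list[str] = field(default_factory=list)
    related_research: list[str] = field(default_factory=list)
    next_steps: str = ""

# Entirely hypothetical example, for illustration only.
example = UsabilityIssue(
    description=("When completing the action-planning worksheet remotely, the scheduling step "
                 "is experienced as confusing, which results in incomplete action plans."),
    severity="2 - creates significant delay and frustration",
    scope="Local: observed only for clients completing the worksheet remotely; affects one artifact.",
    complexity="Medium complexity, because the cause is understood but a fix may affect other components.",
    evidence=["Observed in a subset of usability sessions", "Also raised by clinicians in interviews"],
    related_research=["Resembles scaffolding issues documented for other worksheet-based interventions"],
    next_steps="Prototype alternative scheduling prompts and re-test.",
)
print(example.severity)
```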
Below, we present guidance on each of these areas. Overall, the guidance aims to help researchers and intervention designers prioritize issues and understand them well so they can plan their next steps.
(1) Description
When [PRECURSOR], the [COMPONENT] is / has / is experienced as / results in / etc. [PROBLEM] which [CONSEQUENCE].
COMPONENTS of the intervention should be detailed using the same structure reflected in the DDBT intake form, including (1) content elements (discrete techniques), (2) structures (processes that guide the selection and delivery of content), (3) artifacts (tangible, digital, or visual materials), or (4) parameters (static properties that define and constrain the intervention or service “space”).
Examples:
When treating depression in community settings, clinicians experience the need to follow the seven-step process exactly each time as tedious and burdensome, which results in clinician exhaustion or boredom.
PST Aid
When clinicians are engaged in live consultation (precursor), the approach to case discussions (component [content element]) assumes that clinicians know how to do a concise case presentation (problem). When it’s not concise, discussion overflows into other consultation activities (consequence).
When writing your description, avoid the following pitfalls:
- Don’t start with [THE INTERVENTION]…
- Instead, start with [COMPONENT OF THE INTERVENTION]
- So, instead of “Problem Solving Therapy was experienced as…”, start with “The problem solving processes introduced in PST…” (content element) or “The number of sessions required for PST…” (parameter)
- Don’t focus just on one consequence if there are multiple consequences. Various consequences affect different stakeholders differently.
- Don’t be vague in problems or consequences
- Avoid vague language about “difficulties,” “problems,” etc. If you don’t have the information to be more specific, more user evaluations may be needed and should be recommended as a next step.
- “This takes too much time” is too vague because you don’t know what to do next or what the consequences of it taking too long are. It could mean:
- “The structure of the therapy doesn’t allow enough time to get to know my patients”
- “Takes more sessions than I have with my patients, since they don’t see me regularly”
- “Takes longer than the session hour, so clinicians and patients don’t get everything done in a session”
- “Takes longer than the session hour, so clinicians often have to take over parts of the therapy that my clients really should be leading”
- Don’t presuppose or imply a solution. For example, “When mental health services are offered only in-person, potential patients are uncomfortable presenting themselves for care, which results in people not accessing care from which they may benefit” implies that the problem is only in person care and that other modalities are necessary. While other modalities may be a great way to address this problem, reframing this issue as “Potential patients are uncomfortable presenting themselves for in-person care because of concerns about seeing people to whom they do not want to disclose their mental health concerns at or near the clinic” leaves the solution space more open, such as designs that create greater privacy when accessing in-person care.
- Don’t accept at face value reports from one stakeholder group about usability issues that might be experienced by another group. A stakeholder group (e.g., clinicians) often can describe experiences of other stakeholders (e.g., patients), but they are often only able to see an incomplete picture or it may be based on biases. You will ultimately want to test any assumptions about how a particular user group will respond by engaging in user research with that group.
Additional guidance:
- If indicated, you may reference how the information was obtained in the description of the usability issue (e.g., “clinicians reported… ”)
- Especially later in a redesign process, usability issues may be described in a way that is comparative (i.e., “X is less XXXXX than Y because XXXXXX”). Be specific about the comparison.
(2) Severity
Severity helps prioritize the fixing of problems and allocation of resources to fixing them. There are various scales, some developed more for interfaces, others that apply more broadly. Example scales include:
- 3 point: disaster, serious, cosmetic
- 5 point: catastrophic, major, medium, minor, cosmetic
We generally recommend Dumas and Redish’s four-level scale, with a modification to account for how some usability issues can cause harm:
- Level 0 – catastrophic; causes harm; high risk
- Level 1 – prevents completion of a task
- Level 2 – creates significant delay and frustration
- Level 3 – has a minor effect on usability
- Level 4 – subtle problem, points to a future enhancement
It offers a reasonable level of precision without being overwhelming. Additionally, the descriptors are broad and apply not just to screen-based interfaces.
Because there are multiple scales, we recommend reporting the full descriptor or a short name (e.g., “4 – subtle”) rather than just numbers (e.g., “4”).
We recommend that multiple team members rate each issue, if the team has sufficient expertise and familiarity with the data to do so. If there are disagreements about severity, resolve them through team discussion.
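If it helps, a small script can surface where independent ratings diverge so the team knows which issues need discussion. The sketch below is a minimal illustration; the rater names, issue IDs, and ratings are made up.

```python
# Flag usability issues whose independent severity ratings disagree so they can be
# resolved through team discussion. All identifiers and ratings are illustrative.
DESCRIPTORS = {
    0: "0 - catastrophic / causes harm",
    1: "1 - prevents completion of a task",
    2: "2 - significant delay and frustration",
    3: "3 - minor effect on usability",
    4: "4 - subtle",
}

ratings = {
    "UI-001": {"rater_a": 1, "rater_b": 2, "rater_c": 1},
    "UI-002": {"rater_a": 3, "rater_b": 3, "rater_c": 3},
}

for issue_id, by_rater in ratings.items():
    levels = sorted(set(by_rater.values()))
    if len(levels) > 1:
        names = "; ".join(DESCRIPTORS[level] for level in levels)
        print(f"{issue_id}: disagreement ({names}) -> discuss as a team")
    else:
        print(f"{issue_id}: consensus on '{DESCRIPTORS[levels[0]]}'")
```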
(3) Scope
To what extent is the problem present in the product or service? This can refer to prevalence across the system, service, or artifact as well as the prevalence of how many people it affects.
- Local – isolated to one page or section; for a particular stakeholder group
- Global – throughout the interface or experience; for all participants
We’d love to know the number of users affected, and from which groups, across your usability testing.
(4) Complexity
How difficult is the problem to understand or reproduce? How easy is it to fix? Generally, more complex problems will take more time and resources to fix, often starting with more detailed study of what is going on.
A low complexity problem is easily explicable. You know what is happening and why. You likely have an idea for how to fix it and that fix seems feasible.
A high complexity problem may have unclear causes. People may be making mistakes and you don’t know exactly why or when. You may not understand how to fix it. Alternatively, the problem may be clear but the solution is either unclear or may require large scale redesign of the intervention (e.g., reordering of all steps in a service) or even rethinking the health system.
(5) Evidence
At a minimum, we ask that you provide exemplary evidence. The goal of this evidence is to help others understand the problem, and the standard should be that it is persuasive and informative to someone who has to redesign the intervention as a result.
Often, the most informative evidence combines data from different sources: e.g., qualitative and quantitative data from usability testing, or quantitative data about the number of patients who complete versus drop out of the treatment after a certain number of sessions.
As you decide which evidence to include, consider the other categories. Together, the set of evidence should illustrate the component, the issue, the consequences, their severity, and the complexity.
For center reporting requirements, we do not need all of your evidence on an issue here. That said, if you find it easier to present all the evidence, we’re happy to work with it.
(6) Related research
Does this problem connect to other research? If so, we would appreciate pointers to the literature. If not, that’s where our work as the Center’s Methods Core will pick up.
(7) Next steps / redesign
As projects move from stage-to-stage, we anticipate that you may revisit and revise your next steps. For example,
- at the end of discover, you might identify an issue but also know that, to address it, you need to do further detailed discovery as part of your design work.
- during test, you may confirm that a proposed solution works. Alternatively, you might identify new information that indicates the usability issue was not what your team first thought.
Please keep us updated as you learn!
