Engagement 


Assessed through the UWAC Engagement Measure, observation, self-report, and telemetry (for digital interventions and implementation strategies).

Adapted from the Patient Responsiveness Scale (Moullin, J. C., Sabater-Hernández, D., García-Corpas, J. P., Kenny, P., & Benrimoj, S. I. (2016). Development and testing of two implementation tools to measure components of professional pharmacy service fidelity. Journal of Evaluation in Clinical Practice, 22(3), 369–377. https://doi.org/10.1111/jep.12496).

Definition

Degree of user participation and enthusiasm for the aspects of clinical interventions and implementation strategies that require user involvement (Doherty et al.).

Rationale

Researchers and practitioners have focused increasingly on engagement since the emergence of user experience within HCD, which broadened the field to incorporate less tangible factors such as emotion more explicitly (Bardzell et al.). However, there is little consensus on how to define engagement across disciplines (Kelders et al.). More than 100 unique definitions of engagement have been identified (Doherty et al.), although a recent scoping review of eHealth confirmed that behavioral, cognitive, and affective components are the most common (Kelders et al.). In mental health, failures of engagement at multiple levels (e.g., individual, system) help explain why many digital therapeutics are not consistently implemented or sustained, or do not produce their intended effects (Graham et al.).

Measuring & Understanding Engagement

Engagement is fundamentally complex, multidimensional, and challenging to measure. There are two broad approaches to measuring engagement: subjectivity-oriented and objectivity-oriented (Doherty et al.). Subjectivity-oriented measures rely on users' self-report, such as questionnaires and interviews. Objectivity-oriented measures do not rely on self-report and include behavior logging, psychophysiological measurement, and telemetry. Across studies, the most commonly documented measures are (in descending order of use): questionnaires, behavior logging, observation, task outcomes, and interviews. In industry, objectivity-oriented telemetry measures are commonly used: logs, time on task, number of interactions, and frequency of logins. Other examples of telemetry data include behavior data captured in an app or data generated by devices that track health metrics.
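As an illustration, the sketch below computes a few of these telemetry metrics (login count, total interactions, distinct active days) from an event log. The record format, field names, and event types are hypothetical placeholders, not a prescribed schema; adapt them to whatever your analytics backend actually exports.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log: one record per user interaction, as might be
# exported from an app analytics backend. Field names are illustrative.
events = [
    {"user": "p01", "type": "login", "ts": "2024-03-01T09:15:00"},
    {"user": "p01", "type": "mood_entry", "ts": "2024-03-01T09:17:00"},
    {"user": "p02", "type": "login", "ts": "2024-03-02T20:05:00"},
]

def summarize_telemetry(events):
    """Per-user engagement metrics: logins, total interactions,
    and number of distinct days with any activity."""
    metrics = defaultdict(lambda: {"logins": 0, "interactions": 0, "days": set()})
    for e in events:
        m = metrics[e["user"]]
        m["interactions"] += 1
        m["days"].add(datetime.fromisoformat(e["ts"]).date())
        if e["type"] == "login":
            m["logins"] += 1
    return {
        user: {"logins": m["logins"],
               "interactions": m["interactions"],
               "active_days": len(m["days"])}
        for user, m in metrics.items()
    }

print(summarize_telemetry(events))
```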

A meta-analysis of engagement conceptualizations and measurement in digital technologies (including, but not limited to, health) found that most engagement measurement approaches take the form of questionnaires (e.g., the User Engagement Scale; O’Brien et al.), behavior logging, or observation (Doherty et al.). Engagement is frequently evaluated based on downloads or time spent interacting with a product (e.g., number of visits to a website, minutes or hours of use), but these metrics are often proxies for whether the design is experienced as compelling, appealing, or useful (O’Brien et al.). In a particularly sophisticated example of this approach, Taki et al. developed an engagement index to assess the strengths and weaknesses of website and app features based on page views, frequency of use, engagement with push notifications, time between interactions, and subjective satisfaction. Although it did not originate in HCD, the Patient Responsiveness Scale (Moullin et al.) measures two factors, participation and enthusiasm, that map well onto some aspects of engagement. Although this scale captures only providers’ perceptions of patient engagement, it has recently been adapted for administration to implementers and other users to report on their experiences with a variety of public health interventions and implementation strategies (Lyon et al.). Finally, because complex conceptualizations of engagement extend beyond the quantity of interaction to the quality of experience, mixed methods are important for assessing this construct.
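To make the engagement-index idea concrete, here is a minimal sketch of one way to combine normalized telemetry metrics into a single per-user score. The metric names, weights, and min-max normalization are illustrative assumptions; this is not the published Taki et al. index.

```python
def min_max_normalize(values):
    """Scale a list of raw metric values to the 0-1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def engagement_index(users, metrics, weights):
    """Weighted sum of normalized metrics per user.

    `metrics` maps metric name -> list of raw values aligned with `users`;
    `weights` maps metric name -> weight (a design decision for your study).
    """
    normalized = {name: min_max_normalize(vals) for name, vals in metrics.items()}
    return {
        user: sum(weights[name] * normalized[name][i] for name in metrics)
        for i, user in enumerate(users)
    }

# Hypothetical example data and weights.
users = ["p01", "p02", "p03"]
metrics = {
    "page_views": [120, 45, 80],
    "logins_per_week": [5, 1, 3],
    "notification_opens": [10, 2, 6],
}
weights = {"page_views": 0.4, "logins_per_week": 0.4, "notification_opens": 0.2}
print(engagement_index(users, metrics, weights))
```

Note that an index like this captures only quantity of interaction; pairing it with the subjective satisfaction component (as Taki et al. did) requires a self-report measure.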

UWAC Reporting Requirements

To measure engagement, UWAC’s proposed approach includes both quantitative and qualitative measures. To support center-wide science, UWAC-funded teams are required to use the UWAC Engagement Measure (UWAC-EM) to measure engagement quantitatively. The UWAC-EM is adapted from the Patient Responsiveness Scale (Moullin et al.); we developed it by treating responsiveness as a proxy for service effectiveness. The scale includes 10 items, which will likely need to be adapted for each study. Other measures of engagement may vary by project and could include subjectivity-oriented approaches (observation, self-report) and objectivity-oriented approaches (telemetry). UWAC’s proposed approach to contextual observation stems from a study that used observation to assess teacher delivery of an anti-bullying program and corresponding student responsiveness (Goncy et al.). That study defined student responsiveness as student engagement and rule-following, which researchers measured by rating two items during observations. Observers rated the following two items on a scale of 1 = not at all, 2 = somewhat, and 3 = extensively; UWAC staff can advise on adapting the items for your project:

  • “Students were actively engaged in meeting [i.e., on tasks; participating actively by responding and asking questions; and looking at teacher]”
    • Suggested adaptation for UWAC projects: “Users were actively engaged in [intervention/strategy]”
  • “Students followed classroom meeting rules”
    • Suggested adaptation for UWAC projects: “Users adhered to expected activities and modifications to the [intervention/strategy]”

It is especially challenging to assess engagement during intermittent activities that occur between researcher contact points. For example, during the first phase of UWAC, critical components of interventions were meant to happen between sessions (e.g., follow a plan and track how it went, practice skills, track behaviors and moods). It is not practical to rely on observation to assess these behaviors, and even if a researcher could watch for them, the observed behavior may be subject to social desirability bias. To assess fidelity of paper-based interventions, you could use subjectivity-oriented measures (paper logs, surveys) at different time intervals with follow-up interviews, although this approach is imperfect:

  • You could collect paper logs that respondents must fill out. However, these may be unreliable since respondents may fill out a week’s worth of logs right before turning them in.
  • You could administer a daily survey or integrate a diary-study approach by submitting a picture, voice recording, or some other type of documentation. However, the process of collecting these measures is its own intervention, which could bias your results.

To assess fidelity of digital interventions, objectivity-oriented measures may be possible, such as app logs of user activity. These logs approximate observation and require no extra reminders. Comparing engagement between a paper-based and a digital intervention is difficult because measuring engagement with the paper-based version is imperfect. Likely the best option is to interview respondents in both conditions and compare responses, but interpret the results with caution: respondents in the digital condition may report their use more accurately, since their activity is logged, whereas respondents in the paper condition must rely entirely on recall.
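For example, a between-session adherence rate can be derived directly from app logs. The sketch below assumes a hypothetical export of the dates on which a participant logged a target activity (e.g., a mood entry or skill practice) and computes the fraction of days between two sessions with at least one entry; the function name and data format are illustrative.

```python
from datetime import date

def days_with_activity(log_dates, session_start, session_end):
    """Fraction of days in [session_start, session_end) on which the app
    recorded at least one target activity (e.g., a mood entry)."""
    window = (session_end - session_start).days
    if window <= 0:
        return 0.0
    active = {d for d in log_dates if session_start <= d < session_end}
    return len(active) / window

# Hypothetical: dates on which a participant logged a tracked behavior.
logged = [date(2024, 3, 1), date(2024, 3, 3), date(2024, 3, 3), date(2024, 3, 6)]
print(days_with_activity(logged, date(2024, 3, 1), date(2024, 3, 8)))  # 3/7 ≈ 0.43
```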

UWAC-EM (Adult Version)

Response scale: 1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree

Note: Item #1 is item 1 of the IUS/SUS/ISUS. You only need this item once per data collection. 

Based on your experiences during this session…    

  1. I think I would like to use the [intervention/strategy] [frequently/a lot]. (Item 1 from SUS)  
  2. I [would] actively participate[d] in this [intervention/strategy]. (pro/retrospective based on DDBT phase) 
  3. I found the [intervention/strategy] engaging. 
  4. I would recommend this [intervention/strategy] to others. 

NOTE: For youth participants: 4. “I would suggest the [intervention/strategy] to kids like me” 

Note: Please add to the above 4 items any of the 10 below that are relevant to your study. 

Note to researcher: your project may need to modify the wording of this measure to be appropriate for your study. Please reach out to the methods core for consultation.  
  1. Users will request the [intervention/strategy].
  2. Users will be proactive in asking questions about the [intervention/strategy].
  3. Users will readily provide information relevant to the [intervention/strategy].
  4. Users will actively participate during meetings about the [intervention/strategy].
  5. Users will collaborate in decisions about the [intervention/strategy].
  6. Users will do the expected activities of the [intervention/strategy].
  7. When the plans for the [intervention/strategy] are modified, users will adhere to them.
  8. When education is provided, users will adhere to the [intervention/strategy].
  9. When the [intervention/strategy] is active, users will come to scheduled meetings.
  10. Through other people (e.g., colleagues, friends), users will speak positively about the [intervention/strategy].

UWAC-EM (Child/Youth Version)  

😠 😞 😐 🙂 😁

Response scale: 1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree

Based on what we just talked about…

  1. I think I would like to use [the thing].  
  2. I would participate in [the thing].  
  3. I found [the thing] engaging.   
  4. I would suggest [the thing] to other kids.
  5. Bonus: I would do the expected activities of [the thing].

Scoring the UWAC-EM

We recommend you calculate the average ratings for each item and an average aggregate score.

  • To calculate the average rating for each item, sum all respondents’ scores for that item, and divide by the number of respondents.
  • To calculate an average aggregate score, take the average of all respondents’ average scores.
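A minimal scoring sketch in Python, assuming responses are stored as one row per respondent and one column per administered item (the example ratings are made up):

```python
# Rows are respondents, columns are UWAC-EM items (1-5 Likert ratings).
responses = [
    [4, 5, 4, 5],  # respondent 1, items 1-4
    [3, 4, 4, 3],  # respondent 2
    [5, 5, 4, 4],  # respondent 3
]

# Average rating for each item: sum each column, divide by respondent count.
item_means = [sum(col) / len(col) for col in zip(*responses)]

# Average aggregate score: the mean of each respondent's own mean rating.
respondent_means = [sum(row) / len(row) for row in responses]
aggregate = sum(respondent_means) / len(respondent_means)

print(item_means)  # [4.0, 4.67, 4.0, 4.0]
print(aggregate)   # 4.17
```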
