Assessed using a UWAC-adapted version of the Transdisciplinary Tobacco Use Research Center's measure of Team Collaboration and Transdisciplinary Integration. This survey focuses on team members' satisfaction with collaboration, the impact of collaboration, trust, and respect.
Note: the community participation in research questions are a modified version of the ladder of participation measure (Naylor, Wharf-Higgins, Blair, Green, & O'Connor, 2002).
Thank you for your participation in the UW ALACRITY Center’s end-of-phase survey.
This survey features two measures of the levels of collaboration and participation among your team. Your participation is confidential. Please note that your team will receive a simple report of the number of participants and the average score for each item. We will not provide your team with any quotes from your open-ended item responses, but we will summarize comments from all team members. We encourage your team to discuss these results in order to improve your project. Please keep this in mind when responding, and use the "choose not to respond" option or leave open-ended items blank if you are concerned about this sharing of summary information.
Which project are you reporting on?
- ◻ RUBIES-IS
- ◻ BRISC
- ◻ TF-CBT
- ◻ PST-AID
- ◻ Other (describe):
Which DDBT phase are you reporting on? This will be the phase your project team recently completed.
- ◻ Discover
- ◻ Design/Build
- ◻ Test
Please evaluate the collaboration within your project by indicating whether the collaboration is (1) inadequate, (2) poor, (3) satisfactory, (4) good, or (5) excellent, or (-999) choose not to respond.
1. Acceptance of new ideas
2. Communication among collaborators
3. Ability to capitalize on the strengths of different researchers
4. Organization or structure of collaborative teams
5. Resolution of conflicts among collaborators
6. Ability to accommodate different working styles of collaborators
7. Involvement of collaborators from outside the center
8. Involvement of collaborators from diverse disciplines
9. Productivity of collaboration meetings
10. Productivity in developing new products (e.g., papers, proposals, courses)
11. Overall productivity of collaboration
Please rate your views about collaboration with respect to your project by indicating whether you (1) strongly disagree, (2) somewhat disagree, (3) are not sure, (4) somewhat agree, or (5) strongly agree with the statement, or (-999) choose not to respond.
12. In general, collaboration has improved your productivity.
13. In general, collaboration has improved the quality of your work.
14. Collaboration has posed a significant time burden in your work.
15. You are comfortable showing limits or gaps in your knowledge to those with whom you collaborate.
16. In general, you feel that you can trust the colleagues with whom you collaborate.
17. In general, you find that your collaborators are open to criticism.
18. In general, you respect your collaborators.
In the question below, “academic researchers” refers to the academic study team while “community partners” refers to service providers and clients who were study participants and/or design team members (including therapists, supervisors, teachers, patients, clients, students, caregivers/parents).
What is your role on the project?
- ◻ Academic researcher
- ◻ Community partner
Please rate the extent to which community partners and academic researchers were involved in design team decision making in this phase of the project. [INCLUDE -999 “Choose not to answer” in all options below]. Remember that your individual responses will be kept confidential. Reports back to your team will only include mean scores for the overall team.
| | 0 | 1 | 2 | 3 | 4 | -999 |
|---|---|---|---|---|---|---|
| How were decisions made about identification of design and usability issues? | Community partners were not involved in decisions about design and usability issues | Academic researchers presented pre-identified design and usability issues; community partner input was sought only once or twice | Community partners offered advice and ongoing advisory input on identifying design and usability issues, but decision making rested with academic researchers | There was equal decision making on identification of design and usability issues between academic researchers and community partners | Community partners controlled decision making about design and usability issues while academic researchers provided advice | Choose not to answer |
| Please provide a specific example that influenced your rating | | | | | | |
| How were decisions made about design goals and activities? | Community partners were not involved in decisions about design goals and activities | Academic researchers determined design goals and activities; community input on design activities was sought briefly | Community partners offered advice and ongoing advisory input when determining design goals and activities, but decision making rested with academic researchers | There was equal decision making on determining design goals and activities by academic researchers and community partners | Community partners controlled decision making on determining design goals and activities, while academic researchers provided advice | Choose not to answer |
| Please provide a specific example that influenced your rating | | | | | | |
| Who developed the design methods? | Design methods were conducted by academic researchers without any community partner involvement, even as research participants | Design methods were conducted by academic researchers, with community partners as research participants | Design methods were conducted by academic researchers with advice and input provided by community partners | Design methods were co-conducted by academic researchers and community partners | Design methods were developed and conducted by community partners, while academic researchers provided advice and input | Choose not to answer |
| Please provide a specific example that influenced your rating | | | | | | |
| What indicators were used to determine the success of the design efforts during this phase? | No indicators were used; success of the design efforts was not evaluated | Primary indicators were improved health or educational outcomes | Primary indicator was community-relevant redesign | Primary indicator was enhanced capabilities for redesign team members | Primary indicator was participant empowerment | Choose not to answer |
| Please provide a specific example that influenced your rating | | | | | | |
| How sustainable do you believe these program design efforts will be after ALACRITY funding ends? | Not applicable; design efforts were not completed | The program design will die at completion of ALACRITY funding | A few residual impacts of the program design will continue after ALACRITY funding | The program redesign will be sustained when ALACRITY research funding ceases | The program redesign will lead to the initiation of new programs | Choose not to answer |
| Please provide a specific example that influenced your rating | | | | | | |
IS Receipt
Definition: The absolute number, proportion, and representativeness of settings and recipients who received the implementation strategy.
- Setting (same as intervention adoption)
- Staff exclusions (% or reasons)
- Percent of staff invited that participate
- Characteristics of participating staff vs. non-participating or typical staff
- Use of qualitative methods to understand staff participation