Change and Implementation in Practice: Intervention Testing, Piloting and Staging Video 2


NARRATOR: Thanks for joining us today. Welcome to the Center for States video series on the essential functions for intervention testing, piloting, and staging. This series is produced by the Capacity Building Center for States and funded by the Children’s Bureau. The Intervention Testing, Piloting, and Staging brief is intended to help child welfare agency leaders, managers, and stakeholders take a structured approach to planning for and executing the rollout of a new program or intervention. It does this by providing step-by-step guidance to conduct usability testing where needed; to determine the need for piloting a new program or intervention and to complete the pilot; and, finally, to plan for staging and scaling up implementation of the new program or intervention.

Intervention testing, piloting, and staging is broken down into three parts and 10 essential functions, or tasks. It’s important to remember that not all agencies will complete all of the tasks in the brief, depending on their circumstances, and while the steps are presented in a linear way, in practice steps will often overlap and be revisited as teams test, pilot, and stage.
Part One is Usability Testing, and includes creating a usability testing plan, and conducting usability testing, analyzing results, and making adjustments. Part Two is Piloting, and includes determining the approach and developing a plan; identifying and recruiting sites; and conducting the pilot, assessing results, reviewing progress, and making adjustments. Part Three is Staging and Scaling Up, including developing and refining plans for staging the intervention and scaling up, identifying sites and sequencing, building capacity and scaling up, and then reviewing progress and benchmarks.

In this module, we will focus on the first essential function in the brief: creating a usability testing plan. Let’s get started.

As a reminder, usability testing allows teams to try out critical intervention components and implementation procedures to see how they work. It may be particularly helpful to test tools, forms, and processes that are being used or adapted for the first time so they can be refined and finalized as needed.
Making a plan for usability testing helps teams set a foundation for learning how the processes or tools will function in the agency context, and be intentional about the what, who, how, and when of the testing. As an initial test, usability testing provides insights into the user experience with the process or tool. It also helps a team determine whether modifications are needed. This early testing can pave the way for implementation and later evaluation.

Teams may want to consider usability testing for intervention core components or activities (for example, assessment practices); implementation activities, particularly new ways of doing things (for example, fidelity assessment tools); implementation supports (for example, new coaching guidelines); and data collection processes (for example, completing data entry forms).
To guide usability testing, teams should create plans that describe the testing, including: the purpose and goals, or what the team hopes to get out of testing; the focus and scope of the test, meaning what the team is actually testing (for example, one curriculum module, or a variety of processes and tools); the approach, or how the process or tool will be tested; the participants, including what groups and how many participants will test the process or tool, or how many cases will be reviewed and how they will be selected; the data and measures to be assessed, including both quantitative data (for example, the number of parents contacted) and qualitative data (for example, caseworker responses to prepared questions about their experience); the data collection approaches (for example, will the team use focus groups, surveys, case reviews, or a combination of these?); the criteria for revision of the process or activity, or what will help the team decide what to revise and how to revise it; the individuals or groups responsible for testing, reporting results, collecting data, making decisions, and acting on the results; and the timeframes and schedules for the testing period.

It helps teams to consider the what, who, how, and when while creating the testing plan. What? What aspects of the intervention or implementation process will benefit from usability testing?
What processes or tools are being introduced for the first time, or are being used with new target populations? What is the purpose of the usability test? What is the scope of the usability test? What data will be collected, and how? Who? Who will participate in the testing? Who will be responsible for collecting the data? Who is responsible for conducting the test, and who is responsible for acting on the results? How? How will the processes or tools be tested? How will data be collected? Will there be focus groups, surveys, or case reviews? How will revisions be determined? Will there be multiple testing cycles? And when? When will the testing be conducted? What is the schedule for testing, adapting, and retesting, if needed?

Let’s take a look at an example.
During the 2012 Permanency Innovations Initiative, the Fostering Readiness and Permanency Project outlined its usability testing plan. The project comprised three unique interventions: the 3-5-7 Model, family finding, and a Child Advocate and Recruitment Expert (CARE) team. The team then developed its plan for usability testing by identifying five components it wanted to test: the viability of the collaborative CARE team; the capacity of quality assurance tools to measure fidelity; the viability of the CARE model with the youth target population; the viability of the approach used for clinical supervision for youth advocates; and the viability of the process selected for case mining.

The usability testing plan included the scope, purpose, and timeframe for testing, and who would be participating in the testing; what components were being tested, including the data measurements to collect and review for each component; the method of testing for each component (for example, surveys, focus groups, or other methods); the team’s plan for quality assurance crosswalks; and who would be responsible for collecting the data on each component, as well as the criteria that the team determined would help it make decisions about usability or revisions.
It sounds like the team has thought out the plan strategically and proactively and is prepared to move forward with testing.

Let’s take a moment to check in on what you’ve learned about developing a usability testing plan. Why do teams create a usability testing plan? To set a foundation for learning about how selected processes or tools will work in the agency context, and to identify critical components for the testing. How do teams create a usability testing plan? By deciding as a team what processes or tools would benefit from testing, and outlining the key components of testing for each. What can help teams create a usability testing plan? Thinking through the what, who, how, and when of the processes or tools that need to be tested.

Now take this a step further by reviewing the reflection questions for “Create a usability testing plan” in your Intervention Testing, Piloting, and Staging workbook to connect what you’ve learned to your own experience. Up next is “Conduct Usability Testing, Analyze Results, and Make Adjustments,” the third module in this series and the second function in usability testing for change and implementation.
MALE: This video was created by the Capacity Building Center for States, funded by the U.S. Department of Health and Human Services, Administration for Children and Families, Children’s Bureau, under contract HHSP233201400033C. The content of this video does not necessarily reflect the official views of the Children’s Bureau.
