Multilevel Interventions: State of the Science in Global Mental Health



SEPT 10, 2018

>>WEBINAR OPERATOR: Please stand by. Your program is about to begin. Good day, hello, and thank you for joining the National Institute of Mental Health, Office for Research on Disparities and Global Mental Health, 2018 Webinar Series. This presentation is Multilevel Interventions: State of the Science in Global Mental Health.
Please note, all lines are in a listen only mode. If you would like to ask a question
during today’s presentation, you may do so at any time through the Q&A pod located in
the lower right-hand corner of your screen. As a reminder, this call is also being recorded.
It’s now my pleasure to turn over the call to Ishmael Amarreh. Please go ahead.
>>ISHMAEL AMARREH: Thank you, and thank you all for joining us today. I am pleased to present the second installment in a series of webinars on implementation science research and global mental health. This presentation will be delivered by two esteemed implementation science researchers, Dr. Bryan Weiner and Dr. Shannon Dorsey, from the University of Washington. The first part of the webinar will cover the general subject of implementation science and theories and methods for multilevel interventions. The second part will provide details of specific research projects to illustrate the application of implementation science and multilevel studies in global mental health.
As I mentioned, the two colleagues presenting this webinar, and we are so grateful to them for presenting it, are Dr. Bryan Weiner and Dr. Shannon Dorsey. Dr. Weiner is a Professor in the Department of Global Health and the Department of Health Services at the University of Washington. He directs the Implementation Science Program in the Department of Global Health and serves as the strategic hire in implementation science for the School of Public Health. Dr. Weiner has over 20 years of experience examining a wide range of innovations, including quality improvement practices, care management practices, patient safety practices, and clinical information systems, as well as evidence-based clinical practices for cancer and diabetes. His research has advanced implementation science by creating new knowledge about the organizational determinants of effective implementation, introducing and developing new theories, and improving the state of measurement in the field.
The second presenter, Dr. Shannon Dorsey, is an Associate Professor in the same department, Global Health, and an Associate Professor of Psychology at the University of Washington. Dr. Dorsey’s research is on evidence-based interventions for children and adolescents, with a particular focus on dissemination and implementation of evidence-based treatments domestically and internationally. Her work has often focused on trauma-focused cognitive behavioral therapy with hybrid research designs. Dr. Dorsey is also involved in common-elements, modular evidence-based intervention training and research initiatives, both in Washington State and internationally in low- and middle-income countries, with colleagues at Johns Hopkins and other institutions around the country. She is also involved in CETA feasibility studies in southern and northern Iraq and in Thailand, Colombia, Zambia, and Ethiopia.
It is my pleasure to turn this presentation over to Dr. Weiner for the first part, with Shannon joining him later.
>>BRYAN WEINER: Thank you. It is a delight to speak with you today about multilevel implementation interventions. I just want to make a few opening remarks. The first is that I’m an organizational psychologist by training. And so, I do have
psychology in my background, but I am new to the field of global mental health and global
health generally. I have spent much of my career looking at the adoption, the implementation
and sustainment of evidence-based practices and programs in healthcare delivery in the
United States context. So, I’m thrilled to have my colleague, Shannon Dorsey, join me
in this presentation and help me make this discussion relevant, and to focus on the global
mental health space. The second comment I just want to make very
briefly is that I am both overprepared and underprepared. I’m overprepared in the sense that I was anticipating a one-hour webinar and prepared far too many slides to cover. And I’m underprepared in that I have now learned that this is a 90-minute webinar and I have
not actually practiced or rehearsed a whole bunch of my slides toward the end. So hopefully
I will keep my prepared remarks to the first 30 minutes or so, leaving some subjects undiscussed
that we can come back to later, but I thought that would be the best way to
move forward. So, without further ado, let me start by saying
that there’s a great deal of interest in deploying and evaluating multilevel implementation interventions
to enhance the implementation and outcomes of evidence-based practices. The National
Cancer Institute, for example has several open funding opportunities and a few that
recently closed, for example the cancer moonshot initiatives that specifically called for multilevel
interventions. Likewise, the National Heart, Lung and Blood Institute has issued three
or four calls over the past two or three years for research involving multilevel interventions,
including an RFA in global health. This upsurge in interest has highlighted the need for multilevel implementation interventions, the possibilities such interventions hold for improving health and well-being, and the challenges that designing, deploying, and evaluating such interventions introduce.
What I would like to do for a few minutes
or so is just discuss a few key issues in the design of multilevel interventions. And
then Shannon will walk us through an example from her own work. Having then set the stage
we will sort of open it up for dialogue with you which undoubtedly will be the most interesting
part of our time together. I’d like to start this discussion by highlighting some design
issues and if time permits, we can talk about deploying and evaluating multilevel interventions
and to set the stage I would like to introduce some words of wisdom from Rumi, the great
Sufi mystic and poet: “You think that because you understand one, you must also understand two, because one and one make two. But you must also understand ‘and.’”
I chose this saying because there is something different about multilevel interventions.
They are more or should be more than just the sum of their parts.
So, we know that implementation barriers and facilitators operate at multiple levels of
analysis, yet systematic reviews indicate that multifaceted implementation interventions
don’t work any better than single-faceted ones. Many of these multifaceted implementation
interventions involve implementation strategies operating at different levels, for example
physician practices and patients. So, I’m extending this critique to multilevel implementation
interventions. The findings of the systematic reviews seem contrary to our intuition. Multilevel
or multifaceted interventions should work better because they target more implementation
barriers and facilitators. So then why don’t they? I would argue that this occurs for two
reasons. First, all too often researchers select implementation interventions without
carefully assessing first whether those interventions target the key determinants of the implementation
problem in the specific settings in which the interventions will be deployed. Instead,
implementation interventions are selected a priori by the researchers as they write
up the grant applications. Second, researchers often combine interventions
at different levels without carefully considering how the determinants at different levels interact
or combine to produce the problem observed. The key to designing effective multilevel
interventions, in my opinion, is to select and combine interventions that work together
to produce complementary or synergistic effects. To do that, you need to understand the interdependence
of the determinants of the problem that the various interventions target.
Unfortunately, there’s been very little discussion about how, when or why interventions at different
levels produce complementary or synergistic effects. Consequently, multilevel intervention
designers can find little practical advice for deciding which interventions to combine
and why.
In a paper that I wrote a few years back with some colleagues at UNC Chapel Hill and the National Cancer Institute, we explored how interventions at different levels could be combined to produce complementary or synergistic effects. Using a causal modeling framework, we focused on two causal relations, mediation
and moderation and described five strategies for increasing complementarity or synergy
among interventions operating at different levels. When I say complementary effects,
I mean 2+2=4 effects. When I say synergistic effects, I mean 2+2=5 effects. That is what we seek when we combine interventions at multiple levels: we want them to work together, to interact in ways that produce complementary or synergistic effects on the outcomes of interest.
Now, before I describe the five strategies
I just want to take a quick step back and review a couple of key concepts to refresh
everyone’s memory and get us all started on the same page.
So, we all know what mediators and moderators are, but these two concepts are often confused
so it’s worth taking just a moment to review. A mediator is a variable that explains or accounts for the relation between an independent variable and a dependent variable. Our particular
interest here are mediators that reflect the mechanisms of action or the processes or pathways
through which a cause is linked to an effect. Technically there is a difference between
mediators and mechanisms such that all mechanisms are mediators, but not all mediators are mechanisms.
However, for the sake of this discussion I’m going to treat mediators as mechanisms and
not worry about this distinction. So here is an illustration where the effect
of some independent variable on the dependent variable is truly mediated by a third variable.
Full mediation occurs every so often, but partial mediation seems to be more common.
And I suspect that many cases of partial mediation are actually cases of parallel mediation,
meaning that two or more variables mediate the effect of the independent variable on
the dependent variable. More complex forms of mediation are possible,
such as serial mediation. And as I will note in a few minutes, multilevel interventions
can be designed with serial mediation in mind.
A moderator, as most folks know, is a variable that changes the strength or direction of the relationship between two variables. Often a moderator is some sort of individual difference or contextual condition that influences the strength or direction of the relationship between the cause and effect. Here is an illustration of a very simple case of moderation, where a third variable influences the strength of the relationship between the independent variable and the dependent variable. Of course, more complex forms of moderation
are possible, such as multiple moderation. And even moderated moderation.
Far more interesting, I would argue are cases of moderated mediation. Moderated mediation
occurs when the mediation relations are contingent upon the level of a moderator. Note that there
are two forms of moderated mediation. In the first, the moderating variable influences
the strength or direction of the association between the independent variable and the mediating
variable. The second is where the moderating variable influences the strength or direction
of the relation between the mediating variable and the dependent variable. Of course, more
complex forms of moderated mediation are possible, with multiple moderators and multiple mediators
and so forth, but for today’s purposes, these two forms of moderated mediation, the ones
that are depicted in this slide are interesting because they suggest ways in which interventions
at multiple levels could be combined to produce complementary or synergistic effects.
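As an aside not from the talk itself, the two forms of moderated mediation can be made concrete with a small simulation. This is an illustrative sketch only: the variable names and path coefficients are hypothetical, and it shows the first form, where the moderator W acts on the path from the independent variable X to the mediator M.

```python
import random

def ols_slope(xs, ys):
    """Simple least-squares slope of y regressed on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def indirect_effect(w, n=20000, seed=1):
    """Simulate X -> M -> Y, where the moderator W acts on the X -> M
    path (the first form of moderated mediation described above)."""
    rng = random.Random(seed)
    a, a_w, b = 0.5, 0.3, 0.4          # illustrative path coefficients
    xs = [rng.gauss(0, 1) for _ in range(n)]
    ms = [(a + a_w * w) * x + rng.gauss(0, 1) for x in xs]
    ys = [b * m + rng.gauss(0, 1) for m in ms]
    # Estimated indirect (mediated) effect = a-path slope * b-path slope
    return ols_slope(xs, ms) * ols_slope(ms, ys)

low = indirect_effect(w=-1)   # moderator low: true value (0.5 - 0.3) * 0.4 = 0.08
high = indirect_effect(w=+1)  # moderator high: true value (0.5 + 0.3) * 0.4 = 0.32
print(round(low, 2), round(high, 2))
```

The mediated pathway is the same in both runs; only its strength changes with the moderator, which is exactly what moderated mediation means.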
So, one more subject to review. When we talk about multilevel interventions we need to
be sure that we mean the same thing by the word level. What’s a level? There are at least
two ways to think about this. The first and probably more intuitive way is to define levels
in terms of settings. A setting refers to a social unit that is targeted for intervention.
So, when you see calls for multilevel interventions in funding opportunity announcements, the
levels are usually defined in terms of settings. An alternative way of thinking about levels
is to define levels in terms of the determinants or the causes that are targeted by an intervention.
But I would argue that when thinking mechanistically about interventions, it’s helpful to focus
on the level of the determinant of the implementation problem, rather than the level of the social
system. These two ways of talking about levels can
get confusing. The distinction is subtle, but it’s important. So, audit and feedback
I think provides a useful illustration. Although we don’t have a firm grasp or theory about
how audit and feedback work, control theory suggests that the presentation of information
indicating that there’s a discrepancy between actual and expected or desired performance
motivates a behavioral response on the part of a target audience to reduce that discrepancy.
Thus, from a control theory perspective, audit and feedback represents an intrapersonal-level
intervention. Although it’s often implemented in organizational settings, audit and feedback
targets intrapersonal determinants of provider behavior. For example, the knowledge that
a performance discrepancy exists and that then could trigger a motivation to reduce
that discrepancy.
Now, it is possible that audit and feedback works, or could work, in other ways by targeting other determinants at other levels of influence. For example, audit and feedback could produce changes in physician behavior by changing organizational culture or interpersonal relationships in the workplace. Whether audit and feedback in fact works in these ways is not known. Absent more theory and research, it is reasonable to regard audit and feedback as an intrapersonal-level intervention, one that targets provider motivation to reduce discrepancies.
So, by way of example, here’s a list of possible barriers or determinants
at multiple levels influencing the provision of mental health care. Here also a list of
implementation strategies that target those determinants. Shannon pointed out to me that
in low- and-middle income countries there is often a chronic shortage of mental health
care professionals. So, it’s common to engage other healthcare professionals or even other
professionals or even community members to provide mental health care. Obviously, we
want that care to be evidence-based. And that’s implicit in the examples that I will be providing.
So now I’m going to walk you through the five strategies fairly quickly and we can come
back to this slide or others if there’s time or interest.
So, with all that in mind, let’s talk about the five generic strategies for designing
a multilevel intervention. The first one I call the accumulation strategy. In this strategy, as the figure depicts, interventions at different levels produce a cumulative impact on a common mediating pathway or set of pathways. Each intervention is not conditioned on the
others. They exhibit what we organizational scientists call pooled interdependence, meaning
that each intervention makes a discrete contribution to the outcome without being dependent on
the other interventions. Let’s take an example where the aim is to
engage school teachers in the provision of evidence-based mental health care for children
who have experienced trauma. So, this slide depicts an accumulation strategy
in which a multilevel intervention addresses organizational, interpersonal and intrapersonal
determinants influencing teachers’ motivation to provide mental health care for their students.
Each strategy makes an independent contribution to teachers’ motivation. For example, school
administrators could make providing mental health care to students who have experienced
trauma and organizational priority for the school and put policies and practices into
place that influence teachers’ motivation. An opinion leader like a highly respected
teacher could positively influence social norms and professional role identity for teachers
to provide mental health care to students in need and finally, clinical supervision
by mental health professionals might increase teachers’ self-efficacy to provide mental
health care to their students and thereby increase their motivation to do so as well.
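To make the pooled-interdependence idea concrete, here is a minimal sketch, not from the talk, in which each strategy adds its own independent increment to a common mediator (teacher motivation). The strategy names and weights are purely hypothetical.

```python
# Pooled interdependence: each intervention contributes to the common
# mediator (teacher motivation) without depending on the others.
# Strategy names and weights are hypothetical, for illustration only.
WEIGHTS = {
    "management_support": 0.30,    # organizational-level determinant
    "opinion_leader": 0.25,        # interpersonal-level determinant
    "clinical_supervision": 0.20,  # intrapersonal-level (self-efficacy)
}

def motivation(active_strategies):
    """Teacher motivation under a set of active interventions.
    Contributions simply accumulate (2 + 2 = 4 effects)."""
    return sum(WEIGHTS[s] for s in active_strategies)

# Dropping one intervention removes exactly its own contribution,
# leaving the effects of the others unchanged:
full = motivation(["management_support", "opinion_leader", "clinical_supervision"])
without_supervision = motivation(["management_support", "opinion_leader"])
print(full - without_supervision)  # equals the supervision weight alone
```

The additive form is the defining feature: no term is conditioned on another, so there is no interaction and no 2+2=5 effect, only 2+2=4.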
Note that I am making assumptions about the mechanisms by which the implementation strategy
influences the outcomes of interest. Something I will be saying more about later is that
we don’t really have a good theory or mechanistic understanding of all the implementation strategies
that we try. Also note that I’ve assumed only a single mediated pathway. These are simplifying
assumptions. I’m making them just to keep the exposition simple so that the logic of
the design doesn’t get lost in complexity. In the amplification strategy, the effect
of one or more interventions is conditioned on another intervention. One intervention
increases the primary audience’s sensitivity to or receptivity to the other interventions.
This model exemplifies a form of moderated mediation, meaning that one intervention amplifies
the magnitude of the effect of the other interventions on the mediating pathway or process.
Continuing the example, audit and feedback could be added to boost the signal of management
support or opinion leaders for teachers’ provision of mental health care to students who have
experienced trauma. This combination would be warranted if theory or research indicated
that teachers are more responsive to management support and opinion leaders if they have credible,
individually tailored information that signals a discrepancy between desired and actual performance.
In other words, the effectiveness of management support and opinion leaders depends on contextual factors.
In this example the contextual factor is a visible performance gap that indicates a need
for improvement. For example, audit and feedback showing that a teacher is doing a poorer job
than her peers in meeting her students’ mental health needs could increase that teacher’s
receptivity to or responsiveness to management support and opinion leaders.
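The amplification logic, audit and feedback boosting teachers’ responsiveness to management support, amounts to a positive interaction term. Here is a minimal sketch, not from the talk, with hypothetical coefficients chosen only to show the logic.

```python
# Amplification as moderated mediation: audit and feedback (the
# moderator) increases teachers' responsiveness to management support.
# All coefficients are hypothetical, for illustration only.
def motivation(mgmt_support, audit_feedback,
               base=0.1, b_mgmt=0.3, b_interact=0.4):
    # The positive interaction term encodes the amplification: support
    # matters more once a credible performance gap is visible.
    return (base + b_mgmt * mgmt_support
            + b_interact * mgmt_support * audit_feedback)

# Marginal effect of management support, with and without feedback:
gain_without = motivation(1, 0) - motivation(0, 0)  # approx. 0.3
gain_with = motivation(1, 1) - motivation(0, 1)     # approx. 0.7, amplified
print(gain_without, gain_with)
```

Contrast this with the accumulation strategy: there the interaction coefficient would be zero and the interventions would simply add.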
As with the amplification strategy, in the facilitation strategy the effect of one or
more interventions is also conditioned on another intervention. However, instead of
boosting the signal, the conditional intervention in the facilitation strategy clears the mediating
pathway for the other interventions to produce the desired outcomes. To continue this example, this slide depicts a multilevel intervention in which a clinical reminder is added to facilitate the motivating effect produced by management support and opinion leaders. So, this approach would be
warranted if theory or research indicated that management support and opinion leaders
increase teachers’ motivation to provide mental health care to students who have experienced
trauma, but in order to translate that increased motivation into action, the teachers need
a cue or reminder. The facilitation strategy is especially useful when the interdependence
of multiple determinants is defined by necessary but not sufficient conditions. In the above
example the increased motivation that management support and opinion leaders stimulate might
be necessary but not sufficient to produce, excuse me promote mental health care provision
absent some sort of cue or reminder. Conversely, the action cue or reminder itself might not
be sufficient to promote mental health conditions but might be necessary to address in conjunction
with these other determinants. In the cascade strategy, an intervention at
one level affects the desired outcomes in and through one or more interventions at other
levels of influence. This demonstrates what we in organization science would refer to
as sequential interdependence, meaning that the outputs of an intervention at one level
become the inputs of an intervention at another level. And this design reminds me of serial
mediation. So, in this example, policy development at
the ministerial level increases school administrator’s motivation to meet children’s mental health
needs at school. Increased management support, in turn, signals to teachers that meeting
children’s mental health needs is an organizational priority. Highly respected teachers in the
school respond to that increased organizational priority placed on meeting children’s mental
health needs by becoming champions of the idea. Through their opinion leadership, they
positively influence social and professional norms operating in the school, redefining
what teachers consider to be in-role behavior as teachers. This, in turn, increases their
motivation to provide mental health care to their students. The cascade approach would
be warranted if theory or research indicated that determinants at different levels of influence
interrelate primarily through mediation rather than moderation. By linking multiple mediating
processes into an integrated causal pathway, cascading interventions create a circuit through which the effects of these interventions combine and flow. Although the word cascade implies flow from higher levels of influence to lower levels of influence, this approach can be applied in reverse. Often, interventions at one level run up against technical, resource, authority, or other sorts of constraints that can only be addressed at higher levels of influence. Community development and community empowerment models of intervention often work in a sort of bottom-up fashion.
In the convergent strategy, interventions
at different levels mutually reinforce each other by altering patterns of interaction between two or more target audiences. This is referred to in organization science as reciprocal interdependence, meaning the outcomes of some interventions become the inputs for other interventions, and vice versa. So, in this example, management support
and opinion leaders increase teachers’ motivation to provide mental health care to students
who have experienced trauma. A stigma reduction campaign conducted in the community positively
influences students’ knowledge and attitudes about mental health care, increasing their motivation to seek such care, or to be receptive to such care when it is offered. The teacher-directed and student-directed interventions mutually reinforce each other to promote a different kind of student-teacher interaction, the result of which is the provision of evidence-based
care. The convergence strategy would be warranted if research showed that coordinated behavior
change by different interdependent parties is necessary to produce a desired outcome.
The chronic care model for example posits that high-quality chronic disease care depends
on supportive evidence-based interactions between an informed and activated patient
and a prepared and proactive practice team. So, although this example focuses on altering
the patterns of interaction between individuals, the convergent strategy could be used to alter
patterns of interaction among groups, organizations, or communities.
The discussion that I’ve had so far of the strategies is illustrative, it’s not exhaustive.
There are surely other ways to combine interventions. Even these five strategies could be combined
in some interesting ways. I want to say that while it was fun to think about the causal
logic for combining interventions at different levels, I admit that these five strategies
are largely heuristic at this point. To really do this, we need multilevel theories that
explain how determinants at multiple levels interact to produce health and other outcomes.
We have lots of single level theories, but very few cross-level ones. Likewise, we need
cross level research that examines the interdependence of variables at multiple levels of influence.
We have statistical models for examining cross level moderated mediation for example, but
you don’t see a lot of cross level research in part because it’s really hard to do. Also,
you need large sample sizes in many cases. And finally, as I mentioned earlier, we really
don’t have a good grasp about how implementation strategies work. We really need to investigate the mechanisms of implementation strategies, something that I’m hoping to do if the grant applications that I’ve worked on in the past year or so get funded. So, I’m going to end this portion of my remarks and turn it over to Shannon with this point: multilevel interventions are costly, complex, time-consuming, and demanding on community members, providers, and patients. So, it’s not enough to hope that multilevel interventions will produce
2+2=5 effects, or even 2+2=4 effects. We need to make sure that multilevel interventions
are designed to produce such effects. This means that we need to carefully think through
the logic of how these intervention components are going to work together to produce complementary
or synergistic effects. And really to close this part of my remarks, I just want to remind
you of the quote from Rumi, the Sufi mystic and poet: “You think that because you understand one, you must also understand two, because one and one make two. But you must also understand ‘and.’” And so, with that, I’m going to go ahead and turn this over to Shannon.
>>SHANNON DORSEY: Thank you, Bryan. So, what I wanted to do is talk to you just a little
bit about a project we have called BASIC, which is NIMH-funded and has
a variety of interventions at different levels and talk a little bit about how we have thought
about trying to apply the multilevel intervention work that Bryan is talking about. So, I will
go ahead and move into that now. So, our team has had about a 10 year’s history
of focusing on trauma-focused cognitive behavioral therapy in eastern and southern Africa. So,
we’ve had a series of effectiveness trials and studies examining acceptability, feasibility,
and those have been in Bungoma, Kenya, and Moshi, Tanzania, and then, with my colleagues at Johns Hopkins, mostly led by Laura Murray and Paul Bolton, in Lusaka, Zambia. And so, this ten-year history of work has provided strong evidence that trauma-focused CBT is effective and that there’s a high level of acceptability both from families and the lay counselors
that we’ve trained to provide it. And what we’re now trying to move into is really understanding
what it might take to scale up and sustain this intervention. So that’s what I’m going
to talk about today. So, we really focused on the scale up and
sustainment questions in this new study called BASIC, Building and Sustaining Interventions
for Children. Kenya launched a mental health policy in May of 2016, and we were writing
this grant at exactly that time to really try to capitalize on the prior work in Kenya
and our collaborations with Ace Africa in western Kenya and also to build on the launch
of the mental health policy. It’s kind of the perfect context for thinking about what it would take for ministries, organizations, and leaders to support effective delivery
of mental health interventions, because Kenya was moving in a direction to launch a policy,
we had an effective intervention, and we had a local NGO that had capacity and expertise
in the intervention. What we were interested in really looking
at is, what makes an enabling context for mental health delivery? And our partners at
ACE Africa, you can see their logo there from one of the schools and some of their prior work. Our partners in Bungoma in western Kenya had identified that if this mental health intervention, or a mental health intervention, were to be scaled up, it would be delivered, given that it was child-focused, either through the education sector by teachers, building on some of their prior work where teachers ran programs for hygiene and livelihood, or through health extension using community health volunteers (CHVs), who are the last mile of healthcare in Kenya, are already embedded in the community, and were individuals ACE Africa was already working with to deliver other health and orphan-focused programs. What’s interesting, though, is that these two
contexts are actually quite different in terms of their challenges and existing supports.
So, the goal of the study, on which Bryan is a co-investigator, is really to understand what makes an enabling context in both these sectors.
So, for example, just to give you a sense of
how these are a little different, in our pilot work before writing this R01 application and
developing the grant ideas, ACE Africa did a pilot training, training teachers and community
health volunteers in the intervention. The teachers grasped the intervention more quickly,
probably not surprisingly, given their professional training and development already as teachers,
but they had a number of competing demands, which probably surprises no one, including teaching numerous subjects and large class sizes. They were on strike when we were writing this grant because the pay was perceived as not high enough, and indeed it was too low to make a living in this position. They also have limited time given their teaching roles and other obligations, and they have a high need for leadership support to be able to provide mental health care because of the organizational structure.
For CHVs, they have less formal education than the teachers, it took them a little longer
to grasp some of the intervention concepts because those were newer. They would require a stipend to cover their work if they were going to do mental health delivery, but they have a
lot more freedom during their week and they operate more independently than teachers with
a little less of a formal organizational structure. So, they may need less leadership support,
or rather a lack of leadership support might not be as big a barrier as it would be for
teachers.
So, building on some of Bryan’s work, we were using his organizational theory of implementation effectiveness, which many of you may be familiar with, and we were interested in looking at implementation effectiveness in these two sectors and what
interventions or implementation policies and practices might be needed at each site for
them to effectively deliver mental health services. And in the green box you will see
some of these implementation policies and practices that we believed might be necessary
for building a positive implementation climate that would then lead to implementation effectiveness.
So, from our prior work we knew that clinical training in the intervention and ongoing clinical
supervision were necessary. We imagined they would need some resource provision, and then
we wanted to study and see whether persuasive communication from leadership and management
support and specifically workload adjustment for teachers might be necessary so that they
would actually have the time to deliver the intervention. We guessed that given some of
the differences I’ve talked to you about then what we imagined would overlap so that they
would both need training and supervision whether you were delivering mental health services
in the education sector, via teacher delivery or in the health sector via CHV delivery.
But we thought there might be some unique interventions needed, given some of the differences. We did imagine that many of the prior findings about determinants of effective implementation, both domestically and globally, would also be determinants in these contexts, but we knew that those of us in the US at Duke and UW, and even at ACE Africa, might not know exactly what would be necessary for schools and communities to actually deliver mental health care. And so, given the differences in context,
we really wanted to study both the planned interventions, training and supervision, which I will talk about, and also understand what might emerge in terms of interventions at different levels to support effective mental health delivery, in education via the schools and teachers, and in communities via the CHVs. So just to show you a few of the planned interventions and the levels at which we thought these were acting: remember, Bryan talked about how you could think about levels of the setting or levels of what you are targeting. I have these different interventions separated by levels here, and by levels I mean what we are targeting. Bryan talked about five strategies, and I will try to walk you through the interventions we selected and the strategies through which they operate. So, at the macro policy
level, as in most of the global health studies that most of us do, we imagined that without ministry support and circulars from the ministry initiating site participation, none of this would start. So, we saw ministry engagement and the use of circulars to support engagement as necessary, and the strategy would be cascade, because without that policy-level intervention nothing would actually start at the organizational level.
Likewise for leader buy-in. Without the buy-in of head teachers in the schools, and of the community health extension workers (CHEWs) in health who oversaw the community health volunteers in the area, nothing would happen. These two interventions operate at different levels, but we imagined it was a cascade strategy. And then there were other planned interventions that we proposed
and theorized worked at the intrapersonal level, so targeting teacher or CHV motivation
and knowledge to be able to deliver the intervention. And we thought all these operated via the
accumulation strategy. If you remember Bryan showing the way accumulation works, it’s three interventions basically targeting the same pathway. So, we thought teachers and CHVs would need in-person, active training in the intervention, led by experienced Kenyan lay counselors in the language of the counselor’s choice, with certificates provided to reward, incentivize, and encourage motivation, and then clinical supervision led by the same experienced Kenyan lay counselors. We thought all three of these interventions accumulated to increase counselor motivation to deliver the intervention and to provide mental health services.
After the first 10 schools and the first 10 communities surrounding the schools delivered trauma-focused CBT (TF-CBT), the clinical intervention, we did qualitative interviews to try to understand the other interventions that sites implemented, so that we could understand what else was creating an enabling context for mental health, because the 10 schools and the 10 villages were all successful. They all delivered TF-CBT, and they all delivered it with acceptable or high levels of fidelity.
So, what I wanted to show you are the other interventions that were site-driven, or emergent, and that we learned about through these qualitative interviews. I’ve listed a couple of them here, and I also want to show you how we have been thinking about the five strategies that Bryan just walked us through, and how these emergent interventions at multiple levels are working complementarily or synergistically with each other and with the planned interventions. I wouldn’t say we are totally sure, or that we’ve got the strategy for how they work exactly right just yet, but it’s been an incredibly valuable activity to think through the different strategies, the interventions that were planned, the emergent interventions, and how these might operate synergistically or complementarily. And I’ll share what we’ve learned. Our goal is that what we’ve learned from these first 10 sites will inform implementation coaching for the next 30 sites that deliver, that is, the next 30 schools and the next 30 villages within which community health volunteers will deliver.
And we think that engaging in this activity and thinking about how these interventions work together really matters, especially in a global mental health context where resources may be limited.
So, one interesting thing that probably surprises no one on this call: community leaders did engage in a lot of community outreach, and we saw this as a convergence strategy. Leaders tended to go into the communities and have individual conversations; they also presented about the intervention at board of management meetings, and the community health extension workers (CHEWs) presented at different community and health meetings like the chief’s baraza. What they were trying to do is combat stigma and build community engagement, so that as our teachers were building their knowledge and motivation, there would be students, families, and children to engage in the intervention. The other thing that we saw is that both the
head teachers and the CHEWs did a lot of persuasive communication, both individually to the people chosen as counselors and to individuals at their site, which really signaled their support for this intervention and for delivering mental health care. This had a nice organizational effect, in that teachers were willing to take on some of the work of other teachers to support the teachers who were doing the counseling, and it also had an intrapersonal effect, in that it made the teachers more motivated and the community
health volunteers more motivated to deliver the mental health intervention. On the school side, however, it may have had an amplification effect. If you remember, amplification was the second strategy that Bryan showed, and it is more a case where one intervention acts as a moderator that may boost the signal of another intervention. So, in schools, because everyone is at the same site, at the same school, when the leader attended supervision or observed groups, it not only contributed to motivation, it really boosted the signal. Counselors were more likely to engage in supervision. Our supervisors from ACE Africa said they were more likely to try hard in supervision and to practice more, because leader support not only contributed to this accumulation of supervision, training, and the other work that we did, but really amplified the signal. They were
more motivated to work hard. And then there is one other difference across the sites that I want to mention, because what’s been interesting is thinking about, in these two different contexts, which interventions differ, or which strategies they operate through. We saw that, of course, individuals need resources to provide a mental health intervention. At a minimum they just need a space to provide it; at best, they have chalkboards and writing materials. The head teachers were providing resources not only for their own teachers to deliver, which facilitated their ability and cleared their way to do it, and they also provided workload adjustments; but we also saw a really amazing inter-ministerial collaboration where, once schools learned that CHVs were also delivering the intervention, they offered rooms in the schools, they provided chalk and writing materials, and they provided the things that not only the teachers but also the CHVs might need to deliver. They also did other things, like talking to teachers within their school to ensure that they released the children to receive the intervention from the CHVs. And so, we saw this neat inter-ministerial collaboration, and it seemed to function as a facilitation strategy. So, we’re still figuring out exactly how these
interventions work together. We’re just in the first year of the study; once we implement with the additional 30 schools and the other 30 sites, we will have more data that will contribute to our understanding of exactly how these strategies operate and to our ability to test it. Bryan talked a little bit about how you need large sample sizes for this. In our study we may not be able to test all of our hypotheses about how the strategies operate, but we’re using qualitative comparative analysis, which can look at conjunctural causation, and we can also fit multilevel models to look at some of our strategies and the different levels, because we have 40 sites in each sector and we have over 1,000 kids, 120 teachers, and 120 CHVs.
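As a rough illustration of the kind of multilevel model described here, children nested within sites with a site-level predictor, below is a minimal simulated sketch. The predictor name (leader support), the effect sizes, and the noise levels are invented for illustration and are not the study’s actual measures. With balanced simulated data the site-level effect can be recovered from site means; a real analysis would fit a mixed-effects model (for example, statsmodels’ MixedLM).

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, kids_per_site = 40, 25                   # ~1,000 children across 40 sites
leader_support = np.repeat([0, 1], n_sites // 2)  # hypothetical site-level predictor
site_intercepts = rng.normal(0.0, 0.5, n_sites)   # random variation between sites

# Child-level outcomes: grand mean + site-level effect + site noise + child noise
outcome = (
    2.0
    + 0.8 * leader_support[:, None]               # "true" site-level effect (invented)
    + site_intercepts[:, None]
    + rng.normal(0.0, 1.0, (n_sites, kids_per_site))
)

# With balanced data, aggregate to site means and compare the two groups of sites;
# this respects the clustering instead of treating 1,000 children as independent.
site_means = outcome.mean(axis=1)
effect = site_means[leader_support == 1].mean() - site_means[leader_support == 0].mean()
print(f"estimated site-level effect: {effect:.2f}")  # should land near 0.8
```

Analyzing the child-level rows while ignoring the site clustering would overstate the effective sample size; aggregating to site means, or fitting a random-intercept model, keeps the inference honest about how many independent units there really are.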
So, we will be able to look at and test some of our theories about how these strategies are working. That was the last remark and point I wanted to make, and I will turn it over to Ish to open up questions about either Bryan’s presentation or about BASIC. If you want more information, we have a nice podcast here and some information on our website.>>ISHMAEL AMARREH: Thank you, Shannon. Thank you, Bryan. We’ll open it up for questions now; if you have any questions, please use the portal to write them in, and I will take all the questions and ask them if there is more than one. We’re sorry about some of the technical difficulties with the voice, but hopefully you can hear us better now, and we’ll try to fix it in the recording when we publish it on our website.
>>WEBINAR OPERATOR: Once again to ask a question the Q&A pod is located in the right-hand corner
on the bottom of your screen.>>ISHMAEL AMARREH: I don’t see any questions.
I will give people maybe a couple more seconds to ask questions. And maybe I will start with
a question. Bryan, if you can hear me, I would like your thoughts on something that you alluded to at the end of your presentation, which was: how do we investigate the mechanisms of implementation strategies, or the causal mechanisms of implementation strategies? You said that you are working on this and that it is part of your research. Can you say something about that? Because that’s a question that comes up a lot in the implementation
science world in global mental health.>>BRYAN WEINER: Sure, I can say a few words. Hopefully my sound quality is better than perhaps it was earlier. Like any new field, implementation science is still developing, and I think the pace of research sort of outstrips the pace of development of some of the basic building blocks of a science. That’s true with respect to theory development and measurement, but I would also argue it’s true for our mechanistic understanding of the interventions, that is, the implementation strategies that we are deploying and evaluating. For many years, throughout the ’90s and ’00s, implementation strategies were tested in randomized controlled trials using what Brian Mittman, a well-regarded implementation scientist, referred to as the “empirical treatment approach,” or what I would consider a sort of black box approach to doing trials, where we try a strategy, see if it works, look at the average treatment effect, and then try it in other settings; hopefully, in our systematic reviews, we are able to say something about its overall effectiveness. But more than 100 trials of audit and feedback, just to pick an example, had accumulated in a systematic review before interventionists or implementation scientists stopped to ask themselves: how exactly does audit and feedback work? What’s the theory of the program, or the theory of change, the mechanisms by which audit and feedback works? Because if we understood that, we might do a better job matching the use
of audit and feedback as a strategy to the particular implementation problems and the
context in which we are intervening. So that understanding of both the determinants that are operating and the mechanisms of the strategies, both those pieces, is critical for a really good match, and the effectiveness of the strategy will depend upon that match.
How we go about doing that starts with
an explication of the causal theory, the theory of change, or the identification of the mechanistic
pathways by which implementation strategies work. There’s not a clearly set procedure for going about doing that, although in a recent publication, Cara Lewis, a colleague of mine at KPWHRI, that’s the Kaiser Permanente Washington Health Research Institute, and some of her colleagues there and I published a paper in which we outlined a procedure, a process for developing causal models, or I should say causal pathway diagrams, for implementation strategies. Most of this relies on existing theory and research to try and explicate what those mechanisms might be, and then on the use of reliable and valid measures in studies with the appropriate study designs to tease out whether or not the strategies are operating according to those theoretically posited mechanisms. There is a systematic review, and soon there will be a second one, that was recently published looking at mechanisms of implementation strategies in mental health; that was by Nate Williams, and I think it was published in 2017. He found 16 or so randomized controlled trials where mechanisms were examined and, unfortunately, found that none of the hypothesized mechanisms for the implementation strategies were supported by empirical evidence. So, there’s clearly a lot of work to be done. It’s also the case that for many of the implementation strategies you find in the ERIC compilation, that is, the Expert Recommendations for Implementing Change compilation of strategies that Byron Powell and many others have been working on for many years, the mechanisms are also unclear. So, I think a good deal of this starts with theory development, developing causal pathway diagrams for these implementation strategies to identify or elucidate those mechanisms, and then the use of reliable and valid measures and study designs to investigate them.
>>ISHMAEL AMARREH: Thank you. Thank you, Bryan. And Shannon, I see you have a question
that you wanted to ask Bryan. But I will let you ask that question.
>>SHANNON DORSEY: I was thinking about how we were kind of talking through combining these strategies. One freeing thing for me, and maybe for anyone else on the call, was Bryan saying that the strategies don’t have to operate independently; they might mix and match and go together. So it was a very useful activity for me to try to think through all of our interventions and their levels and how they might interact, and I didn’t know if Bryan had any other words of wisdom or thoughts for people who are also engaging in this activity when it’s not just accumulation or amplification or cascade, how you might encourage people to think through these strategies in their own work.
>>BRYAN WEINER: I can’t offer you any corrections or criticisms about your efforts to sort of map them, and even if I did, I wouldn’t do that on a webinar, Shannon.
[Laughter]>>BRYAN WEINER: But I think the more important
point that I would make and that you have illustrated in your part of the presentation,
is the importance of really trying to think through how and why these implementation strategies
are going to combine to produce the outcomes of interest. As I said earlier, it’s not enough
to select implementation strategies at each individual level based on, say, the theory
and the evidence for them and the alignment with the determinants. You also have to think
about the combination of strategies. That’s the whole point of the Rumi quote. And I think
this is important because we have a fairly good history, a long history in health promotion
and community intervention, where we see what look like good ideas but in retrospect actually
seem to be kitchen-sink approaches to deploying multilevel interventions that raise questions
about exactly why these particular ones were chosen and how they’re supposed to work together.
I think those are the types of questions that I would encourage people to ask, not only because I think they help advance our knowledge about this, but because with that effort you are more likely to see the sort of complementary or synergistic effects, and also because they have implications for the time, timing, and sequencing with which we deploy the strategies. That’s something I didn’t spend much time talking about, although there are some prepared slides, and my speaker notes will hopefully be available for everyone participating in the webinar. But as you sort
of started thinking through and describing how these different pieces of the BASIC intervention, those implementation strategies, might work together, both the ones that were intended and designed and those that emerged spontaneously in the two sectors, they have implications for how you sequence them and how long each one might take, not only for deployment but also for generating some of those positive effects on mechanisms or outcomes. So, thinking through the causal logic here is important for practical considerations as well: not only selecting which strategies you combine and why, but also how you are going to deploy them, in what order, and how much time it might take for their effects to materialize.>>ISHMAEL AMARREH: Thank you, Bryan, and thank
you, Shannon. I also wanted to reiterate that we will provide the participants of this webinar with the slides from Dr. Weiner and Dr. Dorsey. If that is something you need, please email me at [email protected], or you can find my information on the page where you signed up for this webinar, and I’d be happy to send you copies of those presentations. So, we have about maybe another 20 minutes
left on our time here. I don’t see questions coming from participants. Unless, Bryan and Shannon, you have anything else to add; I know, Bryan, that you have a couple of slides you didn’t go through, so if you want to go through those, I think we still have time. But if you don’t, we will share those slides with the participants, and we will just cut it short and give you back a half hour of your life today.
[Laughter]>>BRYAN WEINER: Well, I really should have prepared a little bit more and rehearsed the slides that I didn’t present, so I’m a little hesitant to do that, just because it’s probably going to be pretty choppy, but I’d be delighted to have these slides made available, including the speaker notes I typed, if people would find that helpful. I did have a few remarks about how to consider time and timing in the context of multilevel interventions. Let me just pull back up one slide that might be relevant here.
>>WEBINAR OPERATOR: One moment.>>BRYAN WEINER: Yeah. This would be slide 27, or I can go ahead and jump to it. So, the paper I alluded to, where we outlined the five strategies, was published in a JNCI (Journal of the National Cancer Institute) monograph in 2012. Unfortunately, it is not a paper that’s indexed in PubMed, because the monograph series is not part of PubMed, but hopefully folks will find it if they are looking for it. In that special issue there was also a very thoughtful paper by Jeffrey Alexander and one of his colleagues on the effect of time in multilevel interventions. Let me see if I can get to it... there we go. They look at time as more than just clock time, that is, frequency, duration, and sequencing; there’s also the concept of time in terms of temporality, or how things unfold in time, the relation of time to contextual events, and then the speed with which things take place. They noted that time is really experienced in multiple ways in multilevel interventions, which is depicted variously on this slide.
And this one I thought was the most interesting. There is a whole host of questions about how time plays into multilevel interventions that we really don’t have a lot of theory and experience to draw upon, and so these are largely questions that are addressed intuitively, on a trial-and-error basis, by multilevel interventionists. When should the intervention be initiated? How long will it take to implement these pieces of it? How long will it take for the effects to become observable, and how long will those effects last? How should the intervention components at different levels be sequenced? How frequently, and for how long, should the intervention targets be exposed to those intervention components? At what pace should intervention delivery and exposure occur, and what’s feasible or optimal? And how will the interventions interact with the targeted audience’s life course or disease progression or, in the case of the organizational settings in which they are deployed, their activity cycles over time? So, there are a whole host of issues about time and multilevel
interventions that we really don’t have a firm grasp of, but I thought Jeffrey Alexander
and his colleague in the JNCI monograph paper in 2012 did a nice job discussing. I will
stop with that and just say there are other slides in here which folks are welcome to
review at their leisure around evaluating multilevel interventions and just note that
there are different kinds of designs that we might be interested in, which align with
the five strategies, and then there are specific study designs both quasi-experimental and
experimental for teasing out these different options, depending on which of these objectives we might have.>>ISHMAEL AMARREH: Thank you for those words
of wisdom. And Dr. Dorsey, would you like to have a closing remark?
>>SHANNON DORSEY: No, I think hopefully I will have lots more to say once we have the other sites initiated and we can start to look at how some of these things do operate and test our strategies, but for now I think what I have provided is sufficient.
>>ISHMAEL AMARREH: Thank you. Thank you both. Thank you again; we are so grateful that you agreed to present this, and we learned a lot from this presentation. I learned a lot, and the takeaway for me is that implementation science is not an easy science to conduct, and that thinking through your study design ahead of time, and through all the components that come together to make a study design, is very important. So thank you again. We will share these slides with everybody, and thank you all for attending; if anybody wants these slides, please reach out to me.
>>SHANNON DORSEY: Great, thank you Ish for inviting us and we appreciate your hosting.
>>WEBINAR OPERATOR: This concludes today’s program. Thanks for your participation. You
may now disconnect and have a great day.
