
"But HOW does it work?" Innovating family planning interventions

Helen Burchett has a background in health promotion and social science research, and uses these skills as an active researcher and teacher at LSHTM. She has a particular research interest in the interplay between context, interventions, and people, to understand how or why interventions may work (or not work). On a crisp autumn morning, Helen and I caught up for a virtual coffee to discuss her latest project using innovative methods to explore family planning interventions for adolescents in low- and middle-income countries.

"Covid-19 hasn’t removed the need for contraceptives. People are still having sex – and either using or not using contraception. The virus doesn’t change that so we need to keep this work going."

Q1. A tricky one to kick off – can you describe the project in one sentence?

This project aims to look at the upstream interventions to encourage contraceptive use in adolescents in low- and middle-income countries. It’s trying to understand how they work and why some interventions are more effective than others.

Q2. And this is part of the CEDIL initiative – can you briefly cover what that is?

Yes – so CEDIL stands for the Centre for Excellence in Development Impact and Learning. The website has a lot of detail so I’d advise people to look at that – but in summary, it was funded by DFID to support innovations in impact evaluations and evidence synthesis in international development. Projects range from one year to three years or more, and there are currently 25 projects.

Q3. Before getting into the details, I’m interested to know what led to this work. Is it part of an ongoing stream of work – or was there a particular ‘eureka’ moment?

It stems from my interest in applicability. When can research findings from one setting or context be useful in another setting? And why is there variation in the effectiveness of interventions?

Answering these involves a deep understanding of how an intervention works.

In recent years, I’ve been talking about this issue of applicability and methods with Dylan Kneale from UCL. I’d come to the realisation that in order to understand if findings can be applied to a new setting, we need to understand how an intervention works, and taking that to a higher level, you need mid-range theory.

Then the CEDIL call came out, and one of the streams was about transferability. In essence, it was about mid-range theory and transferability – so I thought, ‘If I don’t apply for this, what am I doing? This is a call that’s speaking to me’.

Q4. Is this closely linked to implementation research?

The point is, we’re not looking at the activity itself, we’re looking at the higher level – why the activity works.

I’d say there are three key things when trying to understand why an intervention works and why there’s variation in effectiveness, aside from the content of the intervention. First, the context it’s being done in. Secondly, the implementation. And finally, there’s how it is experienced.

The implementation and experience are going to be shaped by the content and the context – in fact they’re all interlinked in a way. You might get the exact same intervention being done in two places, but in one place the context is different – so it’s not being implemented. Or it might be experienced differently in one setting, so it’s not effective.

What we’re doing is looking at the mid-range theory, a slightly higher level.

And the example I always use to explain this is around bullying in schools (which, as a caveat, I know nothing about). But let’s assume that there’s an intervention to reduce bullying in schools, and the way this aims to work is to ‘increase the self-esteem of the students’. The mid-range theory is that increasing self-esteem leads to a reduction in bullying. But this can take different forms in different schools. In School-A, it might be that everyone is given a badge to wear that says: ‘I’m great’. In School-B, the badges might not work and instead it might be that fist-bumps with the teacher work. But the point is, we’re not looking at the activity itself, we’re looking at the higher level – why the activity works.

So if we know that we need to boost self-esteem to reduce bullying, we can do this via X, Y, or Z. What the actual activity is will vary depending on the context.

Q5. And what do we already know about the use of contraceptives in adolescents? Why is this a priority for research?

We know that reducing adolescent childbearing is a global priority and that contraception is one of the ways of doing this.

There is a lot of evidence on the effectiveness of interventions to increase the use of contraception. But they’re typically targeting the supply of contraception or the individual-level factors that shape behaviour – that’s knowledge, attitudes etc.

But there are also upstream factors that really shape contraceptive use. By upstream factors I mean gender equality, fertility norms, economic empowerment, and participation in education – particularly for adolescent girls. These things will influence individual attitudes towards and behaviours related to contraception and we need to know: How can we try and intervene and shape these?

That is: how do we reduce gender inequalities? How do we shift fertility norms so that it’s not expected that adolescent girls have babies? And how do we increase economic empowerment and education (particularly for adolescent girls)?

And then if we do that, what effect will it have on contraceptive use and how do interventions achieve this? So that’s the gap we’re trying to fill, as we’re not sure how clear the evidence is there. We want to develop something that can be transferred to different settings.

Q6. Is there an unmet need for contraceptives in adolescents currently?

There’s less focus on how to increase desire to avoid, limit, space or delay childbearing. And much less work on trying to increase adolescents’ agency – the belief that they are in control.

So this is an interesting question, as we’re addressing more than just this. You can conceptualise contraceptive demand as having three parts:

  1. The desire to avoid, space, delay or limit childbearing
  2. The desire to use contraception
  3. Having agency to use family planning

Currently, a lot of the interventions try to change individuals’ attitudes, knowledge, and behaviour – focussing on the desire to use contraception. But there’s less focus on how to increase desire to avoid, limit, space or delay childbearing. And much less work on trying to increase adolescents’ agency – the belief that they are in control, that they are able to use family planning. Young people might feel that they’re not in a position to use contraception, that they need permission, and that they’re constrained. It might not even be on their radar that it’s an option for them because the social norms and their lack of empowerment mean they just expect to have babies.

So how does this link to mid-range theory? Because you might target interventions at each of these three parts. Individualistic interventions focus on telling adolescents about contraception, how to use it, and supplying contraception. Mid-range theory helps us understand how to develop effective interventions that address all of these parts.

One of the core MSc modules I teach, Foundations for Health Promotion, discusses this – that having knowledge itself doesn’t necessarily change behaviour. There are wider issues that shape people’s behaviours and it’s important to address these.

The CEDIL team from the International Centre for Reproductive Health in Mozambique.

Q7. Can you talk through some of the specific upstream factors you aim to look at?

It’s things like inequalities, norms, empowerment.

So, for example, the interventions might be cash transfers to keep girls in school for longer. Or savings and loans schemes for adolescents to give them economic empowerment. Or some kind of campaign to shift the norms around teenage pregnancy and encourage spacing between children etc.

Q8. From what I understand, this is an innovative new analysis method. What does it involve and why is it new?

The main analysis method is QCA, or Qualitative Comparative Analysis. It’s not that new in itself, but it is new for development and has only started being used in health relatively recently. It’s a way of looking in depth at a small number of cases to try and explore the different combinations of factors that can lead to interventions being effective or ineffective, or harmful. For us, those cases are intervention evaluations.

We’re also using another method called intervention component analysis. It’s a way of exploring the nuts and bolts of the intervention – what it involved, how it was delivered, implementation and those kinds of things. These factors are usually poorly reported in trials in the published literature – those bits get squeezed out by journals’ restricted word counts. But it’s also a historical issue. Interventions began in medicine where there’s much less context: ‘I gave Drug A at Dose Y using Technique X and the patient got better/didn’t get better’ – the results are the most important bit.

But now we talk about this ‘black box’ of interventions where you don’t actually know what was going on. We can make some assumptions, but if you wanted to replicate that intervention, it’s quite tricky. So this method aims to pull out those nuances and the theories that underpinned the intervention: How was it implemented? What challenges might they have faced with implementation that might otherwise have been missed?

And it’s vital to understand what key things are needed in order for an intervention to work. Often, interventions are taken to a new place and aren’t implemented in the same way, or are experienced in a different way. These things all need unpicking as they’re generally neglected.

Q9. So in practice – what does this research look like day-to-day? What will you and the team be spending your time doing?

There could be a whole range of different ways that you reach the same outcome. Because that’s the nature of the messy, real-life world.

QCA

For QCA, we’ll look at a set of characteristics that will typically be related to the content of the intervention, its implementation, and/or its context. We’ll have a long list and code each study according to whether that characteristic is present or absent, e.g. was the setting rural or not rural, was it a high-intensity or low-intensity intervention, etc.

We start with a long list, then bring it down to a short list, and we keep testing and refining until we find some clear combinations of factors that lead to interventions being effective, or being ineffective or harmful.
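To make that coding step concrete, here is a minimal, hypothetical sketch in Python of the kind of crisp-set truth table that sits behind QCA. The study names, conditions (rural setting, intensity, cash component) and codings are invented for illustration only – they are not from the project – and dedicated QCA software would add consistency and coverage measures, and Boolean minimisation, on top of this.

```python
# A minimal, hypothetical sketch of the crisp-set coding step described above.
# Study names, conditions and codings are invented for illustration; they are
# not taken from the project's actual data.

# Each "case" is one intervention evaluation, coded 1/0 for whether a
# characteristic was present, plus whether the intervention was effective.
cases = [
    {"study": "Eval_A", "rural": 1, "high_intensity": 1, "cash_component": 0, "effective": 1},
    {"study": "Eval_B", "rural": 0, "high_intensity": 1, "cash_component": 1, "effective": 1},
    {"study": "Eval_C", "rural": 1, "high_intensity": 0, "cash_component": 0, "effective": 0},
    {"study": "Eval_D", "rural": 0, "high_intensity": 0, "cash_component": 1, "effective": 0},
]

conditions = ["rural", "high_intensity", "cash_component"]

# Build a truth table: group the cases by their combination of conditions and
# record which outcome(s) that combination led to.
truth_table = {}
for case in cases:
    row = tuple(case[c] for c in conditions)
    truth_table.setdefault(row, set()).add(case["effective"])

# A combination is a candidate "recipe" if all cases showing it share the same
# outcome; mixed outcomes flag a contradiction that needs further refinement.
for row, outcomes in truth_table.items():
    label = dict(zip(conditions, row))
    if outcomes == {1}:
        print("consistent with effectiveness:", label)
    elif outcomes == {0}:
        print("consistent with ineffectiveness:", label)
    else:
        print("contradictory configuration - recode or add conditions:", label)
```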

A great thing about QCA is that it recognises that there isn’t just one route to effectiveness (or ineffectiveness). It’s not simply that you have to do all of these steps, and if you miss one, you won’t get the outcome you’re after. There could be a whole range of different ways that you reach the same outcome. Because that’s the nature of the messy, real-life world.

And because QCA isn’t based on a large number of cases – or, in our case, intervention evaluations – we’re really able to get a good, detailed understanding of them. So we’re constantly reflecting back and making sure we’re aware of what we understand from those studies and that it makes sense.

ICA

With the intervention component analysis, we’ll be looking at the whole paper in depth. Rather than just analysing methods and results, we’ll analyse the introduction section to see if there are any insights into the theories and concepts that underpin the intervention. It might not be explicitly stated as “Intervention X was based on theory Y” but it may be there in the introduction. In the discussion, the authors might comment, sometimes in passing, as to implementation or process issues that might explain the effect or lack of effect, so we’ll capture that.

Q10. And what are the benefits of doing this type of analysis?

It’s a great way to look at this number of cases. If you were doing something that was purely qualitative or anthropological, you’d have a much smaller number of cases that you’d look into in more depth. But that wouldn’t necessarily be representative.

On the flip side, if you’re looking at hundreds or thousands of cases you can do some quantitative or statistical analysis, but you don’t understand each of the cases in depth. So you do the analysis but you don’t necessarily understand why things work or not.

It’s a good method of trying to understand this heterogeneity in effects. And not just looking at it from a purely quantitative perspective, but keeping the in-depth qualitative understanding in there.

Q11. If everything goes to plan, what do you hope you will have achieved in 5 years’ time?

In terms of family planning, I would like to think that there would be greater recognition of the importance of upstream factors and this would be part of the debate around contraceptives and thinking about these wider issues.

But in terms of the research more broadly, it would be great to see people using these methods and them becoming more established, standard methods.

Q12. How has Covid-19 impacted this project, and why is it vital that we don’t neglect areas of research like this in the pandemic?

It’s vital because sexual and reproductive health issues still exist – Covid-19 hasn’t removed the need for contraceptives. People are still having sex – and either using or not using contraception. The virus doesn’t change that so we need to keep this work going.

Currently we’re looking at what research has already been done and then we’ll narrow it down to one subset. Part of that will be consulting our advisory group and having in the back of our minds: ‘what sort of interventions could be relevant during the pandemic?’ We’ll consider if there are certain digital aspects we could focus on more, for instance.

 

Q13. Finally – this project is a collaboration. Can you tell me more about your partners and their role?

I’m hoping it will be the first of many collaborations with all involved!

I’m working with the International Centre for Reproductive Health in Mozambique. Their aim is to broadly improve sexual and reproductive health with particular attention to sexually transmitted infections (STIs), cervical cancer, maternal and child health, family planning, contraception and gender-based violence.

I work with a team based there and the director is an LSHTM alum. The team haven’t been involved in a systematic review or evidence synthesis before, so I’m helping to take them through the steps involved in a review. And we’re drawing on their hub of expertise and experience in research and providing services for adolescents around contraceptive services and broader sexual and reproductive health issues and rights.

Basically – they know their stuff and I’m really just there to take them through the steps of a review.

My hope is that they can then apply this learning to other areas of research and take it forward, and we can have future collaborations. I’d love it if in future they’re leading a research bid and they sub-contract me to help with some of the analysis – but very much with them leading.

And I already mentioned Dylan Kneale at UCL, who it is great to finally be working with on this issue. He’s helping with the methodological aspects and the actual QCA analysis.

I’m hoping it will be the first of many collaborations with all involved!
