Blog

By Lawrence Haddad
The past 4-5 years have seen a flurry of Randomised Controlled Trials (RCTs) and Systematic Reviews come into the international development space.
In general, I think this has been a good thing. It certainly was odd that there were so few of each prior to 2008.
However, we know these methods can be unhelpful when pursued in an unthinking way.
RCTs which spend a lot of money to figure out if something works in one place at one time do not tell us much about why it works or whether it will work elsewhere.
And Systematic Reviews tend to look at only one outcome of an intervention, even when the underlying studies explored multiple outcomes.
A mechanistic application of these processes can lead to a kind of multicollinearity in the evidence. Multicollinearity is an econometric term for what happens when a line is fit to data whose explanatory variables are all highly correlated with one another. This makes the estimates very unstable (think of a table top being balanced on 4 legs which are all situated in a single line, i.e. correlated).
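For readers who want to see the instability rather than take it on trust, here is a minimal simulation sketch (the data, function name and correlation values are all invented for illustration): it fits the same two-variable regression many times and compares how much the estimated coefficient jumps around when the two explanatory variables are nearly collinear versus when they are not.

```python
import numpy as np

rng = np.random.default_rng(0)

def coef_spread(corr, n_obs=100, n_reps=200):
    """Repeatedly simulate y = x1 + x2 + noise, fit OLS, and return
    the standard deviation of the estimated coefficient on x1."""
    estimates = []
    for _ in range(n_reps):
        x1 = rng.normal(size=n_obs)
        # x2 is built from x1 so that corr(x1, x2) is roughly `corr`
        x2 = corr * x1 + np.sqrt(1 - corr**2) * rng.normal(size=n_obs)
        y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n_obs)
        X = np.column_stack([x1, x2])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        estimates.append(b[0])
    return float(np.std(estimates))

print(coef_spread(0.1))   # weakly correlated legs: stable estimates
print(coef_spread(0.99))  # nearly collinear legs: estimates swing wildly
```

The true coefficient is 1.0 in every run; only the correlation between the regressors changes, yet the spread of the estimates is several times larger in the near-collinear case. That is the table with its legs in a line.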
Is the pendulum swinging back to the centre? Based on very unsystematic and uncontrolled evidence, my intuition tells me "yes".
In the past few months 2 examples have come my way.
The first is a paper by Chris Bonell et al. (2012) in Social Science and Medicine which argues for Realist RCTs: that is, RCTs which have many arms, which focus on mid-level programme variables, operate in several sites, and draw on complementary qualitative analysis. The authors argue that the extra costs of such multiple-answer RCTs are more than recovered by the resources not spent on single-answer RCTs. I agree with all this, but having been involved in many funder-researcher negotiations, the outcome frequently seen is a stripped-down RCT, so I am under no illusions about how difficult this is to do.
The second is a paper lead-authored by my IDS colleague Michael Loevinsohn and others, which is a "re-review" of a systematic review by Waddington and Snilstveit on the effectiveness of water, sanitation and hygiene interventions in diarrhoea prevention. They re-examine the subset of papers in the original review that described local context and whose interventions allowed individuals greater agency in how they used or applied them. While finding no fault with the original study, the authors find that the protocol used led to many impact pathways and additional insights being missed. They argue that this is because systematic review protocols tend to narrow down the terrain along disciplinary, outcome and intervention lines, presumably sometimes to make the study manageable, but perhaps often just out of habit.
I found these two papers to be interesting twists on two approaches that have become quickly entrenched in development evidence generation.
I like the two papers because they do not throw the baby out with the bathwater--they recognise the usefulness of the two approaches--and they try (successfully in my view) to adapt them for the complex world we live in.