Science, Finance and Innovation

How better use of evidence and more transparent reporting can save lives


by Hugh Sharma Waddington

A common criticism of impact evaluations like randomized trials is that they do not measure what matters for decision-making. This column shows how evidence synthesis can build on, and go beyond, the data from trials to help overcome this problem, using the example of childhood mortality in water, sanitation, and hygiene (WASH) promotion. The findings provide the first estimates of WASH-related mortality for the ‘global burden of disease’, and suggest likely benefits from better use of existing evidence on other topics. They also have important implications for reporting, by showing what evidence trial researchers can provide to help save lives.

Policy interventions to promote better practices related to water, sanitation, and hygiene (WASH) have been evaluated in numerous studies. But is the evidence being collected, reported, and used in the most effective way for the ultimate objective of saving lives?

By far the most common outcome measured in WASH impact evaluations is morbidity, especially illness in childhood due to diarrhea. We know this because we have collected data from all 500 studies of the impacts of WASH promotional approaches, in a WASH evidence map.

But most of the global burden of infectious disease (GBD) is due to mortality in childhood, the two biggest single causes of which are diarrhea and respiratory infections.

For example, as Table 1 shows, 90% of the GBD for diarrhea is due to deaths among the under-five age group, while for respiratory infections it is 99%. This makes sense once you take into account that years of life lost – the product of the number of deaths and the remaining life expectancy at the age of death – will be greatest for young children, whereas, outside of disease outbreaks, each child is likely to have only a few spells of illness each year.
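The arithmetic behind years of life lost can be sketched in a few lines. The figures below are purely illustrative, not GBD estimates:

```python
# Years of life lost (YLL) = number of deaths x remaining life expectancy
# at the age of death. The numbers below are hypothetical, chosen only to
# show why child deaths dominate the burden measured this way.
def yll(deaths, remaining_life_expectancy):
    return deaths * remaining_life_expectancy

# 100 under-five deaths, assuming ~65 remaining years of life each:
print(yll(100, 65))  # 6500 years of life lost
# 100 deaths at age 60, assuming ~15 remaining years each:
print(yll(100, 15))  # 1500 years of life lost
```

On these assumptions, the same number of deaths contributes more than four times as many years of life lost when it occurs in early childhood.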

Every year, there are an estimated 1.2 million deaths globally from diarrhea and respiratory infection caused by inadequate WASH.

Table 1: The global burden of infectious disease is mainly due to mortality

Cause | Years of life lost due to mortality (per 100,000) | Years lived with disability due to morbidity (per 100,000)
Acute respiratory infection | 1,300 | 10
Diarrhea | 960 | 100

Source: data from GBD 2016.

The data collected in impact evaluations such as randomized trials provide crucial evidence on how programs can go the final mile to ensure access to, and use of, improved WASH facilities, and therefore prevent infection and death. But single studies are often unable to measure the right outcomes or demonstrate effects, as happened in recent, large-scale trials in Bangladesh, Kenya, and Zimbabwe.

It appears, therefore, that most impact evaluations are measuring the wrong thing. They collect data on carer-reported illness because it is easier to measure. But data on reported illnesses are known to be biased: children’s carers may misrepresent illness to get more of the intervention (or perhaps to make the enumerators go away). In contrast, reported mortality is unbiased because study participants simply do not misreport the death of a child.

So why do studies focus on morbidity rather than mortality? In part, because it is difficult to design intervention trials with large enough sample sizes to detect statistically precise effects on mortality. This is where evidence synthesis is needed, in particular statistical meta-analysis, which pools data from multiple studies to improve statistical power.
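The power gain from pooling can be made concrete with a minimal inverse-variance meta-analysis of risk ratios. The trial counts below are invented for illustration (they are not the studies discussed in this column), but they show the key pattern: no single trial is statistically significant on its own, while the pooled estimate is:

```python
import math

# Hypothetical trials: (deaths_treated, n_treated, deaths_control, n_control).
# These counts are illustrative only, not taken from the WASH evidence map.
trials = [(14, 2000, 20, 2000),
          (9, 1500, 13, 1500),
          (25, 5000, 33, 5000),
          (11, 1800, 17, 1800),
          (18, 3000, 26, 3000)]

effects, weights = [], []
for dt, nt, dc, nc in trials:
    log_rr = math.log((dt / nt) / (dc / nc))   # log risk ratio
    var = 1/dt - 1/nt + 1/dc - 1/nc            # approximate variance of log RR
    effects.append(log_rr)
    weights.append(1 / var)                    # inverse-variance weight

# Fixed-effect pooled estimate: weighted average of log risk ratios.
pooled_log_rr = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se = math.sqrt(1 / sum(weights))
low, high = (math.exp(pooled_log_rr - 1.96 * se),
             math.exp(pooled_log_rr + 1.96 * se))
print(f"Pooled RR: {math.exp(pooled_log_rr):.2f} (95% CI {low:.2f}-{high:.2f})")
```

Each individual trial's confidence interval crosses a risk ratio of 1, but the pooled interval does not: exactly the situation described below, where synthesis detects a mortality effect that no single study could.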

We have used an innovative approach to estimate mortality data from studies by harvesting data from participant flow diagrams. According to CONSORT (see Figure 1), participant flows should be transparently reported in trials, to enable verification of the methods used. For example, a common source of bias in trials is caused by differential losses to follow-up out of the study (attrition). How much attrition there is, and the reasons for it – for example, participants’ deaths – should be known.
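In code, harvesting mortality from a participant flow might look like the sketch below. The structure and all the numbers are hypothetical; the point is only that when a trial reports losses to follow-up by reason and by arm, deaths can be recovered even if mortality was never a stated outcome:

```python
# Hypothetical CONSORT-style participant flow for a two-arm trial,
# with losses to follow-up broken down by reason. All numbers invented.
flow = {
    "treatment": {"enrolled": 2000, "analysed": 1860,
                  "losses": {"moved away": 110, "refused": 16, "died": 14}},
    "control":   {"enrolled": 2000, "analysed": 1845,
                  "losses": {"moved away": 115, "refused": 20, "died": 20}},
}

for arm, d in flow.items():
    attrition = d["enrolled"] - d["analysed"]          # total loss to follow-up
    deaths = d["losses"].get("died", 0)                # harvested mortality count
    print(f"{arm}: attrition {attrition}/{d['enrolled']}, "
          f"all-cause deaths {deaths} ({deaths / d['enrolled']:.2%})")
```

Note that the reason-by-reason breakdown is what makes this possible: a trial reporting only total attrition, or only a public data set of analysed participants, would not reveal how many children died.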

Figure 1: How to report participant flow diagrams transparently

Source: http://www.consort-statement.org/consort-statement/flow-diagram

We harvested data from the 41 trials in the WASH evidence map that reported participant flows, to estimate all-cause mortality as a result of improved WASH. A further nine trials reported diarrhea mortality.

Pooling the data using meta-analysis, we found a 15% reduction in all-cause mortality, and a 50% reduction in diarrhea mortality, directly because of WASH improvements. Even though no single study could detect a significant effect on mortality, the evidence synthesis was conclusive, demonstrating how meta-analysis not only builds on but can also go beyond trials to generate crucial evidence for policy.

Interventions enabling domestic hygiene were the most consistently associated with reductions in all-cause mortality, whereas interventions promoting community-wide sanitation and hygiene were the most consistently associated with reductions in diarrhea mortality.

These findings are consistent with what is known about mechanisms of infectious disease transmission: hand hygiene is a barrier to respiratory illness and diarrhea, which are the two main components of all-cause mortality in childhood in low-income settings; at the same time, community-wide sanitation halts the spread of diarrhea from open defecation.

Hence, transparent reporting is not only crucial for accountability, but it is also important for learning, by enabling the right outcomes to be measured and decisions taken to save lives.

There have been standards for trial reporting since at least the 1990s in health and 2010 in social science, and many authors and journals do now report this information. But there are lags in practices across the research communities producing WASH trials: participant flows have been reported in around half of health trials, but are still almost entirely missing in the social sciences like economics (see Figure 2).

Social science journals are more likely to require that data sets be made publicly available, from which losses to follow-up can be calculated; but access to the data alone would not enable other sources of bias in studies to be evaluated (an example is individual selection bias in cluster-randomized trials).

Figure 2: Reporting of participant flows can be easily improved

The implications are clear. For policy-makers: use systematic evidence collected from all relevant studies, not single studies, to inform decisions. For commissioners: use evidence maps as the starting point for policy-relevant research synthesis, not the end product. For research communities: require reporting of participant flows from enrolment to final follow-up, by study arm, together with reasons for attrition.

We plan an ‘author collaborative’ to collect participant flow data in order to present more comprehensive mortality estimates. I would be delighted to hear from any trial researchers who want to participate in this exercise to make better use of existing data to save lives.

 

Hugh Sharma Waddington
Assistant professor, LSHTM and London International Development Centre