News story

DFID Research: HIV-related evaluations: synthesizing the evidence

A new report looks at how to collate evidence effectively from a wide range of HIV/AIDS studies

This was published under the 2010 to 2015 Conservative and Liberal Democrat coalition government

The Chitungwiza Children’s psychosocial support group for children from families living with HIV/AIDS. Picture: Elizabeth Glaser Pediatric AIDS Foundation.

In 2011, The Lancet published an important article by Schwartländer and colleagues that proposed an improved investment approach for an effective response to HIV/AIDS. This approach has profound implications for funders of the international response to HIV, including DFID.

However, to make investment decisions under this framework, funders need sound evidence about the effectiveness and efficiency of particular interventions. Many funders expect such evidence to come from evaluations, but to date evidence from HIV-related evaluations has been quite limited and, where available, has tended to appear in a fragmented and sporadic manner rather than systematically or in a coordinated way.

In 2012 DFID conducted an exercise to provide a synthesis of evidence generated from completed HIV-related evaluations. Building on the earlier 2011 stocktaking report, the HIV/AIDS Evaluations Synthesis Report 2012 identified 308 documents that could be defined as programmatic evaluations for the purpose of the exercise. These covered a wide range of evaluation types: more than half of the reports related to ‘classic’ programme/project evaluations or non-controlled/qualitative studies, and relatively few to randomised controlled trials.

The report shows that a great deal of evidence is being generated through HIV-related evaluations and that this can be collated effectively through a synthesis process. If this were done on a regular basis, it could provide the evidence needed for funders to make appropriate HIV-related investment decisions.

Further, although there are agreed standards for evaluations in international development, these are often not met, for example in terms of report quality or the inclusion of specific recommendations. These gaps are particularly acute in the area of HIV prevention. The report highlights the need for discussion about which types of evaluation evidence are most useful for informing policy and practice.

The full report can be accessed here.

Published 21 August 2013