For the past two decades, evaluation of development assistance has relied on the criteria of the OECD Development Assistance Committee (DAC) to assess the merit of development interventions in a structured and comparable way. In thousands of evaluations each year, evaluators thus assess the relevance, effectiveness, efficiency, impact and sustainability of projects and programs. Despite the near-universal application of the DAC criteria, research on their use and usefulness remains scant. Existing guidance and evaluation research particularly ignore the question of how to assess the relevance of development interventions. This study seeks to fill this gap.
More precisely, this research asked whether and how evaluation can assess the relevance of development interventions in a credible and rigorous way, so as to allow policymakers and practitioners to deliver the best assistance possible. It draws on the literature of institutional economics, a systematic review of the evaluation literature, key informant interviews and a structured assessment of a random sample of 105 evaluation reports from German, European and multilateral development organizations.
The review of evaluation reports shows that the assessment of relevance in development evaluations is superficial and incomplete. Moreover, it confirms many weak spots of current evaluation systems and policies: the incentives that evaluators face to produce confirming information and their lack of independence from commissioning organizations; evaluators’ limited skill sets and experience with regard to context analysis; the overburdening of terms of reference by commissioning agencies; and the lack of participatory methods to capture the full range of perspectives of the intended beneficiaries of development interventions. These factors all help explain the arbitrary treatment of relevance in development evaluations. On top of that, the linear and rigid manner in which large-scale development projects are typically planned further undermines any serious discussion of relevance once they are implemented.
Based on this analysis, Sagmeister discusses how evaluation can provide better answers to the question of relevance, identifies good-practice examples from a review of evaluation practice, and then suggests practical ways forward.
First, the DAC definition of relevance should be adapted to allow clearer judgments of, and comparisons between, relevant and not-so-relevant projects and programs. Currently, it lumps together “beneficiaries’ requirements, country needs, global priorities and partners’ and donors’ policies” as reference points for relevance.
Second, the link between relevance and the other evaluation criteria should be reviewed to allow adequate prioritization of relevance over matters of efficiency, effectiveness and sustainability.
Third, evaluators need to be clear about the evaluability of relevance and transparent about its limitations, for example in cases of contradictory policies and objectives.
Fourth, evaluations of relevance need to explicitly take into account alternative interventions and the interventions of other donors.
Finally, the qualitative nature of evaluating relevance should not prevent the evaluation community, commissioning agencies and the general public from demanding more rigorous assessments.