Evaluating P/CVE: Institutional Structures in International Comparison
Preventing and countering violent extremism (P/CVE) is an emerging field with a wealth of valuable experience but without proven recipes for success. Evaluation – the systematic and objective assessment of ongoing or completed activities – helps P/CVE actors learn from this experience in ways that are immediately useful for current and future projects; this practical learning complements the equally pressing need for applied research. The relative youth of the P/CVE field and its intervention approaches makes evaluation particularly important for helping funders invest effectively and avoid adverse effects.
A key part of making evaluations as useful as possible is to develop effective institutional structures that shape who influences how evaluations are targeted, commissioned, funded, and conducted, and to what extent their results are used to inform future action. As part of the PrEval project, this study surveys international examples of evaluation structures to inform German P/CVE policy.
Focus and Approach
At the center of our inquiry is the vast field of primary, secondary and tertiary P/CVE activities, including civic education, implemented by non-governmental, usually non-profit organizations (NGOs) that largely depend on government funding. Many of these NGOs work closely with a variety of public authorities in the education, health, social, and security sectors. It is within this challenging, multi-stakeholder context that systematic evaluation has developed in a way that is at least partly accessible to open source research. The study focuses on three elements of institutional structures that are particularly relevant for evaluation: (1) the formal rules by which P/CVE policy and funding bodies guide their implementing partners; (2) the evaluation capabilities among funders, implementers and evaluators; and (3) the social norms that influence whether and how evaluations are conducted and utilized (i.e., evaluation culture). From an initial mapping of OECD countries, we used a set of selection criteria – including track record and the scope of actors and approaches in both P/CVE and its evaluation – to identify four country case studies: Canada, Finland, the Netherlands, and the United Kingdom (UK).
We conducted the case studies between November 2020 and May 2021 using both public and confidential primary sources, secondary literature and 46 semi-structured interviews with experts and stakeholder representatives from the respective countries. In addition, we conducted background interviews with stakeholders in Germany to validate the study’s conclusions and recommendations for German decision-makers within policy, funding and implementing institutions.
Key Findings
We found that P/CVE actors in Canada, Finland, the Netherlands, and the UK struggle, much as in Germany, with designing and implementing systematic, useful evaluations that maximize opportunities for learning – and thus for progress toward greater impact. Across these cases, the demand for evaluations as learning tools is strong among funders and implementers, which is appropriate for an emerging field. However, the extent to which this demand is actually met differs significantly between the four countries. How these differences relate to each country’s systemic choices about evaluation structures – formal rules, investment in evaluation capabilities and the existence of an evaluation culture, as introduced above – provides useful insights for P/CVE evaluation policy anywhere.
Based on these findings, the study identifies six areas in which decision-makers at the policy level, within funding bodies and implementing organizations can use structural levers to advance evaluation. The first three of these areas mirror the three sets of systemic choices just mentioned; they are about shaping an overall system of constructive P/CVE evaluation practice. The other three areas focus on designing individual evaluations in effective ways. To incentivize and enable a strong evaluation practice, decision-makers need to address all six areas.
1. Build a constructive evaluation culture by prioritizing trust.
The case studies demonstrate that to maximize the utility of the significant investments that evaluation requires, P/CVE actors need an environment of trust in which owning up to mistakes does not result in undue punishment. This works only if all stakeholders are willing to learn – including the one group that funders cannot incentivize to do so: the funders themselves. While a constructive evaluation culture depends on all stakeholders, these findings suggest that funders bear a special responsibility for building it.
To create such a culture of trust – which is well established in the Netherlands, but less so in the UK – German P/CVE funders should tackle four systemic issues:
- Ensure a minimum level of financial security for implementers. This requires both more long-term funding opportunities and the alignment of all funding with the government’s strategic priorities.
- Provide implementers with both formal and ad-hoc access to key decisions about funding, including but not limited to decisions about government-mandated external evaluations. Advisory bodies for this purpose need to be inclusive across all sectors and stakeholder groups involved in the P/CVE field.
- Protect implementers from the undue consequences of failure. If implementers of pilot programs receive critical evaluation results, they may need incentives to adjust – but those can only work if coupled with protection from defunding or bankruptcy.
- Walk the walk. Funders need to not only demand transparency and learning from implementers, but also set a good example by openly discussing their own lessons in portfolio management and evaluation.
Tackling these four issues in a way that communicates transparency and openness is key to building the foundation for well-functioning evaluation structures.
2. Design formal rules to enable differentiated evaluation strategies.
Funding-related legal or administrative obligations are a strong structural instrument for making sure that evaluations are undertaken in the first place. Such obligations are used in both the UK and the Netherlands; the latter has successfully instituted specific requirements for different programs, grantees or intermediaries. German P/CVE funders can attach demands for scientific evaluations to existing administrative reporting obligations. However, an unspecific obligation alone, even if binding, cannot ensure quality, scientific evaluation standards, or the uptake of evaluation findings.
Funders should therefore use the distinct administrative instruments available at the level of individual funding schemes, or even individual grants, to define targeted evaluation requirements that are binding for their grantees. With these tools, funders can oblige or empower implementers to use or support evaluations in particular ways, in line with the funder’s evaluation strategy. This helps both funders and implementers consider evaluations early on in portfolio design and in program and project cycles. Of course, any demands on implementers need to be matched by the necessary financial resources.
3. Invest in capabilities for managing and conducting evaluations, and using their results.
For evaluations to achieve both their learning and accountability goals, the cultural norm-building and formal rules that create a demand for evaluation must be met by a corresponding supply of people and organizations to manage and conduct evaluations and to use their results. These three supply functions need to be well organized – i.e., effectively placed within institutions – and supported in a way that allows the various actors to uphold professional standards and put the overall investment in evaluation to good use.
In terms of organization, the case study evidence indicates that decentralized responsibilities for commissioning evaluations and organizing uptake allow different actors – whether policy actors, funding bodies or implementers – to pursue distinct evaluation strategies that meet their specific needs. At the same time, the supplier market of evaluators also requires attention from P/CVE funders: the necessary mix of capable, independent evaluators and evaluation consultancies must be built and maintained, and funders alone hold the financial power to do so. A mapping of capacity needs in this regard is forthcoming from the PrEval project. However, P/CVE actors come in many sizes, and building the same level of capability in small and large organizations alike would be duplicative and inefficient. We therefore found that capability centers often have a large positive effect on realizing the potential of evaluations as learning tools. In Canada and the Netherlands, knowledge hubs that can also serve as help desks contribute to a healthy and well-functioning evaluation culture; centrally provided toolkits, training, counseling, exchange, and peer learning opportunities were key elements of making evaluation more learning-focused. This centralized support applies to funders and implementing organizations in managing evaluations and their uptake, as well as to those conducting evaluations, be they professional consultants, academics or P/CVE practitioners who engage in peer evaluation.
4. Define evaluation plans and build evaluable portfolios.
Across the four case studies, we found that evaluations were often launched as an afterthought for projects, programs or policies which lacked concrete goals or theories of change. Conversely, evaluations produce much more relevant and useful results when applied to P/CVE activities that were designed with clearly defined goals, theories of change and a plan for when and how evaluation should help with learning. Evaluability does not exclude any type of activity, or privilege some P/CVE approaches over others: evaluability only requires clarity about goals and observable metrics (which could be qualitative or perception-based).
German P/CVE funders, in particular, but also implementers, should develop evaluation strategies that set specific learning goals for evaluation and, most importantly, ensure that projects and programs are designed to match the chosen evaluation strategy. They should consider a variety of factors in formulating an evaluation strategy, such as the balance between individual project evaluations and larger program or portfolio evaluations. If only certain projects are to be evaluated, they should set specific selection criteria. They need to choose in advance what approach (e.g., process or outcome evaluation) and timing make sense for a specific evaluation, and how transparently they will distribute the results. Funders, in particular, should consider how they want to assess long-term effects, and how to combine the evaluation process with other, more long-term research.
5. Ensure independence, impartiality and quality in evaluations.
Funders of P/CVE evaluation need to guarantee the independence of evaluators from those managing evaluations (and their results), as well as from those under evaluation. At the level of evaluators, efforts are currently being made in all four case study countries to increase the independence and impartiality of evaluations by creating more external or mixed (internal-external) evaluation teams.
Experts consulted across the four case study countries suggested that evaluation teams should consist of a mix of evaluation specialists, subject-matter experts in P/CVE and former P/CVE practitioners. Those managing evaluations at the government level or in implementing organizations should be sufficiently independent from those implementing P/CVE activities to avoid evaluation results being (ab)used for political purposes.
Within governmental structures, evaluation independence can be achieved by creating separate reporting lines. For implementing organizations, such impartiality can only be cultivated if funders communicate it as a priority and allow for sufficient financial and staff capacity. Additionally, those funding evaluations need to design sophisticated quality assurance mechanisms for the evaluation process – something that was absent in all of our case studies. Making quality assurance a criterion when choosing evaluators can set the right incentives for those implementing evaluations.
6. Establish state-of-the-art uptake procedures.
Implementing the lessons learned from evaluations remains a challenge across country cases. In Canada and Finland, institutional follow-up mechanisms for evaluation results at the program and project-level are missing completely. In the UK and the Netherlands, the existing mechanisms are widely criticized as ineffective.
To support the uptake of evaluation results, P/CVE actors could readily adopt two standard instruments: dedicated steering groups and formal requirements for a management response process. Steering groups establish continuous communication between evaluators and other stakeholders, including the future recipients of the evaluation’s recommendations, creating a learning process that works during the course of an evaluation – not only once a draft report exists. A management response is a formal reaction to an evaluation report in which the recipient institutions commit themselves to voluntary follow-up actions and hold themselves publicly accountable for those commitments.
German P/CVE actors – policy actors, funding agencies and implementing NGOs alike – should adopt such state-of-the-art instruments of professional evaluation to ensure that their investments into data and knowledge also yield learning and progress in terms of preventing and countering violent extremism.
Read the full study in English and German.
This study is part of PrEval (“Evaluation Designs for Prevention Measures – multi-method approaches for impact assessment and quality assurance in extremism prevention and the intersections with violence prevention and civic education”), a project coordinated by the Peace Research Institute Frankfurt (PRIF). We gratefully acknowledge support from the German Federal Ministry of the Interior, Building and Community.
More information on the project and network can be found on the PrEval website (in German).