Study

Building Evaluation Capacity to Improve Extremism Prevention

Best Practices for Developing Toolkits

People gather at the Place de la Bourse in Brussels, Belgium, to pay tribute to the victims of the March 2016 terrorist attacks. | Photo: Alexandros Michailidis / Shutterstock
26 Sep 2024

There is a growing consensus that activities which aim to prevent and counter violent extremism (P/CVE) should build on evidence and lessons gleaned from past experiences. High-quality evaluations are key to this vision of a learning-focused approach: if conducted correctly and on a regular basis, such systematic assessments can help those involved in extremism prevention understand what does and does not work – and enable them to design their programs accordingly.

However, a lack of evaluation experience and expertise among stakeholders in the P/CVE field is hampering efforts to build a more solid foundation of evidence on extremism prevention. While targeted evaluation support is increasingly accessible to practitioners across P/CVE contexts, particularly in the form of toolkits, there is little knowledge about what makes such capacity-building instruments effective and how those who create them can maximize their uptake and use.

Building on insights from a previous study on the state of P/CVE evaluation efforts in 14 countries, this study specifically examines experiences with evaluation toolkits as learning resources for extremism prevention. Based on our analysis of available P/CVE evaluation toolkits as well as interviews with toolkit developers and funders, we present best practices and recommendations for those who want to develop and implement effective toolkits to promote a more informed evaluation practice in extremism prevention. Our recommendations relate to all major aspects of the ‘toolkit life cycle’, including: relevance and user-centric design; accessibility and language; user testing and feedback; effective outreach and communication; as well as assessments of toolkit use and impact.

We address the following recommendations to anyone seeking to build evaluation capacity in the P/CVE field and beyond.

As a tool developer, you should:

  • Involve target audiences from the start of development to ensure toolkit designs align with their learning and language needs;
  • Test draft designs with end users before a toolkit’s final implementation, allowing sufficient time and resources to incorporate feedback;
  • Create engaging communication strategies to disseminate your toolkit to its intended audience;
  • Integrate a plan to monitor dissemination and allocate resources for ongoing outreach and updates to keep toolkit content relevant.

As a funder of evaluation support instruments for P/CVE, you should:

  • Assess capacity needs to identify the most suitable capacity-building tools and determine if existing resources can be adapted before creating new ones;
  • Embed toolkits into an evaluation capacity support system, which combines various tools and addresses structural barriers like (dis-)incentives for evaluation;
  • Ensure that the use of capacity-building instruments can be monitored and evaluated to contribute to an evidence base on how to successfully build evaluation capacity.

For more details on how to develop impactful and user-friendly evaluation toolkits, download the full study.


This research was funded by the Federal Ministry of the Interior as part of the project “Evaluation and Quality Management in Extremism Prevention, Democracy Promotion and Civic Education: Analysis, Monitoring, Dialogue (PrEval).”

More information on PrEval can be found here and on the project’s website.