Challenges and solutions for evaluating programmes to prevent and combat extreme violence

Today, few dispute the importance of evaluation as a driver of better practice in any field of intervention. It is therefore not surprising that its relevance is also widely recognised among professionals working in Preventing and/or Combating Violent Radicalisation (P/CVE).

Given the variety of programmes and approaches in this area and the new challenges that have arisen, it is crucial to understand what works and what doesn't, as well as how interventions promote change and affect the problems they aim to address. In this specific field, evaluations must contend with the rapidly changing and complex nature of extremist contexts, de-radicalisation dynamics, socio-political developments and shifts in how interventions themselves are funded.

In light of the above, evaluation approaches must do justice to the varied characteristics of target groups and the diversity of stakeholders involved in planning and implementing P/CVE measures; they are indispensable for drawing informed conclusions about good practice and for strengthening professional practice throughout the life cycle of programmes and projects.

For all these reasons, when the challenge of designing an evaluation guide for professionals working in the field of P/CVE was launched, the question that immediately arose was how to achieve an approach that would be accessible to non-specialists in evaluation while also considering, and offering solutions to, the technical and practical challenges of evaluating results and/or impacts. In fact, although this guide arises from a specific context and area of intervention, the technical issues and solutions it proposes apply to many other fields, if not to all of them.

In the Guide, we identify the main challenges facing the design of evaluations for P/CVE programmes and projects. On the one hand, we discuss analytical challenges, such as the "impossibility of measuring a negative" (the difficulty of demonstrating that radicalisation did not occur because a given intervention took place) or the difficulties of accessing target audiences; on the other, we discuss a wide range of more practical challenges facing those who want to design and implement evaluation processes.

These practical challenges include the availability and reliability of data, difficulties in activating stakeholder participation, scarce resources and the difficulty of developing robust baselines, as well as the lack of reliable and up-to-date data (such as official statistics on success or recidivism rates) against which to triangulate evaluation results. Just as relevant as the analytical and practical challenges, however, are the ethical issues that must inform evaluation design and which are discussed in depth in the Guide.

Since the guide's users will mainly be intervention practitioners, it presents methodological and technical ways of responding to all of these challenges, whatever their nature. It therefore proposes an evaluation-design process that makes it possible to overcome many of the challenges described, centred on three types of results/impacts in the area of P/CVE:

- Attitudes

- Behaviours and Practices

- Relationships and Socialisation Networks

The Guide also offers a discussion that is central and useful in any area of intervention when evaluating results and/or impacts: that of establishing well-founded and robust causal relationships.

Three main groups of methodological strategies are presented: the counterfactual approach, the consistency of the evidence with the hypothesised causal relationship, and the exclusion of alternative explanations. Each is explained in terms of its principles and its practical implementation possibilities.

At the end of the Guide, we find a process and steps for building evaluation plans that can be used whatever methodological option is selected, together with a set of recommendations for improving their quality and maximising their potential to positively influence future decision-making.

Drawing up this guide was a very interesting process because of the challenge of creating a document that was accessible to non-specialists in evaluation yet robust and technically sound, allowing P/CVE experts, as well as others, to design better evaluations. Since its launch, we have had the opportunity to discuss many of its contents with practitioners from various countries. While it is obviously not a complete answer, nor one that resolves all the problems practitioners face in their interventions, it has proved a useful contribution and has acted as a catalyst for new reflections, and even new evaluation practices, in organisations and teams working to prevent and combat violent radicalisation.

You can find the guide here.