by Chen Reis
Last week Nicholas Kristof, the popular NYT columnist, created a storm on Twitter and Facebook with his column "Professors, We Need You!", which, among other points, decried the irrelevance of much social science research to policy-making. There have been a number of responses from academics on Twitter, Facebook, and in blogs, with many pointing out that they and a significant number of their colleagues are actively working to produce policy-relevant research.
Kristof makes some valid points about the obscurity of much social science research and the inaccessibility of its jargon. But he does not mention an important reality: even relevant, good quality, and well-communicated research often fails to have much impact on public dialogue and policy. Some of the challenges may be inherent to the nature of policy-making itself, but the gap is often widest when research findings do not conform to the preconceived notions or agendas of policymakers. When research demonstrates that pre-existing "solutions" are not applicable, it is likely to be ignored as well. This is true both in the US national system and internationally. For example, even though the data suggest that most gender-based violence, even in humanitarian settings, is perpetrated by intimate partners, the focus of processes aimed at ending impunity and preventing violence remains largely on combatant-perpetrated sexual violence.
Even in areas for which there is more of an evidence base, it is not clear how, or whether, the evidence is used. ALNAP, the Active Learning Network for Accountability and Performance in Humanitarian Action, is working to identify the quality and use of the evidence available to the humanitarian sector.
The problem is not only that existing evidence is often ignored, but also that there is little recognition or mention of the need for data on what works, even in key high-level statements and commitments. This lack of evidence speaks not only to the complexity of research in crisis settings but also to the lack of resources available for robust program monitoring and evaluation. When it comes to the prevention of and response to sexual violence in conflict, and to the evaluation of humanitarian programming in general, it is only fairly recently that there has been a move to identify evidence of what works. Humanitarian non-governmental organizations like the International Rescue Committee (IRC) are working with academic institutions to evaluate interventions for sexual violence in humanitarian settings. There are also initiatives to support the generation of evidence for action, such as the Research for Health in Humanitarian Crises (R2HC) initiative of ELRHA.
It will be interesting to see whether this push for evidence-based action is reflected in the UK-hosted Global Summit to End Sexual Violence in Conflict scheduled for this June. I hope that support for building the evidence base, and for using that evidence to inform policy and programming, plays a greater and more integrated part in global efforts to prevent and respond to sexual violence in humanitarian settings.