Caitlin Blaser-Mapitsa, Takunda Chirau and Matodzi Amisi
National evaluation policies are one way of demonstrating a willingness in government to promote the use of evidence in a systemic way. Our recently published Evidence & Policy article, ‘Policies for evidence: a comparative analysis of Africa’s national evaluation policy landscape’, explores the relationship between evaluation policies and evaluation systems. We found that policies are one piece of the puzzle, helping to strengthen the undertaking of evaluations, promote evidence use, and build evaluation practice in Africa.
This blog post is based on the Evidence & Policy article, ‘A new measure to understand the role of science in US Congress: lessons learned from the Legislative Use of Research Survey (LURS)’.
Elizabeth C. Long, Rebecca L. Smith, Jennifer T. Scott, Brittany Gay, Cagla Giray, Shannon Guillot-Wright and Daniel M. Crowley
Want to conduct surveys with national-level policymakers about their research use, but not sure how? We at the Research-to-Policy Collaboration offer a new measurement protocol to understand the role of science in national-level policymaking and provide lessons we learned based on our experiences surveying congressional staff in the US.
This blog post is based on the Evidence & Policy article, ‘Understanding knowledge brokerage and its transformative potential: a Bourdieusian perspective’.
Graham Martin, Sarah Chew and Natalie Armstrong
Some problems in society result from institutions’ traditional tendency to work in isolation from one another. An example is the slothful pace at which evidence from healthcare research reaches practice: some estimates suggest it can typically take as long as seventeen years. Increasing collaboration between institutions is the obvious remedy, but ‘If you think competition is hard, you should try collaboration’.
The institutional fields of research and practice have very different structures and value systems. This means that getting them to collaborate requires some external impetus. Recently, knowledge brokering (a range of activities designed to link the producers and users of knowledge by, for example, encouraging new relationships, devising new ways of working together, and helping to move knowledge across boundaries) has been promoted as a way of enabling collaboration and even bringing about changes in the working relationships of institutions. Knowledge brokerage has become a role in its own right, but its popularity as a remedy outstrips evidence for its efficacy.
This blog post is based on the Evidence & Policy article, ‘Creating an action plan to advance knowledge translation in a domestic violence research network: a deliberative dialogue’.
Jacqui Cameron, Cathy Humphreys, Anita Kothari and Kelsey Hegarty
Unlike some public health problems, domestic violence cannot be addressed with a straightforward prevention strategy. Although there are well over sixty different models of knowledge translation (KT) in the literature, a recent review of KT found that the voices of survivors and diverse populations were often absent from KT examples.
To address this gap, we asked the following two questions of a domestic violence research network:
- Is there a consensus regarding a coherent knowledge translation framework for a domestic violence research network?
- What are the key actions that a domestic violence research network could take to enhance knowledge translation?
Leire Rincón García
Does scientifically backed information capture the attention of policymakers? To test this, I conducted a field experiment embedded in a real-life advocacy initiative targeted at members of the European Parliament in April 2018. As described in my Evidence & Policy article, ‘The silver bullet reversed: the impact of empirical evidence on policymaker attention’, the results indicate that ideas-based information, rather than empirical information, gathers more attention from policymakers. More precisely, it is the announcement of ideas, rather than the actual information, which manages to capture policymaker interest. Crucially, these findings hold across political groups, policy support and gender.
Jennifer Watling Neal, Zachary P. Neal and Brian Brutzman
Brokers, intermediaries and boundary spanners facilitate communication between researchers and practitioners, but are these various terms simply different labels for the same role? We spent the last year reviewing published articles in health, education and the environment to explore how each of these terms is defined. In short, we found that, most of the time, these terms aren’t defined at all. But, when they are defined, there are key differences in what they mean.
There’s increasing recognition that brokers, intermediaries and boundary spanners play a key role in connecting researchers and practitioners. However, inconsistencies in whether and how brokers, intermediaries and boundary spanners are defined make it hard to understand, evaluate and leverage these roles.
What makes experts legitimate in the eyes of policymakers? Even though this is one of the foundational questions of the interdisciplinary scholarship on evidence and policy, the answer is neither straightforward nor simple. Expert legitimacy is driven by seeming contradictions – experts have to be responsive to policymakers’ needs but, at the same time, they cannot be too close to politics. They have to provide advice that is strongly grounded in science, but if their advice is too complex it risks being ignored or being perceived as too ‘detached’ and ‘academic’. Experts are legitimate when they are insiders and outsiders at the same time. This dynamic has become particularly evident in the ongoing pandemic, where government advisors have had to represent (and at times defend) science while also accounting for which policy directions are ‘doable’ – publicly and politically acceptable and economically feasible.
Rebecca S. Natow
Qualitative research has the potential to be of great value in policymaking. By examining stakeholders’ lived experiences, providing rich detail about policy contexts, and offering nuanced insights about the processes through which programmes are implemented, qualitative research can supply useful information that is not easily, if at all, obtainable through surveys and other quantitative methods. However, policymakers consistently express a preference for quantitative research. This is particularly true for randomised controlled trials (RCTs), which have been called the ‘gold standard’ of evaluation methods.
Sarah Ball and Joram Feitsma
One of the major trends within the contemporary policy scene is the use of behavioural insights (BI) to improve policymaking. All around the world, from Qatar to England and Japan, ‘Behavioural Insights Teams’ (or ‘BITs’), ‘Nudge advisers’ and ‘Chief Behavioural Officers’ now inhabit government, seeking to infuse it with state-of-the-art knowledge and methods from the behavioural sciences. The more specific signature traits of this BI agenda are its focus on new behavioural economics, nudge techniques and randomised controlled trials (RCTs). The COVID-19 crisis hasn’t hampered the behavioural momentum – quite the contrary: in the absence of a distributed vaccine, halting the spread of the coronavirus has very much been a behaviour change challenge, with BI in great demand. The recent launch of dedicated ‘COVID-19 Teams’ and ‘Corona Behavioural Units’ within the UK and Dutch policy scenes didn’t come as a surprise, and only confirmed that behavioural government is here to stay.
Intriguingly enough, though, one question about the new institutional praxis of ‘using BI’ has not yet been convincingly answered: what is it, really?