Rebecca S. Natow
Qualitative research has the potential to be of great value in policymaking. By examining stakeholders’ lived experiences, providing rich detail about policy contexts, and offering nuanced insights about the processes through which programmes are implemented, qualitative research can supply useful information that is not easily, if at all, obtainable through surveys and other quantitative methods. However, policymakers consistently express a preference for quantitative research. This is particularly true for randomised controlled trials (RCTs), which have been called the ‘gold standard’ of evaluation methods.
Theresa Canova Norton
‘An e-mail never made me change the way I do things’, a colleague once said. Implicit in this statement is the idea that passively receiving information alone is unlikely to motivate change. How might this observation inform the way we approach disseminating the best available evidence? This is what we explore in our Evidence & Policy article, ‘“Maybe we can turn the tide”: an explanatory mixed-methods study to understand how knowledge brokers mobilise health evidence in low- and middle-income countries’.
Knowledge brokers are intermediaries who play a potentially vital role in galvanising change. Studies of knowledge brokers have mostly taken place in high-income countries, so we know much less about knowledge brokers in low- and middle-income countries (LMICs). To help address this gap, a global health-focused research team conducted three studies following up with knowledge broker participants of international conferences in 2012, 2013 and 2015. The aim was to identify whether evidence from the conferences was shared with others and led to actions such as changes in health policy and practice, and what factors influenced decisions to share and act on evidence.
I’ve learnt a few things in the few weeks since my Evidence & Policy debate article about using participatory budgeting for research funding decisions was published. The article emerged from my PhD research on trade-offs in deliberative public engagement with science. It argues that using participatory budgeting public engagement methods to make research funding decisions would further the international shift towards public participation in governance.
More controversially, my article argues that this would be a better way to reform research funding than lotteries, which others’ research indicates would be better than current norms. Norms are changing though – one of the things I’ve learnt more about since publishing this article is how the Health Research Council of New Zealand has been using a lottery to allocate some grants. They have been doing that for long enough to publish a peer-reviewed paper about it.
Emilia Aiello, Claire Donovan, Elena Duque, Serena Fabrizio, Ramon Flecha, Poul Holm, Silvia Molina, Esther Oliver and Emanuela Reale
Scientific research has the potential to improve people’s lives, but the translation of scientific evidence into social impact is not always easy. According to the Expert report of the European Commission ‘Monitoring the impact of EU Framework Programmes’, ‘social impact is the improvement of society and citizens in relation to their own goals (like the United Nations Sustainable Development Goals)’. How can social science and humanities research achieve this?
Governments and society increasingly demand that scientific research demonstrates social impact and benefit. In this context, scientists are encouraged to reach out to their communities, share their research and its impact on people’s everyday lives, listen to communities and consider their research from the perspective of the people they serve. Social Sciences and Humanities (SSH) research has been challenged in this regard and has been at risk of being eliminated from the European Union’s Framework Programme for Research and Innovation ‘Horizon 2020’. In response, it is necessary to identify and promote the use of effective strategies for enhancing the social impact of research, so that it can inform evidence-based policies and the actions of professionals, citizens and civil society organisations.
Liz Richardson and Peter John
Behaviour change policies, known as nudges, have been used by governments across the world to get people to behave in pro-social ways, such as making healthier lifestyle choices or reducing their environmental footprints. Nudges use behavioural insights to steer people into doing the right thing, while also giving them the choice. Critics argue that traditional nudge policies are top-down, manipulative and opaque. Nudge policies seem to expect the worst in people, and are easy to caricature as a technocratic approach to policy design.
However, a new kind of nudge – ‘nudge plus’ – has started to spring up. Nudge plus tackles the risks of paternalism in traditional approaches through the participation of those being nudged. If nudges are going to be even more ‘bottom-up’, how can such behavioural public policies be developed?
Matthew Flinders, Gary Lowery and Barry Gibson
The COVID-19 pandemic has sparked a major debate about the role of experts in policymaking and the capacity of politicians to ‘follow the science’. The trend we have seen, where expert advisers have increasingly become the public face of the pandemic, raises questions about the evolving role of experts in other public policy challenges – including challenges where the scientific base is arguably far clearer about effective policy responses. If politicians are willing to ‘follow the science’ with such diligence in relation to COVID-19, why does the same principle not apply to other public health challenges?
Why, for example, when paediatric oral health remains a dire challenge for the UK, don’t politicians ‘follow the science’ in relation to the apparent benefits of fluoridating public drinking water? This is a question that a two-year project at the University of Sheffield has sought to answer through our recent Evidence & Policy article, ‘When evidence alone is not enough: the problem, policy and politics of water fluoridation in England’. On balance, the available data confirms that fluoridation is a low-cost, high-benefit, low-risk response, which explains its promotion by global health bodies.
Peter van der Graaf
Keen to have impact with your research but getting lost in all the knowledge exchange frameworks and models that are out there? Based on ten years’ experience working in translational public health for Fuse – The Centre for Translational Research in Public Health, a UK Clinical Research Centre collaboration across five universities in North East England, we identified four practical steps to develop collaborative research and achieve meaningful change in policy and practice.
The challenges of using research to inform policy and practice are well documented, including in public health, where the evidence base for interventions or programmes is patchy or contested. In response to these challenges, an abundance of models and frameworks have been developed in recent years that try to define the knowledge exchange process (how research evidence can be used, in combination with other types of knowledge, to change policy and practice). Practitioners and researchers venturing into the field of knowledge exchange are bewildered by the options available, many of which don’t go beyond the conceptual level and fail to describe in practical terms what research translation looks like on the ground.
Sarah Ball and Joram Feitsma
One of the major trends within the contemporary policy scene is the use of behavioural insights (BI) to improve policymaking. All around the world, from Qatar to England and Japan, ‘Behavioural Insights Teams’ (or ‘BITs’), ‘Nudge advisers’ and ‘Chief Behavioural Officers’ now inhabit government, seeking to infuse it with state-of-the-art knowledge and methods from the behavioural sciences. The more specific signature traits of this BI agenda appear to be its focus on new behavioural economics, nudge techniques and randomised controlled trials (RCTs). The COVID-19 crisis hasn’t hampered the behavioural momentum – quite the contrary: in the absence of a distributed vaccine, halting the spread of the coronavirus has very much been a behaviour change challenge, with BI in great demand. The recent launch of dedicated ‘COVID-19 Teams’ and ‘Corona Behavioural Units’ within the UK and Dutch policy scenes didn’t come as a surprise, and only confirmed that behavioural government is here to stay.
Intriguingly enough, though, one question about the new institutional praxis of ‘using BI’ remains not yet convincingly answered: What is it, really?
R. Christopher Sheldrick, Justeen Hyde, Laurel K. Leslie and Thomas Mackie
Achieving balance is as important to progress as innovation and discovery. That’s one of the main conclusions we drew as we wrote our recent Evidence & Policy article, ‘The debate over rational decision making in evidence-based medicine: Implications for evidence-informed policy’.
Many of us place our hopes on innovative breakthroughs and groundbreaking discoveries, believing them to be our best bet to achieve a better world. And indeed, science has produced extraordinary breakthroughs. Vaccines radically reduced the risk of death from communicable diseases. Nitrogen-based fertilisers vastly increased the production of food. Computers completely transformed how modern humans learn, work and communicate. Surely, it would seem that investing in scientific breakthroughs is the key to progress. In this spirit, social scientists develop ‘evidence-based’ practices and policies and create hierarchies of evidence to determine ‘what works’. Many believe that if only science can produce enough evidence, discoveries will follow that can change the world – if only we can effectively compel others to accept them.
You know the story. A lone cowboy (unfortunately never a cowgirl) rides away into the sunset, having saved the day. The same expectations are often placed on knowledge brokers who bring together different communities to share knowledge and catalyse change. The lone knowledge broker is supposed to be a hero. But speaking from decades of experience, you just can’t do it alone. A single person does not have all the necessary networks, knowledge, understanding, skills or credibility. To be effective, knowledge brokers need teams.
In a unique experiment from 2013–2016, we set up the Bristol Knowledge Mobilisation team. This was made up of four local healthcare policymakers (called ‘commissioners’) and three primary care academics, all of whom held part-time contracts both with the university and in healthcare commissioning. Our aim was for both communities to draw on each other’s knowledge to create ‘research-informed commissioning’ and ‘commissioning-informed research’ (i.e. research of genuine relevance).