One of the major trends within the contemporary policy scene is ‘the use of behavioural insights (BI)’ to improve policymaking. All around the world, from Qatar to England and Japan, ‘Behavioural Insights Teams’ (or ‘BITs’), ‘Nudge advisers’ and ‘Chief Behavioural Officers’ now inhabit government, seeking to infuse it with state-of-the-art knowledge and methods from the behavioural sciences. The more specific signature traits of this BI agenda appear to be its focus on new behavioural economics, nudge techniques and Randomized Controlled Trials (RCTs). The COVID-19 crisis hasn’t hampered the behavioural momentum – quite the contrary: in the absence of a distributed vaccine, halting the spread of the coronavirus has very much been a behaviour change challenge, with BI in great demand. The recent launch of dedicated ‘COVID-19 Teams’ and ‘Corona Behavioural Units’ within the UK and Dutch policy scenes didn’t come as a surprise, and only confirmed that behavioural government is here to stay.
Intriguingly enough, though, one question about the new institutional praxis of ‘using BI’ remains not yet convincingly answered: What is it, really?
Many of us place our hopes on innovative breakthroughs and groundbreaking discoveries, believing them to be our best bet to achieve a better world. And indeed, science has produced extraordinary breakthroughs. Vaccines radically reduced the risk of death from communicable diseases. Nitrogen-based fertilisers vastly increased the production of food. Computers completely transformed how modern humans learn, work and communicate. Surely, it would seem that investing in scientific breakthroughs is the key to progress. In this spirit, social scientists develop ‘evidence-based’ practices and policies and create hierarchies of evidence to determine ‘what works’. Many believe that if only science can produce enough evidence, discoveries will follow that can change the world – if only we can effectively compel others to accept them.
The UK Parliament performs key democratic functions, holding the government to account by scrutinising policy, debating legislation and providing a venue for the public to air their views through elected representatives. Despite the key role of the UK Parliament in shaping government policy, for example in recent times on Brexit and COVID-19 (though many argue Parliament should have a greater role on the latter), scholars of science-policy interfaces have rarely explored how evidence is sourced and used in legislatures.
Especially in times of crisis, the relationship between evidence and policymaking may change dramatically. The current COVID-19 crisis generated manifestations of ‘evidence-informed policymaking’ in an unprecedented way, both nationally and locally. It also showed that the need to use internationally organised, reliable data for effective policy interventions has never been more urgent in peacetime. This information needs to be both in-depth and directly available.
In the processes of shaping evidence-informed policymaking, scientists from all kinds of disciplines play a crucial role in substantiating the development of policies. An international, virtual conference taking place 15–18 December 2020 will treat the outcomes of the current crisis as input for the challenge of professionalising the structured interaction between evidence and policymaking. The current learning processes will be analysed in the context of the existing knowledge infrastructure for policymakers. Instruments for creating evidence for policymakers have recently grown with the introduction of Big Data and the development of algorithms. Another widespread trend is the use of innovative evaluation processes to enhance the effectiveness of policy instruments, alongside the growth of new standards for experimental policies.
This special issue uses the lens of Creativity and Co-production to explore the meaning of ‘evidence’ and whose meaning counts. It considers what the terms ‘creating’, ‘making’ and ‘production’ mean with regard to creating, sharing and acting on knowledge. It examines the potential role that created artefacts play. For example, what are the values embodied and represented in ‘knowledge artefacts’, and what affordance and agency might they give to human actors?
Areas for discussion include:
What evidence is valid, who produces it, and how was it produced?
What is the process by which ‘evidence’ can be interrogated by others, made sense of, and acted upon?
Not acting on evidence is commonly described as the ‘evidence gap’. Could this be broken down into a series of ‘micro’ gaps between Evidence and Knowledge, Knowledge and Knowing, Knowing and Action?
What role do creative practices, tangible objects, and visual language play in bridging each of these micro gaps?
In my work with federal agencies over the last 15 years on violence prevention, social-emotional learning, mental health and homelessness, the idea of translating research to practice has become increasingly important. We know there is a gap between what we discover through research and what is applied by practitioners, funders and policymakers.
Over the past decade, federal agencies — and the US Department of Health and Human Services (HHS), in particular — have sought to learn more about the ‘science’ of implementing programmes, practices and policies. They want to invest smartly and do a better job of ensuring that decisions are as evidence-based as possible. These are noble goals — especially during this pandemic, when health and human service organizations are being asked to do things they have never done before, with lightning speed. Unfortunately, it gets complicated fast: each field has its own terminology, frameworks and measures, making it difficult to synthesise information and create a shared body of knowledge across disciplines. So where do we start?
What does it mean to use evidence in policymaking? This seemingly simple question has been remarkably under-defined in all the calls for increased use of evidence. Indeed, many of those who champion ‘evidence-based policymaking’ do little to explain what it means for a policy to be evidence-based, and have trouble explaining what evidence use actually means when decision makers have multiple competing goals and social concerns. Evidence is simply seen as a good thing – and more use is better – without really considering what that means or what happens when there is disagreement around which evidence to use for what goals.
Policy scholars who study evidence, on the other hand, have approached the issue from the perspective that ‘evidence use’ can mean any number of things within a policy setting. The literature can, therefore, appear divided into two extremes: either evidence use is taken for granted to be a known (assumed to be good) thing, with little consideration of political realities, or alternatively it is seen as multidimensional, the form of which is constructed by the nature of policy ideas, processes, and interactions.
Our university-policy maker partnership produces ‘fake’ abstracts of articles we’ve not written yet (on results we frankly don’t even know we’ve got) to loosen up thinking. It helps the team visualise pathways for policy action.
Ours is a tricky situation, politically speaking. A health department is undertaking Australia’s largest ever scale-up of evidence-based childhood obesity programs into every school and childcare centre across the state. It costs $45m. They have an electronic data monitoring system in place. It’s already telling them that targets are being met. But rather than just rest on their success, they invite a team of researchers to do a behind-the-scenes, no-holds-barred ethnography. It could reveal the ‘real’ story of what goes on at the ground level.
Jennifer Lawlor, Kathryn McAlindon, Kristen Mills, Jennifer Neal and Zachary Neal
Policy makers are working hard to promote the use of research in education. But, does ‘research’ mean the same thing to policy makers and educators? While this question might seem basic, it’s important to know if policy makers and educators are speaking the same language.
Our study examines similarities and differences between educators’ definitions of research and the definitions used in US Federal education policy. Our findings show that educators tend to focus on the process and products of research, while policy definitions focus on data and outcomes.
‘Wouldn’t it be great if the evidence-to-policy work we’re seeing on the rise in Africa could be visible to a wider audience?’ That was the question my colleagues at the William and Flora Hewlett Foundation and I had on our minds in 2017, seeing the creativity and resourcefulness of a host of organisations and champions from the region as they advanced a complex agenda. Now, just a few years later, the opportunity to learn from African experiences is realised in the volume Using Evidence in Policy and Practice: Lessons from Africa, edited by Ian Goldman and Mine Pabari (Routledge, 2020). The book, which both articulates a conceptual framework for thinking about the elements of a contextually determined evidence ecosystem and presents eight case studies about diverse experiences, adds immeasurably to the literature on evidence-informed decision making.