‘Everyone’s a patient’ is a refrain occasionally heard from professional health policy actors dismissive of health service user evidence; they argue that their own lived experience of a visit to the doctor’s gives them sufficient authority. The fallacy of this is suggested by an eminent psychiatrist’s astonishment at his treatment when hospitalised with a complex leg fracture. A fleeting association with primary care does not equate to the expertise developed by those living with conditions that have no quick fix: chronic conditions and disabilities. The much-discussed PACE trial shows how political tensions can arise from a disconnect between researchers who make flawed assumptions and the people they seek to help.
So how can we ensure that the ‘technical precision and expressive function’ of evidence meets the diverse needs, theoretical and ideological assumptions, and priorities of the full range of policy actors? How can we prevent procedural, values-based decisions that are driven by political contingencies, that draw selectively on evidence, or that rest on the absent or partial representation of disability diversity within evidence and policy?
After a period in which the onward march of evidence-informed decision-making appeared to be faltering in countries such as the US and UK, the acute uncertainties of the COVID-19 pandemic have triggered a fresh explosion of engagement with evidence and policy interactions – from diverse disciplinary, sectoral and institutional perspectives.
Does scientifically backed information capture the attention of policymakers? To test this, I conducted a field experiment embedded in a real-life advocacy initiative targeted at members of the European Parliament in April 2018. As described in my Evidence & Policy article, ‘The silver bullet reversed: the impact of empirical evidence on policymaker attention’, the results indicate that ideas-based information, rather than empirical information, gathers more attention from policymakers. More precisely, it is the announcement of ideas, rather than the actual information, which manages to capture policymaker interest. Crucially, these findings hold across political groups, policy support and gender.
Does research add value? How can we tell? With no mechanism to quality-rate research outside of the university sector, research can be overlooked or, worse, discontinued, particularly when organisations face ever-increasing pressures. In this blog, we discuss how we sought to protect our research investment by providing an evidence trail of how project findings contributed to strategic priorities. This blog covers the key points of what we did and what we found; for a fuller version, see our Evidence & Policy article, ‘Research assessment in a National Health Service organisation: a process for learning and accountability’.
Knowledge brokers are intermediaries who play a potentially vital role in galvanising change. Studies of knowledge brokers have mostly taken place in high-income countries, so we know much less about knowledge brokers in low- and middle-income countries (LMICs). To help address this gap, a global health focused research team conducted three studies following up with knowledge broker participants of international conferences in 2012, 2013 and 2015. The aim was to identify whether evidence from the conferences was shared with others and led to actions such as changes in health policy and practice, and what factors influenced decisions to share and act on evidence.
One of the major trends within the contemporary policy scene is ‘the use of behavioural insights (BI)’ to improve policymaking. All around the world, from Qatar to England and Japan, ‘Behavioural Insights Teams’ (or ‘BITs’), ‘Nudge advisers’ and ‘Chief Behavioural Officers’ now inhabit government, seeking to infuse it with state-of-the-art knowledge and methods from the behavioural sciences. The more specific signature traits of this BI agenda appear to be its focus on new behavioural economics, nudge techniques and Randomised Controlled Trials (RCTs). The COVID-19 crisis hasn’t hampered the behavioural momentum – quite the contrary: in the absence of a distributed vaccine, halting the spread of the coronavirus has very much been a behaviour change challenge, with BI in great demand. The recent launch of dedicated ‘COVID-19 Teams’ and ‘Corona Behavioural Units’ within the UK and Dutch policy scenes didn’t come as a surprise, and only confirmed that behavioural government is here to stay.
Intriguingly enough, though, one question about the new institutional praxis of ‘using BI’ remains not yet convincingly answered: What is it, really?
Many of us place our hopes on innovative breakthroughs and groundbreaking discoveries, believing them to be our best bet to achieve a better world. And indeed, science has produced extraordinary breakthroughs. Vaccines radically reduced the risk of death from communicable diseases. Nitrogen-based fertilisers vastly increased the production of food. Computers completely transformed how modern humans learn, work and communicate. Surely, it would seem that investing in scientific breakthroughs is the key to progress. In this spirit, social scientists develop ‘evidence-based’ practices and policies and create hierarchies of evidence to determine ‘what works’. Many believe that if only science can produce enough evidence, discoveries will follow that can change the world – if only we can effectively compel others to accept them.
This special issue uses the lens of Creativity and Co-production to explore the meaning of ‘evidence’ and whose meaning counts. It considers what the terms ‘creating’, ‘making’ and ‘production’ mean with regard to knowledge creation, sharing and putting into action. It examines the potential role that created artefacts play. For example, what are the values embodied and represented in ‘knowledge artefacts’, and what affordance and agency might they give to human actors?
Areas for discussion include:
What evidence is valid, who produces it, and how was it produced?
What is the process by which ‘evidence’ can be interrogated by others, made sense of, and acted upon?
Not acting on evidence is commonly described as the ‘evidence gap’. Could this be broken down into a series of ‘micro’ gaps between Evidence and Knowledge, Knowledge and Knowing, Knowing and Action?
What role do creative practices, tangible objects, and visual language play in bridging each of these micro gaps?
On the understanding that human beings are relational and storytelling animals, who make sense of the world through narrative and dialogue, we developed a storytelling approach to using evidence, which started by developing what has been described as an ‘enriched environment of care and learning’. Within such an environment, everyone involved should gain a sense of security, continuity, belonging, purpose, achievement and significance. To enable this, we started with the priorities of those involved and valued their evidence (i.e. practice knowledge, the lived experience of older people and carers, and organisational knowledge) alongside the research evidence, which we were careful not to impose on them. A challenge for the research team was how to do this.