This blog post is part of a series linked to the Evidence & Policy Special Issue (Volume 17, Issue 2): The many faces of disability in evidence for policy and practice. Guest Edited by Carol Rivas, Ikuko Tomomatsu and David Gough. This post is based on the Special Issue article, ‘Exploring a non-universal understanding of waged work and its consequences: sketching out employment activation for people with an intellectual disability‘.
Fewer than 6% of working-age adults with a learning disability who receive social care are in any form of employment – yet studies show that 65% of this population would like paid work. Drawing on empirical data collected predominantly through ethnographic work, the research presented here offers a critical assessment of the mismatch between current policy and the available evidence. It shows that the majority of people within this demographic are underserved by, or excluded from, targeted work preparation support in England and Wales. As a consequence, such dismal employment rates are highly unlikely to increase, regardless of government rhetoric.
Mark Priestley and Stefanos Grammenos
This post is based on the Special Issue article, ‘How useful are equality indicators? The expressive function of ‘stat imperfecta’ in disability rights advocacy‘.
Measuring equality can be difficult, especially when there is a lack of suitable data available, but it makes a difference. If a thing is worth measuring then it is worth measuring well – but even approximate indications of inequality can be useful in drawing public attention to injustices, making marginalised groups more visible and challenging policy assumptions. In a newly published article in Evidence & Policy, we argue that public investments in measuring inequalities have a social value that can’t be measured by technical perfection alone. Imperfect statistics sometimes have strong policy effects!
This post is based on the Special Issue Editorial, ‘The many faces of disability in evidence for policy and practice: embracing complexity’.
‘Everyone’s a patient’ is a refrain occasionally heard from professional health policy actors dismissive of health service user evidence; they argue that their own lived experience of a visit to the doctor’s gives them sufficient authority. The fallacy of this is suggested by an eminent psychiatrist’s astonishment at his treatment when hospitalised with a complex leg fracture. A fleeting association with primary care does not equate with the expertise developed by those with conditions with no quick fix – chronic conditions and disabilities. The much-discussed PACE trial shows how political tensions can arise from a disconnect between researchers who make flawed assumptions and those they seek to help.
So how can we ensure that the ‘technical precision and expressive function’ of evidence meet the diverse needs, theoretical and ideological assumptions, and priorities of the range of policy actors? How can we prevent procedural, values-based decisions driven by political contingencies, selective use of evidence, or the absent or partial representation of disability diversity within evidence and policy?
What makes experts legitimate in the eyes of policymakers? Even though this is one of the foundational questions of the interdisciplinary scholarship on evidence and policy, the answer is neither straightforward nor simple. Expert legitimacy is driven by seeming contradictions – experts have to be responsive to policymakers’ needs but, at the same time, they cannot be too close to politics. They have to provide advice that is strongly grounded in science, but if their advice is too complex it risks being ignored or being perceived as too ‘detached’ and ‘academic’. Experts are legitimate when they are insiders and outsiders at the same time. This dynamic has become particularly evident in the ongoing pandemic, in which government advisers have had to represent (and at times defend) science while also accounting for which policy directions are ‘doable’ – publicly and politically acceptable and economically feasible.
Rebecca S. Natow
Qualitative research has the potential to be of great value in policymaking. By examining stakeholders’ lived experiences, providing rich detail about policy contexts, and offering nuanced insights about the processes through which programmes are implemented, qualitative research can supply useful information that is not easily, if at all, obtainable through surveys and other quantitative methods. However, policymakers consistently express a preference for quantitative research. This is particularly true for randomised controlled trials (RCTs), which have been called the ‘gold standard’ of evaluation methods.
I’ve learnt a few things in the few weeks since my Evidence & Policy debate article about using participatory budgeting for research funding decisions was published. The article emerged from my PhD research on trade-offs in deliberative public engagement with science. It argues that using participatory budgeting – a public engagement method – to make research funding decisions would further the international shift towards public participation in governance.
More controversially, my article argues that this would be a better way to reform research funding than lotteries, which others’ research indicates would themselves be better than current norms. Norms are changing, though – one of the things I’ve learnt since publishing the article is that the Health Research Council of New Zealand has been using a lottery to allocate some grants, and has done so long enough to publish a peer-reviewed paper about it.
Liz Richardson and Peter John
Behaviour change policies, known as nudges, have been used by governments across the world to encourage people to behave in pro-social ways, such as making healthier lifestyle choices or reducing their environmental footprint. Nudges use behavioural insights to steer people towards doing the right thing, while still giving them a choice. Critics argue that traditional nudge policies are top-down, manipulative and untransparent. Nudge policies seem to expect the worst in people, and are easy to caricature as a technocratic approach to policy design.
However, a new kind of nudge – ‘nudge plus’ – has started to spring up. Nudge plus tackles the risks of paternalism in traditional approaches through the participation of those being nudged. If nudges are going to be even more ‘bottom-up’, how can such behavioural public policies be developed?
Matthew Flinders, Gary Lowery and Barry Gibson
The COVID-19 pandemic has sparked a major debate about the role of experts in policymaking and the capacity of politicians to ‘follow the science’. The trend we have seen, where expert advisers have increasingly become the public face of the pandemic, raises questions about the evolving role of experts in other public policy challenges – including challenges where the scientific base is arguably far clearer about effective policy responses. If politicians are willing to ‘follow the science’ with such diligence in relation to COVID-19, why does the same principle not apply to other public health challenges?
Why, for example, when paediatric oral health remains a dire challenge for the UK, don’t politicians ‘follow the science’ in relation to the apparent benefits of fluoridating public drinking water? This is a question that a two-year project at the University of Sheffield has sought to answer through our recent Evidence & Policy article, ‘When evidence alone is not enough: the problem, policy and politics of water fluoridation in England’. On balance, the available data confirm that fluoridation is a low-cost, high-benefit, low-risk intervention, which explains its promotion by global health bodies.
Sarah Ball and Joram Feitsma
One of the major trends within the contemporary policy scene is the use of behavioural insights (BI) to improve policymaking. All around the world, from Qatar to England and Japan, ‘Behavioural Insights Teams’ (or ‘BITs’), ‘Nudge advisers’ and ‘Chief Behavioural Officers’ now inhabit government, seeking to infuse it with state-of-the-art knowledge and methods from the behavioural sciences. The more specific signature traits of this BI agenda appear to be its focus on new behavioural economics, nudge techniques and Randomised Controlled Trials (RCTs). The COVID-19 crisis hasn’t hampered the behavioural momentum – quite the contrary: in the absence of a distributed vaccine, halting the spread of the coronavirus has very much been a behaviour change challenge, with BI in great demand. The recent launch of dedicated ‘COVID-19 Teams’ and ‘Corona Behavioural Units’ within the UK and Dutch policy scenes didn’t come as a surprise, and only confirmed that behavioural government is here to stay.
Intriguingly enough, though, one question about the new institutional praxis of ‘using BI’ remains not yet convincingly answered: What is it, really?
R. Christopher Sheldrick, Justeen Hyde, Laurel K. Leslie and Thomas Mackie
Achieving balance is as important to progress as innovation and discovery. That’s one of the main conclusions we drew as we wrote our recent Evidence & Policy article, ‘The debate over rational decision making in evidence-based medicine: Implication for evidence-informed policy’.
Many of us place our hopes on innovative breakthroughs and groundbreaking discoveries, believing them to be our best bet to achieve a better world. And indeed, science has produced extraordinary breakthroughs. Vaccines radically reduced the risk of death from communicable diseases. Nitrogen-based fertilisers vastly increased the production of food. Computers completely transformed how modern humans learn, work and communicate. Surely, it would seem that investing in scientific breakthroughs is the key to progress. In this spirit, social scientists develop ‘evidence-based’ practices and policies and create hierarchies of evidence to determine ‘what works’. Many believe that if only science can produce enough evidence, discoveries will follow that can change the world – if only we can effectively compel others to accept them.