R. Christopher Sheldrick, Justeen Hyde, Laurel K. Leslie and Thomas Mackie
Achieving balance is as important to progress as innovation and discovery. That’s one of the main conclusions we drew as we wrote our recent Evidence & Policy article, ‘The debate over rational decision making in evidence-based medicine: Implications for evidence-informed policy’.
Many of us place our hopes on innovative breakthroughs and groundbreaking discoveries, believing them to be our best bet to achieve a better world. And indeed, science has produced extraordinary breakthroughs. Vaccines radically reduced the risk of death from communicable diseases. Nitrogen-based fertilisers vastly increased the production of food. Computers completely transformed how modern humans learn, work and communicate. It would seem, then, that investing in scientific breakthroughs is the key to progress. In this spirit, social scientists develop ‘evidence-based’ practices and policies and create hierarchies of evidence to determine ‘what works’. Many believe that if science can only produce enough evidence, discoveries that can change the world will follow – if only we can effectively compel others to accept them.
As social scientists, the authors of this blog fundamentally agree that science is essential to achieving a better world, but we also recognise that scientific evidence alone is not sufficient to produce better policy. Insights from the study of rational decision-making demonstrate why.
One reason is that scientific uncertainty is ubiquitous, and science itself documents this point. Replication of prior research often fails; re-analyses of identical datasets often yield inconsistent results. It is difficult to judge whether results of even well-designed studies will generalise to other populations or less-resourced settings, to valued outcomes not included in original studies, or to future time periods. Many well-studied programmes work in one setting, but are difficult to spread to another setting without careful attention to context. For reasons such as these, expert consensus about best clinical practices often changes from decade to decade and organisation to organisation, sometimes in dramatic ways. Whereas some sources of uncertainty can be meaningfully reduced by additional study (a common plea among scientists), others fundamentally depend on human judgment and expertise to apply available scientific evidence to policy decisions in a rational, informed way.
A second challenge to the application of scientific evidence to practice and policy lies in the fact that most decisions result in multiple outcomes. Benefits are almost always balanced by risks and costs. Decision analysts use the term ‘preference-sensitive’ to describe cases where existing evidence is insufficient to resolve trade-offs among different outcomes, and rational decisions therefore depend on which outcomes are valued most highly by decision-makers, whether at the individual clinical level or the policy level. Unfortunately, we often act as if scientific evidence speaks for itself – that judgment is not required, that values and preferences are somehow superfluous or irrational, and that dialogue is, at best, a means to an end. In reality, most decision-makers, particularly policymakers, must select among a variety of choices, each of which may result in a wide range of outcomes that are fraught with uncertainty. They can’t wait until the next study is published or a breakthrough happens, because a decision is required now.
The COVID-19 pandemic offers an all-too-familiar example. We all hope that a scientific breakthrough – a new vaccine or an as-yet-unidentified medical treatment – will offer a clear solution. In the meantime, we have little choice but to work together to seek balance among competing outcomes in the face of considerable uncertainty. Social distancing effectively reduces transmission rates and can save lives – but by how much, and what are the trade-offs? Indeed, social distancing also impairs mental health and interferes with children’s education and the economy – but again, by how much? Scientific evidence of the highest quality is clearly necessary to make rational decisions about how best to enact social distancing – but applying that evidence requires judgment and expertise to account for scientific uncertainty, together with consideration of people’s varied preferences regarding which outcomes they value most and what trade-offs they are willing to accept. Policymakers must make these decisions, not just for themselves and their families, but for larger populations and with insufficient evidence.
Until scientific breakthroughs occur, we must find a way to balance competing priorities. When the need for trade-offs arises, a single solution is unlikely; indeed, we should expect even reasonable people to disagree. To achieve balance, science needs to promote deeper dialogue – about the judgment we use when we apply evidence to address uncertainty, about the outcomes each of us values most when making decisions, and about how we reach some consensus to inform necessary policies.
R. Christopher Sheldrick
Boston University School of Public Health, USA
Center for Healthcare Outcomes and Implementation Research, ENRM Veterans Affairs Medical Center, USA
Laurel K. Leslie
Tufts University School of Medicine, USA
Rutgers, The State University of New Jersey, USA
All authors collaborate to study Research Evidence Adoption for Child Health (REACH), an ongoing project designed to integrate theoretical and empirical findings to improve the use of research evidence.
You can read the original research in Evidence & Policy:
Sheldrick, C.R., Hyde, J., Leslie, L.K. and Mackie, T. (2019). The debate over rational decision making in evidence-based medicine: Implications for evidence-informed policy. Evidence & Policy, DOI: 10.1332/174426419X15677739896923.
Image credit: Pikist.com
If you enjoyed this blog post, you may also be interested to read:
Making evidence and policy in public health emergencies: lessons from COVID-19 for adaptive evidence-making and intervention [Open Access]
Plural forms of evidence in public health: tolerating epistemological and methodological diversity