Should scientists who want to influence government 'speak truth to power', or follow the 'rules of the game'? Do you make more difference as an outsider or an insider? This matters to any scientist who wants their research findings to have impact. As a former Department of Health civil servant now employed in a university public health department, I often work with my Liverpool and Oxford colleagues on achieving 'policy impact', and this question arises every time; it has never mattered more than during the Covid-19 pandemic. So what is the best way to influence government?
Kelsey Wuerstl, Miranda A. Cary, Katrina Plamondon, Davina Banner-Lukaris, Nelly Oelke, Kathryn M. Sibley, Kristy Baxter, Mathew Vis-Dunbar, Alison M. Hoens, Ursula Wick, Stefan Bigsby and Heather Gainforth
When reading articles describing a collaborative research decision, such as a research partnership creating a list of research priorities, we often read the statement, ‘The research partnership came to consensus’. But what does this statement actually mean – what is consensus, what does it mean to come to consensus, and how did the research partnership come to consensus?
Research partnerships bring researchers and research users together in collaborative research projects to enhance the relevance and usefulness of research findings. Consensus methods require group members to develop, and agree to support, a solution that is in the group's best interest. However, simply doing partnered research and using consensus methods does not guarantee that the research addresses the priorities of those most affected, nor that inclusion and power dynamics have been considered. Consensus methods are often poorly reported, missing crucial information about how the research partnership made decisions about the project and how issues of inclusion, equity and power dynamics were navigated.
We conducted a scoping review to better understand how research partnerships use consensus methods in health research and how they navigate inclusion and power dynamics. Our findings, published in Evidence & Policy, led to six recommendations for enhancing the quality of research teams' consensus methods.
The use of evidence in public health decision-making is not as straightforward as it may seem: people have different ideas of what constitutes 'evidence' and how it should be interpreted and used in different contexts. Even when there is agreement on what constitutes evidence, research has shown that the same evidence, used in different contexts, can lead to different policy decisions. A current example is e-cigarette policy recommendations. Our Evidence & Policy article explores how context, broadly defined as the factors that influence decision-making, shapes the role of evidence in developing recommendations and how it may contribute to different policy approaches.
In a society steered by complex processes such as globalisation and institutional complexity, we are increasingly confronted with what are sometimes called 'wicked problems': problems that are difficult or impossible to solve because of incomplete, contradictory and changing requirements that are often hard to recognise. At Ghent University in Belgium, we were interested in tackling such problems by setting up a collaboration with diverse community stakeholders. We sent out a call asking stakeholders to submit proposals on societal challenges that could be addressed by bringing together various actors and making use of scientific knowledge. An employee of the municipal department of well-being and equal opportunities submitted a case concerning antisocial behaviour by young people in a municipal park. The researchers involved selected this case as a pilot project for further investigating collaborative processes. For our study published in Evidence & Policy, we analysed four video-recorded co-creation sessions from this case to better understand how knowledge exchange can actually contribute to problem solving.
Our Evidence & Policy article reports the findings of a systematic review of how and for what purpose legislators use research evidence. It also examines legislators' perspectives on enablers and barriers to using research evidence.
We searched for all published studies, in English or French, that empirically examined how legislators use research evidence and the barriers and facilitators to such use. We included relevant studies regardless of the year of publication, the country in which the study was conducted or the type of legislature. We found twenty-one studies, most of which were conducted in the United States; there has been noticeable growth in such studies since the 2010s.
We gain fascinating insights from working alongside organisations across public service sectors that want to use data and evidence well to understand and track their impact. We specialise in working with organisations where the difference they make is hard to measure directly, and where the main focus for change is relationships: work that educates, empowers, inspires, supports, encourages or influences people. What we have observed over the last five years is that every organisation is influenced by its data culture, but it is rarely talked about. It is something we highlight in our new book, How do you know you are making a difference, from Policy Press.
Through our company, Matter of Focus, we support organisations to understand the context for their work, set out their theories of change, and use these as a lens for collecting and analysing data that can help them understand their change processes and evidence the difference they make. This means we host workshops and meetings where people really get to grips with different elements of their work, and we see what is inside the Pandora's box when organisations start to review the data they hold about their own work.
The COVID-19 pandemic has posed significant challenges for governments, policy advisors and citizens alike. Wide-reaching and contentious decisions had to be made at a moment's notice while evidence about the virus was scarce and, at times, involved conflicting knowledge claims. Under these conditions of uncertainty and ambiguity, questions have emerged about the roles that values and ethical advice played in the decision-making process.
In our Evidence & Policy article, we look beyond the discussion of scientific advice and ask where, when and how ethical advice was sought. The article is based on a documentary analysis of policy papers and documents published by UK government advisory committees, and on a workshop with UK government ethics advisors and researchers. Our analysis focuses on both the temporal and spatial dimensions of ethical advice during the COVID-19 lockdown in the UK. By asking when and how ethical advice was sought, we set out to account for the (changing) role of ethics and to identify distinct ethical moments and stages at which ethics was taken into consideration.
Are policies based on available evidence? Are academic experts willing to provide their expertise? What enables or constrains the effective use of evidence for policymaking?
Public policy scholarship has puzzled over such questions about the evidence–policy relationship for decades. Over time, ever more differentiated branches of public policy research have developed, complementing and enriching each other. However, they have also developed their own perspectives, languages and understandings of 'evidence', 'policy' and the connections between them.
Such differences in terminology and concepts are more than 'just words'. Rather, attentiveness to careful conceptualisation helps to set clear boundaries for theory development and empirical research, to avoid misunderstandings, and to enable dialogue across different literatures. Against that backdrop, in our article published in Evidence & Policy, we conducted a qualitative systematic review of recent public policy scholarship with the aim of tracing different conceptualisations of 'evidence', 'policy' and their connections. To be included in our review, research articles needed to address some sort of evidence, some sort of policy, and some sort of connection between the two (a list of all included articles is available online). The review followed all steps of the PRISMA methodology.
While the evidence base on successful practices in knowledge exchange is growing rapidly, much less attention has been paid in the academic literature to documenting and reflecting on failures in exchanging different types of evidence between academics, practice partners and policymakers. Yet learning from failure is just as important as celebrating success, if not more so.
In 2017–2018, a large school district in the U.S. was threatened by the state education agency with the closure of 23 struggling elementary schools unless it could improve students' performance on state-mandated assessments. The district's Office of Elementary Curriculum and Development immediately tried to determine which reading resources (reading programmes, assessments, online tools, book collections and professional development supports) were available at each school and to assess their effectiveness in improving student reading proficiency. To help with this evaluation task, our research–practice team explored options for quickly providing suitable evidence on the effectiveness of each of the 23 reading resources used at one or more of these schools. We expected to find reasonable consistency across multiple sources of information that we could use to help guide the district's actions. The results were not quite as expected.