What counts as good evidence for policy? It’s a question that is fundamental to work in the science policy world, and it was the focus of a debate held earlier this week, jointly organised by the Science and Technology Studies Department at UCL and SPRU at the University of Sussex. The assembled panel were well placed to discuss the topic, and the diverse experiences they shared at the outset set us up for a lively question session.

Roger Pielke, Professor of Environmental Studies at the University of Colorado at Boulder and author of ‘The Honest Broker’, a key text on science and politics, kicked things off with the story of how Hurricane Sandy’s meteorological classification became a political issue. On its approach to the US east coast, the storm was initially called a ‘post-tropical cyclone of hurricane strength’ rather than a hurricane, shifting the liability for the ensuing damage onto insurance companies. In the aftermath, when the storm’s final classification was made, government officials put pressure on meteorologists to uphold this designation because of its political (and financial) implications. Pielke’s tale illustrated from the outset that, when it comes to evidence in policy, the goals of governments on one side and scientists on the other can be very different.

Next up was Georgina Mace FRS, Professor of Biodiversity and Ecosystems at University College London, discussing her role as a scientist in a policy-relevant area. Being commissioned to identify endangered species for the IUCN Red List highlighted to her the prescriptive limits of scientific evidence: she could identify endangered species, but prioritising their preservation was a social and political question, the answer to which would differ across cultures and groups.

Richard Horton, Editor of The Lancet, described the history of public health policy on smoking, from the presentation of evidence about smoking’s harmful effects to government in the 1950s to the introduction of the smoking ban in 2007. Is this how long it takes to effect change on the basis of good evidence? The true story of smoking policy is, of course, more nuanced, but Horton identified three key factors in its progress: evidence must come from a trusted source, the media need to be on side, and the message must be timely with respect to broader public opinion.

Finally, Jonathan Breckon from the Alliance for Useful Evidence discussed the perspective of those in power and the most common question he hears from them: what is enough evidence? Good, bad, biased and spun evidence might all make their way to the ears of the decision makers; the difficulty lies in discerning when the evidence is ‘enough’ to act. Pielke commented later in the discussion that for climate science, we have known ‘enough’ for a long time, but the continued focus on the strength of evidence has actually held back a focus on action. The panel’s opening statements framed the debate well and raised diverse issues; some key themes are discussed below.

What about missing evidence? Dr Ben Goldacre’s recent campaign for all clinical trials to be registered, in order to prevent pharmaceutical companies from withholding results and biasing the evidence about medicines, has had strong public support, and GSK has recently signed up to it and agreed to release trial data. It was agreed that accessing all available evidence is just as important as providing good evidence. Further, how should areas where evidence is unavailable be treated? Should policy makers commission more research? If so, which organisations should they trust to provide it?

How and when should evidence be delivered for maximum impact? Windows of opportunity in which evidence can be most influential (when policy positions are being determined) are difficult to foresee, but Horton argued that policy advice on an issue must be ready to ship when it is requested, not ready to be commissioned. Preparing evidence and advice can take years, which makes the work of experts and campaigners all the more difficult.

Should there be a separation between scientists and campaigners? Is such a separation possible? Richard Horton pointed out that for a public health researcher, not being an activist is the exception. However, Pielke pointed out that for climate science, experts being activists can actually lessen their credibility.

The issues discussed over the course of the evening highlighted variation in approaches to policy across fields of expertise, indicating that (surprisingly enough for a policy debate) there are many answers to the question of what counts as good evidence for policy. The discussion was lively and spilled over to the wine and nibbles table once the official session was over. Perhaps more questions were raised than answered, but it was a fascinating evening of discussion amongst people both knowledgeable and passionate about the field.

  • Bishop Hill (http://twitter.com/aDissentient)

    “Should there be a separation between scientists and campaigners? Is such
    a separation possible? Richard Horton pointed out that for a public
    health researcher, not being an activist is the exception.
    However, Pielke pointed out that for climate science, experts being
    activists can actually lessen their credibility.”

    Perhaps this explains the differences some have had with Sir Paul over his wish to take the Royal Society into the policy realm.

  • Jeremy Poynton

    Your typeface is very poor. Hard to read. 

    • Anonymous

      Hi Jeremy, thanks for your comment. We are looking to make the font a bit easier to read and hope to do this in the near future.

  • Dee Thomas

    “evidence must come from a trusted source, the media need to be on side and the message must be timely with respect to broader public opinion.”

    We have an interesting situation here in Perth and Kinross with a pilot study called Evidence2Success: a controversial social policy questionnaire surveying 9-15 year old school populations with the aim of extracting statistically valid evidence of wellbeing and inequality. The stated objective is to use the survey data to target interventions, but really it is to refocus investment on whole populations deemed to be less equal than others.

    The questionnaire was slipped into schools and marked with the word “secret” at the top of the first page; some, but not all, parents received an ambiguously worded opt-out consent form, and the participants (the children) were given little time (given their age) and very little information about the ramifications of consenting. The survey was immediately rejected by vast numbers of parents and children, with a huge media outpouring regarding the “stealing of innocence”, suspicions about data protection and ulterior motives, and calls for MSPs to act to stop the data being processed. In a nutshell, broader public opinion quickly became the stronger evidence for ditching the whole idea.