
What do avatars, white lies and poor statistical analysis have in common?

They were all covered at the 3rd World Conference on Research Integrity, which was held in Montreal on 5-8 May 2013. I was lucky enough to attend and hear a range of interesting speakers give their thoughts on research integrity, or, more to the point, on what happens when researchers have a failure of integrity!

The conference focused on empirical studies, but it also included case studies, such as researchers under investigation by the US Office of Research Integrity who had gone to the trouble of inventing people, not just fabricating data but fabricating whole identities. One researcher had even hired actors to appear in court as his colleagues – he told them it was a mock trial!

This is the Stapel end of the spectrum though, where it is clearly fraud, but there are also problems at the other end. Some presenters referred to this as misbehaviour rather than misconduct, and this is where the culture of science comes under scrutiny. Apparently the pressure to publish is causing researchers to get sloppy with their statistical analysis, be a bit selective in the data they present and not repeat their experiments to prove the effects are real. This was brought home by Veronique Kiermer from Nature, who talked about the number of corrections the journal has to publish because of sloppy work by researchers.

There were also presentations by John Ioannidis and Daniele Fanelli, who are almost household names in the research integrity literature.

John Ioannidis suggested we need to end our love affair with large effect sizes and improve our reporting practices, an apt message from the author of the most accessed article in PLoS Medicine, “Why Most Published Research Findings Are False”.

Daniele Fanelli again questioned the evidence base for the actions taken to prevent research misconduct, and also reminded us that the increased retraction rate is not a sign of a problem but a sign of the solution.

Possibly the most provocative contribution was from Dan Ariely, who presented his work on dishonesty. His experiments uncover some uncomfortable truths. Firstly, we all lie … a lot … and this result has been found all over the world, so we ALL do it. Only little lies though, as we all still like to think of ourselves as good people, well, until we don’t. Apparently a person will show the same low level of dishonesty as the rest of us but then suddenly have a marked and sustained increase, known as the “what the hell!?” effect. And how do you bring them back from their dishonesty spree? Confession. It turns out that giving them a chance at a clean slate returns them to pre-effect levels.

So how do we ensure research integrity? It might come back to acknowledging that researchers are human like the rest of us, and that we need to create a system and a culture that support researchers to act with integrity … and give them a way to rehabilitate so they don’t end up caught up in the lie and having to hire actors!

  • Paul Hutchings

    Considering there has been a high-profile study using avatars recently and that it doesn’t explicitly have anything to do with the rest of the piece – unless you know something the rest of us don’t – I’d suggest you change your opening paragraph or prepare the lawyers.

    • Elizabeth Bohm

      Thanks for your comment, Paul. I had listed those three things together because they were interesting topics at the conference, not because there was a particular link between them. The research you mention sounds interesting; could you explain more about it?