Jeremy Farrar, Director of the Wellcome Trust, welcoming delegates to the 1:AM conference.

Somewhere underneath the Wellcome Collection, I spent two days last week discussing ‘altmetrics’, the catchy portmanteau for ‘alternative metrics’—new ways of tracking conversations about, and usage of, research online. The 1:AM conference brought together the tech teams developing altmetrics with a wider community that might want to use them, including research funders, librarians, universities, scholarly societies and bibliometrics researchers.

Alternative to what?

Academic papers and citation counts are the main form of output for the majority of researchers in science and engineering disciplines. Citation-based metrics—like the Journal Impact Factor or h-index—therefore carry a lot of weight amongst academics and publishers, but such metrics are not perfect measures of quality and don’t account for all types of impact. The Royal Society signed the San Francisco Declaration on Research Assessment in 2013, stating that “the Journal Impact Factor receives too much emphasis in research assessment processes worldwide.”

The concept of altmetrics emerged a few years ago to capture some of the broader uses of research. For a given paper, altmetrics providers such as ImpactStory are able to show associated conversations and other metrics collected from Twitter, news coverage, blog posts and even citations in ‘grey literature’ (such as policy documents).

What are they good for?

This new wave of article-level metrics is not without its critics. When articles are ‘ranked’ by their scores, comedy rather than quality can often win the day; for example, the high-scoring “An In-Depth Analysis of a Piece of Shit: Distribution of Schistosoma mansoni and Hookworm Eggs in Human Stool”. There are also worries that researchers could fake or buy tweets or stories online to boost their scores. Altmetrics data generally don’t correlate well with citations, but rather than being a weakness, this might indicate that they are measuring something different. The question is: what?

Altmetrics data seem to capture something that isn’t always easy to spot: use of research that doesn’t result in citations. Given the focus on impact in the last Research Excellence Framework, we can bet that tools that help us to identify impact and tell stories about it are going to be useful. The Royal Society generally supported the inclusion of impact in the last Research Excellence Framework, but has called for metrics to be broad and flexible.

From papers to figures and data

Of course, scholarly communication is not conducted through papers alone, and towards the end of the second day of the conference we focussed on other research outputs. Figures and datasets, for example, are increasingly shared online, and Sarah Callaghan from the Research Data Alliance told us how citation-based metrics reveal very little about the use of open datasets—so altmetrics might be useful here too. The way that researchers and others share and comment on their work is changing, something we’ll be exploring as part of our celebration of 350 years of scientific publishing next year. The role altmetrics will ultimately play is not yet clear.

Matchmaking for metrics

No research metrics are perfect, but attendees saw many uses for them: for institutions to make funding or recruitment decisions, for funders to evaluate their spending, or for HEFCE to assess the quality of the institutions to which they distribute Government funding. HEFCE are conducting an independent review of the role of metrics in research assessment, with the aim of working out how best to apply them. Having more diverse tools might be useful, but the very idea of using metrics and measuring impact is not supported by all researchers—a perspective that was notably absent at the conference—so they may continue to attract controversy.

One attendee referred to altmetrics as an answer in search of a question. By the end of the conference, the emphasis was on matching the metrics with the questions we want to ask about research quality and impact. 1:AM was a good step in bringing the stakeholders together, albeit with relatively sparse representation of the academic community. Now they need to keep talking to work out how altmetrics will ultimately be used in future.

  • David_Colquhoun

I’m glad you pointed out that this meeting had a “sparse representation of the academic community”. In fact this meeting resembled a sales meeting for companies who are trying to sell their product to universities. Altmetrics is indeed an “answer in search of a question”. Or perhaps one should say many answers, because each salesman counted a different set of things, with different, entirely arbitrary weightings.

Thanks for linking to my piece. The examples cited there suggest to me that tweets might well be negatively correlated with the quality of the work. Certainly, nothing with much mathematics is likely to do well on any sort of altmetrics. That alone is an indication of the harm that these methods could do if they were taken seriously for the assessment of people or departments.

    One major problem is that there is no serious research. What passes for research is usually the correlation of one surrogate outcome of unknown value with another. This tells us next to nothing.

Any sort of research that ignores the content of papers is useless. But the companies that sell the products are unlikely to do any sort of research that might reduce sales. Until such time as these products can be shown to have value, they should be ignored.

    • Eleanor Beal

David, glad you found the post interesting – thanks for providing some thought-provoking comments alongside it.