Somewhere underneath the Wellcome Collection, I spent two days last week discussing ‘altmetrics’, the catchy portmanteau for ‘alternative metrics’—new ways of tracking conversations about and usage of research online. The 1:AM conference brought together the tech teams developing altmetrics with the wider community that might want to use them, including research funders, librarians, universities, scholarly societies and bibliometrics researchers.
Alternative to what?
Academic papers are the main form of output for the majority of researchers in science and engineering disciplines, and citations are the main measure of their use. Citation-based metrics—like the Journal Impact Factor or h-index—therefore carry a lot of weight amongst academics and publishers, but such metrics are not perfect measures of quality and don’t account for all types of impact. The Royal Society signed the San Francisco Declaration on Research Assessment in 2013, which states that “the Journal Impact Factor receives too much emphasis in research assessment processes worldwide.”
The concept of altmetrics emerged a few years ago to capture some of the broader uses of research. For a given paper, altmetrics providers, such as Altmetric.com and ImpactStory, are able to show associated conversations and other metrics collected from Twitter, news coverage, blog posts and even citations in ‘grey literature’ (such as policy documents).
What are they good for?
This new wave of article-level metrics is not without its critics. When articles are ‘ranked’ by their Altmetric.com scores, comedy rather than quality can often win the day; witness the high-scoring “An In-Depth Analysis of a Piece of Shit: Distribution of Schistosoma mansoni and Hookworm Eggs in Human Stool”. There are also worries that researchers could fake or buy tweets or news stories to boost their scores. Altmetrics data generally don’t correlate well with citations, but rather than being a weakness, this might indicate that they are measuring something different. But what?
Altmetrics data seem to capture something that isn’t always easy to spot: use of research that doesn’t result in citations. Given the focus on impact in the last Research Excellence Framework, we can bet that tools that help us to identify impact and tell stories about it will be useful. The Royal Society generally supported the inclusion of impact in that framework, but has called for metrics to be broad and flexible.
From papers to figures and data
Of course, scholarly communication is not conducted through papers alone, and towards the end of the second day of the conference we focussed on other research outputs. Figures and datasets, for example, are increasingly shared online, and Sarah Callaghan from the Research Data Alliance told us how little citation-based metrics reveal about the use of open datasets. Altmetrics might be useful here too. The way that researchers and others share and comment on their work is changing—something we’ll be exploring as part of our celebration of 350 years of scientific publishing next year. The role altmetrics will ultimately play is not yet clear.
Matchmaking for metrics
No research metrics are perfect, but attendees saw many uses for them: institutions making funding or recruitment decisions, funders evaluating their spending, or HEFCE assessing the quality of the institutions to which it distributes Government funding. HEFCE are conducting an independent review of the role of metrics in research assessment, with the aim of working out how best to apply them. Having more diverse tools might be useful, but the very idea of using metrics and measuring impact is not supported by all researchers—a perspective that was notably absent from the conference—so altmetrics may remain controversial.
One attendee referred to altmetrics as an answer in search of a question. By the end of the conference, the emphasis was on matching the metrics to the questions we want to ask about research quality and impact. 1:AM was a good step in bringing the stakeholders together, albeit with relatively sparse representation from the academic community. Now they need to keep talking to work out how altmetrics will ultimately be used.