Mike Taylor is a software engineer by day, a palaeontologist by night, and an open-access advocate in his spare time. Among his achievements are the co-description and naming of two dinosaurs: Xenoposeidon (“alien earthquake god”) and Brontomerus (“thunder thighs”). With fellow palaeontologist Matt Wedel, he runs the blog Sauropod Vertebra Picture of the Week, a 50/50 blend of hardcore osteology and scholarly publishing discussion.

CC-BY (2.0) epSos.de.


It’s widely recognised that the established scholarly publishers skim an awful lot of money off the top of research budgets. The Big Four (Elsevier, Springer, Wiley, Informa) all have profit margins in the range 32–42%. For Elsevier alone, a 38.9% profit on revenue of £2126M (page 17 of their own 2013 annual report) represents £826M diverted away from research – a figure more than sixteen times the £50M that the Finch Report estimated as the annual cost of transition to an all-open-access ecosystem.

Elsevier representatives will point out in their defence that some open-access publishers have even higher profit-margins: for example Hindawi’s founder claimed in a 2012 interview a net profit of $3.3M on revenue of $6.3M for the first half of 2012 – a profit of 52.4%. Even PLOS, an avowedly non-profit organisation, runs at an operating surplus of 27% (expenses of $37M against revenue of $50.8M according to their 2013 report).

Can this be justified? I have three thoughts.

First, the emphasis on profit margins – that is, profit as a percentage of revenue – is misleading. Hindawi’s median APC is $600 (calculated from their listing). So a 52.4% profit on a typical paper represents $314 leaving academia and going into shareholders’ pockets; whereas 38.9% of a typical Elsevier paper, with an APC of $3000, is $1167. So when the Wellcome Trust funds publication in a hybrid OA Elsevier journal, it diverts nearly four times as much cash out of academia as when its authors use Hindawi.
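The arithmetic behind that comparison can be made explicit with a quick sketch (all figures are the ones quoted above):

```python
# Dollars per paper leaving academia = publisher profit margin x APC.
def profit_per_paper(apc_usd: float, margin: float) -> float:
    return apc_usd * margin

hindawi = profit_per_paper(600, 0.524)    # $600 median APC, 52.4% margin
elsevier = profit_per_paper(3000, 0.389)  # $3000 APC, 38.9% margin

print(f"Hindawi:  ${hindawi:.0f}")             # Hindawi:  $314
print(f"Elsevier: ${elsevier:.0f}")            # Elsevier: $1167
print(f"Ratio:    {elsevier / hindawi:.1f}x")  # Ratio:    3.7x
```

The point is that a high margin on a cheap APC can still extract far less cash than a lower margin on an expensive one.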

Second, much depends on the destination of the profits. When Elsevier or Hindawi profit from publishing, that money is lost to academia. By contrast, PLOS’s operating surplus – $367 of the $1350 APC on a PLOS ONE paper – is ploughed back into their mission “to accelerate progress in science and medicine by leading a transformation in research communication”. The same obviously applies to society publishers such as the Royal Society itself.

Finally, and most important, what really matters is not how much profit a publisher makes, but simply how much they charge to publish. To funding agencies, the price of an APC is money that can’t be spent elsewhere, whether it goes to publisher profits or merely covers publisher costs. It’s better to pay a $400 APC of which $200 is profit than a $500 APC of which $150 is profit. APC funds can be more effectively used when the price of publishing goes down, and it really doesn’t matter much whether that is achieved by publishers cutting profits or cutting costs.

And this in the end is the conclusive argument against legacy publishers such as Elsevier: irrespective of what the profit margins are, the prices are simply too high. There is no legitimate need for the Wellcome Trust to continue spending an average of £1821 ($2730) on APCs, mostly with legacy publishers, when newer born-digital publishers can do an objectively better job for much less money.

This post was first published on the Royal Society’s In Verba blog on 1 May 2015, and relates to our Future of Scholarly Scientific Communication events (#FSSC), bringing together stakeholders in a series of discussions on evolving and controversial areas in scholarly communication, looking at the impact of technology, the culture of science and how scientists might communicate in the future.

10 Responses to “#FSSC: Are publisher profits justifiable?”

  1. Richard Sever

    Estimates suggest Wellcome Trust are spending $10-15K per article in
    eLife, which is a born-digital publisher. This high cost is in part
    because it is a selective journal with a high rejection rate. In
    selective journals that charge APCs, accepted articles must bear the
    cost of processing large numbers of rejected articles. Higher APCs of
    some legacy journals may thus simply be a result of their higher
    rejection rates. The fact that most born-digital journals (unlike eLife)
    are designed to have high acceptance rates, allowing them to monetize
    a greater percentage of articles (e.g. PLOS ONE), or are subsidized by
    such journals (e.g. PLOS Biology by PLOS ONE), supports the idea that
    rejection rates are critical determinants.

    None of this is to
    say that there isn’t overcharging in some cases. And whether the system
    should have these selective journals at all is of course a key question
    and something that emerges from the incentive structures that exist in
    academia. But it’s important to realize that born-digital alone does not
    provide the solution. Indeed, journals like Nature Communications and
    Cell Reports are, like eLife, born-digital, and have APCs higher than
    those of hybrid journals and than the Wellcome Trust’s mean spend.

  2. Mike Taylor

    To answer my own question … Theo Andrew did just this in a 2012 article in Ariadne. See http://www.ariadne.ac.uk/issue70/andrew

    The graph does indeed show a strong tendency for hybrid APCs to be higher than those of equally prestigious pure-OA journals, although there is more overlap than I would have predicted.

    Frustratingly (and in keeping with much of the discussion at the FSSC meetings!), although the dataset used in the study is supposedly available from http://datashare.is.ed.ac.uk/handle/10283/250 it can’t actually be downloaded: Firefox reports “Content Encoding Error”. That means I can’t calculate the regression lines for the two sets of journals.

    • Richard Sever

      That graph does seem to show higher APCs for higher IF journals, which
      probably correlate with higher rejection rates [again I’m not making any claims about ‘quality’].

      • Mike Taylor

        Yes. But what a shame that, in an article that did all the hard work, they didn’t go the extra nine yards and actually report the regression equations and correlation coefficients!

      • Mike Taylor

        BTW, Royal Society, you will get much more of a back-and-forth going in these comments sections if you stop moderating every individual comment. You should be using the WordPress default policy of considering a commenter OK to post once you’ve moderated one of their comments through.

  3. Richard Sever

    Just to clarify: I am not making any claims about journal prestige, solely rejection rate (though many will understandably argue there is a correlation between the two). I’m not aware of a data set out there, but I can tell you these are exactly the sorts of mental calculations publishers make when setting APCs given that, as was pointed out at #FSSC, the cost of administering peer review is such a big part of the equation.

    • Tim Vines

      My calculations suggest a cost of $350 per round of peer review, with editorial rejects costing about $50. These costs are mostly salary for editorial office staff. One can get close to the APC of an OA journal with

      (1/acceptance rate * 350) + 600

      where the $600 is the fixed costs for each accepted article for typesetting, article hosting and admin. For example:

      PLoS ONE = 1/0.6 * 350 + 600 = $1,183

      BMC Ev Biol = 1/0.25 * 350 + 600 = $2,000

      PLoS Biology = 1/0.15 * 350 + 600 = $2,933

      One can thus see that the majority of the expense in science publishing is incurred when the editorial office spends time on papers that are then rejected (i.e. for which no APC can be collected).

      This is why independent peer review (e.g. Axios) is so important – our ‘prefiltering’ step ensures that journals receive papers that they think are publishable, and unsuitable manuscripts are diverted elsewhere. The acceptance rate of all journals rises as a result, saving the journal (and the community) a lot of money.
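Tim Vines’ back-of-envelope model above can be sketched in a few lines. The figures are his ($350 per round of peer review, $600 of fixed costs per accepted article); the acceptance rates are his estimates for the three journals:

```python
# Sketch of the APC cost model from the comment above:
# estimated APC = review cost / acceptance rate + fixed per-article cost.
REVIEW_COST = 350  # $ per round of peer review (mostly editorial-office salary)
FIXED_COST = 600   # $ per accepted article (typesetting, hosting, admin)

def estimated_apc(acceptance_rate: float) -> float:
    """Cost a journal must recover per accepted article."""
    return REVIEW_COST / acceptance_rate + FIXED_COST

for journal, rate in [("PLoS ONE", 0.60),
                      ("BMC Ev Biol", 0.25),
                      ("PLoS Biology", 0.15)]:
    print(f"{journal}: ${estimated_apc(rate):,.0f}")
# PLoS ONE: $1,183
# BMC Ev Biol: $2,000
# PLoS Biology: $2,933
```

The model makes the prefiltering argument concrete: raising the acceptance rate shrinks the first term directly, which is exactly what diverting unsuitable manuscripts before submission achieves.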