
Bibliometrics: A fifty years perspective

Roger Atkinson

In much earlier times - I'm talking mid to late 1960s here - authors were often informed about the publication of their newest research paper by the arrival of air-mailed 'offprint' requests. I especially liked requests from German scientists, quite flattering for me as a young postgraduate student to be addressed as 'Herr Doktor Professor', and the foreign stamps were useful for a young relative who was into collecting. Reprint requests were necessary, this being before the days of ready availability of photocopiers, and of course air-mailed requests arrived before UWA Library's surface-mailed copy of the journal and the publisher's 'offprints'.

One's count of reprint requests constituted an early kind of bibliometric data, notable for two desirable characteristics: data easily obtained (I just counted the heap), and data available soon after publication of one's paper. Now, fast forward fifty years. Our contemporary authors are not counters of 'snail mail' reprint requests; they may instead look at the number of Facebook 'likes' they have scored, or their Altmetric score [1], or their Google Scholar citation count and h-index [2]. That fifty-year contrast sets up a good number of interesting questions! Too many to address here, so to narrow the focus, this column considers just the two 'desirable characteristics' as perceived fifty years ago, namely 'data obtained easily' and 'data available soon after publication'.
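Of the newer metrics just mentioned, the h-index at least has a simple, widely used definition: an author has index h if h of their papers have each been cited at least h times. A minimal sketch in Python (the function name and the example citation counts are purely illustrative, not drawn from any real author's record):

```python
def h_index(citations):
    """Return the largest h such that at least h papers
    have h or more citations each."""
    counts = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still 'supports' h
        else:
            break  # sorted descending, so no later paper can
    return h

# Five hypothetical papers: four of them have 4+ citations,
# but there are not five with 5+ citations, so h = 4.
print(h_index([10, 8, 5, 4, 3]))
```

Sorting the counts in descending order means the index is simply the last rank at which the citation count still matches or exceeds the rank.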

To illustrate 'data obtained easily', in earlier times journal editors used to mail complimentary hardcopy to ISI [3], as I started doing for AJET in June 2004, hoping eventually to obtain an Impact Factor for AJET ('eventually' turned out to be 2010, six years later [4]). At the time, counting of citations was a manual operation, but now, thanks to incredible advances in information technology, 'data obtained easily' is true. Data collection is done by 'bots' that 'crawl' websites all over the world, copy vast numbers of publications, and churn out citation counts from which other computer programs calculate and publish numerous kinds of bibliometrics and rankings.

However, 'data obtained easily', or perhaps too easily, creates a risk, the risk of attaching too much importance to bibliometrics like Impact Factor, h-index, SCImago Journal Rank [5], altmetrics [6, 7], and others, or to related topics such as journal rankings. 'Data obtained easily' is problematic, as it is not counter-balanced or moderated by the data that is hard to obtain, that is, data that computer programs and 'simple quantification' cannot obtain. Many observers have drawn attention to this risk, and the 'perverse unintended consequences' that may occur. Table 1 presents a small, non-systematic but illustrative sample of critical views, using article titles or brief quotations and themed around an 'obsession with metrics and rankings'.

Table 1: Snips from some publications, themed around an 'obsession with metrics and rankings'

Wilsdon, et al. (2015) - The metric tide: ... review of the role of metrics in research assessment and management ... we only have to look around us, at the blunt use of metrics such as journal impact factors, h-indices and grant income targets to be reminded of the pitfalls. Some of the most precious qualities of academic culture resist simple quantification, and individual indicators can struggle to do justice to the richness and plurality of our research. [8, 9]
Werner (2015) - The focus on bibliometrics makes papers less useful. ... As the tyranny of bibliometrics tightens its grip, it is having a disastrous effect on the model of science presented to young researchers. [10]
Hicks, et al. (2015) - Research metrics can provide crucial information that would be difficult to gather or understand by means of individual expertise. But this quantitative information must not be allowed to morph from an instrument into the goal. [11]
Smith & Bennett (2015) - The ERA journal rankings were abolished in 2011. However, their ghost influences decisions from journal selection to academic recruitment and promotion. [12]
The Guardian (2015) - Our obsession with metrics turns academics into data drones. [13]
Calver & Beattie (2015) - Our obsession with metrics is corrupting science. [14]
Gruber (2014) - Academic sell-out: How an obsession with metrics and rankings is damaging academia. ... unhealthy trend of institutions (and individuals) becoming increasingly obsessed with journal metrics and, recently, article-level metrics. [15]
Colquhoun & Plested (2014) - Scientists don't count: Why you should ignore altmetrics and other bibliometric nightmares. [16]
Adams (2014) - Bibliometrics has a problem. In fact, the field - which tracks scholarly impact in everything from journal papers to citations, data sets and tweets - has a lot of problems. Its indicators need better interpretation, its tools are widely misused, its relevance is often questioned. ... [17]
Kitt & Wearne (2013) - ... the noise of citations and other metrics that serve benchmarking, meta-analysis and university league tables cloud and crowd discussion in this space. [18]
Thompson (2011) - No tyranny of metrics: Scrapped Australian plan now a revolution in research assessment in England. [19]
Jarwal, et al. (2008) - Measuring research quality using the journal impact factor, citations and 'ranked journals': Blunt instruments or inspired metrics? [20]

The contemporary equivalent to 'data available soon after publication' is something more recently and more rapidly developed than was the case with the evolution of 'data obtained easily'. To illustrate, the following quotation is from Altmetric's website:

Knowing who's talking about your research and what they're saying is crucial in today's increasingly online world. Ensuring your work is being accurately represented and interpreted, as well as getting to the right people at the right time, all plays an important factor in its broader impact. ...
With altmetrics, you can start to track this information as soon as your research is published - meaning no waiting around for citations, and the chance to engage directly with the audiences who are interested in your work. (emphasis added) [6]
The first example of the adoption of altmetrics by a major publisher that I explored was from Taylor & Francis, publishers of HERDSA's journal, HERD [21]. If you look at the online tables of contents for recent volumes of HERD, you will find some new lines, for example:
Citing Articles:
CrossRef (2) | Web of Science (3) | Scopus (3)
Article Views: 274
Altmetric score: 2
Hovering the cursor over any example produces a brief popup; a longer explanation from T&F states that 'The Article metrics widget displays article-level metrics, including views, citation counts from CrossRef [22], the Web of Science [23], and Scopus [24]. It also displays an Altmetric score', where Altmetric [6] is T&F's provider of this score. Scopus (Elsevier) is the ARC's provider of citation data for ERA 2015 [25], whilst Web of Science (Thomson-Reuters) and CrossRef (PILA) are also providers of citation counts.

Whilst 'data available soon after publication' is very dependent upon the 'data obtained easily' factor, that is upon advances in computer networked, automated gathering of information, the growing influence of the new metrics owes much to changing attitudes towards assessments of research publications. The new approaches to metrics tend towards open access and universal availability, accord much more attention to article level metrics, and perhaps, somewhat surprisingly, have helped to wean research funding bodies such as our ARC away from the 'blunt instruments' approaches that marked earlier times.

One of the oldest and perhaps best known metrics, ISI's Impact Factor (now Thomson-Reuters), is not freely accessible to readers, as a subscription to Journal Citation Reports [26] is needed for access. However, a trend towards open access metrics seems to be emerging, for example Google Scholar Metrics' and SCImago Journal Rank's provision of h-index data, and Elsevier's Scopus metrics. Google Scholar Metrics [2] provides authors with a free "Citation indices" service, which authors may keep private or elect to make public. SCImago Journal Rank [5] provides journal editors with HTML code that can be used, free of charge, to display their journal's SJR, 'cites per doc' and 'total cites' data. The Altmetric data that T&F has begun providing is free to readers, as it is included with the 'abstract only' view that is available to any reader.

Perhaps the most prominent illustration of increased attention to article level metrics is Google Scholar Metrics. Other examples include ResearchGate [27], which features "Join for free" and allows you to "See in-depth stats on who's been reading your work and keep track of your citations". Increased attention to article level metrics helps authors who may have published an outstanding article in a journal with a low ranking according to journal level metrics. Conversely, authors of a mediocre article that somehow secured publication in a highly ranked journal will not like article level metrics, because they will be 'found out'! However, we could note a potential dark side to article level metrics, from the perspective of journal editors: the risk of unfairly discriminating against authors and articles that are likely to attain only poor or modest citation counts. Authors from language backgrounds other than English, and works based in country contexts other than advanced Western ones, may be in this category; the matter deserves further research.

Finally, the new approaches to metrics may have helped to wean research funding bodies and university managers away from 'blunt instruments' approaches, as starkly illustrated by the now defunct journal rankings scheme, Tiers A*, A, B and C. Notwithstanding Table 1's criticisms, we can look on the bright side and say that the new metrics helped to show conclusively the silliness of Tiers and similar journal rankings, though I would like to accord the last word in my fifty year perspective to Smith & Bennett (2015): "... their ghost influences decisions from journal selection to academic recruitment and promotion" [12].


  1. Priem, J., Taraborelli, D., Groth, P. & Neylon, C. (2010). Altmetrics: A manifesto, 26 October 2010.

  2. Google Scholar Metrics (undated).

  3. The ISI (Institute for Scientific Information) subsequently became part of Thomson Reuters; see their Citation Impact Center.

  4. AJET (Australasian Journal of Educational Technology) Editorial 26(5).

  5. Scimago (undated). SCImago Journal Rank.

  6. Altmetric (n.d.).

  7. Chin Roemer, R. & Borchardt, R. (2012). From bibliometrics to altmetrics: A changing scholarly landscape. College & Research Libraries News, 73(10), 596-600.

  8. Wilsdon, J., et al. (2015). The metric tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. HEFCE.

  9. Wouters, P. et al. (2015). The metric tide: Literature Review (Supplementary Report I to the Independent Review of the Role of Metrics in Research Assessment and Management). HEFCE.

  10. Werner, R. (2015). The focus on bibliometrics makes papers less useful. Nature, 517, 245 (15 January 2015).

  11. Hicks, D., Wouters, P., Waltman, L., de Rijcke, S. & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520, 429-431 (23 April 2015).

  12. Smith, C. & Bennett, D. (2015). Will the impact framework fix the problems the research audit found? The Conversation, 15 December.

  13. The Guardian (2015). Our obsession with metrics turns academics into data drones. Academics anonymous, The Guardian, 27 November.

  14. Calver, M. & Beattie, A. (2015). Our obsession with metrics is corrupting science. The Conversation, 1 June.

  15. Gruber, T. (2014). Academic sell-out: How an obsession with metrics and rankings is damaging academia. Journal of Marketing for Higher Education, 24(2), 165-177.

  16. Colquhoun, D. & Plested, A. (2014). Scientists don't count: Why you should ignore altmetrics and other bibliometric nightmares. Blog posting, DC's Improbable Science.

  17. Adams, J. (2014). The citation game. Nature, 510, 470-471 (26 June 2014).

  18. Kitt, S. & Wearne, J. (2013). Little fish in a big pond: Towards research performance metrics for smaller institutions. Journal of Institutional Research, 18(1), 36-46.

  19. Thompson, M. (2011). No tyranny of metrics: Scrapped Australian plan now a revolution in research assessment in England. The Conversation, 3 November.

  20. Jarwal, S. D., Brion, A. M. & King, M. L. (2008). Measuring research quality using the journal impact factor, citations and 'Ranked Journals': Blunt instruments or inspired metrics? Journal of Higher Education Policy and Management, 31(4), 289-300.

  21. HERD (Higher Education Research & Development).

  22. CrossRef (undated).

  23. Web of Science (undated).

  24. Scopus (undated).

  25. Australian Research Council (2015). ERA 2015.

  26. Journal Citation Reports (undated).

  27. ResearchGate (undated).

  28. Smith, C. & Bennett, D. (2015). Will the impact framework fix the problems the research audit found? The Conversation, 15 December.

Author: Roger Atkinson retired from Murdoch University in June 2001. His current activities include honorary work on the TL Forum conference series, Issues in Educational Research, and other academic conference support and publishing activities. Website (including this article in html format):

Note: The version presented here is longer than the print published version, as it includes references that were omitted for space constraint reasons.

Please cite as: Atkinson, R. J. (2016). Bibliometrics: A fifty years perspective. HERDSA News, 38(1).

Created 8 May 2016. Last correction: 8 May 2016. HTML author: Roger Atkinson.