
Bibliometrics: A fifty years perspective

Roger Atkinson

In much earlier times - I'm talking mid to late 1960s here - authors were often informed about the publication of their newest research paper by the arrival of air-mailed 'offprint' requests. I especially liked requests from German scientists, quite flattering for me as a young postgraduate student to be addressed as 'Herr Doktor Professor', and the foreign stamps were useful for a young relative who was into collecting. Reprint requests were necessary, this being before the days of readily available photocopiers, and of course air-mailed requests arrived before UWA Library's surface-mailed copy of the journal and the publisher's 'offprints'.

One's count of reprint requests constituted an early kind of bibliometric data, notable for two desirable characteristics: data easily obtained (I just counted the heap), and data available soon after publication of one's paper. Now, fast forward fifty years. Our contemporary authors are not counters of 'snail mail' reprint requests; they may instead look at the number of Facebook 'likes' they have scored, or their Altmetric score [1], or their Google Scholar citation count and h-index [2]. That fifty-year contrast sets up a good number of interesting questions! Too many to cover here, so to narrow the focus, this column considers just the two 'desirable characteristics' as perceived fifty years ago, namely 'data obtained easily' and 'data available soon after publication'.
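As an aside for readers newer to the h-index: an author has index h if h of their papers have each been cited at least h times. A minimal sketch in Python, using invented citation counts, illustrates how mechanical the calculation is:

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Invented example: six papers with these citation counts give an h-index of 3.
    print(h_index([12, 5, 4, 2, 1, 0]))  # -> 3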

To illustrate 'data obtained easily', in earlier times journal editors used to mail complimentary hardcopy to ISI [3], as I started doing for AJET in June 2004, hoping eventually to obtain an Impact Factor for AJET ('eventually' turned out to be 2010, six years later [4]). At the time, counting of citations was a manual operation, but now, thanks to incredible advances in information technology, 'data obtained easily' is true. Data collection is done by 'bots' that 'crawl' websites all over the world, copy vast numbers of publications, and churn out citation counts from which other computer programs calculate and publish numerous kinds of bibliometrics and rankings.
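For readers who have not met it, the two-year Impact Factor that journals pursue is itself a very simple calculation: the citations a journal receives in year Y to the items it published in years Y-1 and Y-2, divided by the number of citable items it published in those two years. A minimal sketch, with invented figures:

    def two_year_impact_factor(citations_received, citable_items):
        """Citations in year Y to items from years Y-1 and Y-2, divided by
        the number of citable items published in those two years."""
        return citations_received / citable_items

    # Invented example: 90 citations in 2015 to articles published in 2013-2014,
    # drawn from 60 citable items, gives an Impact Factor of 1.5.
    print(two_year_impact_factor(90, 60))  # -> 1.5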

However, 'data obtained easily', or perhaps too easily, creates a risk: the risk of attaching too much importance to bibliometrics like Impact Factor, h-index, SCImago Journal Rank [5], altmetrics [6, 7], and others, or to related topics such as journal rankings. 'Data obtained easily' is problematic because it is not counter-balanced or moderated by the data that is hard to obtain, that is, the data that computer programs and 'simple quantification' cannot capture. Many observers have drawn attention to this risk, and to the 'perverse unintended consequences' that may follow. Table 1 presents a small, non-systematic but illustrative sample of critical views, using article titles or brief quotations and themed around an 'obsession with metrics and rankings'.

Table 1: Snips from some publications, themed around an 'obsession with metrics and rankings'

Wilsdon, et al. (2015): The metric tide: ... review of the role of metrics in research assessment and management ... we only have to look around us, at the blunt use of metrics such as journal impact factors, h-indices and grant income targets to be reminded of the pitfalls. Some of the most precious qualities of academic culture resist simple quantification, and individual indicators can struggle to do justice to the richness and plurality of our research. [8, 9]
Werner (2015): The focus on bibliometrics makes papers less useful. ... As the tyranny of bibliometrics tightens its grip, it is having a disastrous effect on the model of science presented to young researchers. [10]
Hicks, et al. (2015): Research metrics can provide crucial information that would be difficult to gather or understand by means of individual expertise. But this quantitative information must not be allowed to morph from an instrument into the goal. [11]
Smith & Bennett (2015): The ERA journal rankings were abolished in 2011. However, their ghost influences decisions from journal selection to academic recruitment and promotion. [12]
The Guardian (2015): Our obsession with metrics turns academics into data drones. [13]
Calver & Beattie (2015): Our obsession with metrics is corrupting science. [14]
Gruber (2014): Academic sell-out: How an obsession with metrics and rankings is damaging academia. ... unhealthy trend of institutions (and individuals) becoming increasingly obsessed with journal metrics and, recently, article-level metrics. [15]
Colquhoun & Plested (2014): Scientists don't count: Why you should ignore altmetrics and other bibliometric nightmares. [16]
Adams (2014): Bibliometrics has a problem. In fact, the field - which tracks scholarly impact in everything from journal papers to citations, data sets and tweets - has a lot of problems. Its indicators need better interpretation, its tools are widely misused, its relevance is often questioned. ... [17]
Kitt & Wearne (2013): ... the noise of citations and other metrics that serve benchmarking, meta-analysis and university league tables cloud and crowd discussion in this space. [18]
Thompson (2011): No tyranny of metrics: Scrapped Australian plan now a revolution in research assessment in England. [19]
Jarwal, et al. (2008): Measuring research quality using the journal impact factor, citations and 'ranked journals': Blunt instruments or inspired metrics? [20]

The contemporary equivalent of 'data available soon after publication' developed more recently and more rapidly than was the case with 'data obtained easily'. To illustrate, the following quotation is from Altmetric's website:

Knowing who's talking about your research and what they're saying is crucial in today's increasingly online world. Ensuring your work is being accurately represented and interpreted, as well as getting to the right people at the right time, all plays an important factor in its broader impact. ...
With altmetrics, you can start to track this information as soon as your research is published - meaning no waiting around for citations, and the chance to engage directly with the audiences who are interested in your work. (emphasis added) [www.altmetric.com] [6]
The first example of the adoption of altmetrics by a major publisher that I explored was from Taylor & Francis, publishers of HERDSA's journal, HERD [21]. If you look at the online tables of contents for recent volumes of HERD, you will find some new lines, for example:
Citing Articles:
CrossRef (2) | Web of Science (3) | Scopus (3)
Article Views: 274
Altmetric score: 2
Hover the cursor over any example to obtain a brief popup, or obtain a longer explanation from www.tandfonline.com/page/article-metrics. T&F explain that 'The Article metrics widget displays article-level metrics, including views, citation counts from CrossRef [22], the Web of Science [23], and Scopus [24]. It also displays an Altmetric score', where Altmetric [6] is T&F's provider of this score. Scopus (Elsevier) is the ARC's provider of citation data for ERA 2015 [25], whilst Web of Science (Thomson Reuters) and CrossRef (PILA) are also providers of citation counts.

Whilst 'data available soon after publication' is very dependent upon the 'data obtained easily' factor, that is, upon advances in computer-networked, automated gathering of information, the growing influence of the new metrics owes much to changing attitudes towards assessments of research publications. The new approaches to metrics tend towards open access and universal availability, accord much more attention to article-level metrics, and perhaps, somewhat surprisingly, have helped to wean research funding bodies such as our ARC away from the 'blunt instruments' approaches that marked earlier times.

One of the oldest and perhaps best-known metrics, ISI's Impact Factor (now Thomson Reuters), is not freely accessible to readers, as a subscription to Journal Citation Reports [26] is needed for access. However, a trend towards open access metrics seems to be emerging, for example the Google Scholar Metrics and SCImago Journal Rank provisions of h-index data, and Elsevier's Scopus Metrics. Google Scholar Metrics [2] provides authors with a free "Citation indices" service, which authors may keep private or elect to make public. SCImago Journal Rank [5] provides journal editors with HTML code that can be used, for free, to display their journal's SJR, 'cites per doc' and 'total cites' data. The Altmetric data that T&F has begun providing is free to readers, as it is included with the 'abstract only' view that is available to any reader.

Perhaps the most prominent illustration of increased attention to article-level metrics is Google Scholar Metrics. Other examples include ResearchGate [27], which features "Join for free" and allows you to "See in-depth stats on who's been reading your work and keep track of your citations". Increased attention to article-level metrics helps authors who may have published an outstanding article in a journal that ranks low on journal-level metrics. Conversely, authors of a mediocre article that somehow secured publication in a highly ranked journal will not like article-level metrics, because they will be 'found out'! However, we could note a potential dark side to article-level metrics, from the perspective of journal editors: the risk of unfairly discriminating against authors and articles that are likely to attain only poor or modest citation counts. In particular, authors from a language background other than English, and works based in country contexts other than the advanced West, may be in this category, which deserves further research.

Finally, the new approaches to metrics may have helped to wean research funding bodies and university managers away from 'blunt instruments' approaches, as starkly illustrated by the now defunct journal rankings scheme, Tiers A*, A, B and C. Notwithstanding Table 1's criticisms, we can look on the bright side and say that the new metrics helped to show conclusively the silliness of Tiers and similar journal rankings, though I would like to accord the last word in my fifty-year perspective to Smith & Bennett (2015): "... their ghost influences decisions from journal selection to academic recruitment and promotion" [12].

References

  1. Priem, J., Taraborelli, D., Groth, P. & Neylon, C. (2010). Altmetrics: A manifesto, 26 October 2010. http://altmetrics.org/manifesto

  2. Google Scholar Metrics (undated). https://scholar.google.com.au/citations?view_op=top_venues&hl=en

  3. The ISI (Institute for Scientific Information) subsequently became part of Thomson Reuters, see their Citation Impact Center at http://ip-science.thomsonreuters.com/citationimpactcenter/

  4. AJET (Australasian Journal of Educational Technology) Editorial 26(5). http://ajet.org.au/index.php/AJET/article/view/1051/311

  5. Scimago (undated). SCImago Journal Rank. http://www.scimagojr.com

  6. Altmetric (n.d.). http://www.altmetric.com

  7. Chin Roemer, R. & Borchardt, R. (2012). From bibliometrics to altmetrics: A changing scholarly landscape. College & Research Libraries News, 73(10), 596-600. http://crln.acrl.org/content/73/10/596.full

  8. Wilsdon, J., et al. (2015). The metric tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. HEFCE. http://dx.doi.org/10.13140/RG.2.1.4929.1363

  9. Wouters, P. et al. (2015). The metric tide: Literature Review (Supplementary Report I to the Independent Review of the Role of Metrics in Research Assessment and Management). HEFCE. http://dx.doi.org/10.13140/RG.2.1.5066.3520

  10. Werner, R. (2015). The focus on bibliometrics makes papers less useful. Nature, 517, 245 (15 January 2015). http://dx.doi.org/10.1038/517245a

  11. Hicks, D., Wouters, P., Waltman, L., de Rijcke, S. & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520, 429-431 (23 April 2015). http://dx.doi.org/10.1038/520429a

  12. Smith, C. & Bennett, D. (2015). Will the impact framework fix the problems the research audit found? The Conversation, 15 December. https://theconversation.com/will-the-impact-framework-fix-the-problems-the-research-audit-found-52152

  13. The Guardian (2015). Our obsession with metrics turns academics into data drones. Academics anonymous, The Guardian, 27 November. http://www.theguardian.com/higher-education-network/2015/nov/27/our-obsession-with-metrics-turns-academics-into-data-drones

  14. Calver, M. & Beattie, A. (2015). Our obsession with metrics is corrupting science. The Conversation, 1 June. https://theconversation.com/our-obsession-with-metrics-is-corrupting-science-39378

  15. Gruber, T. (2014). Academic sell-out: How an obsession with metrics and rankings is damaging academia. Journal of Marketing for Higher Education, 24(2), 165-177. http://dx.doi.org/10.1080/08841241.2014.970248

  16. Colquhoun, D. & Plested, A. (2014). Scientists don't count: Why you should ignore altmetrics and other bibliometric nightmares. Blog posting, DC's Improbable Science. http://www.dcscience.net/?p=6369

  17. Adams, J. (2014). The citation game. Nature, 510, 470-471 (26 June 2014). http://dx.doi.org/10.1038/510470a

  18. Kitt, S. & Wearne, J. (2013). Little fish in a big pond: Towards research performance metrics for smaller institutions. Journal of Institutional Research, 18(1), 36-46. http://www.aair.org.au/app/webroot/media/pdf/JIR/Journal%20of%20Institutional%20Research%20in%20Australasia%20and%20JIR/Volume%2018,%20No.1/JIR18-1KittWearne.pdf

  19. Thompson, M. (2011). No tyranny of metrics: Scrapped Australian plan now a revolution in research assessment in England. The Conversation, 3 November. https://theconversation.com/no-tyranny-of-metrics-scrapped-australian-plan-now-a-revolution-in-research-assessment-in-england-4138

  20. Jarwal, S. D., Brion, A. M. & King, M. L. (2008). Measuring research quality using the journal impact factor, citations and 'Ranked Journals': Blunt instruments or inspired metrics? Journal of Higher Education Policy and Management, 31(4), 289-300. http://dx.doi.org/10.1080/13600800903191930

  21. HERD (Higher Education Research & Development). http://www.tandfonline.com/toc/cher20/current

  22. CrossRef (undated). http://www.crossref.org

  23. Web of Science (undated). http://ipscience.thomsonreuters.com/product/web-of-science/

  24. Scopus (undated). https://www.elsevier.com/solutions/scopus

  25. Australian Research Council (2015). ERA 2015. http://www.arc.gov.au/era-2015

  26. Journal Citation Reports (undated). http://ipscience.thomsonreuters.com/product/journal-citation-reports/

  27. ResearchGate (undated). https://www.researchgate.net

Author: Roger Atkinson retired from Murdoch University in June 2001. His current activities include honorary work on the TL Forum conference series, Issues in Educational Research, and other academic conference support and publishing activities. Website (including this article in html format): http://www.roger-atkinson.id.au/

Note: The version presented here is longer than the print published version, as it includes references that were omitted for space constraint reasons.

Please cite as: Atkinson, R. J. (2016). Bibliometrics: A fifty years perspective. HERDSA News, 38(1). http://www.roger-atkinson.id.au/pubs/herdsa-news/38-1.html

