I retired on 1 October 2013, shortly before the deadline for being entered for the Research Excellence Framework (REF). The REF was introduced to satisfy a threefold purpose: to provide accountability for public investment in research and produce evidence of the benefits of this investment; to provide benchmarking information and establish reputational yardsticks, for use within the higher education sector and for public information; and to inform the selective allocation of funding for research. It is a process of expert review, conducted by panels for each of 34 subject-based ‘units of assessment’ under the overall guidance of four main panels. Expert panels are made up of senior academics, international members and research users. For each submission, three distinct elements are assessed: the quality of ‘outputs’ (eg publications, performances, and exhibitions), their ‘impact’ beyond academia, and the ‘environment’ that supports research. At least, that’s the theory. Sound reasonable?
There is a general point to be made upfront. The introduction of ‘targets’ and the increasing use of metrics to judge the extent to which they are met constitute a means of institutional control over personnel. This is not just because each individual can be deemed a success or failure based on a definitive score; it is also because the very process constrains that individual and binds him or her to projects and activities that ‘count’, that attract a metric reward. After all, poor metrics threaten prospects of salary increments, promotion and continuing/future employment.
In the years leading up to my retirement, sociologists were largely unrecognised within UCL. There was certainly no formal entry to the sociology panel. Readers might recall me being invited to publish in Nature by a lab-based Head of the Department of Medicine, Patrick Vallance, who was clueless both about me and about sociology. Notwithstanding the positive feedback he received about my work when he made enquiries, further down the line I was to receive a letter from UCL stating that my record in ‘non-hospital-based clinical subjects’ was not strong enough and that I would not be ‘returned’. I replied that I was equally inept at radio astronomy and music: I was a sociologist. I insisted that a letter be sent saying that I was not to be returned for ‘strategic reasons’, and that my record was in fact strong, or that I be returned to my own panel in sociology. The latter option was ruled out because UCL did not have a Department of Sociology. The eventual compromise was that I would be returned with a referral across to the sociology panel. Another coming of the REF was set for the year of my retirement, but I just escaped its clutches by retiring on 1 October 2013.
I was and am aware that many excellent sociologists with enviable reputations have found themselves disadvantaged by the REF, some of whom have never been returned despite significant accomplishments. Sometimes this is simply ignorance on the part of heads of departments or senior managers, and sometimes it’s a function of more perverse motives. The risk of not being returned can be that one is left in a ‘dustbin’ category of the miscellaneous comprising the ‘research inactive’. I know of one colleague who was informed, apparently with a straight face, that her work was ‘too scholarly’ and that she should be focusing on randomised controlled trials (RCTs). It is only too apparent that the REF: is intrinsically flawed, since no simple metric of the type deployed is possible; allows for the over- and under-estimation of academic worth (several past Nobel Prize winners would simply not have been returned according to REF criteria); cannot cope with anomalies, like sociologists in medical or dental schools; is open to abuse by universities, which, rather than exercising discretion, can and do use the REF as a management device; and exercises a corrosive surveillance over the research of academics many years in advance of its submission deadlines (by rating total research revenue over its products, and productivity in high-impact journals over the contents of papers).
The senior management teams of universities will argue that the ‘REF game’ is one they cannot afford not to play. I am reminded of a point I used to make to my students when they were unhappy about a new university policy: ‘You do realise that if you all walked out, they’d drop it, don’t you?’ In a similar vein, if all Vice-Chancellors declined to play the ‘REF game’, it would quickly be abandoned. This, incidentally, is a core and enduring sociological quandary: how to mobilise people to defend their interests, or better, how they might mobilise themselves. But as things stand as I write this, most line-managers feel stuck with the REF. They are then charged with insisting on and exercising discretion. If they do not exercise discretion, if, for example, they abandon first-rate sociologists in the likes of medical or dental schools or maliciously define them as under-achievers, then they are guilty of what Bourdieu called symbolic violence. They are jeopardising those colleagues’ careers and should be held to account. As I was ultimately to be informed by an ally, UCL’s Vice-Provost for Research David Price, there is no excuse for not referring a sociologist to the sociology panel.
Universities are now awash with metrics, spewing ever-more precisely formulated ranking tables and systems of appraisal for academics. Indeed, people are beginning to refer more widely to the ‘metric society’. I recently went online to research what my own metrics amount to, and then compared them with those of subsequent generations of academic sociologists. As has been made clear, I personally was able to side-step the worst of this process through a mix of babyboomer luck and determined resistance. But well into retirement I visited Google Scholar and signed up. I was assailed by the ‘h-index’, which apparently refers to the largest number of publications each cited at least that same number of times (eg an h-index of 17 means that the author has published at least 17 papers, each cited at least 17 times). I discovered that an h-index of 20 is ‘good’, 40 no less than ‘great’. The second index of some import, the ‘i10-index’, is simpler: it refers to the number of publications with at least 10 citations. Consulting Google Scholar as I write (on 22 October 2022), I find that my h-index is 49 (so, ‘great’), and my i10-index is 108. I have clocked up a total of 10,248 citations to date.
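For readers who like to see the arithmetic made explicit, the sketch below shows how these two indices can be computed from a list of per-paper citation counts. It is a minimal illustration in Python of the standard definitions, not Google Scholar’s own code, and the citation figures in the example are hypothetical.

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h


def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for cites in citations if cites >= 10)


# Hypothetical citation counts, for illustration only.
papers = [1022, 768, 724, 428, 302, 292, 15, 9, 3]
print(h_index(papers))    # 8: eight papers each have at least 8 citations
print(i10_index(papers))  # 7: seven papers have 10 or more citations
```

On these made-up figures the eighth-ranked paper has nine citations but the ninth has only three, so the h-index settles at 8; notice how a handful of modestly cited papers contributes nothing to either index, whatever their scholarly worth.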
There are several points to make. First, when I initially checked Google Scholar I was surprised, and not unpleasantly, that my work was being cited. But, second, closer inspection of the figures led me to qualify their value. Consider the i10-index. On the face of it this seems a reasonable indicator of something like ‘impact’. However, I looked – am looking at this moment, as I sit in a local cafe – at my most cited publications. Top of the list is my 1986 article in Sociology of Health and Illness on ‘Being epileptic: coming to terms with stigma’, which is gratifying; it has been cited 1,022 times to date. Second comes another contribution to Sociology of Health and Illness, ‘Health-related Stigma’, this one published in 2009 and with 768 citations. Bear with me while I continue down this list for a bit longer. Third is a multi-authored paper in the Lancet, published in 2014, called ‘Culture and Health’, with 724 citations. Fourth: another Lancet paper, ‘Stigma and disease: changing paradigms’, published in 1998, with 428 citations. Fifth: a co-authored paper in Social Science and Medicine entitled ‘Health work, female sex workers and HIV/AIDS: global and local dimensions of stigma and deviance as barriers to effective interventions’, published in 2008 and accumulating 302 citations. Finally, in this ‘short’ short list, a paper accepted by Social Theory and Health called ‘Re-framing stigma: felt and enacted stigma and challenges to the sociology of chronic and disabling conditions’, published in 2004, with 292 citations. Apart from the focus on stigma, which is perhaps predictable, it should be obvious that two of my supposedly ‘top’ publications are in the Lancet, which for me is a secondary outlet, a bit like a magazine and therefore of little moment.
Having checked my own metrics, I then looked up several colleagues, an exercise that on the face of it put me firmly in my place. There were two lessons I learned. The first confirmed that what is key for ‘good metrics’ is not only where you publish (ie in high-impact journals with a wide, non-specialist readership) but also the subject-matter. To expand on the latter point, it is clear that papers on methods, (meta)review articles, and contributions on issues of immediate current concern (eg presently, COVID-19) attract most interest, and therefore most citations. The second lesson was that fellow sociologists a decade or more younger than me typically have much ‘better’ metrics. I accept that this likely reflects scholarly excellence and appeal as well as increased productivity, but there is surely a question mark over what else this new level of productivity could or does reflect and amount to. What exactly am I saying here? My hypothesis is that the ubiquitous use of metrics is indeed constraining colleagues both to be more productive and to be more selective in the kinds of publications they concentrate on and where they place them. For sociologists this amounts to a taming of the discipline. It is a general hypothesis, if a vitally important one. I do not for a moment want to denigrate colleagues’ high output or the quality of their work, but I do wonder how much will survive their sometimes-stellar careers. This is an issue that picks up on my previous explications of system imperatives and lifeworld colonisation.
The retrospective application of metrics to my own career is another matter of interest, at least to me. When I was promoted to professor the question arose as to which ‘band’ I should be allocated. The problem as I saw it was that the whole of my career up to that time had been geared to scholarly excellence, and indeed I was rated highly for my international reputation. However, a new set of criteria had recently been appended, encompassing the likes of policy, public engagement and service. Unsurprisingly my CV had not been geared to such criteria. I ended up in the lowest pay band and was to remain there for the rest of my time at UCL.
A broader point amounts to the now commonplace assertion that we inhabit a ‘metric society’. In other words, the constraining arena of seemingly well-intentioned but ultimately taming metrics has become the norm for many citizens in many walks of life. It is a mode of control and intentionally constructed as such. In relation to the NHS, I recall that when New Labour required GPs to hit targets for seeing their patients, my local surgery – predictably enough – took steps to cook its books. If no consultation was available on the day patients telephoned for one, they were told to ring back the next day, and so on, until a same-day appointment became available. Insisting that there was no great urgency, and that an appointment made now for a few days later would do, cut no ice. The upshot was that every patient could be said to get an appointment on the day they telephoned. This sleight of hand ran right through society and became symbolic. What mattered was not what kind of service was delivered but whether enough boxes could be ticked to satisfy the specified criteria for a good service (ie as with the GP example). This phenomenon was ubiquitously termed ‘box ticking’. A metric society is in many respects a box-ticking society; and this privileges appearance over reality. Fake reality and cover your backs at the same time! It is the epitome of what Ritzer in his The McDonaldization of Society calls ‘the irrationality of rationality’.
It is not unreasonable to liken the metricalisation of academia and society to the way head coaches of international sports teams adjust tactics not by observing players’ performances but by scrutinising GPS data on their laptops. It is as if academics must be constantly alert to senior management’s moving verdicts on their worth to the institution in the lead-up to the latest national review of universities’ comparative productivity. Metrics in this context represent a crude Weberian ‘juridification’ or bureaucratisation of an academic field now infiltrated and ‘caged’ by neoliberal political agendas. Theorists like me are at a particular disadvantage because we attract less funding and rarely edge our way into the public sphere or fuel policy decisions. Contrastingly, we often raise challenging and uncomfortable questions for the institutions in which we work.