Bureaucratic Reading

By Andrew Dean

© all rights reserved. doi: 10.56449/14429291


 

In his 1993 book Cultural Capital, John Guillory dedicates nearly a hundred pages to a study of Paul de Man’s writing and reception. De Man’s critical writing, he suggests, is symptomatic of the institutional arrangements that organised American literary academia at the time, and in particular of the emergent academic star system. He finds that a certain ideology of professionalism ‘has subtended the claims for the subversiveness of literary theory, and rhetorical reading in particular, by naming the “institution” as the object of subversive teaching by charismatic master theorists’ (Cultural Capital 254). The critical writing of the theory superstars of the late twentieth century in this sense does different work from what it claims to, as it navigates a position in relation to the university before anything else. The academic celebrities of the period were in turn valued by ‘competitive university administrations, for whom the content of theory, subversive or otherwise, was largely irrelevant. What mattered was that the charisma of the master theorists could be converted into bureaucratic prestige’ (Cultural Capital 254-5).

The idea of the profession, though, which underlies Guillory’s reading, shapes more than just the likes of de Man. On this account, it is fundamental to how those of us in the academic humanities conceive of our labour. Professions, as Guillory suggests, tend to ‘den[y] the subordination of professional activity to merely bureaucratic ends’ (Cultural Capital 254). As literary professionals, we like to think of ourselves as independent from the organisations that employ us. We have a certain anti-institutional understanding of our positions, as we base our sense of what it means to be a literature professor not on the terms provided by our employers but on a wider abstraction of what a ‘professor’ is. This approach explains the sense that many of us have that our professional lives should be a certain way: free from constraint, at leisure to teach, research, and mentor as we wish. There is a tension here, of course, with the employment conditions that govern our labour, in which we are to a greater or lesser extent told what to do.

In what follows, I explore how bureaucracies value our research and examine the proxies they use to assess it. The work done in literary studies is undoubtedly shaped by how this value is assigned. Like Guillory, I understand university administrations to be largely agnostic on the question of the content of our work. As far as bureaucracies are concerned, reading neo-Marxist philosophy is no different from researching communication strategies for major firms—so long as the research leads to grants, publications, and student tuition fees. Viewed in this way, the heroic acts of subversion that we take on as literature professors—of the canon, politics, and much else besides—participate in competitive position-taking inherent to professional advancement in the university. The professional structures that underlie our research in turn enable both its intense rhetoric and its often insignificant political consequence. As Guillory puts it in his most recent monograph, Professing Criticism (2022):

The absurdity of the situation should be evident to all of us: as literary study wanes in public importance, as literature departments shrink in size, as majors in literature decline in numbers, the claims for the criticism of society are ever more overstated. In these circumstances, we ought not to pretend that the university is actually rewarding the political work of scholarship. (Professing 78)[1]

The question that concerns me here, then, is what is being rewarded in our universities. What are universities incentivising, how are they doing it, and what are the outcomes? In chapter 10 of Professing Criticism, Guillory addresses the system for evaluating humanities scholarship in higher education, primarily in the United States. At the heart of such endeavours is a question of value: what is the significance of specific acts of research into literature? Who might it be valuable for? How can any claims associated with it be articulated beyond the discipline—and how can these claims be assured? As he writes: ‘The practice of evaluating individual works of scholarship is also a practice through which the value of scholarship is expressed to ourselves, to those in nonhumanities disciplines, and ultimately to those who fund the university, whether by donation, tuition, or through the taxes they pay’ (Professing 280). The stakes are high indeed, as such evaluations describe the apparent return on investment for those who pay our salaries, ever less willingly. A better understanding of research evaluation procedures is necessary if we wish to understand how they form and deform the state of the profession.

The difficulties with evaluation are legion. For a start, the proxies through which we might ascribe value to research are imperfect and only becoming more so. It is troubling that in humanities disciplines that expect faculty to publish monographs, university presses have been reducing their publication lists. At the same time, ‘monographs are expected to be shorter and press runs smaller’ (Professing 280). There is a sense in which departments use presses as methods by which to evaluate their staff, leaving publishers ‘to wonder why they have been asked to do the work of ad hoc tenure committees’ (Professing 290). The system is degrading even as the demand to quantify achievement increases.

At the heart of Guillory’s approach to evaluation is a model that traces the escalation of judgment through institutions, moving from the rich description that expert readers offer to the ‘flattening or thinning of accountable discourse’ that takes place at the highest levels (Professing 282). In the first instance, subject area experts assess scholarship based on their understanding of how research contributes to a field of knowledge. The judgments that are being offered at this level tend to be ones that emerge out of specific disciplinary understandings, and are likely to be lively and disputatious. During a tenure evaluation process—the primary scene of evaluation in the academic system in the United States—judgments are sought from other experts in the field, both in the candidate’s home department and in other institutions. Their comments are then sent to tenure committees and provosts and deans, whose expertise ‘may be far […] from the field of candidates for tenure’ (Professing 282). As all this takes place, argues Guillory, ‘the descriptive language of evaluation loses its density’ (Professing 282-3). At the end of this process, ‘the cumulative record of publication and other numerable measures substitute for the information-rich accounts of descriptions of scholarly work by experts in the field’ (Professing 282-3).

The validity of such an evaluative mechanism lies in how effectively it translates from the lowest to the highest levels—in that sense, the flattening Guillory describes is not necessarily a problem. Candidates would rightly be concerned if a dean from a distant discipline sought to make their own judgments about the quality of the research presented in the file. To do so would be to assume unearned mastery over the significance of the candidate’s scholarship for a discipline. ‘At the most external site of evaluation, no reading of a candidate’s work need be done at all and would even in some ways be undesirable’, Guillory concludes (Professing 283). Administrators and committees ‘must defer’, in this regard at least, ‘to the immanence of the field’ (Professing 283). Expert reports are non-substitutable for this model, representing as they do the primary substantive intellectual engagement underlying the exercise (Professing 283). It is immanent field expertise that ultimately gives validity to the assessment. Nonetheless, deans’ offices can gain ‘confidence’ in expert assessments by co-ordinating them with whatever measures can be accessed through achievements legible to the bureaucracy—prizes, fellowships, publications, grants, and so on.

One of the concerns that lies behind Guillory’s chapter is the increasing demand for ‘the quantification of judgment’ (Professing 282). This demand follows wider social dynamics, in which the quantification of productivity measures has been integrated into everything from food delivery to public services to software engineering. In academia, the strategy has been to develop methods of evaluation that bypass ‘evaluative discourse’ altogether (Professing 281). Inevitably, such quantification carries the air of neutrality and even accountability, as pre-determined criteria supersede any particular person, administrator, or discipline. ‘At the limit of externality, evaluation would appear to be capable of being reduced entirely to a numerical tally’, undertaken purely by provosts and deans (Professing 283). But this seeming objectivity comes at the considerable cost of refusing to engage with the intellectual life of systems of knowledge. ‘The quantifying of achievement is a policy that arouses concern in the humanities, for very good reasons’, Guillory writes, ‘if one can demonstrate in some (or any) cases a disparity between objective measures and contradictory but persuasively argued subjective judgments about the quality of a scholar’s work’ (Professing 283). There are few ways to arbitrate such disagreements, and those making the final determinations are unlikely to be able to meaningfully weigh the substantive judgments that experts have provided.

As Guillory suggests in Professing Criticism, quantifiable measures of achievement are often understood as attempts to ‘transcend bias’ (Professing 286). Yet, as he goes on to note, ‘to say that judgment is subjective is as tautological as saying that water is wet’ (Professing 286). That is because valid judgments require expert evaluators to make claims that move between the specific research in question and the wider intellectual environment. ‘The social condition of intersubjectivity is not actually transcended by the procedure of externalization’, Guillory writes (Professing 286). In other words, the attempt to direct judgment into the institution and away from experts merely changes the location (and genre, though I will take this up later) in which judgments are made (Professing 286). To know nothing is to achieve true detachment, but, at the same time, to lose all meaning. Or as Guillory puts it: ‘the purest objectivity […]—the absence of bias—belongs to simple ignorance’ (Professing 286).

I am labouring over these points because ‘simple ignorance’ describes how a large number of academics in the humanities in Australia are evaluated by their internal university systems. In the model currently adopted by the Faculty of Arts and Education at my institution, each faculty member’s research fraction is calculated as part of an annual exercise. This exercise operates through counting specific kinds of publications and achievements, which I explain further below, and assigns annual research loading on a sliding scale from as low as 10 percent to as high as 60 percent (at Professor level). Before I go into details, though, it should be noted that a major review into workload is currently underway. This may well replace the research allocation sliding scale with an alternative. Whatever the final details, the new system is likely to remove the most explicitly distorting elements of the current model, especially by no longer incentivising volume of publication over research quality.

Nonetheless, the system as it is currently implemented has had a profound impact on how faculty undertake research at my institution, and on the research culture more broadly. It works as follows: every year, a faculty member’s ‘points’ for research outputs and achievements in the previous three years are calculated, and then compared against a table that assigns expectations to each level of seniority. At Level B (Lecturer), for example, to achieve a 40 percent research allocation a staff member must accrue 5 points. At Level E (Professor), a 40 percent research allocation requires 11 points. Low research points can lead to one’s research fraction dropping to as low as 10 percent. Once a faculty member’s research fraction drops to this baseline, they often find that their increased teaching load makes it difficult to return to a higher research loading. Research points are calculated as follows: a chapter in an edited book is worth 1; journal articles between 1 and 1.5 (depending on ‘Scimago quartile’); a PhD completion between 1 and 1.25 (depending on timeliness); and a monograph 5. Each A$10,000 per annum of grant income received by the university earns 1 point. ‘Non-traditional research outputs’ are assigned points on a sliding scale from 1 to 12. This model applies without regard to differences in expectations of quantity of publication between disciplines in the Faculty.
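
To make the arithmetic concrete, the sketch below shows in Python how such a tally reduces a three-year record to a single number. The point weights and the two 40 percent thresholds are the figures quoted above; everything else (the names, the data structures, the simplified handling of quartiles and timeliness) is my own hypothetical illustration rather than the Faculty’s actual system.

```python
# A minimal sketch of the points arithmetic described above. Weights and
# thresholds are the figures quoted in the text; names and structure are
# hypothetical.

POINT_WEIGHTS = {
    "book_chapter": 1.0,
    "journal_article_q1": 1.5,      # articles range 1-1.5 by Scimago quartile
    "journal_article_q3": 1.0,
    "phd_completion_timely": 1.25,  # completions range 1-1.25 by timeliness
    "phd_completion_late": 1.0,
    "monograph": 5.0,
}

# The two thresholds the text actually states: 5 points at Level B
# (Lecturer) and 11 at Level E (Professor) for a 40 percent fraction.
THRESHOLD_FOR_40_PERCENT = {"B": 5.0, "E": 11.0}

def tally_points(outputs, grant_income_aud_pa=0):
    """Sum points for a three-year record of outputs plus grant income."""
    points = sum(POINT_WEIGHTS[kind] for kind in outputs)
    points += grant_income_aud_pa / 10_000  # 1 point per A$10,000 p.a.
    return points

def meets_40_percent(level, outputs, grant_income_aud_pa=0):
    return tally_points(outputs, grant_income_aud_pa) >= THRESHOLD_FOR_40_PERCENT[level]

# A Lecturer with one monograph (5 points) clears the Level B bar,
# exactly as five book chapters would: the model sees no difference.
print(meets_40_percent("B", ["monograph"]))         # True
print(meets_40_percent("B", ["book_chapter"] * 5))  # True
```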

At the heart of the current workload model is the concept of equivalence—all research outputs can be translated into one another. A chapter in an edited book is worth the same as a PhD completion, while a monograph is worth 5 book chapters, or 3 book chapters and 2 journal articles in Scimago quartile 3 journals—and so on. Likewise, any article is of equal value to another in a similarly ranked journal, irrespective of topic or discipline. An article in anthropology is fundamentally the same kind of thing as an article in literary studies, just as an article about George Eliot is treated identically to an article about the rhetoric of a packet of Rice Bubbles. By not knowing anything substantive about any particular act of research, the institution can assign workloads and determine performance relative to a set of expectations.

As one might imagine, the incumbent model produces perverse outcomes. At the very least, it is hard to understand what a publication actually represents in terms of research. A single article, for example, can be strategically broken up into two publications (and hence receive double the points). An article in a Scimago quartile 1 journal that accepts short papers of 3,000 words is worth the same as a similarly ranked one that accepts papers of 10,000 words. A monograph of 120,000 words is treated as identical to one of 60,000 words. Publishing articles and chapters out of a monograph counts for additional points, despite not representing additional research. Meanwhile, the research that one does in the process of creating an anthology or editing a collection is not recognised at all in the current system. The quanta assigned to specific outputs often feel arbitrary. My own monograph, which took 8 years from conception to publication, received 5 points, while a public-facing essay I wrote, which took me less than a week, received 1. This structure has encouraged staff to produce shorter, less deeply researched publications rather than attempt ambitious projects that seek new knowledge but might well take significant time and have uncertain results.
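
The incentive to slice and recycle can be put as a single comparison. Continuing the hypothetical sketch above (the weights, again, are those quoted in the text; the scenario is illustrative):

```python
# Illustrative comparison under the quoted weights. Record A is one
# 120,000-word monograph (5 points). Record B is the same monograph plus
# two spin-off articles carved from its chapters: no new research, but
# two further points.
MONOGRAPH, ARTICLE_Q3 = 5.0, 1.0  # quartile 3 article weight assumed at 1

record_a = MONOGRAPH                    # 5.0 points
record_b = MONOGRAPH + 2 * ARTICLE_Q3  # 7.0 points for the same research

print(record_a, record_b)  # 5.0 7.0
```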

Where attempts have been made to recognise and incentivise research quality rather than quantity, the current evaluation system relies on metrics and data from Scimago that are limited at best and perverse at worst. The limitations of journal ranking systems in the humanities are well known. As Anthony Uhlmann notes, ‘Scimago allows journals to nominate multiple disciplines’, which leads to the extraordinary situation in which the top-ranked journals in literary studies are not meaningfully engaged with the discipline at all (9). For example, in December 2022 the fourth-ranked journal in ‘Literature and Literary Theory’ according to Scimago was Plant Phenomics (9). At the time of writing, PMLA, the pre-eminent journal for anglophone literary studies, was ranked 123, one behind Jordan Journal of Modern Languages and Literatures and one ahead of Islamic Africa. This situation is worse than arbitrary. Once a ranking system such as Scimago is embedded in the university’s workload model, faculty are ultimately incentivised to publish in journals that Scimago ranks highly, irrespective of those journals’ actual standing in the field.

A workload model fundamentally structures academic work, and its consequences are systemic. Under the current model in my Faculty, the refusal to recognise the editing of books or special issues of journals as research has limited opportunities for leadership and for collaboration with external colleagues. Scholars in disciplines that place high priority on original academic monographs have been encumbered by the short-termism built into the model, which requires them to publish a high quantity of articles and chapters as they work on their books. Per Guillory: ‘as a systemwide tendency, raising standards by demanding more publication is paradoxically likely to result in a decline in the quality of scholarship and a creeping cynicism about publication’ (Professing 288). Most damaging of all, teaching has come to be seen as a punishment for poor research performance in this model, discouraging academics from committing time to the activity that, among its many other values, ultimately brings in a large proportion of the department’s funding.

It is with this kind of scenario in mind that Guillory resists the tendency to narrowly identify scholarship ‘with the form of publication’—he writes that scholarship ‘as a practice’ must be separated from ‘publication as product’ (Professing 280). By not distinguishing between the two, and instead favouring publication counts, strongly quantitative systems of evaluation end up ‘distorting scholarship as practice and […] limiting its range of expression’ (Professing 280).[2] If one thinks of one’s research as extending beyond publication and into practices of observation, collection, analysis, and argument, the possibilities for undertaking research dramatically and meaningfully expand, reanimating the intellectual life of the discipline.

The failures of data-driven approaches to evaluation are likely only to become more significant in the coming years in Australian higher education. Irrespective of what approach universities themselves take, statements from government ministers indicate a preference for metrics. The 2022 letter from Minister for Education Jason Clare to the CEO of the ARC specified that the Excellence in Research Australia (ERA) exercise should be revised to operate using a ‘modern data driven approach informed by expert review’ (see Uhlmann 1).[3] ERA in its previous iterations at least had a kind of peer review built into it, whereby committees of experts with field-specific knowledge used a variety of methods (including citation metrics) to determine departmental research quality. While the ARC Review Panel has recommended against developing a solely ‘metrics-based exercise’, it is not yet clear exactly how evaluation will proceed in future (Sheil et al. 60, cited in Uhlmann 3).

The sidelining of academic expertise is no doubt one of the reasons that data-driven research evaluation appeals to academic bureaucracies, as it delivers evaluative capability to those who have no field-specific knowledge. Administrators have much greater capacity to set performance expectations and assess faculty against them in an environment where value is understood quantitatively and where one does not need to know anything about a topic to assess the quality of any particular act of research. While Professing Criticism acknowledges the consequences of the growth of administrative power in the university, the situation in Australia, without academic tenure, goes well beyond what Guillory describes in his book. In effect, job insecurity and embedded, inflexible workload models further diminish the role of the profession itself in our working lives.

Guillory suggests that ‘we must reject the line of argument that simply concedes the primacy of objectivity by naming the object of judgment as nothing other than professional success’ (Professing 290). By this he means that we should turn against the view, now at large, that ‘it is only the signs of professional recognition that we can recognise as the legitimate basis for evaluation’ (Professing 290). Such an approach starts from the view that content should not influence evaluation, as it is believed that every kind of intervention and disciplinary undertaking can be understood to be equally valuable, so long as it meets certain criteria of prestige (mostly in terms of publication). This is a kind of relativism, which we should reject. As he puts it, ‘groundless subjectivity flips over into its opposite’, as we end up simply relying on our managers to count how diligently a scholar has been ‘“objectively accruing the signs” of professional success’ (Professing 292).

Resisting quantification endeavours therefore requires that we make positive statements about the value of research in terms other than publication outcome. Or, to put that another way, those of us who do not believe that a scholar’s research achievement is best assessed by counting publications must explain why and how we should assess it better. It is with this in mind that we can begin from the question that should ultimately motivate all evaluation: ‘What is the contribution of this work to scholarship, to the humanities as disciplines of knowledge?’ (Professing 295). Responding in any depth requires the kinds of attention that are specific to fields of knowledge, as we seek to develop a descriptive account of a work’s value, both answering for it and extending beyond it. Any evaluative exercise which does not seek to answer this question should be thought of as inadequate, as Guillory suggests, as it fails to defend humanities scholarship. It does nothing more than measure ‘professional success’ (Professing 295).

Trusting the capacity of those working within a field to recognise and describe quality, and demanding that evaluations richly articulate and situate it, is to my mind one of the central businesses of a research institution. To circumvent judgment is to call into question the necessity and even value of expertise. Judgment, after all, may be subjective—but it is not arbitrary. When we make judgments we engage in a shared discourse of reason. The judgments that underlie evaluations draw on accumulations of expertise, situated knowledge, and argumentative idioms, to come to conclusions about the value of scholarship. Such points have been developed at some length by the likes of Michael Clune in A Defense of Judgment (2021), as he points out how the concept of political equality has been applied to scholarly knowledge and literary expression, a strategy that ultimately deforms our scholarly labours and embeds market rationality at the core of our readings.

In turn—and admittedly this is where matters become more difficult—if one believes that research is non-equivalisable in this way, then one must also be able to conclude that certain kinds of research are not valuable. Judgment universalises scholarship by speaking collectively about value. To argue that research efforts have not advanced knowledge or contributed meaningfully to human understanding is to engage directly in that ongoing conversation. The capacity to say no to a research undertaking is part and parcel of any legitimate enterprise of judgment. Without a sense of what is important about what our research does (or not), evaluation is denuded to the point that it becomes little more than a management technique—and bureaucracies need little encouragement to deliver to themselves the capability to determine what is and is not valuable about what we do.

It is the strength of Guillory’s work that he clearly articulates the effect that institutional shifts have had on how literary studies proceeds in the university. This is not just at the level of how our roles are performed, but also, as I suggested at the start of this essay, at the level of the research that scholars do and have done over the last few decades. Guillory makes a compelling case that ‘surrogational politics’—whereby literary study ‘became a surrogate for society in the “criticism of society”’—have come to occupy much of the attention of the discipline (Professing 69).[4] It is with this in mind that we might wonder if it is not despite the embedding of performance measures in universities that research in literary studies has become more expressly political, but in some small way because of it. Likewise, the tendency to be suspicious of the politics that underlie all cultural forms may not have challenged but rather accelerated the managerial capture of how the value of our discipline is described.

Here in Australia, we have a conversation with many speakers and few listeners, many quantifiable research achievements but few readers. Disrupting the cycle of high-speed, low-impact research requires both sustained collective action in our universities and a better narrative about how research in our disciplines can be evaluated. On the former point, the recent series of campaign successes by the National Tertiary Education Union is encouraging. At my institution, sustained collective action seems to be hastening change. The new Enterprise Agreement signed in 2023, which followed the first university-wide strike action in a decade, required the establishment of Academic Workload Advisory committees, with 40 percent representation from the Union. It is through democratic participation in our workplaces that the manifest absurdities of the current situation are starting to be unwound. Similar union victories across Australian universities suggest, too, the emergence of a new energy in political organising.

The latter point, though, requires that we reject the ideologies of relativism that have propelled managerial capture of our labours over the last several decades, and instead develop a system in which colleagues do in fact read each other’s work, make comments on it, and engage in active public debate about what matters and why. These activities, it should be noted, are usually not given any explicit weight in current systems. Evaluation is a crucial practice of academic life, and it is to our peril that we accept an increased reliance on quantification. It hollows out our labours, leads to a profound cynicism about our research, and narrows the possibility of what research can be. We say many of the same things about the classroom, where students are subject to increased quantification and metrics that can limit their imaginative endeavour. Evaluation at its best is part of how scholarship comes to have meaning in the discipline and the wider world, as debates proceed about how literature is to be understood and valued. Drawing on the non-equivalence of our research and the nature of our expertise, we can collectively participate in the project of specialist knowledge. Evaluation, again, can be an argument. All else is mere numbers.

Andrew Dean is Lecturer in Writing and Literature at Deakin University. He is the author of Metafiction and the Postwar Novel: Foes, Ghosts, and Faces in the Water (Oxford, 2021), as well as articles in journals such as MFS: Modern Fiction Studies and Studies in the Novel.

 

Notes

[1] While Guillory’s attention to the professions no doubt illuminates the specific nature of university labour, it is worth keeping in mind how such labour is connected to shifting structural conditions. The bureaucracy of the university, after all, is not entirely defined from within itself, but is rather connected to wider demands for accountability, transparency, productivity, impact, and return on investment in our public institutions. The expectation that our undertakings contribute to specific national interests and projects is made apparent not just through the National Interest Test in certain nationally competitive grants, for example, but implicitly through ongoing reward and punishment structures that flow from government policy into the university system’s internal practices. The recommendation in the Universities Accord Interim Report (2023) that governments ‘should work together to strengthen university governing boards by rebalancing their composition to put a greater emphasis on higher education expertise’ recognises how, at the top level at least, universities have been run as institutions akin to any other by a cadre of generalist public servants and business managers (Department of Education, Australian Universities Accord Interim Report, Canberra: Department of Education, 2023, 13).

[2] In my institution the ‘range of expression’ is even limited within standard academic genres, as it excludes reviews, review essays, edited books, textbooks, and anthologies.

[3] Full statement available here: <http://www.arc.gov.au/about-arc/our-organisation/statement-expectations-2022>.

[4] I have written about the interruption of cultural studies here: Andrew Dean, ‘Simon During, Crisis Talk and the Legacies of the 1980s’, Australian Literary Studies 38.2 (2023).

 

Works Cited

Clune, Michael W. A Defense of Judgment. Chicago: U of Chicago P, 2021.

Guillory, John. Cultural Capital: The Problem of Literary Canon Formation. Chicago: U of Chicago P, 1993.

—. Professing Criticism: Essays on the Organization of Literary Study. Chicago: U of Chicago P, 2022.

Sheil, Margaret, et al. Trusting Australia’s Ability: Review of the Australian Research Council Act 2001. ESE22/3770, Department of Education, 2023.

Uhlmann, Anthony. ‘Evaluating Literary Studies [FOR 4705] in Australia: Bad Data, Bad Peer Review’. Australian Literary Studies 38.2 (2023): 1-20.
