Contemporary societies, as we know, rest on calculation. From the establishment of statistics, which was essential to the construction of the modern state, to double-entry bookkeeping as the key accounting technique for ‘rationalizing’ capitalism and colonial trade, the capacity to express quality (or qualities, to be more precise) through numbers is at the core of the modern world.
From a sociological perspective, this capacity involves a set of connected operations. One is valuation, the social process through which entities (things, beings) come to (be) count(ed); the other is commensuration, or the establishment of equivalence: what counts as or for what, and under what circumstances. Marion Fourcade specifies three steps in this process: nominalization, the establishment of ‘essence’ (properties); cardinalization, the establishment of quantity (magnitude); and ordinalization, the establishment of relative position (e.g. position on a scale defined by distance from other values). While, as Mauss has demonstrated, none of these processes are unique to contemporary capitalism – barter, for instance, involves both cardinalization and commensuration – they are both amplified by and central to the operation of global economies.
Given how central the establishment of equivalence is to contemporary capitalism, it is not a little surprising that we seem so palpably bad at it. How else to explain the fact that, on the day when 980 people died from Coronavirus, the majority of UK media focused on the fact that Boris Johnson was recovering in hospital, reporting in excruciating detail the films he would be watching? While some joked that such excessive concern for the health of the (secular) leader was reminiscent of the doctrine of ‘The King’s Two Bodies’, others seized the metaphor and ran with it – unironically.
Briefly (and somewhat reductively – please go somewhere else if you want to quibble, political theory bros), ‘King’s Two Bodies’ is a concept in political theology by which the state is composed of two ‘corporeal’ entities – the ‘body politic’ (the population) and the ‘body natural’ (the ruler)*. This principle allows the succession of political power even after the death of the ruler, reflected in the pronouncement ‘The King is Dead, Long Live the King’. From this perspective, the claim that 980 < 1 may seem justified. Yet there is something troubling about this, even beyond basic principles of decency. Is there a number large enough to disturb this balance? Is it irrelevant whose lives they are?
Formally, most liberal democratic societies forbid the operation of a principle of equivalence that values some human beings as lesser than others. This is most clearly expressed in universal suffrage, where one person (or, more specifically, one political subject) equals one vote; on the global level, it is reflected in the principle of human rights, which asserts that all humans have a certain set of fundamental and inalienable rights simply as a consequence of being human. All members of the set ‘human’ have equal value, just by being members of that set: in Badiou’s terms, they ‘count for one’.
Yet, liberal democratic societies also regularly violate these principles. Sometimes, unproblematically so: for instance, we limit the political and some other rights of children and young people until they become of ‘legal age’, which is usually the age at which they can vote; until that point, they count as ‘less than one’. Sometimes, however, the consequences of differential valuation of human beings are much darker. Take, for instance, the migrants who are regularly left to drown in the Mediterranean or treated as less-than-human in detention centres; or the NHS doctors and nurses – especially BAME doctors and nurses – whose exposure to Coronavirus gets less coverage than that of politicians, celebrities, or royalty. In the political ontology of contemporary Britain, some lives are clearly worth less than others.
The most troubling implication of the principle by which the body of the ruler is worth more than a thousand (ten thousand? forty thousand?) of ‘his’ subjects, then, is not its ‘throwback’ to mediaeval political theology: it is its meaning for politics here and now. The King’s Two Bodies, after all, is a doctrine of equivalence: the totality of the body politic (state) is worth as much as the body of the ruler. The underlying operation is 1 = 1. This is horribly disproportionate, but it is an equivalence nonetheless: both the ruler and the population, in this sense, ‘count for one’. From this perspective, the death of a sizeable portion of that population cannot be irrelevant: if the body politic is somewhat diminished, the doctrine of the King’s Two Bodies suggests that the power of the ‘ruler’ is somewhat diminished too. By implication, the political ontology of the British state currently rests not on the principle of equivalence, but on a zero-sum game: losses in population do not diminish the power of the ruler, but rather enlarge it. And that is a dangerous, dangerous form of political ontology.
*Hobbes’ Leviathan is often seen as the perfect depiction of this principle; it is possible to quibble with this reading, but the cover image for this post – here’s the credit to its creator on Twitter – is certainly the best possible reflection on the shift in contemporary forms of political power in the aftermath of the Covid-19 pandemic.
During the last #USSstrike, on non-picketing days, I practiced working to contract. Working to contract is part of the broader strategy known as ASOS – action short of a strike – and it means fulfilling your contractual obligations, but not more than that. Together with many other UCU members, I will be moving to ASOS from Thursday. But how does one actually practice ASOS in neoliberal academia?
I am currently paid to work 2.5 days a week. Normally, I am in the office on Thursdays and Fridays, and sometimes half a Monday or Tuesday. The rest of the time, I write and plan my own research, supervise (that’s Cambridgish for ‘teaching’), or attend seminars and reading groups. Last year, I was mostly writing my dissertation; this year, I am mostly filling out research grant and job applications in a panic, for fear of being without a position when my contract ends in August.
Yet I am also, obviously, not ‘working’ only when I do these things. The books I read are, more often than not, related to what I am writing, teaching, or just thinking about. Often, I will read ‘theory’ books at all times of day (a former partner once raised the issue of the excess of Marx on the bedside table), but the same can apply to science fiction (or any fiction, for that matter). Films I watch will make it into courses. Even time spent on Twitter occasionally yields important insights, including links to articles, events, or just the general mood of a certain category of people.
I am hardly exceptional in this sense. Most academics work much more than their contracted hours. Estimates vary from 45 to as much as 100 hours/week; regardless of what counts as a ‘realistic’ assessment, the majority of academics report not being able to finish their expected workload within a 37.5-40hr working week. Working on weekends is ‘industry standard’; there is even a dangerous overwork ethic. Yet increasingly, academics have begun to unite around the unsustainability of a system in which we feel ever more overwhelmed and underpaid, with mental and other health problems on the rise. This is why rising workloads are one of the key elements of the current wave of UCU strikes. It has also led to the coining of a parallel hashtag: #ExhaustionRebellion. It seems like the culture is slowly beginning to shift.
From Thursday onwards, I will be on ASOS. I look forward to it: being precarious sometimes makes not working almost as exhausting as working. Yet the problem with the ethic of overwork is not only that it is unsustainable, or that it is directly harmful to the health and well-being of individuals, institutions, and the environment. It is also that it is remarkably resilient: and it is resilient precisely because it relies on some of the things academics value the most.
Marx’s theory of value* tells us that the origins of exploitation in industrial capitalism lie in the fact that workers do not have ownership over the means of production; thus, they are forced to sell their labour. Those who own the means of production, on the other hand, are driven by the need to keep capital flowing, for which they need profit. Thus, they are naturally inclined to pay their workers as little as possible, as long as that is sufficient to actually keep them working. For most universities, a steady supply of newly minted graduate students, coupled with seemingly unpalatable working conditions in most other branches of employment, means they are well positioned to drive wages further down (in the UK, by 17.5% in real terms since 2009).
This, however, is where the usefulness of classical Marxist theory stops. It is immediately obvious that many of the conditions of late 19th-century industrial capitalism no longer apply. To begin with, most academics own the most important means of production: their minds. Of course, many academics use and require relatively expensive equipment, or work in teams where skills are relatively distributed. Yet, even in the most collective of research teams and the most collaborative of labs, the one ingredient that is absolutely necessary is precisely human thought. In the social sciences and humanities, this is even more the case: while a lot of the work we do happens in libraries, or in seminars, or through conversations, ultimately – what we know and do rests within us**.
Neither, for that matter, can academics simply be written off as unwitting victims of ‘false consciousness’. Even if the majority could conceivably have been unaware of the direction or speed of the transformation of the sector in the 1990s or the early 2000s, after last year’s industrial action this is certainly no longer the case. Nor is this true only of those who are disproportionately affected by its dual face of exploitation and precarity: even academics on secure contracts and in senior positions increasingly view changes to the sector as harmful not only to their younger colleagues, but to themselves. If nothing else, what the USS strikes achieved was to help the critique of neoliberalism, marketization and precarity migrate from the pages of left-leaning political periodicals and critical theory seminars into mainstream media discourse. Knowing that current conditions of knowledge production are exploitative, however, does not necessarily translate into knowing what to do about them.
This is why contemporary academic knowledge production is better characterized as extractive or rentier capitalism. Employers, in most cases, do not own – certainly not exclusively – the means of production of knowledge. What they do instead is provide the setting or platform through which knowledge can be valorized, certified, and exchanged; and charge a hefty rent in the process (this is one part of what tuition fees are about). This ‘platform’ can include anything from degrees to learning spaces; from labs and equipment to email servers and libraries. It can also be adjusted, improved, fitted to suit the interests of users (or consumers – in this case, students); this is what endless investment in buildings is about.
The cunning of extractive capitalism lies in the fact that it does not, in fact, require workers to do very much. You are a resource: in industrial capitalism, your body is a resource; in cognitive capitalism, your mind is a resource too. In extractive capitalism, it gets even better: there is almost nothing you do – not a single aspect of your thoughts, feelings, or actions – that the university cannot turn into profit. Reading Marxist theory on the side? It will make it into your courses. Interested in politics? Your awareness of social inequalities will be reflected in your teaching philosophy. Involved in community action? It will be listed in your online profile under ‘public engagement and impact’. It gets better still: even your critique of extractive, neoliberal conditions of knowledge production can be used to generate value for your employer – just make sure it is published in the appropriate journals, and before the REF deadline.
This is the secret to the remarkable resilience of extractive capitalism. It feeds on exactly what academics love most: on the desire to know more, to explore, to learn. This is, possibly, one of the most basic human needs past the point of food, shelter, and warmth. The fact that the system is designed to make access to all of the latter dependent on being exploited for the former speaks, I think, volumes (it also makes The Matrix look like less of a metaphor and more of an early blueprint, with technology just waiting to catch up). This makes ‘working to contract’ quite tricky: even if you pack up and leave your office at 16.38 on the dot, Monday to Friday, your employer will still be monetizing your labour. You are probably, even if unwittingly, helping them do so.
What, then, are we to do? It would obviously be easy to end with a vague call a las barricadas, conveniently positioned so as to boost one’s political cred. Not infrequently, my own work has been read in this way: as if it ‘reminds academics of the necessity of activism’ or (worse) ‘invites to concrete action’ (bleurgh). Nothing could be farther from the truth: I absolutely disagree with the idea that critical analysis somehow magically transmigrates into political action. (In fact, why we are prone to mistaking one for the other is one of the key topics of my work, but this is an ASOS post, so I will not be writing about it). In other words, what you will do – tomorrow, on (or off?) the picket line, in a bit over a week, in the polling booth, in the next few months, when you are asked to join this or that committee or to review a junior colleague’s tenure/promotion folder – is your problem and yours alone. What this post is about, however, is what to do when you’re on ASOS.
Therefore, I want to propose a collective reclaiming of the life of the mind. Too much of our collective capacity – for thinking, for listening, for learning, for teaching – is currently absorbed by institutions that turn it, willy-nilly, into capital. We need to re-learn to draw boundaries. We need thinking, learning, and caring to become independent of the process that turns them into profit. There are many ways to do this – and many have been tried before: workers’ and cooperative universities; social science centres; summer schools; and, last but not least, our own teach-outs and picket line pedagogy. But even when these are not happening, we need to seriously rethink how we use the one resource that universities cannot replace: our own thoughts.
So from Thursday next week, I am going to be reclaiming my own. I will do the things I usually do – read; research; write; teach and supervise students; plan and attend meetings; analyse data; attend seminars; and so on – until 4.40. After that, however, my mind is mine – and mine alone.
*Rest assured that the students I teach get treated to a much more sophisticated version of the labour theory of value (Soc1), together with variations and critiques of Marxism (Soc2), as well as ontological assumptions of heterodox vs. ‘neoclassical’ economics (Econ8). If you are an academic bro, please resist the urge to try to ‘explain’ any of these as you will both waste my time and not like the result. Meanwhile, I strongly encourage you to read the *academic* work I have published on these questions over the past decade, which you can find under Publications.
**This is one of the reasons why some of the most interesting debates about knowledge production today concern ownership, copyright, or legal access. I do not have time to enter into these debates in this post; for a relatively recent take, see here.
(This is a companion/’explainer’ piece to my article, ‘Knowing Neoliberalism’, published in July 2019 in Social Epistemology. While it does include a few excerpts from the article, if using it, please cite and refer to the original publication. The very end of this post explains why).
What does it mean to ‘know’ neoliberalism?
What does it mean to know something from within that something? This question formed the starting point of my (recently defended) PhD thesis. ‘Knowing neoliberalism’ summarizes some of its key points. In this sense, the main argument of the article is epistemological — that is, it is concerned with the conditions (and possibilities, and limitations) of (human) knowledge — in particular when produced and mediated through (social) institutions and networks (which, as some of us would argue, is always). More specifically, it is interested in a special case of that knowledge — that is, what happens when we produce knowledge about the conditions of the production of our own knowledge (in this sense, it’s not ‘about universities’ any more than, say, Bourdieu’s work was ‘about universities’ and it’s not ‘on education’ any more than Latour’s was on geology or mining. Sorry to disappoint).
The question itself, of course, is not new – it appears, in various guises, throughout the history of Western philosophy, particularly in the second half of the 20th century with the rise (and institutionalisation) of different forms of theory that earned the epithet ‘critical’ (including the eponymous work of philosophers associated with the Frankfurt School, but also other branches of Marxism, feminism, postcolonial studies, and so on). My own theoretical ‘entry points’ came from a longer engagement with Bourdieu’s work on sociological reflexivity and Boltanski’s work on critique, mediated through Arendt’s analysis of the dichotomy between thinking and acting and De Beauvoir’s ethics of ambiguity; a bit more about that here. However, the critique of neoliberalism that originated in universities in the UK and the US in the last two decades – including intellectual interventions I analysed in the thesis – lends itself as a particularly interesting case to explore this question.
Why study the critique of neoliberalism?
Critique of neoliberalism in academia is an enormously productive genre. The number of books, journal articles, special issues, not to mention ‘grey’ academic literature such as reviews or blogs (in the ‘Anglosphere’ alone), has grown exponentially since the mid-2000s. Originating in anthropological studies of ‘audit culture’, the genre now includes at least one dedicated book series (Palgrave’s ‘Critical University Studies’, which I’ve mentioned in this book review), as well as people dedicated to establishing ‘critical university studies’ as a field of its own (for the avoidance of doubt, I do not associate my work with this strand, and while I find the delineation of academic ‘fields’ interesting as a sociological phenomenon, I have serious doubts about the value and validity of field proliferation — which I’ve shared in many amicable discussions with colleagues in the network). At the start of my research, I referred to this as the paradox of the proliferation of critique and the relative absence of resistance; the article, in part, tries to explain this paradox through an examination of what happens if and when we frame neoliberalism as an object of knowledge — or, in formal terms, an epistemic object.
This genre of critique is, and has been, highly influential: the tropes of the ‘death’ of the university or the ‘assault’ on the academia are regularly reproduced in and through intellectual interventions (both within and outside of the university ‘proper’), including far beyond academic neoliberalism’s ‘native’ context (Australia, UK, US, New Zealand). Authors who present this kind of critique, while most frequently coming from (or being employed at) Anglophone universities in the ‘Global North’, are often invited to speak to audiences in the ‘Global South’. Some of this, obviously, has to do with the lasting influence of colonial networks and hierarchies of ‘global’ knowledge production, and, in particular, with the durability of ‘White’ theory. But it illustrates the broader point that the production of critique needs to be studied from the same perspective as the production of any sort of knowledge – rather than as, somehow, exempt from it. My work takes Boltanski’s critique of ‘critical sociology’ as a starting point, but extends it towards a different epistemic position:
Boltanski primarily took issue with what he believed was the unjustified reduction of critical properties of ‘lay actors’ in Bourdieu’s critical sociology. However, I start from the assumption that professional producers of knowledge are not immune to the epistemic biases to which they suspect their research subjects to be susceptible…what happens when we take forms and techniques of sociological knowledge – including those we label ‘critical’ and ‘reflexive’ – to be part and parcel of, rather than opposed to or in any way separate from, the same social factors that we assume are shaping epistemic dispositions of our research subjects? In this sense, recognising that forms of knowledge produced in and through academic structures, even if and when they address issues of exploitation and social (in)justice, are not necessarily devoid of power relations and epistemic biases, seems a necessary step in situating epistemology in present-day debates about neoliberalism. (KN, p. 4)
This, at the same time, is what most of the sources I analysed in my thesis have in common: by and large, they locate sources of power – including neoliberal power – always outside of their own scope of influence. As I’ve pointed out in my earlier work, this means ‘universities’ – which, in practice, often means ‘us’, academics – are almost always portrayed as being on the receiving end of these changes. Not only is this profoundly unsociological – literally every single take on human agency in the past 50-odd years, from Foucault through to Latour and from Giddens through to Archer, recognizes that ‘we’ (including as epistemic agents) have some degree of influence over what happens; it is also profoundly unpolitical, as it outsources agency to variously conceived ‘others’ (as I’ve argued here) while avoiding the tricky elements of our own participation in the process. This is not to repeat the tired dichotomy of complicity vs. resistance, which is another not particularly innovative reading of the problem. What the article asks, instead, is: What kind of ‘purpose’ does the systematic avoidance of questions of ambiguity and ambivalence serve?
What does it aim to achieve?
The objective of the article is not, by the way, to say that the existing forms of critique (including other contributions to the special issue) are ‘bad’ or that they can somehow be ‘improved’. Least of all is it to say that if we just ‘corrected’ our theoretical (epistemological, conceptual) lens we would finally be able to ‘defeat neoliberalism’. The article, in fact, argues the very opposite: that as long as we assume that ‘knowing’ neoliberalism will somehow translate into ‘doing away’ with neoliberalism we remain committed to the (epistemologically and sociologically very limited) assumption that knowledge automatically translates into action.
(…) [the] politically soothing, yet epistemically limited assumption that knowledge automatically translates into action…not only omit(s) to engage with precisely the political, economic, and social elements of the production of knowledge elaborated above, [but] eschews questions of ambiguity and ambivalence generated by these contradictions…examples such as doctors who smoke, environmentalists who fly around the world, and critics of academic capitalism who nonetheless participate in the ‘academic rat race’ (Berliner 2016) remind us that knowledge of the negative effects of specific forms of behaviour is not sufficient to make them go away (KN, p. 10)
(If it did, there would be no critics of neoliberalism who exploit their junior colleagues, critics of sexism who nonetheless reproduce gendered stereotypes and dichotomies, or critics of academic hierarchy who evaluate other people on the basis of their future ‘networking’ potential. And yet, here we are).
What is it about?
The article approaches ‘neoliberalism’ from several angles:
Ontological: What is neoliberalism? It is quite common to see neoliberalism as an epistemic project. Yet, does the fact that neoliberalism changes the nature of the production of knowledge and even what counts as knowledge – and, eventually, becomes itself a subject of knowledge – give us grounds to infer that the way to ‘deal’ with neoliberalism is to frame it as an object (of knowledge)? Is the way to ‘destroy’ neoliberalism to ‘know it’ better? Does treating neoliberalism as an ideology – that is, as something that masses can be ‘enlightened’ about – translate into the possibility to wield political power against it?
(Plot spoiler: my answer to the above questions is no).
Epistemological: What does this mean for ways we can go about knowing neoliberalism (or, for that matter, any element of ‘the social’)? My work, which is predominantly in social theory and sociology of knowledge (no, I don’t work ‘on education’ and my research is not ‘about universities’), in many ways overlaps substantially with social epistemology – the study of the way social factors (regardless of how we conceive of them) shape the capacity to make knowledge claims. In this context, I am particularly interested in how they influence reflexivity, as the capacity to make knowledge claims about our own knowledge – including knowledge of ‘the social’. Enter neoliberalism.
What kind of epistemic position are we occupying when we produce an account of the neoliberal conditions of knowledge production in academia? Is one acting more like the ‘epistemic exemplar’ (Cruickshank 2010) of a ‘sociologist’, or a ‘lay subject’ engaged in practice? What does this tell us about the way in which we are able to conceive of the conditions of the production of our own knowledge about those conditions? (KN, p. 4)
(Yes, I know this is a bit ‘meta’, but that’s how I like it).
Sociological: How do specific conditions of our own production of knowledge about neoliberalism influence this? As a sociologist of knowledge, I am particularly interested in relations of power and privilege reproduced through institutions of knowledge production. As my work on the ‘moral economy’ of Open Access with Chris Muellerleile argued, the production of any type of knowledge cannot be analysed as external to its conditions, including when the knowledge aims to be about those conditions.
‘Knowing neoliberalism’ extends this line of argument by claiming we need to engage seriously with the political economy of critique. It points to some of the places we could look for such clues: for instance, the political economy of publishing. The same goes for networks of power and privilege: whose knowledge is seen as ‘translatable’ and ‘citeable’, and whose can be treated as a mere empirical illustration:
Neoliberalism offers an overarching diagnostic that can be applied to a variety of geographical and political contexts, on different scales. Whose knowledge is seen as central and ‘translatable’ in these networks is not independent from inequalities rooted in colonial exploitation, maintaining a ‘knowledge hierarchy’ between the Global North and the Global South…these forms of interaction reproduce what Connell (2007, 2014) has dubbed ‘metropolitan science’: sites and knowledge producers in the ‘periphery’ are framed as sources of ‘empirical’, ‘embodied’, and ‘lived’ resistance, while the production of theory, by and large, remains the work of intellectuals (still predominantly White and male) situated in prestigious universities in the UK and the US. (KN, p. 9)
This, incidentally, is the only part of the article that deals with ‘higher education’. It is very short.
Political: What does this mean for different sorts of political agency (and actorhood) that can (and do) take place in neoliberalism? What happens when we assume that (more) knowledge leads to (more) action? (apart from a slew of often well-intended but misconceived policies, some of which I’ve analysed in my book, ‘From Class to Identity’). The article argues that effecting a cognitive slippage between the two parts of Marx’s Eleventh Thesis – that is, assuming that interpreting the world will itself lead to changing it – is what contributes to the ‘paradox’ of the overproduction of critique. In other words, we become more and more invested in ‘knowing’ neoliberalism – e.g. producing books and articles – and less invested in doing something about it. This, obviously, is neither a zero-sum game (and it shouldn’t be) nor an old-fashioned call on academics to drop laptops and start mounting barricades; rather, it is a reminder that acting as if there were an automatic link between knowledge of neoliberalism and resistance to neoliberalism tends to leave the latter in its place.
(Actually, maybe it is a call to start mounting barricades, just in case).
Moral: Is there an ethically correct or more just way of ‘knowing’ neoliberalism? Does answering these questions enable us to generate better knowledge? My work – especially the part that engages with the pragmatic sociology of critique – is particularly interested in the moral framing and justification of specific types of knowledge claims. Rather than aiming to provide the ‘true’ way forward, the article asks what kind of ideas of ‘good’ and ‘just’ are invoked/assumed through critique? What kind of moral stance does ‘gnossification’ entail? To steal the title of this conference, when does explaining become ‘explaining away’ – and, in particular, what is the relationship between ‘knowing’ something and framing our own moral responsibility in relation to something?
The full answer to the last question, unfortunately, will take more than one publication. The partial answer the article hints at is that, while having a ‘correct’ way of ‘knowing’ neoliberalism will not ‘do away’ with neoliberalism, we can and should invest in more just and ethical ways of ‘knowing’ altogether. It shouldn’t need repeating that the evidence of widespread sexual harassment in academia, not to mention deeply entrenched casual sexism, racism, ableism, ethnocentrism, and xenophobia, all suggests ‘we’ (as academics) are not as morally impeccable as we like to think we are. Thing is, no-one is. The article hopes to have made a small contribution towards giving us the tools to understand why, and how, this is the case.
I hope you enjoy the article!
P.S. One of the rather straightforward implications of the article is that we need to come to terms with multiple reasons for why we do the work we do. Correspondingly, I thought I’d share a few that inspired me to do this ‘companion’ post. When I first started writing/blogging/Tweeting about the ‘paradox’ of neoliberalism and critique in 2015, this line of inquiry wasn’t very popular: most accounts smoothly reproduced the ‘evil neoliberalism vs. poor us little academics’ narrative. This has also been the case with most people I’ve met in workshops, conferences, and other contexts I have participated in (I went to quite a few as part of my fieldwork).
In the past few years, however, more analyses seem to converge with mine on quite a few analytical and theoretical points. My initial surprise at the fact that their authors seemed not to engage directly with any of these arguments — in fact, were occasionally very happy to recite them back at me, without acknowledgement, attribution or citation — was somewhat clarified through reading the work on gendered citation practices. At the same time, it provided a very handy illustration of exactly the type of paradox described here: namely, while most academics are quick to decry the precarity and ‘awful’ culture of exploitation in academia, almost as many are equally quick to ‘cite up’ or act strategically in ways that reproduce precisely these inequalities.
The other ‘handy’ way of appropriating the work of other people is to reduce the scope of their arguments, ideally representing them as an empirical illustration with limited purchase in a specific domain (‘higher education’, ‘gender’, ‘religion’), while hijacking the broader theoretical point for yourself (I have heard a number of other people — most often, obviously, women and people of colour — describe a very similar thing happening to them).
This post is thus a way of clarifying exactly what the argument of the article is, in, I hope, language that is simple enough even if you’re not keen on social ontology, social epistemology, social theory, or, actually, anything social (couldn’t blame you).
P.P.S. In the meantime, I’ve also started writing an article on how precisely these forms of ‘epistemic positioning’ are used to limit and constrain the knowledge claims of ‘others’ (women, minorities, etc.) in academia: if you have any examples you would like to share, I’m keen to hear them!
Hardly anyone needs convincing that the university today is in deep crisis. Critics warn that the idea of the University (at least in the form in which it emerged from Western modernity) is endangered, under attack, under fire; that governments or corporations are waging a war against it. Some even pronounce the public university already dead, or at least lying in ruins. The narrative about the causes of the crisis is well known: the shift in public policy towards deregulation and the introduction of market principles – usually known as neoliberalism – meant the decline of public investment, especially for the social sciences and humanities, the introduction of performance-based funding dependent on quantifiable output, and, of course, tuition fees. This, in turn, led to rising precarity and insecurity among faculty and students, reflected, among other things, in a mental health crisis. Paradoxically, the only surviving element of the public university that seems to be doing relatively well in all this is critique. But what if the crisis of the university is, in fact, a crisis of imagination?
Don’t worry, this is not one of those posts that try to convince you that capitalism can be wished away by the power of positive thinking. Nor is it going to claim that neoliberalism offers unprecedented opportunities, if only we would be ‘creative’ enough to seize them. The crisis is real, it is felt viscerally by almost everyone in higher education, and – importantly – it is neither exceptional nor unique to universities. Exactly because it cannot be wished away, and exactly because it is deeply intertwined with the structures of the current crisis of capitalism, opposition to the current transformation of universities would need to involve serious thinking about long-term alternatives to current modes of knowledge production. Unfortunately, this is precisely the bit that tends to be missing from a lot of contemporary critique.
Present-day critique of neoliberalism in higher education often takes the form of nostalgic evocation of the glory days when universities were few, and funds for them plentiful. Other problems with this mythical Golden Age aside, what this sort of critique conveniently omits to mention is that the institutions that usually provide the background imagery for these fantastic constructs were both highly selective and highly exclusionary, and that they were built on the back of centuries of colonial exploitation. If it seemed like they conferred a life of relatively carefree privilege on those who studied and worked in them, that is exactly because this is what they were designed to do: cater to the “life of the mind” by excluding all forms of interference, particularly if they took the form of domestic (or any other material) labour, women, or minorities. This tendency is reproduced in Ivory Tower nostalgia as a defensive strategy: the dominant response to what critics tend to claim is the biggest challenge to universities since their founding (which, as they like to remind us, was a long, long time ago) is to stick their heads in the sand and collectively dream back to the time when, as Pink Floyd might put it, the grass was greener and the lights were brighter.
Ivory Tower nostalgia, however, is just one aspect of this crisis of imagination. A much broader symptom is that contemporary critique seems unable to imagine a world without the university. Since ideas of online disembedded learning were successfully monopolized by technolibertarian utopians, the best most academics seem to be able to come up with is to re-erect the walls of the institution, but make them slightly more porous. It’s as if the U of University and the U of Utopia were somehow magically merged. To extend the oft-cited and oft-misattributed saying, if it seems easier to imagine the end of the world than the end of capitalism, it is nonetheless easier to imagine the end of capitalism than the end of universities.
Why does an institution like the university have such a purchase on (utopian and dystopian) imagination? Thinking about universities is, in most cases, already imbued by the university, so one element pertains to the difficulty of perceiving the conditions of reproduction of one’s own position (this mode of access from the outside, as object-oriented ontologists would put it, or complex externality, as Boltanski does, is something I’m particularly interested in). However, this isn’t the case just with academic critique; fictional accounts of universities and other educational institutions are proliferating, and, in most cases (as I hope to show once I finally get around to writing the book on magical realism and universities), they reproduce the assumption of the value of the institution as such, as well as a lot of associated ideas, as this tweet conveys succinctly:
This is, unfortunately, often the case even with projects whose explicit aim is to subvert existing inequalities in the context of knowledge production, including open, free, and workers’ universities (the Social Science Centre in Lincoln maintains a useful map of these initiatives globally). While these are fantastic initiatives, most either have to ‘piggyback’ on university labour – that is, on the free or voluntary labour of people employed or otherwise paid by universities – or, at least, rely on existing universities for credentialisation. Again, this isn’t to devalue those who invest time, effort, and emotions into such forms of education; rather, it is to flag that thinking about serious, long-term alternatives is necessary, and quickly, at that. This is a theme I spend a lot of time thinking about, and one I hope to make central to my work in the future.
So what are we to do?
There’s an obvious bit of irony in suggesting a panel for a conference in order to discuss how the system is broken, but, in the absence of other forms, I am thinking of putting together a proposal for a workshop at the Sociological Review’s 2018 “Undisciplining: Conversations from the edges” conference. The good news is that the format is supposed to go outside the ‘orthodox’ confines of panels and presentations, which means we could do something potentially exciting. The tentative title is ‘Thinking about (sustainable?) alternatives to academic knowledge production’.
I’m particularly interested in questions such as:
Qualifications and credentials: can we imagine a society where universities do not hold a monopoly on credentials? What would this look like?
Knowledge work: can we conceive of knowledge production (teaching and research) not only ‘outside of’, but without the university? What would this look like?
Financing: what other modes of funding for knowledge production are conceivable? Is there a form of public funding that does not involve universities (e.g., through an academic workers’ cooperative – Mondragon University in Spain is one example – or guild)? What would be the implications of this, and how would it be regulated?
Built environment/space: can we think of knowledge not confined to specific buildings or an institution? What would this look like – how would it be organised? What would be the consequences for learning, teaching and research?
The format would need to be interactive – possibly a blend of on/off-line conversations – and can address the above, or any of the other questions related to thinking about alternatives to current modes of knowledge production.
If you’d like to participate/contribute/discuss ideas, get in touch by the end of October (the conference deadline is 27 November).
[UPDATE: Our panel got accepted! See you at Undisciplining conference, 18-21 June, Newcastle, UK. Watch this space for more news].
A woman needs a fridge of her own if she is to write theory. In fact, I’d wager a woman needs a fridge of her own if she is to write pretty much anything, but since what I am writing at the moment is (mostly) theory, let’s assume that it can serve as a metaphor for intellectual labour more broadly.
In her famous injunction to undergraduates at Girton College in Cambridge (the first residential college for women that offered education to degree level) Virginia Woolf stated that a woman needed two things in order to write: a room of her own, and a small independent income (Woolf settled on 500 pounds a year; as this website helpfully informed me, this would be £29,593 in today’s terms). In addition to the room and the income, a woman who wants to write, I want to argue, also needs a fridge. Not a shelf or two in a fridge in a kitchen in a shared house or at the end of the staircase; a proper fridge of her own. Let me explain.
The immateriality of intellect
Woolf’s broader point in A Room of One’s Own is that intellectual freedom and creativity require the absence of material constraints. In and of itself, this argument is not particularly exceptional: attempts to define the nature of intellectual labour have almost unfailingly centred on its rootedness in leisure – skholē – as the opportunity for peaceful contemplation, away from the vagaries of everyday existence. For the ancient Greeks, contemplation was opposed to the political (as in the everyday life of the polis): what we today think of as the ‘private’ was not even a candidate, being the domain of women and slaves, neither of whom were considered proper citizens. For Marx, it was the opposite of material labour, with its sweat, noise, and capitalist exploitation. But underpinning it all was the private sphere – that amorphous construct that, as feminist scholars have pointed out, includes the domestic and affective labour of care, cleaning, cooking, and, yes, the very act of biological reproduction. The capacity to distance oneself from these kinds of concerns thus became the sine qua non of scholarly reflection, particularly in the case of theōria, held to be contemplation in its pure(st) form. After all, to paraphrase Kant, it is difficult to ponder the sublime from too close.
This thread runs from Plato and Aristotle through Marx to Arendt, who made it the gist of her analysis of the distinction between vita activa and vita contemplativa; and onwards to Bourdieu, who zeroed in on the ‘scholastic reason’ (raison scolastique) as the source of Homo Academicus’ disposition to project the categories of scholarship – skholē – onto everyday life. I am particularly interested in the social framing of this distinction, given that I think it underpins a lot of contemporary discussions on the role of universities. But regardless of whether we treat it as virtue, a methodological caveat, or an interesting research problem, detachment from the material persists as the distinctive marker of the academic enterprise.
What about today?
So I think we can benefit from thinking about what would be the best way to achieve this absolution from the material for women who are trying to write today. One solution, obviously, would be to outsource the cooking and cleaning to a centralised service – like, for instance, College halls and cafeterias. This way, one would have all the time to write: away with the vile fridge! (It was anyway rather unseemly, poised as it was in the middle of one’s room). Yet, outsourcing domestic labour means we are potentially depriving other people of the opportunity to develop their own modes of contemplation. If we take into account that the majority of global domestic labour is performed by women, perfecting our scholarship would most likely be off the back of another Shakespeare’s (or, for consistency’s sake, let’s say Marx’s) sister. So, let’s keep the fridge, at least for the time being.
But wait, you will say, what about eating out – in restaurants and such? It’s fine you want to do away with outsourced domestic labour, but surely you wouldn’t scrap the entire catering industry! After all, it’s a booming sector of the economy (and we all know economic growth is good), and it employs so many people (often precariously and in not very nice conditions, but we are prone to ignore that during happy hour). Also, to be honest, it’s so nice to have food prepared by other people. After all, isn’t that what Simone de Beauvoir did, sitting, drinking and smoking (and presumably also eating) in cafés all day? This doesn’t necessarily mean we would need to do away with the fridge, but a shelf in a shared one would suffice – just enough to keep a bit of milk, some butter and eggs, fruit, perhaps even a bottle of rosé? Here, however, we face the economic reality of the present. Let’s do a short calculation.
£500 a year gets you very far…or not
The £29,593 Woolf proposes as sufficient independent income comes from an inheritance. Those of us who are less fortunate and are entering the field of theory today can hope to obtain one of many scholarships. Mine is currently £13,900 a year (no tax); ESRC-funded students get a bit more, £14,000. This means we fall well short of today’s equivalent of the £500 a year Woolf suggested to the students at Girton. Starting from £14,000, and assuming that roughly £2,000 a year goes on things such as clothes, books, cosmetics, and ‘incidentals’ – for instance, travel to see one’s family, or medical costs (non-EU students are subject to something called the Immigration Health Surcharge, paid upfront at the point of application for a student visa, which varies between £150 and £200 per year, but doesn’t cover dental treatment, prescriptions, or eye tests – so much for “NHS tourism”) – this leaves us with roughly £1,000 per month. Out of this, accommodation costs anything between 400 and 700 pounds, depending on bills, council tax, etc. – for a “room of one’s own”, that is, a room in a shared house or college accommodation – that, you guessed it, almost inevitably comes with a shared fridge.
So the money that’s left is supposed to cover eating in cafés, perhaps even an occasional glass of wine (it’s important to socialise with other writers, or just watch the world go by). Assuming we have £450 a month after paying rent and bills, this leaves us with a bit less than 15 pounds per day. This suffices for about one and a half meals daily in most cheap high street eateries, provided you do not eat a lot, do not drink, and never have tea or coffee. Even at colleges, where food is subsidised, this would be barely enough. Remember: this means you never go out for a drink with friends or to the cinema, never buy presents, never pay for services; in short, it makes for a relatively boring and constrained life. This could make writing, unless you’re Emily Dickinson, somewhat difficult. Luckily, you have the Internet – that is, if it’s included in your bills. And you pray your computer does not break down.
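For readers who want to check the sums, the budget arithmetic above can be sketched in a few lines. This is a minimal sketch: the figures are the post’s own illustrative assumptions (a £14,000 stipend, roughly £2,000 of annual incidentals, and a mid-range £550 for rent and bills), not official rates.

```python
# Rough sketch of the stipend arithmetic above.
# All figures are the post's illustrative assumptions, not official rates.

annual_stipend = 14_000       # ESRC-level stipend, tax-free (GBP)
annual_incidentals = 2_000    # clothes, books, travel, medical costs, etc.

monthly_budget = (annual_stipend - annual_incidentals) / 12
print(f"Left per month: £{monthly_budget:.0f}")    # ≈ £1,000

monthly_rent_and_bills = 550  # mid-range of the £400–£700 quoted above
daily_food_budget = (monthly_budget - monthly_rent_and_bills) / 30
print(f"Left per day: £{daily_food_budget:.2f}")   # ≈ £15
```

Even taking the cheaper end of the rent range only pushes the daily figure to about £20, which makes the same point: eating out on this budget is a stretch.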
Well, you can always work, you say. If the money you’re given is not enough to provide the sort of lifestyle you want, go earn more! But there’s a catch. If you are in full-time education, you are only allowed to work part-time. If you are a foreign national, there are additional constraints. This means the amount of money you can earn is usually quite limited. And there are tradeoffs. You know all those part-time jobs that pay a lot, offer stability and future career progression, and that everyone is flocking towards? I don’t either. If you ever wondered where the seemingly inexhaustible supply of cheap labour at universities – sessional lecturers, administrative assistants, event managers, servers, etc. – comes from, look around you: more likely than not, it’s hungry graduate students.
The poverty of student life
Increasingly, this is not in the Steve Jobs “stay hungry” sense. As I’ve argued recently, “staying hungry” has quite a different tone when, instead of a temporary excursion into relative deprivation (seen as part of the ‘character building’ education is supposed to be about), it reflects the very real threat of struggling to make ends meet well after graduation. Given the state of the economy and graduate debt, that is a threat faced by a growing proportion of young people (and, no surprise, women are much more likely to end up in precarious employment). Of course, you could always argue that many people have it much worse: you are (relatively) young, well educated, and likely have more cultural and social capital than the average person. Sure, you can get by. But remember – this isn’t about making it from one day to the next. What you’re trying to do is write. Contemplate. Comprehend the beauty (and, sometimes, ugliness) of the world in its entirety. Not wonder whether you’ll be able to afford the electricity bill.
This is why a woman needs to have her own fridge. If you want access to healthy, cheap food, you need to be able to buy it in greater quantities, so you don’t have to go to the supermarket every other day, and store it at home, so you can prepare it quickly and conveniently, as well as plan ahead. For the record, by healthy I do not mean quinoa waffles, duck eggs, and shiitake mushrooms (not that there’s anything wrong with any of these, though I’ve never tried duck eggs). I mean the sort of food that keeps you full whilst not racking up your medical expenses further down the line. For this you need a fridge. Not half a vegetable drawer among opened cans of lager that some bro you happen to share a house with forgot to throw away months ago, but an actual fridge. Of your own. It doesn’t matter if it comes with a full kitchen – you can always share a stove, wait your turn for the microwave, and cooking (and eating) together can be a very pleasurable way of spending time. But keep your fridge.
But, you will protest, what about women who live with partners? Surely we want to share fridges with our loved ones! Well, good for you; go ahead. But you may want to make sure that it’s not always you remembering to buy the milk, not always you supplying fresh fruit and vegetables, not always you throwing away the food whose use-by date has long expired. That it doesn’t mean you pay half the household bills but still do more than half the work. For, whether we like it or not, research shows that in heterosexual partnerships women still perform the greater portion of domestic labour, not to mention the mental load of designing, organising, and dividing tasks. And yes, this impacts your ability to write. It’s damn difficult to follow a line of thought if you need to stop five times in order to take the laundry out, empty the bins, close the windows because it just started raining, pick up the mail that came through the door, and add tea to the shopping list – and that’s not even mentioning what happens if you have children on top of all this.
So no, a fridge cannot – and will not – solve the problem of gender inequality in academia, let alone gender inequality at a more general level (after all, academics are very, very privileged). What it can do, though, is rebalance the score, in the sense of reminding us that cooking, cleaning, and cutting up food are elements of life as much as citing, cross-referencing, and critique. It can begin to destroy, once and for all, the gendered (and classed) assumption that contemplation happens above and beyond the material, and that all reminders of its bodily manifestations – for instance, that we still need to eat whilst thinking – should be, if not abolished entirely, then at least expelled beyond the margins of awareness: to communal kitchens, restaurants, kebab vans, anywhere where they do not disturb the sacred space of the intellect. So keep your income, get a room, and put a fridge in it. Then start writing.
From the opening, Donna Haraway’s recent book reads like a nice hybrid of theoretical conversation and science fiction. Crescendoing in the closing Camille Stories, the outcome of a writing experiment of imagining five future generations, “Staying with the trouble” weaves together – like the cat’s cradle, one of the recurrent metaphors in the book – staple Harawayian themes of the fluidity of boundaries between human and variously defined ‘Others’, metamorphoses of gender, the role of technology in modifying biology, and the related transformation of the biosphere – ‘Gaia’ – in interaction with human species. Eschewing the term ‘Anthropocene’, which she (somewhat predictably) associates with Enlightenment-centric, tool-obsessed rationality, Haraway births ‘Chthulucene’ – which, to be specific, has nothing to do with the famous monster of H.P. Lovecraft’s imagination, instead being named after a species of spider, Pimoa Cthulhu, native to Haraway’s corner of Western California.
This attempt to avoid dealing with human(-made) Others – like Lovecraft’s “misogynist racial-nightmare monster” – is the key unresolved issue in the book. While the tone is rightfully respectful – even celebratory – of nonhuman critters, it remains curiously underdefined in relation to human ones. This is evident in the treatment of Eichmann and the problem of evil. Following Arendt, Haraway sees Eichmann’s refusal to think about the consequences of his actions as the epitome of the banality of evil – the same kind of unthinking that leads to the existing ecological crisis. That more thinking appears a natural antidote and a solution to the long-term destruction of the biosphere is only logical (if slightly self-serving) from the standpoint of developing a critical theory whose aim is to save the world from ultimate extinction. The question, however, is what to do if thoughts and stories are not enough?
The problem with a political philosophy founded on belief in the power of discourse is that it remains dogmatically committed to the idea that if one can only change the story, one can change the world. The power of stories as “worlding” practices fundamentally rests on the assumption that joint stories can be developed with Others, or, alternatively, that the Earth is big enough to accommodate those with whom no such thing is possible. This leads Haraway to present a vision of a post-apocalyptic future Earth whose population has been decimated to levels that allow human groups to exist at a sufficient distance from one another. What this doesn’t take into account is that differently defined Others may have different stories, some of which may be fundamentally incompatible with ours – as recently reflected in debates over ‘alternative facts’ or ‘post-truth’, but present in different versions of science and culture wars, not to mention actual violent conflicts. In this sense, there is no suggestion of sympoiesis with the Eichmanns of this world; the question of how to go about dealing with human Others – especially if they are, in Kristeva’s terms, profoundly abject – is the kind of trouble “Staying with the trouble” is quite content to stay out of.
Sympoiesis seems reserved for non-humans, who seem happy to go along with human attempts to ‘become-with’ them. But it seems easier when ‘Others’ do not, technically speaking, have a voice: whether we like it or not, few non-human critters have efficient means of communicating their preferences in terms of political organisation, speaking order at seminars, or participation in elections. The critical practice of com-menting, to which Haraway attributes much of the writing in the book, is possible only to the extent that the Other has equal means and capacities to contribute to the discussion. As in the figure of the Speaker for the Dead, the Other is always spoken for, the tragedy of its extinction obscuring the potential conflict or irreconcilability between species.
The idea of a com-pliant Other can, of course, be seen as an integral element of the mythopoetic scaffolding of West Coast academia, where the idea of fluidity of lifestyle choices probably has near-orthodox status. It’s difficult not to read parts of the book, such as the following passage, as not-too-fictional accounts of lived experiences of the Californian intellectual elite (including Haraway herself):
“In the infectious new settlements, every new child must have at least three parents, who may or may not practice new or old genders. Corporeal differences, along with their fraught histories, are cherished. Throughout life, the human person may adopt further bodily modifications for pleasure and aesthetics or for work, as long as the modifications tend to both symbionts’ well-being in the humus of sympoiesis” (pp. 133–5)
The problem with this type of theorizing is not so much that it universalises a concept of humanity that resembles an extended Comic-Con with militant recycling; reducing ideas to their political-cultural-economic background is not a particularly innovative critical move. It is that it fails to account for the challenges and dangers posed by the friction of multiple human lives in constrained spaces, and the ways in which personal histories and trajectories interact with the configurations of place, class, and ownership, in ways that can lead to tragedies like the Grenfell tower fire in London.
In other words, what “Staying with the trouble” lacks is a more profound sense of political economy, and of the ways in which social relations influence how different organisms interact with their environment – including how they compete for its scarce resources, often to the point of mutual extinction. Even if the absolution of human woes by merging one’s DNA with that of fellow creatures works well as an SF metaphor, as a tool for critical analysis it tends to avoid the (often literally) rough edges of their bodies. It is not uncommon even for human bodies to reject human organs; more importantly, the political history of humankind is, to a great degree, the story of various groups of humans excluding other humans from the category of humans (colonized ‘Others’, slaves), citizens (women, foreigners), or persons with full economic and political rights (immigrants, and again women). This theme is evident in the contemporary treatment of refugees, but it is also preserved in the apparently more stable boundaries between human groups in the Camille Stories. In this context, the transplantation of insect parts to acquire consciousness of what it means to inhabit the body of another species has more of a whiff of transhumanist enhancement than of an attempt to confront head-on (antennae-first?) the manifold problems related to human coexistence on a rapidly warming planet.
At the end of the day, solutions to climate change may be less glamorous than the fantasy of escaping global warming by taking a dip in the primordial soup. In other words, they may require some good ol’ politics, which fundamentally means learning to deal with Others even if they are not as friendly as those in Haraway’s story; even if, as the Eichmanns and Trumps of this world seem to suggest, their stories may have nothing to do with ours. In this sense, it is the old question of living with human Others, including abject ones, that we may have to engage with in the AnthropoCapitaloCthulucene: the monsters that we created, and the monsters that are us.
Jana Bacevic is a PhD candidate at the Department of Sociology at the University of Cambridge, and has a PhD in social anthropology from the University of Belgrade. Her interests lie at the intersection of social theory, sociology of knowledge, and political sociology; her current work deals with the theory and practice of critique in the transformation of higher education and research in the UK.
To a degree, this revival happens on the back of the challenges posed to the status of theory by the rise of data science, leading Lizardo and Hay to engage in defense of the value and contributions of theory to sociology and international relations, respectively. In broader terms, however, it addresses the question of the status of social sciences – and, by extension, academic knowledge – more generally; and, as such, it brings us back to the justification of expertise, a question of particular relevance in the current political context.
The meaning of theory
Sure enough, theory has many meanings (Abend, 2008), and consequently many forms in which it is practiced. However, one characteristic that seems shared across the board is that it is part of (under)graduate training, after which it gets bracketed off in the form of “the theory chapter” of dissertations/theses. In this sense, theory is framed as foundational to socialization into a particular discipline but, at the same time, rarely revisited – at least not explicitly – after the initial demonstration of aptitude. In other words, rather than something one keeps doing, theory becomes something that is ‘done with’. The exception, of course, are those who decide to make theory the centre of their intellectual pursuits; however, “doing theory” in this sense all too often becomes limited to the exegesis of existing texts (what Krause refers to as ‘theory a’ and Abend as ‘theory 4’), which leads to competition among theorists for the best interpretation of “what theorist x really wanted to say”, or, alternatively, to the application of existing concepts to new observations or ‘problems’ (‘theory b and c’, in Krause’s terms). Either way, the field of social theory resembles less the groves of Plato’s Academy than a zoo in which different species (‘Marxists’, ‘critical realists’, ‘Bourdieusians’, ‘rational-choice theorists’) dwell in their respective enclosures or fight with members of the same species for dominance of a circumscribed domain.
This summer school started from the ambition to change that: to go beyond rivalries or allegiances to specific schools of thought, and think about what doing theory really means. I often told people that wanting to do social theory was a major reason why I decided to do a second PhD; but what was this about? I did not say ‘learn more’ about social theory (my previous education provided a good foundation), ‘teach’ social theory (though supervising students at Cambridge is really good practice for this), read, or even write social theory (though, obviously, this was going to be a major component). While all of these are essential elements of becoming a theorist, the practice of social theory certainly isn’t reducible to them. Here are some of the other aspects I think we need to bear in mind when we discuss the return, importance, or practice of theory.
Theory is performance
This may appear self-evident once the focus shifts to ‘doing’, but we rarely talk about what practicing theory is meant to convey – that is, about theorising as a performative act. Some elements of this are not difficult to establish: doing theory usually means identification with a specific group, or form of professional or disciplinary association. Most professional societies have committees, groups, and specific conference sessions devoted to theory – but that does not mean theory is practiced exclusively within them. In addition to belonging, theory also signifies status. In many disciplines, theoretical work has for years been held in high esteem; the flipside, of course, is that ‘theoretical’ is often taken to mean too abstract or divorced from everyday life, something that became a more pressing problem with the decline of funding for the social sciences and the concomitant expectation to make them socially relevant. While the status of theory is a longer (and separate) topic, one that has been discussed at length in the history of sociology and other social sciences, it bears repeating that asserting one’s work as theoretical is always a form of positioning: it serves to define the standing of both the speaker and (sometimes implicitly) other contributors. This brings to mind that…
Theory is power
Not everyone gets to be treated as a theorist: it is also a question of recognition, and thus, a question of political (and other) forms of power. ‘Theoretical’ discussions are usually held between men (mostly, though not exclusively, white men); interventions from women, people of colour, and persons outside centres of epistemic power are often interpreted as empirical illustrations, or, at best, contributions to ‘feminist’ or ‘race’ theory*. Raewyn Connell wrote about this in Southern Theory, and initiatives such as Why is my curriculum white? and Decolonizing curriculum in theory and practice have brought it to the forefront of university struggles, but it speaks to the larger point made by Spivak: that the majority of mainstream theory treats the ‘subaltern’ merely as an empirical or ethnographic illustration of the theories developed in the metropolis.
The problem here is not only (or primarily) that of representation, in the sense in which theory thus generated fails to accurately depict the full scope of social reality, or experiences and ideas of different people who participate in it. The problem is in a fundamentally extractive approach to people and their problems: they exist primarily, if not exclusively, in order to be explained. This leads me to the next point, which is that…
Theory is predictive
A good illustration for this is offered by pundits’ and political commentators’ surprise at events in the last year: the outcome of the Brexit referendum (Leave!), the US elections (Donald Trump!), and, last but not least, the UK General Election (a surge in votes for Corbyn!). Despite differences in how these events are interpreted, most accounts convey that, as one pundit recently confessed, nobody has a clue about what is going on. Does this mean the rule of experts really is over, and, with it, the need for general theories that explain human action? Two things are worth taking into account.
To begin with, social-scientific theories enter the public sphere in a form that’s not only simplified, but also distilled into ‘soundbites’ or clickbait adapted to the presumed needs and preferences of the audience, usually omitting all the methodological or technical caveats they normally come with. For instance, the results of opinion polls or surveys are taken to present clear predictions, rather than reflections of general statistical tendencies; reliability is rarely discussed. Nor are social scientists always innocent victims of this media spin: some actively work on increasing their visibility or impact, and thus – perhaps unwittingly – contribute to the sensationalisation of social-scientific discourse. Second, and this can’t be put delicately, some of these theories are just not very good. ‘Nudgery’ and ‘wonkery’ often rest on not particularly sophisticated models of human behaviour; which is not to say that they do not work – they can – but rather that the theoretical assumptions underlying these models are rarely accessible to scrutiny.
Of course, it doesn’t take a lot of imagination to figure out why this is the case: it is easier to believe that selling vegetables in attractive packaging can solve the problem of obesity than to invest in long-term policy planning and research on decision-making that has consequences for public health. It is also easier to believe that removing caps on tuition fees will result in universities charging fees distributed normally from lowest to highest, than to bother reading theories of organizational behaviour in different economic and political environments and try to understand how this maps onto the social structure and demographics of a rapidly changing society. In other words: theories are used to inform or predict human behaviour, but often in ways that reinforce existing divisions of power. So, just in case you didn’t see this coming…
Theory is political
All social theories are about constraints, including those that are self-imposed. From Marx to Freud and from Durkheim to Weber (and many non-white, non-male theorists who never made it into ‘the canon’), theories are about what humans can and cannot do; they are about how relatively durable relations (structures) limit and enable how people act (agency). Politics is, fundamentally, about the same thing: things we can and things we cannot change. We may denounce Bismarck’s definition of politics as the art of the possible as insufficiently progressive, but – at the risk of sounding obvious – understanding how (and why) things stay the same is fundamental to understanding how to go about changing them. The history of social theory, among other things, can be read as a story about shifting the boundaries of what was considered fixed and immutable, on the one hand, and constructed – and thus subject to change – on the other.
In this sense, all social theory is fundamentally political. This isn’t to license bickering over different historical materialisms, or to stimulate fantasies – so dear to intellectuals – of ‘speaking truth to power’. Nor should theories be understood as weapons in the ‘war of time’, despite Debord’s poetic formulation: this is but the flipside of intellectuals’ dream of domination, in which their thoughts (i.e. themselves) inspire masses to revolt, usually culminating in their own ascendance to a position of power (thus conveniently cutting out the middleman in ‘speaking truth to power’, as they become the prime bearers of both).
Theory is political in a much simpler sense, in which it is about society and the elements that constitute it. As such, it has to be about understanding what it is that those we think of as society think, want, and do, even – and possibly, especially – when we do not agree with them. Rather than aiming to ‘explain away’ people, or fit their behaviour into pre-defined social models, social theory needs to learn to listen to – to borrow a term from politics – its constituents. This isn’t to argue for a (not particularly innovative) return to grounded theory, or ethnography (despite the fact both are relevant and useful). At the risk of sounding pathetic, perhaps the next step in the development of social theory is to really make it a form of social practice – that is, make it be with the people, rather than about the people. I am not sure what this would entail, or what it would look like; but I am pretty certain it would be a welcome element of building a progressive politics. In this sense, doing social theory could become less of the practice of endlessly revising a blueprint for a social theory zoo, and more of a project of getting out from behind its bars.
*The tendency to interpret women’s interventions as if they are inevitably about ‘feminist theory’ (or, more frequently, as if they always refer to empirical examples) is a trend I have been increasingly noticing since moving into sociology, and definitely want to spend more time studying. This is obviously not to say there aren’t women in the field of social theory, but rather that gender (and race, ethnicity, and age) influence the level of generality at which one’s claims are read, thus reflecting the broader tendency to see universality and Truth as coextensive with the figure of the male and white academic.
It is a testament to the lasting influence of Karl Popper and Richard Rorty that their work continues to provide inspiration for debates concerning the role and purpose of knowledge, democracy, and intellectuals in society. Alternatively, it is a testament to the recurrence of the problem that continues to lurk under the glossy analytical surface or occasional normative consensus of these debates: the impossibility of reconciling the concepts of liberal and epistemic democracy. The essays collected under the title Democratic Problem-Solving (Cruickshank and Sassower 2017) offer grounds for both assumptions, so this is what my review will focus on.
Boundaries of Rational Discussion
Democratic Problem-Solving is a thorough and comprehensive (if at times seemingly meandering) meditation on the implications of Popper’s and Rorty’s ideas for the social nature of knowledge and truth in the contemporary Anglo-American context. This context is characterised by the combined forces of neoliberalism and populism, growing social inequalities, and what has for a while now been dubbed, perhaps euphemistically, the crisis of democracy. Cruickshank’s (in other contexts almost certainly heretical) opening that questions the tenability of distinctions between Popper and Rorty, then, serves to remind us that both were devoted to the purpose of defining the criteria for and setting the boundaries of rational discussion, seen as the road to problem-solving. Jürgen Habermas, whose name also resonates throughout this volume, elevated communicative rationality to the foundational principle of Western democracies, as the unifying/normalizing ground from which to ensure the participation of the greatest number of members in the public sphere.
Intellectuals were, in this view, positioned as guardians—epistemic police, of sorts—of this discursive space. Popper’s take on epistemic ‘policing’ (see DPS, 42) was to use the standards of scientific inquiry as exemplars for maintaining a high level and, more importantly, the neutrality of public debates. Rorty saw it as the minimal instrument that ensured civility without questioning, or at least without implicitly dismissing, others’ cultural premises, or even ontological assumptions. The assumption they and the authors in this volume have in common is that rational dialogue is, indeed, both possible and necessary: possible because standards of rationality are shared across humanity, and necessary because it is the best way to ensure consensus around the basic functioning principles of democracy. This also ensured the pairing of knowledge and politics: by rendering visible the normative (or political) commitments of knowledge claims, sociology of knowledge (as Reed shows) contributed to affirming the link between the epistemic and the political. As Agassi’s syllogism succinctly demonstrates, this link quickly morphed from signifying correlation (knowledge and power are related) to causation (the more knowledge, the more power), suggesting that epistemic democracy was if not a precursor, then certainly a correlate of liberal democracy.
This is why Democratic Problem-Solving cannot avoid running up against the issue of public intellectuals (qua epistemic police), and, obviously, their relationship to ‘Other minds’ (communities being policed). In the current political context, however, to the well-exercised questions Sassower raises such as—
should public intellectuals retain their Socratic gadfly motto and remain on the sidelines, or must they become more organically engaged (Gramsci 2011) in the political affairs of their local communities? Can some academics translate their intellectual capital into a socio-political one? Must they be outrageous or only witty when they do so? Do they see themselves as leaders or rather as critics of the leaders they find around them (149)?
—we might need to add the following: “And what if none of this matters?”
After all, differences in vocabularies of debate matter only if access to the debate depends on their convergence to a minimal common denominator. The problem for the guardians of the public sphere today is not whom to include in these debates and how, but rather what to do when those ‘others’ refuse, metaphorically speaking, to share the same table. Populist right-wing politicians have at their disposal a wealth of ‘alternative’ outlets (Breitbart, Fox News, and increasingly, it seems, even the BBC), not to mention ‘fake news’ or the ubiquitous social media. The public sphere, in this sense, resembles less a (however cacophonous) town hall meeting than a series of disparate village tribunals. Of course, as Fraser (1990) noted, fragmentation of the public sphere has been inherent since its inception within the Western bourgeois liberal order.
The problem, however, is less what happens when other modes of arguing emerge and demand to be recognized, and more what happens when they aspire to a redistribution of political power that threatens to overturn the very principles that gave rise to them in the first place. We are used to these terms denoting progressive politics, but there is little that prevents them from being appropriated for more problematic ideologies: after all, a substantial portion of the current conservative critique of the ‘culture of political correctness’, especially on campuses in the US, rests on the argument that ‘alternative’ political ideologies have been ‘repressed’, sometimes justifying this through appeals to the freedom of speech.
In assuming a relatively benevolent reception of scientific knowledge, then, appeals such as Chis and Cruickshank’s to engage with different publics—whether as academics, intellectuals, workers, or activists—remain faithful to Popper’s normative ideal concerning the relationship between reasoning and decision-making: ‘the people’ would see the truth, if only we were allowed to explain it a bit better. Obviously, in arguing for dialogical, co-produced modes of knowledge, we are disavowing the assumption of a privileged position from which to do so; but, all too often, we let in through the back door the implicit assumption of the normative force of our arguments. It rarely, if ever, occurs to us that those we wish to persuade may have nothing to say to us, may be immune or impervious to our logic, or, worse, that we might not want to argue with them.
For if social studies of science have taught us anything, it is that scientific knowledge is, among other things, a culture. An epistemic democracy of the Rortian type would mean that it’s a culture like any other, and thus not automatically entitled to a privileged status among other epistemic cultures, particularly not if its political correlates are weakened—or missing (cf. Hart 2016). Populist politics certainly has no use for critical slow dialogue, but it is increasingly questionable whether it has use for dialogue at all (at the time of writing, in the period leading up to the 2017 UK General Election, the Prime Minister is refusing to debate the Leader of the Opposition). Sassower’s suggestion that neoliberalism exhibits a penchant for justification may hold a promise, but, as Cruickshank and Chis (among others) show in the case of UK higher education, ‘evidence’ can be adjusted to suit a number of policies, and political actors are all too happy to do that.
Does this mean that we should, as Steve Fuller suggested in another SERRC article, see in ‘post-truth’ the STS symmetry principle? I am skeptical. After all, judgments of validity are the privilege of those who can still exert a degree of control over access to the debate. In this context, I believe that questions of epistemic democracy, such as who has the right to make authoritative knowledge claims, in what context, and how, need to, at least temporarily, come second in relation to questions of liberal democracy. This is not to be teary-eyed about liberal democracy: if anything, my political positions lie closer to Cruickshank and Chis’ anarchism. But it is the only system that can—hopefully—be preserved without a massive cost in human lives, and perhaps repurposed so as to make them more bearable.
In this sense, I wish the essays in the volume confronted head-on questions such as whether we should defend epistemic democracy (and what versions of it) if its principles are mutually exclusive with liberal democracy, or, conversely, whether we would uphold liberal democracy if it threatened to suppress epistemic democracy. For the question of standards of public discourse is going to keep coming up, but it may decreasingly have the character of an academic debate, and increasingly concern the possibility of having one at all. This may turn out to be, so to speak, a problem that precedes all other problems. The essays in this volume have opened up important avenues for thinking about it, and I look forward to seeing them discussed in the future.
Last Friday in April, I was at a conference entitled Universities, neoliberalisation and (in)equality at Goldsmiths, University of London. It was a one-day event featuring presentations and interventions from academics who work on understanding, and criticising, the transformation of working conditions in neoliberal academia. Besides sharing these concerns, attending such events is part of my research: I, in fact, study the critique of neoliberalism in UK higher education.
Why study critique, you may ask? At the present moment, it may appear all the more urgent to study the processes of transformation themselves, especially so that we can figure out what can be done about them. This, however, is precisely the reason: critique is essential to how we understand social processes, in part because it entails a social diagnostic – it tells us what is wrong – and, in part, because it allows us to conceptualise our own agency – what is to be done – about this. However, the link between the two is not necessarily straightforward – it is rarely a matter of first reading some Marx, and then going off to start a revolution. Some would argue that the reading of Marx (what we usually think of as consciousness-raising) is an essential part of the process, but there are many variables that intervene between awareness of the unfairness of certain conditions – say, knowing that part-time, low paid teaching work is exploitative – and actually doing something about those conditions, such as organising an occupation. In addition, as virtually everyone from the Frankfurt School onwards has noted, linking these two aspects is complicated by the context of mass consumerism, mass media, and – I would add – mass education. Still, the assumption of an almost direct (what Archer dubbed a ‘hydraulic’) link between knowledge and action still haunts the concept of critique, both as theory and as practice.
In the opening remarks to the conference, Vik Loveday zeroed in on precisely this, asking: why is it that there seems to be a burgeoning of critique, but very little resistance? And burgeoning it is: despite this being my job, even I have trouble keeping up with the veritable explosion of writing that seeks to analyse, explain, or simply mourn the seemingly inevitable capitulation of universities in the face of neoliberalism. By way of illustration, the Palgrave series in “Critical University Studies” boasts eleven new titles, all published in 2016–7; and this is but one publisher, in English only.
What can explain the relationship between the relative proliferation of critique, and relative paucity of resistance? This question forms the crux of my thesis: less, however, as an invocation for the need to resist, and more as the querying of the relationship between knowledge – especially as forms of critique, including academic critique – and political agency (I do see political agency on a broader spectrum than the seemingly inexhaustible dichotomy between ‘compliance’ and ‘resistance’, but that is another story).
So here’s a preliminary hypothesis (H, if you wish): the link between critique and resistance is mediated by the existence of, and position in, academic hierarchy. Two presentations I had the opportunity to hear at the conference were very informative in this regard: the first was Loveday’s analysis of academics’ experience of anxiety; the second, Neyland and Milyaeva’s research on the experiences of REF panelists. While there is a shared concern among academics about the neoliberalisation of higher education, what struck me was the pronounced difference in the degree to which the two groups express doubts about their own worth, future, and relevance as academics (in colloquial parlance, ‘impostor syndrome’). While junior* and relatively precarious academics seem to experience high levels of anxiety in relation to their value as academics, senior* academics who sit on REF panels experience it far less. The difference? Level of seniority and position in decision-making.
Well, you may say, this is obvious – the more established academics are, the more confident they are going to be. However, what varies with levels of seniority is not just confidence and trust in one’s own judgements: it’s the sense of entitlement, the degree to which you feel you deserve to be there (Loveday writes about the classed aspects of the sense of entitlement here). I once overheard someone call it the Business Class Test: the moment you start justifying to yourself flying business class on work trips (unless you’re very old, ill, or incapacitated), is the moment when you will have convinced yourself you deserve this. The issue, however, is not how this impacts travel practices: it’s the effect that the differential sense of entitlement has on the relationship between critique and resistance.
So here’s another hypothesis (h1, if you wish). The more precarious your position, the more likely you are to perceive the working conditions as unfair – and, thus, to be critical of the structure of academic hierarchy that enables them. Yet, at the same time, the more junior you are, the more risk voicing that critique – that is, translating it into action – entails. Junior academics often point out that they have to shut up and go on ‘playing the game’: churning out publications (because REF), applying for external funding (because grant capture), and teaching ever-growing numbers of students (because students generate income for the institution). Thus, junior academics may well know everything that is wrong with academia, but will go on conforming to it in ways that reproduce exactly the conditions they are critical of.
What happens once one ascends to the coveted castle of permanent employment/tenure and membership in research evaluation panels and appointment committees? Well, I’ve only ever been on the tenure track for a relatively short period of time (having left the job before I found myself justifying flying business class), but here’s an assumption based on anecdotal evidence and other people’s data (h2): you still grin and bear it. You do not, under any circumstances, stop participating in the academic ‘game’ – with the added catch that now you actually believe you deserved your position in it. I’m not saying senior academics are blind to the biases and social inequalities reflected in the academic hierarchy: what I am saying is that it is difficult, if not altogether impossible, to simultaneously be aware of it and continue participating in it (there’s a nod to Sartre’s notion of ‘bad faith‘ here, but I unfortunately do not have the time to get into that now). Ever seen a professor stand up at a public lecture or committee meeting and say “I recognize that I owe my being here to the combined fortunes of inherited social capital, [white] male privilege, and the fact English is my native language”? Neither have I. If anything, there are disavowals of social privilege (“I come from a working class background”), which, admirable as they may be, unfortunately only serve to justify the hierarchical nature of academia and its selection procedures (“I definitely deserve to be here, because look at all the odds I had to beat in order to get here in the first place”).
In practice, this leads to the following. Senior academics stay inside the system and, if they are critical, believe they are working against the system – for instance, by fighting for their discipline, protecting junior colleagues, or aiming to make academia that little bit more diverse. In the longer run, however, their participation keeps the system going – the equivalent of carbon offsetting your business class flight: sure, it may help plant trees in Guinea-Bissau, but it does not change the fact that you are flying in the first place. Junior academics, on the other hand, contribute through their competition for positions inside the system – believing that if only they teach enough (perform low-paid work), publish enough (contribute to abundance), or are visible enough (perform the unpaid labour of networking on social media, through conferences, etc.), they will escape precarity, and then they can really be critical (there’s a nod to Berlant’s cruel optimism here that I also unfortunately cannot expand on). Except that, of course, they end up in the position of senior academics, with an added layer of entitlement (because they fought so hard) and an added layer of fear (because no job is really safe under neoliberalism). Thus, while everyone knows everything is wrong, everyone still plays along. This ‘gamification’ of research, which seems to be the new mot du jour in academia, becomes a stand-in term for the moral economy of justifying one’s own position while participating in the reproduction of the conditions that contribute to its instability.
Cui bono critique, in this regard? It depends. If critique is divorced from its capacity to incite political action, there is no reason why it cannot be appropriated – and, correspondingly, commodified – in the broader framework of neoliberal capitalism. It’s already been pointed out that critique sells – and, perhaps less obviously, the critique of neoliberal academia does too. Even if the ever-expanding number of publications on the crisis of the university do not ‘sell’ in the narrow sense of the term, they still contribute to the symbolic economy via accruing prestige (and citation counts!) for their authors. In other words: the critique of neoliberalism in academia can become part and parcel of the very processes it sets out to criticise. There is nothing, absolutely nothing, in the content, act, or performance of critique itself that renders it automatically subversive or dangerous to ‘the system’. Sorry. (If you want to blame me for being a killjoy, note that Boltanski and Chiapello observed long ago in “The New Spirit of Capitalism” that contemporary capitalism grew through the appropriation of the 1968 artistic critique).
Does this mean critique has, as Latour famously suggested, ‘run out of steam’? If we take the steam engine as a metaphor for the industrial revolution, then the answer may well be yes, and good riddance. Along with other Messianic visions, this may speed up the departure of the Enlightenment’s legacy of pastoral power, reflected – imperfectly, yet unmistakably – in the figure of the (organic or avant-garde) ‘public’ intellectual, destined, as he is (for it is always a he), to lead the ‘masses’ to their ultimate salvation. What we may want to do instead is to examine what promise critique (with a small c) holds – especially in the age of post-truth, post-facts, Donald Trump, and so on. In this, I am fully in agreement with Latour that it is important to keep tabs on the difference between matters of fact and matters of concern; and, perhaps most disturbingly, think about whether we want to stake out the claim for defining the latter on the monopoly on producing the former.
For getting rid of the veneer of entitlement to critique does not in any way mean abandoning the project of critical examination altogether – but it does, very much so, mean reexamining the positions and perspectives from which it is made. This is the reason why I believe it is so important to focus on the foundations of epistemic authority, including that predicated on the assumption of difference between ‘lay’ and academic forms of reflexivity (I’m writing up a paper on this – meanwhile, my presentation on the topic from this year’s BSA conference is here). In other words, in addition to the analysis of threats to critical scholarship that are unequivocally positioned as coming from ‘the outside’, we need to examine what it is about ‘the inside’ – and, particularly, about the boundaries between ‘out’ and ‘in’ – that helps perpetuate the status quo. Often, this is the most difficult task of all.
P.S. People often ask me what my recommendations would be. I’m reluctant to give any – academia is broken, and I am not sure whether fixing it in its current form makes any sense. But here are a few preliminary thoughts:
(a) Stop fetishising the difference between ‘inside’ and ‘outside’. ‘Leaving’ academia is still framed as some sort of epic failure, which amplifies both the readiness of a precarious workforce to sustain truly abominable working conditions just in order to stay “in”, and the anxiety and other mental health issues arising from the possibility of falling “out”. Most people with higher education should be able to do well and thrive in all sorts of jobs; if we didn’t frame tenure as a life-or-death achievement, perhaps fewer would agree to suffer for years in the hope of attaining it.
(b) Fight for decent working conditions for contingent faculty. Not everyone needs to have tenure if working part-time (or moving in and out) are acceptable career choices that offer a liveable income and a level of social support. This would also help those who want to have children or, god forbid, engage in activities other than the rat race for academic positions.
(c) This doesn’t get emphasised enough, but one of the reasons why people vie for positions in academia is that it at least offers a degree of intellectual satisfaction, in opposition to what Graeber has termed the ever-growing number of ‘bullshit jobs’. So, one of the ways of making working conditions in academia more decent is by making working conditions outside academia more decent – and, perhaps, by somewhat decentralising the monopoly on knowledge work that academia holds. Not, however, in the neoliberal outsourcing/‘creative hubs’ model, which unfortunately mostly serves to generate value for existing centres while further depleting the peripheries.
* By “junior” and “senior” I obviously do not mean biological age, but rather status – I am intentionally avoiding denominators such as ‘ECRs’ etc., since I think someone can be in a precarious position whilst not being exactly at the start of their career, and, conversely, someone can be a very early career researcher but have a type of social capital, security, and recognition that are normally associated with ‘later’ career stages.
One Saturday in late January, I go to the PhD office at the Department of Sociology at the University of Cambridge’s New Museums site (yes, PhD students shouldn’t work on Saturdays, and yes, we do). I swipe my card at the main gate of the building. Nothing happens.
I try again, and again, and still nothing. The sensor stays red. An interaction with a security guard who seems to appear from nowhere conveys there is nothing wrong with my card; apparently, there has been a power outage and the whole system has been reset. A rather distraught-looking man from the Department of History and Philosophy of Science appears around the corner, insisting on being let back inside the building, where he had left a computer on with, he claims, sensitive data. The very amicable security guard apologises. There’s nothing he can do to let us in. His card doesn’t work, either, and the system has to be manually reset from within the computers inside each departmental building.
You mean the building no one can currently access, I ask.
I walk away (after being assured the issue would be resolved on Monday) plotting sci-fi campus novels in which Skynet is not part of a Ministry of Defence, but of a university; rogue algorithms claim GCSE test results; and classes are rescheduled in a way that sends engineering undergrads to colloquia in feminist theory, and vice versa (the distances one’s mind will go to avoid thinking about impending deadlines)*. Regretfully pushing prospective pitches to fiction publishers aside (temporarily)**, I find the incident particularly interesting for the perspective it offers on how we think about the university as an institution: its spatiality, its materiality, its boundaries, and the way its existence relates to these categories – in other words, its social ontology.
War on universities?
Critiques of the current transformation of higher education and research in the UK often frame it as an attack, or ‘war’, on universities (this is where the first part of the title of my thesis comes from). Exaggeration for rhetorical purposes notwithstanding, being ‘under attack’ suggests that it is possible to distinguish the University (and the intellectual world more broadly) from its environment, in this case at least in part populated by forces that threaten its very existence. Notably, this distinction remains almost untouched even in policy narratives (including those that seek to promote public engagement and/or impact) that stress the need for universities to engage with the (‘surrounding’) society, which tend to frame this imperative as ‘going beyond the walls of the Ivory Tower’.
The distinction between universities and society has a long history in the UK: the university’s built environment (buildings, campuses, gates) and rituals (dress, residence requirements/‘keeping term’, conventions of language) were developed to reflect the separateness of education from ordinary experience, enshrined in the dichotomies of intellectual vs. manual labour, active life vs. the ‘life of the mind’ and, not least, Town vs. Gown. Of course, with the rise of ‘redbrick’ and, later, ‘plateglass’ universities, this distinction became somewhat less pronounced. Rather than in terms of blurring, however, I would like to suggest we need to think of this as a shift in scale: the relationship between ‘Town’ and ‘Gown’, after all, is embedded in the broader framework of distinctions between urban and suburban, urban and rural, regional and national, national and global, and the myriad possible forms of hybridisation between these (recent work by Addie, Keil and Olds, as well as Robertson et al., offers very good insights into issues related to theorising scale in the context of higher education).
Policing the boundaries: relational ontology and ontological (in)security
What I find most interesting, in this setting, is the way in which boundaries between these categories are maintained and negotiated. In sociology, the negotiation of boundaries in academia has been studied in detail by, among others, Michèle Lamont (in How Professors Think, as well as in an overview by Lamont and Molnár), Thomas Gieryn (in Cultural Boundaries of Science and a few other texts), and Andrew Abbott in The Chaos of Disciplines (and, of course, before that, in sociologically inclined philosophy of science, including Feyerabend’s Against Method, Lakatos’ work on research programmes, and Kuhn’s on scientific revolutions). Social anthropology has an even longer-standing obsession with boundaries, symbolic as well as material – Mary Douglas’ work, in particular, as well as Augé’s Non-Places, offers a good entry point, converging with sociology on the ground of a neo-Durkheimian reading of the distinction between the sacred and the profane.
My interest in the cultural framing of boundaries goes back to my first PhD, which explored the construal of the category of the (romantic) relationship through the delineation of its difference from other types of interpersonal relations. The concept resurfaced in research on public engagement in UK higher education: here, the negotiation of boundaries between ‘inside’ (academics) and ‘outside’ (different audiences), as well as between different groups within the university (e.g. administrators vs. academics), becomes evident through practices of engaging in the dissemination and, sometimes, coproduction of knowledge (some of this is in my contribution to this volume). The thread that runs through these cases is the importance of positioning in relation to a (relatively) specified Other; in other words, a relational ontology.
It is not difficult to see the role of negotiating boundaries between ‘inside’ and ‘outside’ in the concept of ontological security (e.g. Giddens, 1991). Recent work in IR (e.g. Ejdus, 2017) has shifted the focus from Giddens’ emphasis on social relations to the importance of the stability of material forms, including buildings. I think we can extend this to universities: in this case, however, it is not (only) the building itself that is ‘at risk’ (this can be observed in the intensified securitisation of campuses, both through material structures such as gates and card-only entrances, and through modes of surveillance such as Prevent – see e.g. Gearon, 2017), but also the materiality of the institution itself. While the MOOC hype may have (thankfully) subsided (though not disappeared), there is the ubiquitous presence of social media, which, as quite a few people have argued, tests the salience of the distinction between ‘inside’ and ‘outside’ (I’ve written a bit about digital technologies as mediating the boundary between universities and the ‘outside world’ here, as well as in an upcoming article in a Globalisation, Societies and Education special issue that deals with reassembling knowledge production with/out the university).
Barbarians at the gates
In this context, it should not be surprising that many academics fear digital technologies: anything that tests the material/symbolic boundaries of our own existence is bound to be seen as troubling/dirty/dangerous. This brings to mind Kavafy’s poem (and J.M. Coetzee’s novel) Waiting for the Barbarians, in which an outpost of the Empire prepares for an attack by ‘the barbarians’ that, in fact, never arrives. The trope of the university as a bulwark against and/or in danger of descending into barbarism has been explored by a number of writers, including Thorstein Veblen and, more recently, Roy Coleman. Regardless of the accuracy or historical stretchability of the trope, what I am most interested in is its use as a simultaneously diagnostic and normative narrative that frames and situates the current transformation of higher education and research.
As the last line of Kavafy’s poem suggests, barbarians represent ‘a kind of solution’: a solution for the otherwise unanswered question of the role and purpose of universities in the 21st century, which began to be asked ever more urgently with the post-war expansion of higher education, only to be shut down by the integration/normalization of the soixante-huitards in what Boltanski and Chiapello have recognised as contemporary capitalism’s almost infinite capacity to appropriate critique. Disentangling this dynamic is key to understanding contemporary clashes and conflicts over the nature of knowledge production. Rather than locating dangers to the university firmly beyond the gates, then, perhaps we could use the current crisis to think about how we perceive, negotiate, and preserve the boundaries between ‘in’ and ‘out’. Until we have a space to do that, I believe we will continue building walls only to realise we have been left on the wrong side.
(*) I have a strong interest in campus novels, both for PhD-related and unrelated reasons, as well as a long-standing interest in Sci-Fi, but with the exception of DeLillo’s White Noise can think of very few works that straddle both genres; would very much appreciate suggestions in this domain!
(**) I have been thinking for a while about a book – a spin-off from my current PhD – that would combine social theory, literature, and critical cultural political economy, drawing on similarities and differences between critical and magical realism to look at universities. This can be taken as a sketch for one of the chapters, so all thoughts and comments are welcome.
It is by now commonplace to claim that digital technologies have fundamentally transformed knowledge production. This applies not only to how we create, disseminate, and consume knowledge, but also to who, in this case, counts as ‘we’. Science and technology studies (STS) scholars argue that knowledge is an outcome of coproduction between (human) scientists and the objects of their inquiry; object-oriented ontology and speculative realism go further, rejecting the ontological primacy of humans in the process. For many, it would not be a stretch to say that machines not only process knowledge, but are actively involved in its creation.
What remains somewhat underexplored in this context is the production of critique. Scholars in the social sciences and humanities fear that the changing funding and political landscape of knowledge production will diminish the capacity of their disciplines to engage critically with society, leading to what some have dubbed the ‘crisis’ of the university. Digital technologies are often framed as contributing to this process: speeding up the rate of production, simultaneously multiplying and obfuscating the labour of academics, perhaps even, as Lyotard predicted, displacing it entirely. Tensions between more traditional views of the academic role and new digital technologies are reflected in often heated debates over academics’ use of social media (see, for instance, #seriousacademic on Twitter). Yet, despite polarized opinions, there is little systematic research into links between the transformation of the conditions of knowledge production and critique.
My work is concerned with the possibility – that is, the epistemological and ontological foundations – of critique, and, more precisely, how academics negotiate it in contemporary (‘neoliberal’) universities. Rather than trying to figure out whether digital technologies are ‘good’ or ‘bad’, I think we need to consider what it is about the way they are framed and used that makes them either. From this perspective, which could be termed the social ontology of critique, we can ask: what is it about ‘the social’ that makes critique possible, and how does it relate to ‘the digital’? How is this relationship constituted, historically and institutionally? Lastly, what does this mean for the future of knowledge production?
Between pre-digital and post-critical
There are a number of ways one can go about studying the relationship between digital technologies and critique in the contemporary context of knowledge production. David Berry and Christian Fuchs, for instance, both use critical theory to think about the digital. Scholars in political science, STS, and sociology of intellectuals have written on the multiplication of platforms from which scholars can engage with the public, such as Twitter and blogs. In “Uberfication of the University”, Gary Hall discusses how digital platforms transform the structure of academic labour. This joins the longer thread of discussions about precarity, new publishing landscapes, and what this means for the concept of ‘public intellectual’.
One of the challenges of theorising this relationship is that it has to be developed out of the very conditions it sets out to criticise. This points to limitations of viewing ‘critique’ as a defined and bounded practice, or the ‘public intellectual’ as a fixed and separate figure, and trying to observe how either has changed with the introduction of the digital. While the use of social media may be a more recent phenomenon, it is worth recalling that the bourgeois public sphere that gave rise to the practice of critique in its contemporary form was already profoundly mediatised. Whether one thinks of petitions and pamphlets in the Dreyfus affair, or discussions on Twitter and Facebook – there is no critique without an audience, and digital technologies are essential in how we imagine them. In this sense, grounding an analysis of the contemporary relationship between the conditions of knowledge production and critique in the ‘pre-digital’ is similar to grounding it in the post-critical: both are a technique of ‘ejecting’ oneself from the confines of the present situation.
The dismissiveness Adorno and other members of the Frankfurt school could exercise towards mass media, however, is more difficult to parallel in a world in which it is virtually impossible to remain isolated from digital technologies. Today’s critics may, for instance, avoid having a professional profile on Twitter or Facebook, but they are probably still using at least some type of social media in their private lives, not to mention responding to emails, reading articles, and searching and gathering information through online platforms. To this end, one could say that academics publicly criticising social media engage, in fact, in a performative contradiction: their critical stance is predicated on the existence of digital technologies both as objects of critique and main vehicles for its dissemination.
This, I believe, is an important source of the perceived tensions between the concept of critique and digital technologies. Traditionally, critique implies a form of distancing from one’s social environment. This distancing is seen as both spatial and temporal: spatial, in the sense of providing a vantage point from which the critic can observe and (choose to) engage with society; temporal, in the sense of affording shelter from the ‘hustle and bustle’ of everyday life, necessary to stimulate critical reflection. Universities, for at least a good part of the 20th century, were tasked with providing both. Lukács, in his account of the Frankfurt school, satirized this as “taking residence in the ‘Grand Hotel Abyss’”: engaging in critique from a position of relative comfort, from which one can stare ‘into nothingness’. Yet, what if the Grand Hotel Abyss has a wifi connection?
Changing temporal frames: beyond the Twitter intellectual?
Some potential perils of the ‘always-on’ culture and contracting temporal frames for critique are reflected in the widely publicized case of Steven Salaita, an internationally recognized scholar in the field of Native American studies and American literature. In 2013, Salaita was offered a tenured position at the University of Illinois. However, in 2014 the Board of Trustees withdrew the offer, citing Salaita’s “incendiary” posts on Twitter as the reason. Salaita is a vocal critic of Israel, and his Tweets at the time concerned the Israeli military offensive in the Gaza Strip; some of the University’s donors found this problematic and pressured the Board to withdraw the offer. Salaita has since appealed the decision and received a settlement from the University of Illinois, but the case – though by no means unique – drew attention to the issue of the (im)possibility of separating the personal, political and professional on social media.
At the same time, social media can provide venues for practicing critique in ways not confined by the conventions or temporal cycles of academia. The example of Eric Jarosinski, “The rock star philosopher of Twitter”, shows this clearly. Jarosinski is a Germanist whose Tweets contain clever puns on the Frankfurt school, as well as, among others, Hegel and Nietzsche. In 2013, he took himself out of consideration for tenure at the University of Pennsylvania, but continued to compose philosophically inspired Tweets, eventually earning a huge following, as well as a column in the two largest newspapers in Germany and the Netherlands. Jarosinski’s moniker, #failedintellectual, is a self-ironic reminder that it is possible to succeed whilst deviating from the established routes of intellectual critique.
The different ways in which critique can be performed on Twitter should not, however, detract from the fact that it operates in fundamentally politicized and stratified spaces; digital technologies can render these spaces more accessible, but that does not mean that they are more democratic or offer a better view of ‘the public’. This is particularly worth remembering in the light of recent political events in the UK and the US. Once the initial shock following the US election and the British EU referendum had subsided, many academics (and intellectuals more broadly) took to social media to comment, evaluate, or explain what had happened. Yet, for the most part, these interventions end exactly where they began – on social media. This amounts to live Tweeting from the balcony of the Grand Hotel Abyss: the view is good, but the abyss no less gaping for it.
By sticking to critique on social media, intellectuals are, essentially, doing what they have always been good at – engaging with audiences and in ways they feel comfortable with. To this end, criticizing the ‘alt-right’ on Twitter is not altogether different from criticising it in lecture halls. Of course, no intellectual critique can aspire to address all possible publics, let alone equally. However, it makes sense to think about how the ways in which we imagine our publics influence our capacity to understand the society we live in; and, perhaps more importantly, how they influence our ability to predict – or imagine – its future. In its present form, critique seems far better suited to an idealized Habermasian public sphere than to the political landscape of the 21st century. Digital technologies can offer an approximation, perhaps even a good simulation, of the former; but that, in and of itself, does not mean that they can solve the problems of the latter.
Jana Bacevic is a PhD researcher at the Department of Sociology at the University of Cambridge. She works on social theory and the politics of knowledge production; her thesis deals with the social, epistemological and ontological foundations of the critique of neoliberalism in higher education and research in the UK. Previously, she was a Marie Curie fellow on the Universities in Knowledge Economies (UNIKE) project at the University of Aarhus in Denmark. She tweets at @jana_bacevic.
Late in the morning after the US election, I am sitting down to read student essays for the course on social theory I’m supervising. This part of the course covers the work of Popper, Kuhn, Lakatos, and Feyerabend, and its application in the social sciences. The essay question is: do theories need to be falsifiable, and how do we choose between competing theories if they aren’t? The first part is a standard essay question; I added the second a bit more than a week ago, interested to see how students would think about criteria of verification in the absence of an overarching regime of truth.
This is one of my favourite topics in the philosophy of science. When I was a student at the University of Belgrade, feeling increasingly out of place in the post-truth and intensely ethnographic though anti-representationalist anthropology, the Popper-Kuhn debate in Criticism and the Growth of Knowledge held the promise that, beyond the classification of elements of the material culture of the Western Balkans, lurked bigger questions of the politics and sociology of knowledge (paradoxically, this may be why it took me very long to realize I actually wanted to do sociology).
I was Popper-primed well before that, though: the principle of falsification is integral to the practice of parliamentary-style academic debating, in which the task of the opposing team(s) is to ‘disprove’ the motion. In the UK, this practice is usually associated with debate societies such as the Oxford and Cambridge Unions, but it is widespread in the US as well as the rest of the world; during my undergraduate studies, I was an active member of the Yugoslav (now Serbian) Universities Debating Network, known as Open Communication. Furthermore, Popper’s political ideas – especially those in The Open Society and Its Enemies – formed the ideological core of the Open Society Foundation, founded by the billionaire George Soros to promote democracy and civil society in Central and Eastern Europe.
In addition to debate societies, the Open Society Foundation supported and funded a large part of civil society activism in Serbia. At the time, most of it was conceived as opposition to the regime of Slobodan Milošević, a one-time banker turned politician who ascended to power in the wake of the dissolution of the Socialist Federal Republic of Yugoslavia. Milošević played a major role in the conflicts in its former republics, simultaneously plunging Serbia deeper into an economic and political crisis exacerbated by international isolation and sanctions, culminating in the NATO intervention in 1999. Milošević’s rule ended in a coup following a disputed election in 2000.
I had been part of the opposition from the earliest moment conceivable, skipping classes in secondary school to go to anti-government demos in 1996 and 1997. The day of the coup – 5 October 2000 – should have been my first day at university, but, together with most students and staff, I was at what would turn out to be the final public protest, which ended in the storming of the Parliament. I swallowed quite a bit of tear gas, twice in situations I expected not to get out of alive (or at the very least unharmed), but somehow made it to a friend’s house, where, together with her mom and grandma, we sat in the living room and watched one of Serbia’s hitherto banned TV and radio stations – the then-oppositional B92 – come back on air. This is when we knew it was over.
Sixteen years and little more than a month later, I am reading students’ essays on truth and falsehood in science. This, by comparison, is a breeze, and it’s always exciting to read different takes on the issue. Of course, in the course of my undergraduate studies, my own appreciation of Popper was replaced by excitement at the discovery of Kuhn – and the concomitant realization of the inertia of social structures, which, just like normal science, are incredibly slow to change – and succeeded by light perplexity at Lakatos (research programmes seemed equal parts reassuring and inherently volatile – not unlike political coalitions). At the end, obviously, came infatuation with Feyerabend: like every self-respecting former liberal, I reckoned myself a methodological (and not only methodological) anarchist.
Unsurprisingly, most of the essays I read exhibit the same trajectory. Popper is, quite obviously, passé: his critique of Marxism (and other forms of historicism) not particularly useful, his idea of falsificationism too strict a criterion for demarcation; his association with the ideologues of neoliberalism probably did not help much either.
Except that… this is what Popper has to say:
It is undoubtedly true that we have a more direct knowledge of the ‘inside of the human atom’ than we have of physical atoms; but this knowledge is intuitive. In other words, we certainly use our knowledge of ourselves in order to frame hypotheses about some other people, or about all people. But these hypotheses must be tested, they must be submitted to the method of selection by elimination.
(The Poverty of Historicism, 127)
Our knowledge of ourselves: for instance, our knowledge that we could never, ever, elect a racist, misogynist reality-TV star as the president of one of the world’s superpowers. That we would never vote to leave the European Union, despite the fact that, like all supranational entities, it has flaws – but look at how much it invests in our infrastructure. Surely – as Popper would argue – we are rational animals: and rational animals would not do anything that puts them in unnecessary danger.
Of course, we are correct. The problem, however, is that we have forgotten about the second part of Popper’s claim: we use knowledge of ourselves to frame hypotheses about other people. For instance: since we understand that a rich businessman is not likely to introduce economic policies that harm the elite, the poor would never vote for him. For instance: since we remember the victims of Nazism and fascism, everyone must understand how frail the liberal consensus in Europe is.
This is why academia was “shocked” by Trump’s victory, just as it was shocked by the outcome of the Brexit referendum. This is also the key to the question of why polls “failed” to predict either of these outcomes. Perhaps we were too focused on extrapolating our assumptions to other people, and not enough on checking whether they hold.
By failing to understand that the world is not composed of left-leaning liberals with a predilection for social justice, we commit, time and again, what Bourdieu termed the scholastic fallacy – the propensity to attribute the categories of our own thinking to those we study. Alternatively, and much worse, we deny them common standards of rationality: the voters whose political choices differ from ours are then cast as uneducated, deluded, suffering from false consciousness. And even if they’re not, they must be a small minority, right?
Well, as far as hypotheses are concerned, that one has definitely failed. Maybe it’s time we started considering alternatives.