I had a dream.
No, not the Martin Luther King kind of dream, by which I mean I do not think this one will be used to retroactively construct a politics of reconciliation where none is due.
A real dream, like, sleeping n’all. To clarify at the outset, this isn’t in and of itself a matter of particular exceptionality: I do dream relatively regularly, as long as I am not extremely stressed, which now happens rarely and only for very brief periods.
I also pay attention to dreams. This started when I was a child, when reading Freud’s “The Interpretation of Dreams” gave me a sense that there is a world beyond our own that we are, nonetheless, (almost) unique authors of. This fascinated me, as it opened the question of existence and multiple planes of reality in ways different from those of fairytales and fantasy, and also different from those of physics (here’s something I’ve written on relational ontology in dreams).
I can also lucid-dream (this isn’t intentional, just something I picked up along the way, mostly as a way of waking myself up from nightmares).
I also learned to ask dreams for guidance.
Yet my sleep has been surprisingly dream-free of late. I noticed this about a week ago; I first attributed it to being on holiday (= in Britain, the accepted expression for an out-of-office email responder), which means I have more time for free-flowing thoughts and thus less processing ‘backlog’ to do while asleep, but then I realised it’s probably been close to a month, if not two.
Then I attributed it to the intensity of the events in Belgrade earlier in June (as I’ve described, this did involve a week of severely disrupted sleep), but that in and of itself should only increase the backlog, and thus the quantity of material for processing. More to the point (did I mention I can ask dreams for guidance?), I can ask – by which I mean induce (no, no drugs involved, in case this is what you’re thinking) – dreams, and my subconscious delivers. So I did.
Nada.
Until last night.
The second thing to note is that nothing happens in my dreams – they are usually elements of a conversation or interaction, fragments of a feeling, observations, but there is no major “plot”. I dream of the weather, of course – of storms, hurricanes, floods – but even that is increasingly rare; whatever prophetic meaning those dreams had has been either fulfilled or rendered obsolete. Now, we can talk about whether climate change is an “event” in the Badiousian sense, but at any rate it does not involve (in my dreams, at least) a large amount of conscious agency (I cannot control the weather, for instance).
Most of the time.
Jonathan Lear’s (2006) “Radical hope: Ethics in the face of cultural devastation” opens with these words of Plenty Coups, a Crow elder:
When the buffalo went away the hearts of my people fell to the ground and they could not lift them up again. After this nothing happened.
Lear is exceptional (by the standards, I mean, of white men in Anglo-American academia) insofar as he refuses to see this as an expression of something else – a reflection of ‘depression’, a metaphor – and asks: what would it mean to take Plenty Coups’ words seriously? In other words, what does it mean for nothing to happen?
What kind of ontological disaster erases the possibility of history?
***
In this dream, I am queuing in a coffee shop. It is a nice place, the sort of place people like me tend to feel naturally good in – decked out in wood, spacious, with lots of light & lots of plants. As I approach the counter, though, I realise I am signing a petition to obstruct/block a theatre performance – the petition is on a paper napkin and I am signing it in ink, but I can already see a few other names (not clear whose) above mine. I move back to the front room, where I learn that the venue/café workers are on strike. The tables in the front room are covered with sandwiches – stacks of white subs, mostly wrapped in clear film; they are there so that people can avoid crossing the picket line by buying something from the café.
I don’t know if anything else happens in the dream; my recollection of it ends there.
At first, I was perplexed. I am not a huge fan of theatre – I occasionally see plays, but it has certainly not played a major part in my life in any meaningful sense, real (compared to cinema or gigs/concerts, for instance) or symbolic (certainly, I’ve written on performativity and I’ve read my Shakespeare and blah blah, but I do not, in fact, see the world – including the social world – as a “stage”).
Sure, I was able to recognise elements of direct organising, but sandwiches?
Digging into the associative chain, the sandwiches reminded me of the food redistribution service I was part of while in San Diego; last night, I observed a similar distribution service for street sleepers on the rainy, blustery streets of Belfast. Which brought to mind a question:
–> Why have we not organised something similar at Durham University, with food redistribution on the streets of Durham/Newcastle?
Sure, many people take part in different similar initiatives (e.g. Community Kitchen) across the region, but why are we not using the institution as a hub for this kind of thing? It would be a faster, better way to redistribute privilege than either vague gesturing at “widening participation” or feeling upset or guilty about the fact the social composition of the university is at odds with that of the town (something that, by the way, conveniently entirely erases ‘international’ students, who are apparently classless).
Which brought me to the following question:
What would it really mean to interrupt the performance?
–> What if we refused to reproduce the university in response to the human rights violations we see every day?
When students in universities across the UK declared occupations last summer in response to institutional and national complicity in genocide, there was some (in my mind, insufficient) support among university staff; but, for the most part, the business of the university continued as usual.
What if we stopped the performance?
What if, instead of politely queuing to get our coffees and then sitting back and observing the show, we brought the whole damn thing to a halt?
What if nothing stopped happening, and something happened instead? Would our hearts lift off the ground? (I think so).
***
For a while, there has been – on and off – talk of ‘boycotting the REF’, but it seems that the principal reason (most) people would end the REF is that they do not like it, rather than because ranking and ordering is itself a practice run for ranking and ordering human beings.
Like grading (I’m sorry, Brits, marking).
Like “undeserving migrants”.
Like “sure, Gaza is so sad, but I have to worry about my kids’ scholarships and my mortgage and I’m afraid of getting it wrong if I say something, I mean, this cancel culture has really gone too far and and and”
Which, really, brings me to the following question:
–> Why have we not declared an indefinite strike and refused the reproduction of the university (which, of course, is the means of reproduction of classed, gendered [increasingly, with the new legislation, exclusively cis-gendered] subjects for inclusion into the mechanisms and institutions of the (re)production of the militarised state which, as we can so clearly observe, is the guarantor/guardian of the continued extraction of capital – human, fossil, financial, near and far – at no matter what cost)?
But anyway, that’s just analysing a dream.
***
One of the usual anarchist/organising principles is to respond with “well, why don’t you do it yourself”. Two observations.
Excusez-moi, but this makes it a bit harder than usual to hear excuses from significantly better-paid university environments that “people just cannot afford to go on strike”.
For that matter, Serbian public universities are not composed uniquely of aristocracy (remember, we had communism) or plutocracy (they don’t work at universities, they run the country) who can safely afford to forgo three months’ pay; of course, class privilege does exist, and is perhaps most obvious in higher education. What does make a difference is more distributed/equalised access to housing (not necessarily equitable as such, but nothing compared to the classed horror that is British housing), more affordable childcare and, of course, a strategic and solidaristic approach: choosing who can take risks, when, how, and of what sort, is a vital part of ensuring any action bears results.
But don’t mind me, I’m just recounting a dream.
***
Let me leave you with the words of CrimethInc, rather than my own:
For the civilian born in captivity and raised on spectatorship and submission, direct action changes everything. The morning she arises to put a plan into motion, she awakens under a different sun – if she has been able to sleep at all, that is – and in a different body, attuned to every detail of the world around her and possessed of the power to change it. She finds her companions endowed with tremendous courage and resourcefulness, equal to monumental challenges and worthy of passionate love. Together, they enter a foreign land where outcomes are uncertain but anything is possible and every minute counts.
***
This, for those of you who keep asking, is what it means to stop the performance. This is what it means to refuse to be a spectator, of your own life, of others’, or of deaths, others’ as well as, inevitably and eventually, your own.
All I am saying is: there should be more names on that sign-up sheet.
[These are the more-or-less unedited notes for my speech at the event Multiple Crises of Higher Education, held on 20 May 2025 at Queen Mary's Mile End Institute, and organised by the fantastic Accounting and Accountability Research Group. Queen Mary's branch of UCU have also been at the forefront of fighting and writing about redundancies in the sector, and maintain an excellent and well-organised webpage, so give them a follow alongside AARG (the best acronym in the sector?)]
To start, as philosophers do, from examining a concept: a crisis means a point of shattering; a sense of rupture; a breaking point, a crack in the fabric of reality. To say something has reached a crisis is to recognise that from this point there is a division into multiple paths. From a personal perspective, to reach a crisis means we can no longer go on as before, or as usual; a crisis usually invokes a reconsideration of what the project (whatever project we are committed to – a movement; an ideology; a job; a relationship; an idea) is, and whether it is still worth doing or living.
So when we start from the diagnosis that higher education is in crisis, we are in fact acknowledging that multiple facets of what we thought higher education is are no longer viable. Some of these (also mentioned in the description for this event) include the sector’s funding model; its approach to academic labour, including benefitting from precarity (insecure, temporary contracts) and competition (for research funding, for prestige); and its relationship to other important sectors of society (government, the military, industry, and so on). But where do we go from here?
To foreground the question of where we go from here is also to acknowledge – or argue – that turning back is no longer possible. This is the starting point for my remarks today. I draw inspiration from Adam Phillips’ On Giving Up (I am currently reading the book, but the link is to the – open access – essay in the LRB), which opens with Kafka’s Zürau Aphorisms: “from a certain point there is no more turning back. That is the point that must be reached”. To say we are in crisis, among other things, is to say that we have reached that point. From here, Phillips asks: what are we willing to give up in order to go on living?
Phillips’ question reframes giving up as a fundamental element of worth. As worth is the key to valuation and as such to any kind of counting, including acc-counting (sorry), I believe it is central to understanding how we talk about value. In this sense, I intend to perform an analysis that sketches in more explicit terms this intersection between the moral and the economic; between what we give (and expect to receive in return), and what we give up on.
Giving up/going on
I want to argue that any analysis of a ‘crisis’ that harbours the illusion that turning back is a possibility is one that is fundamentally committed to the maintenance of the status quo, and thus counterperformatively denies the very diagnosis it purports to establish. Indeed, it is quite possible to argue – in analogy with how some Marxist critics have described the 2008 economic crisis – that there is, in fact, no crisis at all, and that the system is working exactly as intended. The major thing I will be arguing we need to give up on, then, is our commitment to the system as it is, given that the system, as it is, is working as intended.
So from there, we need to reorient ourselves, perhaps towards a different system, perhaps towards one working towards different ends. To do this, however, we need to rid ourselves of three ghosts. Three ghosts, a bit like in Dickens’ A Christmas Carol.
The first ghost is the ghost of the Empire. Now, some of you may be surprised by the appearance of this ghost. After all, haven’t we comprehensively purged this ghost by decolonising our curricula, by extensively renaming our halls and libraries, even – gasp! – in some cases, by enquiring into our links with slavery?
But this ghost rests barely disguised in the ideal of the superiority of British higher education, the idea of higher education as an ‘export’, and the almost unquestioned assumption that we should reap profit from international students. For what is the source of appeal of British higher education for (most) foreign students today if not the accessibility and usefulness of an education in English (the language of global trade, a fact we owe to the British Empire) combined with the opportunity to take endless photos in front of different vestiges, artefacts and similes of that very empire, from Big Ben to Harry Potter-esque halls in Oxford, Cambridge and Durham? What is the ambition to be on top of league tables if not a transmogrification of the desire to sustain a hierarchy that was built on a monopoly on trade routes and cotton mills, and now continues through degree mills? Finally, what is the belief in the ‘superiority’ of British higher education but an inflated ego projection of our own (yes, I am British now) colonial past, which in turn enables if not validates the racialised and classed hierarchy of the UK immigration system, the system that requires people to prove their ‘worth’ in order to be exploited as much as (or more than) British nationals?
The second ghost is the ghost of the welfare state. Now, quite contrary to the previous one, the ghost of the Empire, this is the kind of ghost we repeatedly and obsessively summon. We do this in ritual invocations of the famed social contract that created the NHS, or of the postwar (meaning WWII) expansion of higher education, enabled by the Education Act of 1944 but usually attributed to the Robbins report (1963), which opened higher education to “all who qualify, by ability or attainment”.
The most important contribution of the report, perhaps less visible because it was bordering on the obvious, was to, for the first time, conceptualise higher education as a system and thus a distinct domain of public policy. In 1964, the University Grants Committee officially became part of the newly created Department of Education and Science (DES). Instead of a set of disparate universities and colleges, with their histories, traditions (or lack thereof), and institutional trajectories, higher learning became a matter for the nation-state – and, consequently, its development deemed relevant for the well-being of its citizens (citizen-subjects) on the whole, not just for the (still small) proportion of those who attended the (equally relatively small) number of institutions. This is how the massification of higher education became the main ‘lever’ for state intervention into university governance. The essence of the ‘social compact’ between the university and the state, thus, was always a tradeoff between expansion and funding.
It is important to note that this social compact intentionally excluded international students, who were exempted from the no-tuition-fees provision from 1962 onwards. It is also important to understand/acknowledge how it occurred in the first instance. The expansion of higher education was not some benevolent act of enlightenment (well, neither was the Enlightenment a benevolent act of enlightenment, after all 😊); it was a strategic investment in upskilling the workforce so as to enable the UK – no longer an empire, at least by its own lights, though still very much keeping overseas territories – to compete in industrial production, including that of weapons and surveillance technologies. This is explicitly acknowledged in EP Thompson’s edited volume Warwick University Ltd., which documents the analyses and reactions to the revelations made during the student occupation of the Registry of the University of Warwick in 1970. The files students found revealed widespread labour surveillance and military contracting, including with Bristol Siddeley Engines, the predecessor to British Aerospace Limited, which provides jet engines to Saudi Arabia and Israel (if you’d like to see the continuing links between universities in the UK and arms manufacture/trade, I strongly recommend this). This critique, however – just like Thompson himself – stopped short of reimagining a higher education that would not be beholden to national (and, increasingly, offshored) industry, even if that means the arms industry.
This also tells us something about the vestigial dream of a Labour government restoring this ghost of a welfare state to its former glory. Recent policies suggest Labour has no intention of dusting off this model of the social compact. More importantly, however, it tells us something about the ethical tradeoff involved in the dream of higher education as part of a welfare state – whose welfare?
The third ghost, and this is going to be most difficult for some of you to hear, is the ghost of social mobility.

[Image: from this post.]
This brings me to the diverging (or converging, if you’re a fan of strict visual metaphors) rates of graduate debt and graduate premium.

There are different policy solutions proposed to address this, and today we have heard some of them. What we fail to comprehend, however, is that the graduate premium itself is based on the idea that there should be an exploited and underpaid class of (under)labourers. It makes sense to remember that the concept of ‘social mobility’ assumes that there is a class to escape from (move out from), usually the working class, and a class to aspire to, usually the middle class.
The fact that the ‘graduate premium’ is stagnating or decreasing apart from in a few professions/sectors (and we know what those sectors are – finance, fossil fuels, big tech) tells us little about the intrinsic ‘value’ of higher education (as if there were a thing such as intrinsic value) and more about wage suppression across sectors.
After all, in an equal society, where we would all be paid the same, what would be the reason to have a graduate premium?
So that people can pay off debt; and this brings me to ‘the system is not in crisis, it is working as intended’.
Why should we expect a graduate premium?
Not long ago, I encountered the same question in a session on cooperatives.
It was run in the local community/anarchist centre, and I came by to hear what people thought setting up a cooperative would really be like. When it came to the distribution of income, I mentioned that I thought it fair that people be paid the same kind of money for the same kind of work, and that that was the principle I tried to institute in one of the collectives I had been part of (The Philosopher).
But what would people with PhDs do? asked one participant.
Be paid as everyone else, I said. Independently of qualification? they asked. Of course, I said. (They did not know I had a PhD – two, in fact).
But why would people with PhDs agree to that, they protested. After all, they paid so much for their education, they surely have to earn more to pay that off!
And that, my friends, is why we cannot have nice things.
Because as long as we cannot accept – or even conceive – that knowledge (by which we mean tokens or credentials of knowledge) should not bestow material privilege, as long as we accept inequalities in employment, as long as we cannot even imagine that a ‘professor’ could be earning the same as a ‘lecturer’ and as a ‘teaching assistant’, let alone as a cleaner or a nurse, or, if we want to bring this closer to university contexts, as an IT technician – we are both naturalising and reproducing this hierarchy. This hierarchy tells us that of course higher education should confer a privilege, and of course there should be an (over)exploited and (under)paid class, and of course British higher education bestows that privilege (particularly luxuriously), so of course we have the right to ask people to pay for it, and foreigners to pay even more. Unless we are willing to give that up, we are not only tacitly but, as is I hope by now obvious, explicitly accepting that higher education is an instrument that serves to reproduce and maintain the status quo. If anything, it is intended to keep graduates tied to low-paid, precarious, and exploitative jobs – think Starbucks – that they cannot get out of, even if they wanted to, because they have too much debt. And there is one thing people like that are unlikely to do: create any kind of meaningful, longer-lasting opposition.
So what we need to give up, in order to go on, is the fantasy of exceptionalism – institutional, sectoral, or personal: the fantasy that universities (as institutions), higher education (as a sector), or the fact that we are in them, makes us special. And even if we are committed to the status quo – and it remains my belief that many academics who would call themselves ‘radical’ or ‘progressive’ are, in fact, deeply committed to it, not least because they cannot imagine alternatives – it is clearly breaking down. So we cannot go back. The question is: where shall we go forward?
P.S. Some people asked me about other stuff I have written on the topic. Most academic publications are listed (in chronological order) under Articles and books; I also blog and write invited op-eds. Some of the pieces directly relevant to this one are:
On the relationship between academic freedom, autonomy, and the state:
On political economy of higher education, including the relationship between extractivism and knowledge production:
On the relationship between social change, social inequalities, political subjectivities, and education policy:
and, of course, my book:
Bacevic, J. 2014. From Class to Identity: Politics of Education Reforms in Former Yugoslavia. Budapest and New York, NY: Central European University Press.
Hi all,
I want to start by thanking Rachel Brooks and the British Sociological Association for the invitation, as well as my co-panelists for being present. I want to thank all of you who have chosen to be here this afternoon, not only because, as we tend to say in a slightly facetious mode at conferences, there are so many other things you could be doing – by which we tend to mean, not only other panels you could be attending, but also taking a walk outside, catching up with friends, or sleeping – but because, in a slightly different way, there are other things you could be doing. At the very end of my remarks I will come back to what some of these things are.
There are, however, many other people and things that contributed to all of us being here today; the workers involved in organising this conference, from administrative staff to volunteers to cleaners and caterers; cooks making breakfast at the hotel this morning; the pickers at coffee plantations who make our coffee; workers in steel factories who smelted the material that goes into the rail tracks that carried the train that brought me to Manchester. Some of these things we tend to think about as being about higher education; others, less so.
This isn’t, if you were wondering, a covert argument for the ‘agency of things’ or an STS-informed approach to higher education. Rather, it is to ask what we are doing when we talk about the future of higher education in a sociological language, in a space such as this, at a conference such as this. My work over the past decade has, among other things, been about how these forms of categorisation, domain-association and positioning – that is, the ‘aboutness’ of things – make certain forms of recognition and/or ignorance and invisibility (im)possible. My remarks today will be building on this.
When we talk about higher education, we tend to talk about funding, by which we mostly mean public, that is, tax-derived state funding, but we do not talk about the amount of funding UK universities are receiving from arms companies & other military technology manufacturers, including those currently involved in the bombing of Gaza, as research, investments and scholarships:
In the UK, the absolute champion in this category is the University of Glasgow, with £115,247,817.20 (value of partnerships with the world’s top 100 arms-producing companies in the last 8 years); Manchester is at £6,700,328.00 (see research from Demilitarise Education https://ded1.co/data/university). That is A LOT of scholarships for Palestinian students, as one of the people interviewed in the excellent documentary The Encampments says: I’d rather you didn’t bomb me, keep your scholarships.
Nor do we talk about the proportion of university staff pensions (yes, USS, the fund many of us defended so vigilantly in 2018 and have been defending since) still invested in fossil fuels.
We talk about the reproduction of social inequalities, by which we mostly mean, in Paul Willis’ perennially relevant formulation, why working class kids get working class jobs (or why working class kids don’t make it to Oxbridge), but not about the fact that a lot of those other ‘prestigious’, ‘elite’, and non-working class jobs working class kids should presumably aspire to are in finance, digital technologies including surveillance (which I think Janja may be saying more about), or in fossil fuels. So is it OK, then – as that butterfly meme asks, “is this social mobility?”

Not least, we talk of decolonising, by which we mostly mean making curricula a tiny bit more reflective of the diversity of knowledge production, usually by wedging in a few nonwhite people – the approach to teaching social theory I’ve described elsewhere as “white boys + DuBois” – but we do not talk about the continuing and new forms of extractive colonialism enabled, among other things, by treating international students as raw resources that can be mined for money (or sometimes money + cheap labour, as in the graduate-to-job-market conversion), something I presume Aline will be addressing.
This, of course, is not a particular moral failure of ours. All forms of knowledge presuppose forms of ignorance. This does not make us ‘bad’ people, or at any rate much worse people than many similarly privileged. As the Buddhist thinker Pema Chödrön once said, I think, we are like passengers in the backwards-facing seat of a moving train; we only see what we have just passed, never what is in front of us. Or, if you prefer a more familiar name, you can think of Walter Benjamin’s Angel of History, looking at the past but nonetheless propelled by the winds towards the future.
This connects to one of the main motifs of my academic work over the past decade, the question of non-prediction: what kind of futures do we become unable to see? As I have argued, it is particularly our embedding in institutions of knowledge production and the concomitant commitment to habitual ways of seeing, making, and relating to the world (among other things, by going to conferences) that makes us unable to see some kinds of futures. In the remaining two and a half minutes, I want to try and give you a brief view from the front seat.
The world is now firmly committed to at least 2 degrees C warming by the end of this century, and that is if we left all fossil fuels in the ground tomorrow. We are used to thinking of climate crisis as a crisis of nature, with images of melting ice caps and emaciated polar bears, but this is a social and political crisis. Rising authoritarianism, including Donald Trump’s assault on American democracy, is climate crisis; the genocidal destruction of Gaza is climate crisis; and what is known as the refugee crisis is in fact a combination of famine- and industrial agriculture-induced migration combined with a broader drive towards retraditionalization in wealthy countries, including policing of reproduction and gender boundaries, amplifying anti-immigrant resentment and breeding more authoritarianism.
What is the future of higher education in this kind of world? When we talk about ‘higher education’, we have to acknowledge that the idea of higher education as a sector – as an organised and regulated activity distinct from specific institutions such as universities – is supervenient on the idea of a state (first, the imperial/colonial, then, increasingly, nation-state). In this context, the future of higher education involves reconsidering our relationship to the state. Clearly, in this context, just asking for more money from ‘the government’ won’t do. What is there to guarantee ‘higher education’ would not become handmaiden to authoritarianism, funnelling people into extractive jobs and positions (or ensuring their compliance by encapsulating them in cycles of debt), and amplifying racism and environmental degradation? Higher education institutions across the world will, increasingly, face a choice. Remaining part and parcel of the system that (re)produces it, that enables this to function – or?
So, to return to my initial remark, in this context, I want to ask – what else could you be doing? If you weren’t here, where else could you be – at a protest, an occupation, a community food distribution? Or ‘shopping’, digitally consuming/doomscrolling while performing reproductive labour at home? Because your answer to that will determine the future of higher education.
The day after the Conservative Party won the 2019 UK election, I came up with a trick. It was the culmination of several weeks (if not months) of prolonged exhaustion and creeping despair, as I watched parts of the UK’s supposed ‘left’ systematically tear away at the country’s only progressive political current in years, and its only shot at creating a vaguely decent society – you know, a society that would not systematically humiliate its members – dissolve in the toxic concoction of incurable conservatism (small c, this time), parochialism rendered even more obsolete by a self-aggrandizing belief in the centrality of the UK to the world (historical or contemporaneous, irrelevant), and genuflection at the tiniest display of power, authority, or hierarchy so absolute that you would be forgiven for thinking it genetic. In other words, this set of nasty gammonish tendencies – to my surprise, seemingly equitably distributed across the political spectrum – proved a fertile ground for a campaign whose ostensible target was to prove Jeremy Corbyn was unelectable, and the actual to make sure he would not be elected. Add to this that many of us were fighting on several fronts – in one of the rare moves I explicitly disagreed with, my union (UCU) decided to run another round (third or fourth? in two years) of strike action, thus making sure we would have to split time, energy, and resources between election campaigning and picket lines (for why it is impossible to unify the two causes – even in such an obvious case – see above). For me, it was the beginning of a very dark period of five years – of fear (of what was going to happen), of anger (for the fact so many people, including those I believed I shared basic ideological affinities with, allowed it to happen), and disappointment, not so much because Corbyn didn’t get elected but because I saw, possibly for the first time, in such stark terms: most people would rather suffer in a world they know than risk the chance of building a different one. In that moment – the day after, still sniffly from the downpour I got caught in during the last desperate attempt at canvassing the evening of the election – I Tweeted something along the lines of “whatever else you do today, go out and look at some trees. Stand by the river. There are things bigger than politics. There are things older than the Conservative Party.” (I’m paraphrasing, not only because I have no desire to log back into Twitter but also because the actual formulation is not particularly important.)
I remember being slightly surprised by the volume of positive reactions (kindness and self-care, to put it politely, not being exactly my default style of interaction on social media back in the day – the platform has its own dynamics and patriarchy failed spectacularly at instilling in me the tendency to please other people), but mostly because what seemed to me amalgamated commonsensical advice – decenter your own feelings; step away from social media; in times of great turmoil, seek solace in things you know are eternal – seemed to strike a chord with so many people. It was probably the first time I truly understood how many people were never taught basic self-soothing and regulation skills, but also how much of a difference prior experience of system breakdown/state collapse made in terms of knowing how to cope (together with a group of other ex-Yu scholars, we reflected on this here).
Five years later, I know better than to offer unsolicited advice :). I am also aware that people, especially people in the US, have long traditions (obviously, most of these reaching way back before settler colonialism) of sacralization and reverence for nature. Hence, it is not my intention to share this particular trick, not least because I tend to think people who are receptive to this mode of relating to the world either already do it and practise it in similar (even if superficially or nominally different) ways, or would otherwise need to change perception in order to avoid this becoming yet another ‘trick’ allowing them to stay, in fact, deeply plugged into the circuit of capitalist (over)production, including affective production (a bit like fintech bros using mindfulness to continue building extractive infrastructures, whereas actually being ‘mindful’ would inevitably lead to the realisation that your mode of living is directly harmful to many beings). However, the memory of many positive reactions to that original Tweet in 2019 – at a time of anger, and disappointment, and despair for many in the UK – made me want to share a different technique, one that I hope can not only help people dealing with anger and despair but which I also believe will come in handy in knowing how to orient ourselves in the future, especially as the world burns.
Why this isn’t a practice of gratitude
The technique is a related one, but different in terms of its orientation. It is called ‘five good things’, and, true to its name, it asks you to think of five things. Do not be tempted, however, to think this is about your life, or what you have, or ‘things’ in the sense of material objects. In other words, you are not meant to name ‘things’ such as “well, I at least have a job” or “my child is healthy” or “I ate a really good burrito yesterday”. All of these examples – while important – are either cases that are meant to invoke gratitude (so, basically, either outsourcing responsibility to a different/higher power, or, in some forms, acknowledging the fundamentally underdetermined nature of the universe and the role of luck – in other words, all of these things could have turned out differently) or to affirm your own capabilities, usually expressed in and measured by money or equivalents/proxies – power, prestige, admiration (under present circumstances, chances are that the burrito you ate is a burrito you bought, in which case your sense of satisfaction comes from your purchasing power. This is somewhat different if the burrito was e.g. part of a community kitchen or food-sharing event, which would hopefully lead you to recognize your relation to and interdependence with others, but this is not clear-cut either – not least because of the often disproportionate amount of cooking/caring labour performed by women in some of these settings).
Instead, think of five things that are beautiful, and good, and true.
It’s really important that they are all three. Many things are beautiful (at least by culturally dominant/conventional standards) without being either good or true. ‘True’ here in the sense of being true to form, rather than corresponding to the actual state of affairs; for instance, it is almost impossible for a rose as a flower to not be ‘true’ (unless it’s an artificial rose), but it is quite possible for a rose given as a gesture to not be ‘true’ (as in, not being motivated by genuine feeling). Similarly, it is possible for a plant to not be ‘good’ – for instance, if it is a plant introduced to eradicate other plants in the area, or to eliminate sources of food for diverse other organisms.
(There is, of course, a humongous philosophical background literature on defining each of these terms, and especially their interrelation/combination. If you are not familiar with it, I could probably suggest starting with Schopenhauer and Kant, or even Plato, but I do not think this is either necessary or helpful. Every human is capable of reflecting on what it means to say something is beautiful, good, and true; namedropping dead white men can sometimes distract from it, and in most cases under present circumstances turns into an exercise of academic prowess, rather than a way to think with others).
You can think about concrete instances as well as things, generically; sometimes, one will be more appropriate, sometimes the other. For instance, I tend to think that bunnies, in general, are good, and beautiful, and true; I may see a particular bunny for which I will think this, but given that bunnies in the form in which I encounter them are relatively hard to individuate, odds are I will continue thinking this for the bunny kind, rather than a particular bunny. Equally, I tend to think friendships are a good thing, but odds are that if I think of any friendship (or relationship) at any point as beautiful, good, and true, I will be thinking about a concrete relation.
(Again, there are masses of philosophical, theoretical, and empirical work – my own included – on how we categorise groups, genera, and members, and how we form judgments based on inclusion/exclusion, derivation, and boundary categories. Very interesting if you are a social ontologist. Possibly less so if you’re not.)
Have you thought of five things that are beautiful, and good, and true?
Now fight like hell for those things.
Not to ‘have’ them or to keep them; but to enable them to exist in exactly that form, in their own goodness, and truth, and beauty.
If you ever find yourself in a period of despair, think of these five things. At the end of each week, see if you can come up with five. Sooner rather than later, you will begin to realize that the things worth fighting for are not those the majority of the people are taught to fight about.
I may publish my list from time to time.
Further inspiration:
Maria Popova shared at the start of this week Louise Erdrich’s poem “Resistance” – I love Erdrich but I have not read this one before. It is also related to another project I am developing, so seemed perfectly resonant.
Another element of resonance – as I was finishing this blog post I tuned into The Philosopher conversation between Isabelle Laurenzi and Lida Maxwell on Maxwell’s new book, “Rachel Carson and the Power of Queer Love”. It seems that the awe at the world that Maxwell traces in the relationship between Carson and Dorothy Freeman, as well as in their relationship to the natural world, very much resonates with the feeling I hope these exercises will inspire – the combination of wonder, dedication, and a commitment to change the (human) world. I’m yet to read the book (as I found out recently, living in the UK while being temporarily in the US creates massive issues in terms of e-book licensing), but am very much looking forward to it.
Speaking of books that are about the sense of wonder, this book – “A Reenchanted World: The Quest for a New Kinship With Nature” by James William Gibson (2009) – is probably among the nicest, most beautiful (both in terms of style and in terms of content) books I’ve read recently. I also learned many things (not least, that Paul Watson, the founder of Sea Shepherd, volunteered as a medic for the American Indian Movement protesters at Wounded Knee, as well as about RFK’s environmentalist past, which makes his public health policies even more bizarre); the book is a remarkable feat of sociological analysis that manages to not be condescending. This book is definitely one of the things that are beautiful, and good, and true.
Incidentally, I also ended up rewatching Sarah Polley’s Women Talking (2022), which has a scene in which August is asked to make a list of good things. I think this is a good list! It is also one of the best films made about the process of turning anger and frustration into collective will/self-determination; while depicting a truly dark incident, it is a good example of things beautiful, good, and true.
Soundtrack:
Moby ft. Benjamin Zephaniah – Where is your pride?
Yasiin Bey (Mos Def) – New World Water
This is the continuation of the habit I have kept for a few years, which is to write a post on the books I have read that year. That said, “habit” is hardly a deserving name for something I did two years in a row – in 2021 and 2022 – and then dropped in 2023: the year had been too full, of both ups and downs, and the context in which I read some of those books too convoluted; it is also possible I read more than I usually would (having been on research leave in spring) – or less. I wouldn’t know, as I gave up on keeping a list; I gave up on many other things, including possibly the last vestiges of ego-investment in academia, which also meant I gave up on reading for competitive, pedagogical, or perfunctory reasons. In this context, coming up with a list of all the books I had read would have seemed a bit counterperformative; not least, the time I would normally have spent writing up this post – the quiet days after the end of term, as Xmas and New Year drag themselves over the hill – was spent flat out with Covid (which I finally caught, for the first and so far only time) and under ongoing pressure at work.
This year, I am coming back to this, but in order to share the books that brought me hope. This may seem like an odd choice for someone whose approach to knowledge always emphasised the ethical and political responsibility of recognising tendencies in the present that may lead to harmful and disastrous futures – even if that entailed coming to terms with the fact that, in not-insignificant ways, our present (in)action may be rendering certain kinds of futures impossible. This, for most of my life – starting with the rather famous moment when, aged eight, I argued to my father that Yugoslavia would fall apart – meant having the courage to be a ‘killjoy’, not only (or primarily) in terms of disrupting the cozy consensus that scaffolds some of the most odious things about contemporary social life – consumerism, patriarchy, xenophobia and racism – but also by pointing out, ceaselessly, that bleating starry-eyed about the revolution to come was, in very real ways, preventing us from bringing it about.
To toss the concept of ‘hope’ about might, from this perspective, seem at best a concession to sentimentality, in the same way in which I dutifully bellow ‘Merry Xmas’ back at people; at worst, like capitulation to the abscondence from the daily work of not reproducing the same systems that we (so eloquently) critique, which intellectuals of my sort are prone to, especially when we reach a certain career (st)age. I, at least, have always swatted away questions of hope or ‘exit’ (as one of my PhD examiners exasperatedly sighed towards the end of my viva: “Then there is no Aufhebung?”), in the same way in which I used to swat away questions of the sort of ‘What is to be done?’ before I decided to start doing less of knowledge production and more of…other stuff.
Why hope, then? Put simply, one of the rare justifications I find nowadays for continuing to do “academia” is that (nominally, at least) it entails two things: time to read (one might say, an obligation) and a platform to tell others what to read (as well as what not to). And everyone needs a bit of hope. This is particularly important as I see growing numbers of (even educated) people fall for trash arguments along the lines of Steven Pinker’s ‘Better Angels’ or other kinds of Pollyanna-ish optimism that usually serves to bolster capitalist, extractivist, or neocolonial approaches to ‘business as usual’. Thus, to be able to see ‘hope’ without, at the same time, ‘unseeing’ all the things that render it impossible (the war in Gaza; continuous extraction; runaway climate crisis) becomes a difficult exercise in discernment and balancing – something that, in fact, academics of my sort are uniquely trained to do.
That said, not all of the books included in this list count as ‘academic’ – and most would not. But they are what sustained me over the past year. I hope they can be of service to you.
Revenant ecologies: defying the violence of extinction and conservation (Audra Mitchell)
This is a book that challenges powerfully the thinking about extinction and conservation that dominates Anglo-academia. Particular points for taking a swipe at the ‘extinction industry’ of academic writing, and the books (many of which I admit I had enjoyed!) that write about extinction from a seemingly universalist perspective. On the other hand, Revenant Ecologies seems at times to take almost excessive care to avoid this. Regardless, it is a careful, engaging, and mobilising analysis that aims to avoid the po
As we have always done: indigenous freedom through radical resistance (Leanne Betasamosake Simpson)
I’ve admired Betasamosake Simpson’s writing for a long time (and also music! How cool it is to be a social theorist who also writes and performs music). This book is a reminder that undercurrents of resistance run deep, but also that freedom is a praxis – a constant one, at that.
Our history is the future: Standing Rock vs. the Dakota Access Pipeline, and the long tradition of Indigenous resistance (Nick Estes)
QED. Well, maybe there is a bit of a theme running through this year’s reading. But in a year that felt so long, and hopeless, and dark, I needed (printed) reminders that people have lived through (and survived) worse ordeals, and that they not only did not accept but actively challenged and fought against the colonial order and its successors, including extractivism.
Ecology of wisdom (Arne Naess)
Naess is one of those people who are larger than history gives them credit for – he is usually styled as the ‘founder of the deep ecology movement’, but Naess was a philosopher (prefiguring a lot of analytical thought), an ecologist, a spiritual thinker (a Buddhist and firmly committed to nonviolent action), and a mountaineer. ‘Ecology of wisdom’ is a compendium of his writings; thanks, in part, to masterful translation, the prose just flies off the page, reading more like poetry (I’ll admit that the combination of analytic philosophy, Buddhism, and ecology is particularly likely to chime with how I feel and think about the world). Nonetheless, I find it hard to think anyone would not be charmed – at least anyone who still has a heart, and soul, in some part of the magical world we inhabit.
And if you need a reminder how to (re)discover it, these two are highly recommended: Reclaiming the wild soul (Mary Reynolds Thompson) and Enchanted life: reclaiming the magic and the wisdom of the natural world (Sharon Blackie). I *really* like Sharon Blackie’s writing – it manages to be equal parts environmentalist, witchy, psychoanalytic and folksy without becoming too bound by conventions of any.
Foxfire, wolfskin: and other stories of shapeshifting women (Sharon Blackie) is a wonderful retelling of some of the classical European folk tales, with a gender twist that does not come across as pedagogical. I absolutely adored it – and even got it for a few friends.
A natural history of the future (Rob Dunn)
This is a great (admittedly, popular science) book on the impacts of ongoing climate change and other human-induced changes on the biosphere. It brings in new arguments and perspectives even if you’re a seasoned reader of the genre, and I’d say it’s informed by deep ecology whilst retaining a pleasantly matter-of-fact tone.
Claros del Bosque (Maria Zambrano)
I used the two unforeseen trips to Serbia in springtime to delve into the rich body of non-English philosophical and theoretical works in translation, something I dearly miss in the bewilderingly Anglo-centric UK (even major works in French or German are increasingly translated with a delay, if at all). I chanced upon the Serbian translation of Zambrano’s Claros del Bosque (forest clearings?) in one of my favourite (independent) bookshops, but given that 2024 was also the year in which I decided to refresh my Spanish, I also got the original (the combination proving the right level for my Spanish reading skill). Zambrano (a metaphysician, essayist, and Spanish republican) was yet another ‘forgotten’ philosopher whose work I enjoyed discovering in the past two years, alongside Anne Dufourmantelle and Mari Ruti; her writing also reminded me of Clarice Lispector, with the combination of the poetic and the philosophical.
Drive your plow over the bones of the dead (Olga Tokarczuk)
I returned to reading Drive your plow… late this year, after a chance encounter on a plane this spring reminded me it was one of the (many) books I had been meaning to come back to. Let’s just say I do not regret the decision: it also links to the research project I will be working on over the next year and a half – which just goes to show that things tend to come back at exactly the right time.
The Dawn of Everything: A new history of humanity (David Graeber & David Wengrow)
One of the wonderful things about my new research project was returning to the things that excited me about anthropology as an undergrad, including its ability to challenge large-scale (often Eurocentric) generalisations. In this vein, I’ve started reading Graeber and Wengrow’s The Dawn of Everything, which I’m currently enjoying very much; the only downside being that I am beginning to fear they may have already written the book I had been planning to write as the outcome of this research project – but then, it’s a good problem to have, and I am sure I will still have something to contribute.
Fields, factories, and workshops (Peter Kropotkin)
Another wonderful corollary of this research project is that it allows me to revisit multiple traditions of writing that were foundational to my thinking as an undergrad – not only anthropology, but also (classical) anarchist political theory. In this context, I am (re)reading Kropotkin’s Mutual Aid; given how much anarchist political theory has been discounted and undervalued, not only in mainstream political theory, but also among more progressive forms of reading, this will hopefully play a small part in restoring interest in it.
We do this ‘til we free us: abolitionist organizing and transforming justice (Mariame Kaba)
Kaba’s writing is by now semi-legendary, but it also makes sense to remember that it is very down-to-earth, and that it arose from the lived experience of day-to-day abolitionist organising. In the UK context, in which the absence of sustained resistance to forms of exploitation old and new can be dispiriting at best, it is a reminder that forms and practice of resistance do exist elsewhere, and that it’s possible to learn from them.
Climate strike (Derek Wall)
Wall’s book is a really good primer on the relevance of labour organizing, and industrial action, in the face of climate crisis. It is also a potent reminder that problems of climate change and extractivism cannot be addressed separately from questions of labour, which is a much-needed aid in the political context where connecting the two can sometimes feel like an uphill battle.
We are ‘Nature’ defending itself: entangling art, activism, and autonomous zones (Isabelle Fremeaux and Jay Jordan)
This is the story of the French temporary autonomous zone (ZAD) developed at Notre-Dame-des-Landes to stop a proposed airport. More than that, it’s a story about resistance and resilience. It’s a story that tells us that the machine can be stopped.
Constellations of care: anarcha-feminism in practice (Cindy Barukh Milstein)
This is a great compendium of examples, texts, and experiences from different fronts of feminist, queer, and other kinds of intersectional anarchist organising. From infoshops and free libraries to community health initiatives to bike riders, these stories remind us that the world is full of examples of communities existing otherwise, sometimes for longer, sometimes for shorter periods of time, and that often all it takes is a few people, a few good ideas, and a commitment to not give up before even trying, to make a lasting contribution to a different world.
Radical Intimacy (Sophie K. Rosa)
I have a long-standing interest in alternative models of relationality (‘alternative’ meaning all that do not privilege heteropatriarchal, monogamous couple-based, reproduction-oriented family) so most of the arguments Rosa writes about are familiar – from Kim TallBear’s writing about non-settler-colonial-normative Indigenous modes of relating, to Sophie Lewis’s take on family abolition – but it is refreshing to see them presented in a succinct, carefully analysed, and user-friendly format. Especially for people who are new to this angle of critique, it’s a really welcome introduction; for others, it’s a handy compendium/reminder of the plethora of the ways in which humans have been relating otherwise – and a powerful primer for ongoing and future attempts to do so.
One of the last books I came to in 2024 (am, in fact, still reading) is also one of the best – Vanessa Machado de Oliveira’s Hospicing Modernity. Earlier this year, prompted, in part, by the war in Gaza and, in part, by the need to explain some of the choices I made in the course of it – including the decision to redirect more of my energy into the activities, goals and values I support – I wrote two posts [1] [2]; let’s just say that reading Machado de Oliveira’s book earlier would have saved me the labour, as she wrote it much better than I ever could.
The penultimate item on the list is not a book, but a magazine – Resurgence & the Ecologist, which I eventually got a subscription for, despite trying to talk myself out of it (youdontneedfeelgoodmagazinesubscriptionsthisisjustmorepapertheworldisonfire) – after all, it is much better than buying The Economist, even if very occasionally.
The final publication of this year, however, is a pamphlet I encountered while visiting one of the student occupations in Belgrade – it was a delight to see it both because I always enjoy CrimethInc materials (returning to reading more anarchism is probably one of the most healing things I have experienced this year) and because I think they are enormously useful for succinctly reminding people why things feel very, very wrong…and what we can do about it.
Happy New Year!
One of the things I most often hear when talking to people about climate change is “but what to do?” This, in and of itself, is good news. Perhaps owing to evidently extreme weather patterns, perhaps owing to the concentrated efforts of primary/secondary school teachers, perhaps owing to the unceasing (though increasingly brutally repressed, even in the UK & the rest of Europe) efforts of activists, it seems the question of whether climate change is ‘real’ has finally taken the back seat to “and what shall we do about it?”.
While climate denialism may have had its day, challenges now come from its cousins (or descendants) in the form of climate optimism, technosolutionism, or – as Linsey McGoey and I have recently argued – the specific kind of ignorance associated with liberal fatalism: using indeterminacy to delay action until certain actions are foreclosed. In the latter context in particular, the sometimes overwhelming question “what to do” can compound and justify, even if unintentionally, the absence of action. The problem is that whilst we are deliberating what to do, certain kinds of action become less possible or more costly, thus limiting the likelihood we will be able to implement them in the future. This is the paradox of inaction.
My interest in this question came from researching the complex relationship between knowledge (and ignorance) and (collective or individual) action. Most commonsense theories assume a relatively linear link between the two: knowing about something will lead you to act on it, especially in contexts of future risk or harm. This kind of approach has shaped information campaigns, as well as struggles to get people to listen to ‘the science’, from early conversations around climate change to Covid-19. Another kind of approach overrides these information- or education-based incentives in favour of behavioural ‘nudges’; awareness of cognitive processing biases (well documented and plentiful) suggested that slightly altering decision-making infrastructure would be more effective than trying to, effectively, persuade people to do the right thing. While I can see sense in both approaches, I became interested instead in the ambiguous role of knowledge. In other words, under what conditions would knowing (about the future) prevent us from acting (on the future)?
There are plenty of examples to choose from: from the critique of neoliberalism to Covid-19 (see also the above) to, indeed, climate change (free version here). In the context of teaching, this question often comes up when students begin to realize the complexity of the global economy, and the inextricability of questions of personal agency from what we perceive as systemic change. In other words, they begin to realize that the state of the world cannot be reduced either to individual responsibility or to the supposedly impersonal forces of “economy”, “politics”, “power” etc. But this rightly leaves them at an impasse: if change is neither solely a matter of individual agency nor of large-scale system change, how can we make anything happen?
It is true that awareness of complexity can often lead to bewilderment or, at worst, inaction. After all, in view of such extraordinary entanglement of factors – individual, cultural, economic, social, political, geological, physical, biological – it can be difficult to even know how to tackle one without unpicking all others. Higher education doesn’t help with this: most people (not all, but most) are, sadly, trained to see the world from the perspective of one discipline or field of study3, which can rightly make processes that span those fields appear impossible to grasp. Global heating is one such process; it is, at the same time, geological, meteorological, ecological, social, political, medical, economic, etc. As Timothy Morton has argued, climate change is a ‘hyperobject’; it exceeds the regular boundaries of human conceptualization.
Luckily, social theory, and in particular social ontology, is especially good at analysing objects. Gender – e.g. the notion of ‘woman’ – is an example of such an object. This does not mean, by the way, that ‘deconstructing’ objects, concepts, or notions needs to detract from the complexity of their interrelation; in some approaches to social ontology, a whole is always more than the sum (or any deducible interrelation) of its parts. In other words, to ‘deconstruct’ climate change is not in any way to deny its effects or the usefulness of the concept; it is to understand how different elements – which we conventionally, and historically, but not-at-all necessarily, associate with disciplines or ‘domains’ – interact and interrelate, and what that means. Differently put, the way disciplines construct climate change as an object (or assemblage) tells us something about the way we are likely to perceive solutions (or ways of addressing it, more broadly). It does not determine what is going to happen, but it points to the avenues (and limitations) humans are likely to see in doing something about it.
Why does this matter? Our horizon of agency is limited by what we perceive as subjects, objects, and forms of agency. In less weighty parlance, what (and whom) we perceive as being able to do stuff, and the kind of stuff it (they) can do. This also includes what we perceive as limitations on doing stuff, real or not. Two limitations apply to all human beings: time and energy. In other words, doing stuff takes time. It also consumes energy. This has implications for what we perceive as the stuff we can do. So what can we do?
As with so many other things, there are two answers. One is obvious: do anything and everything you can, and do it urgently. Anything other than nothing. (Yes, even recycling, in the sense in which it’s better than not recycling, though obviously less useful than not buying packaging in the first place).
The second answer is also obvious, but perhaps less frequent. Simply, what you aim to do depends on what you aim to achieve. Aiming to feel a bit better? Recycle, put a poster up, maybe plant a tree (or just some bee-friendly plants). Make a bit of a difference to your carbon emissions? Leave the car at home (at least some of the time!), stop buying stuff in packaging, cut down on flying, eliminate food waste (yes, this is in fact very easy to do). Make a real change? Vote on climate policy; pressure your MP; insulate your home (if you have one); talk to others. Join a group, or participate in any kind of collective action. The list goes on; there are other forms of action that go beyond this. They should not be ranked, neither in terms of moral rectitude nor in terms of efficiency (if you’re thinking of the old ‘limitations of individual agency’ argument, do consider what would happen if everyone *did* stop driving – and no, that does not include ambulances).
The problem with agency is that our ideas of what we can do are often shaped by what we have been trained, raised, and expected to do. Social spaces, in this sense, also become training grounds for action. You can learn to do something by being in a space where you are expected to do (that) something; equally, you learn not to do things by being told, explicitly or implicitly, that it is not the done thing. Institutions of higher education are really bad at fostering certain kinds of action while rewarding others. What is rewarded is (usually) individual performance. This performance is frequently framed, explicitly or implicitly, as competition: against your peers (in relation to whom you are graded) or colleagues (with whom you are compared when it comes to pay, or promotion); against other institutions (for REF scores, or numbers of international students); against everyone in your field (for grants, or permanent jobs). Even instances of team spirit or collaboration are more likely to be rewarded or recognized when they lead to such outcomes (getting a grant, or supporting someone in achieving individual success).
This poses significant limitations for how most people think about agency, whether in the context of professional identities or beyond (I’ve written before about limits to, and my own reluctance towards, affiliation with any kind of professional, let alone disciplinary, identity). Agency fostered in most contemporary capitalist contexts is either consumption- or competition-oriented (or both, of course, as in conspicuous consumption). Alternatively, it can also be expressive, in the sense in which it can stimulate feelings of identity or belonging, but it bears remembering that these do not in and of themselves translate into action. Absent from these is the kind of agency I, for want of a better term, call world-building: the ability to imagine, create, organize and sustain environments that do more than just support the well-being and survival of one and one’s immediate in-group, regardless of how narrowly or broadly we may define it, from nuclear family to humanity itself.
The lack of this capacity is starkly evident in classrooms. Not long ago, I asked one of the groups I teach for an example of a social or political issue they were interested in or would support despite the fact it had no direct or personal bearing on their lives. None could name one (yes, the war on Gaza was already happening). This is not to say that students do not care about issues beyond their immediate scope of interest, or that they are politically disenchanted: there are plenty of examples to the contrary. But it is to suggest that (1) we are really bad at connecting their concerns to broader social and political processes, especially when it comes to issues on which everyone in the global North is relatively privileged (and climate change is one such issue, compared to the effects it is likely to have on places with less resilient infrastructure); and (2) institutions are persistently and systematically (and, one might add, intentionally) failing at teaching how to turn this into action. In other words: many people are fully capable of imagining that another world is possible. They just don’t know how to build it.
As I was writing this, I found a quote in Leanne Betasamosake Simpson’s (excellent) As We Have Always Done: Indigenous Freedom Through Radical Resistance that I think captures this brilliantly:
Western education does not produce in us the kinds of effects we like to think it does when we say things like ‘education is the new buffalo’. We learn how to type and how to write. We learn how to think within the confines of Western thought. We learn how to pass tests and get jobs within the city of capitalism. If we’re lucky and we fall into the right programs, we might learn to think critically about colonialism. But postsecondary education provides few useful skill sets to those of us who want to fundamentally change the relationship between the Canadian state and Indigenous peoples, because that requires a sustained, collective, strategic long-term movement, a movement the Canadian state has a vested interest in preventing, destroying, and dividing.
(loc 273/795)
It may be evident that generations that have observed us do little but destroy the world will exhibit an absence of capacity (or will) to build one. Here, too, change starts ‘at home’, by which I mean in the classroom. Are we – deliberately or not – reinforcing the message that performance matters? That to ‘do well’ means to fit, even exceed, the demands of capitalist productivity? That this is how the world is, and the best we can do is ‘just get on with it’?
The main challenge for those of us (still) working in higher education, I think, is how to foster and stimulate world-building capacities in every element of our practice. This, make no mistake, is much more difficult than what usually passes for ‘decolonizing’ (though even that is apparently sometimes too much for white colonial institutions), or inserting sessions, talks, or workshops about the climate crisis. It requires resistance to reproducing the relationship to the world that created and sustains the climate crisis – competition-oriented, extractive, and expropriative. It calls for a refusal to conform to the idea that knowledge should, in the end, serve the needs of (a) labour market, ‘economy’, or the state. It requires us to imagine a world beyond such terms. And then teach students how to build it.
“One is not born a genius, one becomes a genius,” wrote Simone de Beauvoir in The Second Sex; “and the feminine situation has up to the present rendered this becoming practically impossible.”
Of course, the fact that the book, and its author, are much better known for the other quote on processual/relational ontology – “one is not born a woman, one becomes a woman” – is a self-fulfilling prophecy of the first. A statement about geniuses cannot be a statement about women. A woman writing about geniuses must, in fact, be writing about women. And because women cannot be geniuses, she cannot be writing about geniuses. Nor can she be one herself.
I saw Tár, Todd Field’s lauded drama about the (fictional) first woman conductor of the Berlin Philharmonic Orchestra, earlier this year (most of this blog post was written before the Oscars and reviews). There were many reasons why I was poised to love it: the plot/premise, the scenario, the music (obviously), the visuals (and let’s be honest, Cate Blanchett could probably play a Christmas tree and be brilliant). All the same, it ended up riling me for its unabashed exploitation of most stereotypes in the women x ambition box. Of course the lead character (Lydia Tár, played by Blanchett) is cold, narcissistic, and calculating; of course she is a lesbian; of course she is ruthless towards long-term collaborators and exploitative of junior assistants; of course she is dismissive of identity politics; and of course she is, also, a sexual predator. What this equation implies is that a woman who desires – and attains – power will inevitably end up reproducing exactly the behaviours that define men in those roles, down to the very stereotype of the Weinstein-like ogre. What is it that makes directors unable to imagine a woman with a modicum of talent, determination, or (shhh) ambition as anything other than a monster – or alternatively, as a man, and thus by definition a ‘monster’?
To be fair, this move only repeats what institutions tend to do with women geniuses: they typecast them; make sure that their contributions are confined to a strictly bounded domain; and penalize those who depart from the boundaries of prescribed stereotypical ‘feminine’ behaviour (fickle, insecure, borderline ‘hysterical’; or soft, motherly, caring; or ‘girlbossing’ in a way that combines the volume of the first with the protective urges of the second). Often, as in Tár, by literally dragging them off the stage.
The sad thing is that it does not have to be this way. The opening scene of Tár is a stark contrast with the closing one in this regard. In the opening scene, a (staged) interview with Adam Gopnik, Lydia Tár takes the stage in a way that resists, refuses, and downplays gendered stereotypes. Her demeanor is neither masculine nor feminine; her authority is not negotiated, forced to prove itself, endlessly demonstrated. She handles the interview with an equanimity that does not try to impress, convince, cajole, or amuse; nor to charm, outwit, or patronize. In fact, she does not try at all. She approaches the interviewer from a position of intellectual equality, a position that, in my experience, relatively few men can comfortably handle. But of course, this has to turn out to be a pretense. There is no way to exist as a woman in the competitive world of classical music – or, for that matter, anywhere else – without paying heed to the gendered stereotypes.
A particularly poignant (and, I thought, very successful) depiction of this is in the audition scene, in which Olga – the cellist whose career Tár will help and who will eventually become the object of her predation – plays behind a screen. Screening off performers during auditions (‘blind auditions’) was, by the way, initially introduced to challenge gender bias in hiring musicians to major orchestras – to resounding (sorry) success, making it 50% more likely women would be hired. But Tár recognizes the cellist by her shoes (quite stereotypically feminine shoes, by the way). The implication is that even ‘blind’ auditions are not really blind. You can be either a ‘woman’ (like Olga, young, bold, straight, and feminine); or a ‘man’ (like Lydia, masculine, lesbian, and without scruples). There is no outside, and there is no without.
As entertaining as it is to engage in cultural criticism of stereotypical gendered depiction in cinemas, one question from Tár remains. Is there a way to perform authority and expertise in a gender-neutral way? If so, what would it be?
People often tell me I perform authority in a distinctly non-(stereotypically)-feminine way; this both is and is not a surprise. It is a surprise because I am still occasionally shocked by the degree to which intellectual environments in the UK, and in particular those that are traditionally academic, are structurally, relationally, and casually misogynist, even in contexts supposedly explicitly designed to counter it. It is not a surprise, on the other hand, as I was raised by women who did not desire to please and men who were more than comfortable with women’s intellects, but also, I think, because the education system I grew up in had no problems accepting and integrating these intellects. I attribute this to the competitive streak of Communist education – after all, the Soviets sent the first woman into space. But being (at the point of conception, not reception, sadly) bereft of gendered constraints when it comes to intellect does not solve the other part of the equation. If power is also, always, violence, is there a way to perform power that does not ultimately involve hurting others?
This, I think, is the challenge that any woman – or, for that matter, anyone in a position of power who does not automatically benefit from male privilege – must consider. As Dr Autumn Asher BlackDeer brilliantly summarized it recently, decolonization (or any other kind of diversification) is not about replacing one set of oppressors with another, so having more diverse oppressors. Yet, all too frequently, this kind of work – willingly or not – becomes appropriated and used in exactly these ways.
Working in institutions of knowledge production, and especially working both on and within multiple intersecting structures of oppression – gender, ethnicity/race, ability, nationality, class, you name it – makes these challenges, for me, present on a daily basis in both theoretical and practical work. One of the things I try to teach my students is that, in situations of injustice, it is all too appealing to react to a perceived slight or offence by turning it inside out, by perpetuating violence in turn. If we are wronged, it becomes easy to attribute blame and mete out punishment. But real intellectual fortitude lies in resisting this impulse. Not in some meek turning-the-other-cheek kind of way, but in realizing that handing down violence will only, ever, perpetuate the cycle of violence. It is breaking – or, failing that, breaking out of – this cycle that we must work towards.
As we do, however, we are faced with another kind of problem. This is something Lauren Berlant explicitly addressed in one of their best texts ever, Feminism and the Institutions of Intimacy: most people in and around institutions of knowledge production find authority appealing. This, of course, does not mean that all intellectual authority lends itself automatically to objectification (on either of the sides), but it does and will happen. Some of this, I think, is very comprehensively addressed in Amia Srinivasan‘s The Right to Sex; some of it is usefully dispensed with by Berlant, who argues against seeing pedagogical relations as indexical for transference (or the other way around?). But, as important as these insights are, questions of knowledge – and thus questions of authority – are not limited to questions of pedagogy. Rather, they are related to the very relational nature of knowledge production itself.
For any woman who is an intellectual, then, the challenge rests in walking the very thin line between seduction and reduction – that is, the degree to which intellectual work (an argument, a book, a work of art) has to seduce, but in turn risks being reduced to an act of seduction (the more successful it is, the more likely this will happen). Virginie Despentes’ King Kong Theory, which I’m reading at the moment (shout out to Phlox Books in London where I bought it), is a case in point. Despentes argues against reducing women’s voices to ‘experience’, or to women as epistemic object (well, OK, the latter formulation is mine). Yet, in the reception of the book, it is often Despentes herself – her clothes, her mannerisms, her history, her sexuality – that takes centre stage.
Come to think of it, this version of ‘damned if you do, damned if you don’t’ applies to all women’s performances: how many times have I heard people say they find, for instance, Judith Butler’s or Lauren Berlant’s arguments or language “too complex” or “too difficult”, but, on the occasions when they do make an effort to engage with them, reduce them to being “about gender” or “about sexuality” (it hardly warrants mentioning that the same people are likely to diligently plod through Heidegger, Sartre or Foucault without batting an eyelid and, speaking of sexuality, without reducing Foucault’s work on power to it). The implication, of course, is that writers or thinkers who are not men have the obligation to persuade, to enchant readers/consumers into thinking their argument is worth giving time to.
This is something I’ve often observed in how people relate to the arguments of women and nonbinary intellectuals: “They did not manage to convince me” or “Well, let’s see if she can get away with it”. The problem is not just the casualized use of pronouns (note how men thinkers retain their proper names: Sartre, Foucault, but women slip into being a “she”). It’s the expectation that it is their (her) job to convince you, to lure you. Because, of course, your time is more valuable than hers, and of course, there are all these other men you would/should be reading instead, so why bother? It is not the slightest bit surprising that this kind of intellectual habit lends itself too easily to epistemic positioning that leads to epistemic erasure, but also that it becomes all too easily perpetuated by everyone, including those who claim to care about such things.
One of the things I hope I managed to convey in the Ethics of Ambiguity reading group I ran at the end of 2022 and beginning of 2023 is to not read intellectuals who are not white men in this way. To not sit back with your arms folded and let “her” convince you. Simone Weil, another genius – and a woman – wrote that attention is the primary quality of love we can give to each other. The quality of intellectual attention we give to pieces we read has to be the same to count as anything but a narrow, self-aggrandizing gesture. In other words, a commitment to equality means nothing without a commitment to equality of intellectual attention, and a constant practice and reflection required to sustain and improve it.
Enjoyed this? Try https://journals.sagepub.com/doi/10.1177/00113921211057609
and https://www.thephilosopher1923.org/post/philosophy-herself
At the start of December, I took the boat from Newcastle to Amsterdam. I was in Amsterdam for a conference, but it is also true I used to spend a lot of time in Amsterdam – Holland in general – both for private reasons and for work, between 2010 and 2016. Then, after a while, I took a train to Berlin. Then another, sleeper train, to Budapest. Then, a bus to Belgrade.
To wake up in Eastern Europe is to wake up in a context in which history has always already happened. To state this, of course, is a cliché; thinking, and writing, about Eastern Europe is always already infused with clichés. Those of us who come from this part of the world – what Maria Tumarkin marks so aptly as “Eastern European elsewheres” – know. In England, we exist only as shadow projections of a self, not even important enough to be former victims/subjects of the Empire. We are born into the world where we are the Other, so we learn to think, talk, and write of ourselves as the Other. Simone de Beauvoir wrote about this; Frantz Fanon wrote about this too.
To wake up in Berlin is to already wake up in Eastern Europe. This is where it used to begin. To wake up in Berlin is to know that we are always already living in the aftermath of a separation. In Eastern Europe, you know the world was never whole.
I was eight when the Berlin Wall fell. I remember watching it on TV. Not long after, I remember watching a very long session of the Yugoslav League of Communists (perhaps this is where my obsession with watching Parliament TV comes from?). It seemed to go on forever. My grandfather seemed agitated. My dad – whom I only saw rarely – said “Don’t worry, Slovenia will never secede from Yugoslavia”. “Oh, I think it will”, I said*.
When you ask “Are you going home for Christmas?”, you mean Belgrade. To you, Belgrade is a place of clubs and pubs, of cheap beer and abundant grilled meat**. To me, Belgrade is a long dreadful winter, smells of car fumes and something polluting (coal?) used for fuel. Belgrade is waves of refugees and endless war I felt powerless to stop, despite joining the first anti-regime protest in 1992 (at the age of 11), organizing my class to join one in 1996 (which almost got me kicked out of school, not for the last time), and inhaling oceans of tear gas when the regime actually fell, in 2000.
Belgrade is briefly hoping things would get better, then seeing your Prime Minister assassinated in 2003; seeing looting in the streets of Belgrade after Kosovo declared independence in 2008, and while already watching the latter on Youtube, from England, deciding that maybe there was nowhere to return to. Nowadays, Belgrade is a haven of crony capitalism equally indebted to Russian money, Gulf real estate, and Chinese fossil fuel exploitation that makes its air one of the most polluted in the world. So no, Belgrade never felt like home.
Budapest did, though.
It may seem weird that the place I felt most at home is a place where I barely spent three years. My CV will testify that I lived in Budapest between 2010 and 2013, first as a visiting fellow, then as an adjunct professor at the Central European University (CEU). I don’t have a drop of Hungarian blood (not that I know of, at least, though with the Balkans you can never tell). My command of the language was, at best, perfunctory; CEU is an American university and its official language is English. Among my friends – most of whom were East-Central European – we spoke English; some of us have other languages in common, but we spoke English then, and still do. And while this group of friends did include some people who would be described as ‘locals’ – that is, Budapest-born and raised – we were, all of us, outsiders, brought together by something that was more than chance and a shared understanding of what it meant to be part of the city***.
Of course, the CV will say that what brought us together was the fact that we were all affiliated with CEU. But CEU is no longer in Budapest; since 2020, it has relocated to Vienna, forced out by the Hungarian regime’s increasingly relentless campaign against anything that smacks of ‘progressivism’ (are you listening, fellow UK academics?). Almost all of my friends had left before that, just like I did. In 2012, increasingly skeptical about my chances of acquiring a permanent position in Western academia with a PhD that said ‘University of Belgrade’ (imagine, it’s not about merit), I applied to do a second PhD at Cambridge. I was on the verge of accepting the offer when I also landed that most coveted of academic premia, a Marie Curie postdoc position attached to an offer of a permanent – tenured – position, in Denmark****.
Other friends also left. For jobs. For partners’ jobs. For parenthood. For politics. In academia, this is what you did. You swallowed and moved on. Your CV was your life, not its reflection.
So no, there is no longer a home I can return to.
And yet, once there, it comes back. First as a few casually squeezed-out words to the Hungarian conductors on the night train from Berlin, then as a vocabulary of 200+ items that, though rarely used, enabled me to navigate the city, its subways, markets, and occasionally even public services (the high point of my Hungarian fluency was being able to follow – and even part-translate – the Semmelweis Museum curator’s talk! :)). Massolit, the bookshop which also exists in Krakow, which I visited on a goodbye-to-Eastern-Europe trip from Budapest via Prague and Krakow to Ukraine (in 2013, right before the annexation). Gerlóczy utca, home to the French restaurant in which I once left a massive tip for a pianist who played so beautifully that I was happy to be squeezed in at the smallest table, right next to the coat stand. Most, which means ‘bridge’ in Serbian (and Bosnian, and Croatian) and ‘now’ in Hungarian. In Belgrade, I now sometimes rely on Google maps to get around; in Budapest, the map of the city is buried so deep in my mental compass that I end up wherever I am supposed to be going.
This is what makes the city your own. Flow, like the Danube, massive as it meanders between the city’s two halves, which do not exactly make a whole. Like that book by psychologist Mihaly Csikszentmihalyi, which is a Hungarian name, btw. Like my academic writing, which, uncoupled from the strictures of British university term, flows.
Budapest has changed, but the old and the new overlay in ways that make it impossible not to remember. Like the ‘twin’ cities of Besźel and Ul Qoma in the fictional universe of China Miéville’s The City and the City (the universe was, of course, modelled on Berlin, but Besźel is Budapest out and out, save for the sea), the memory and its present overlap in distinct patterns that we are trained not to see. Being in one precludes being in the other. But there are rumours of a third city, Orciny, one that predates both. Believing in Orciny is considered a crime, though. There cannot be a place where the past and the future are equally within touching distance. Right?
CEU, granted, is no longer there as an institution; though the building (and the library) remains, most of its services, students, and staff are now in Vienna. I don’t even dare go into the campus; the last time I was there, in 2017, I gave a keynote about how universities mediate disagreement. The green coffee shop with the perennially grim-faced person behind the counter, the one where we went to get good coffee before Espresso Embassy opened, is no longer there. But Espresso Embassy still stands; bigger. Now, of course, there are places to get good coffee everywhere: Budapest is literally overrun by them. The best I pick up is from the Australian coffee shop, which predates my move. Their shop front celebrates their 10th anniversary. Soon, it will be 10 years since I left Budapest.
Home: the word used to fill me with dread. “When are you going home?”, they would ask in Denmark, perhaps to signify the expectation that I would be going to Belgrade for the winter break, perhaps to reflect the idea that all immigrants are, fundamentally, guests. “I live here”, I used to respond. “This is my home”. On bad days, I’d add some of the combo of information I used to point out just how far from the assumed identities I was: I don’t celebrate Christmas (I’m atheist, for census purposes); if I did, it would be on a different date (Orthodox Christian holidays in Serbia observe the Julian calendar, which is 13 days behind the Gregorian); thanks, I’ll be going to India (I did, in fact, go to India including over the Christmas holidays the first year I lived in Denmark, though not exactly in order to spite everyone). But above and beyond all this, there was a simpler, flatter line: home is not where you return, it’s the place you never left.
In Always Coming Home, another SF novel about finding the places we (n)ever left, Ursula Le Guin retraces a past from the point of view of a speculative future. This future is one in which the world – in fact, multiple worlds – have failed. Like Eastern Europe, it is a sequence of apocalypses whose relationship can only be discovered through a combination of anthropology and archaeology, but one that knows space and its materiality exist only as we have already left them behind; we cannot dig forwards, as it were.
Am I doing the same, now? Am I coming home to find out why I have left? Or did I return from the future to find out I have, in fact, never left?
Towards the end of The City and the City, the main character, Tyador Borlú, gets apprehended by the secret police monitoring – and punishing – instances of trespass (Breach) between two cities, the two worlds. But then he is taken out by one of the Breach – Ashil – and led through the city in a way that allows him to finally see them not as distinct, but as parts of a whole.
Everything I had been unseeing now jostled into sudden close-up. Sound and smell came in: the calls of Besźel; the ringing of its clocktowers; the clattering and old metal percussion of the trams; the chimney smell; the old smells; they came in a tide with the spice and Illitan yells of Ul Qoma, the clatter of a militsya copter, the gunning of German cars. The colours of Ul Qoma light and plastic window displays no longer effaced the ochres and stone of its neighbour, my home.
‘Where are you?’ Ashil said. He spoke so only I could hear. ‘I . . .’
‘Are you in Besźel or Ul Qoma?’
‘. . . Neither. I’m in Breach.’ ‘You’re with me here.’
We moved through a crosshatched morning crowd. ‘In Breach. No one knows if they’re seeing you or unseeing you. Don’t creep. You’re not in neither: you’re in both.’
He tapped my chest. ‘Breathe.’
(Loc. 3944)
Breathe.
*Maybe this is where the tendency not to be overtly impressed by the authority of men comes from (or authority in general, given my father was a professor of sociology and I was, at that point, nine years old, and also right).
** Which I also do not benefit from, as I do not eat meat.
*** Some years later, I will understand that this is why the opening lines of the Alexandria Quartet always resonated so much.
**** How I ended up doing a second PhD at Cambridge after all and relocating to England permanently is a different story, one that I part-told here.
*This is a more-or-less unedited text of the plenary (keynote) address to the international conference ‘Anthropology of the future/The Future of Anthropology‘, hosted by the Institute of Ethnography of the Serbian Academy of Sciences and Arts, in Viminacium, 8-9 September 2022. If citing, please refer to as Bacevic, J. [Title]. Keynote address, [Conference].
Hi all. It’s odd to be addressing you at a conference entitled ‘Anthropology of the Future/The Future of Anthropology’, as I feel like an outsider for several reasons. Most notably, I am not an anthropologist. This is despite the fact that I have a PhD in anthropology, from the University of Belgrade, awarded in 2008. What I mean is that I do not identify as an anthropologist, I do not work in a department or institute of anthropology, nor do I publish in anthropology journals. In fact, I went so far in the opposite direction that I got another PhD, in sociology, from the University of Cambridge. I work in a department of sociology at Durham University, a university in the north-east of England that looks remarkably like Oxford and Cambridge. So I am an outsider in two senses: I am not an anthropologist, and I no longer live, reside, or work in Serbia. However, between 2004 and 2007 I taught at the Department of Ethnology and Anthropology of the University of Belgrade, and also briefly worked at the Institute that is organizing this very conference, as part of a research stipend awarded by the Serbian Ministry of Science to young, promising scientific talent. Between 2005 and 2007, and then again briefly in 2008-9, I was the Programme Leader for Anthropology at Petnica Science Centre. I don’t think it would be too much of an exaggeration to say that I was, once, anthropology’s future; and anthropology was mine. So what happened since?
By undertaking a retelling of a disciplinary transition – what would in common parlance be dubbed ‘career change’ or ‘reorientation’ – my intention is not to engage in autoethnography, but to offer a reparative reading. I borrow the concept of reparative reading from the late theorist Eve Kosofsky Sedgwick’s essay entitled “On paranoid reading and reparative reading, or: You’re so paranoid, you probably think this essay is about you”, first published in 1997 and then, with edits, in 2003; I will say more about its content and key concepts shortly.
For the time being, however, I would like to note that the disinclination towards autoethnography was one of the major reasons why I left anthropology; it was matched by the desire to do theory, by which I mean the possibility of deriving mid-range generalizations about human behaviour that could aspire not to be merely local – that is, not to apply only to the cases studied. This, as we know, is not particularly popular in anthropology. This particular brand of ethnographic realism was explicitly targeted for critique during anthropology’s postmodern turn. On the other hand, Theory in anthropology itself had relatively little to commend it, all too easily and too often developing into a totalizing master-narrative of early evolutionism or, for that matter, of its late 20th- and early 21st-century correlates, including what is usually referred to as cognitive psychology, a ‘refresh’ of evolutionary theory I had the opportunity to encounter during my fellowship at the University of Oxford (2007-8). So there were, certainly, a few reasons to be suspicious of theory in anthropology.
For someone theoretically inclined, then, one option was to flee into another discipline. Doing a PhD in philosophy in the UK is a path only open to people who have undergraduate degrees in philosophy (and I, despite a significant proportion of my undergrad coursework going into philosophy, did not), which is why a lot of the most interesting work in philosophy in the UK happens – or at least used to happen – in other departments, including literature and language studies, the Classics, gender studies, or social sciences like sociology and geography. I chose to work with those theorists who had found their institutional homes in sociology; I found a mentor at the University of Cambridge, and the rest is history (by which I mean I went on to a postdoctoral research fellowship at Cambridge and then on to a permanent position at Durham).
Or that, at any rate, is one story. Another story would tell you that I got my PhD in 2008, the year when the economic crisis hit, and job markets collapsed alongside several other markets. On a slightly precarious footing, freshly back from Oxford, I decided to start doing policy research and advising in an area I had been researching before: education policies, in particular as part of processes of negotiation of multiple political identities and reconciliation in post-conflict societies. Something that had hitherto been a passion, politics, soon became a bona fide object of scholarly interest, so I spent the subsequent few years developing a dual career, eventually a rather high-profile one, as, on the one hand, policy advisor in the area of postconflict higher education, and, on the other, visiting (adjunct) lecturer at the Central European University in Budapest, after doing a brief research fellowship in its institute of advanced study. But because I was not educated as a political scientist – I did not, in other words, have a degree in political science; anthropology was closer to ‘humanities’ and my research was too ‘qualitative’ (this is despite the fact I taught myself basic statistics as well as relatively advanced data analysis) – I could not aspire to a permanent job there. So I started looking for routes out, eventually securing a postdoc position (a rather prestigious Marie Curie, and a tenure-track one) in Denmark.
I did not like Denmark very much, and my boss in this job – otherwise one of the foremost critics of the rise of audit culture in higher education – turned out to be a bully, so I spent most of my time in my two fieldwork destinations, the University of Bristol, UK, and the University of Auckland, New Zealand. I left after two years, taking up an offer of a funded PhD at Cambridge I had previously turned down. Another story would tell you that I was disappointed with the level of corruption and nepotism in Serbian academia and so decided to leave. Another, with disturbing frequency attached to women scholars, would tell you that, being involved in an international relationship, I naturally sought to move somewhere I could settle down with my partner, even if that meant abandoning the tenured position I had at Singidunum University in Serbia (this reading is, by the way, so prominent and so unquestioned that after I announced I had got the Marie Curie postdoc and would be moving to Denmark, several people commented “Oh, that makes sense, isn’t your partner from somewhere out there” – despite the fact my partner was Dutch).
Yet another story, of course, would join the precarity narrative with the migration/exile and decoloniality narrative, stipulating that as someone who was aspiring to do theory I (naturally) had to move to the (former) colonial centre, given that theory is, as we know, produced in the ‘centre’ whereas countries of the (semi)periphery are only ever tasked with providing ‘examples’, ‘case-‘, or, at best, regional or area studies. And so on and so on, as one of the few people who have managed to trade their regional academic capital for a global (read: Global North/-driven and -defined) one, Slavoj Žižek, would say.
The point here is not to engage in a demonstration of multifocality by showing that all these stories could be, and in a certain register are, true. It is also not to point out that any personal life-story or institutional trajectory can be viewed from multiple (possibly mutually irreconcilable) registers, and that we pick a narrative depending on occasion, location, and collocutor. Sociologists have produced a thorough analysis of how CVs, ‘career paths’ or trajectories in academia are narratively constructed so as to establish a relatively seamless sequence that adheres to, but also, obviously, by virtue of doing so, reproduces ideas and concepts of ‘success’ (and failure; see also ‘CV of failures‘). Rather, it is to observe something interesting: all these stories, no matter how multifocal or multivocal, also posit master narratives of social forces – forces like neoliberalism, or precarity, for instance; and a master narrative of human motivation – why people do the things they do, and what they desire: things like permanent jobs and high incomes, for instance. They read a direction, and a directionality, into human lives; even if – or, perhaps, especially when – they narrate instances of ‘interruption’, ‘failure’, or inconsistency.
This kind of reading is what Eve Kosofsky Sedgwick dubs paranoid reading. Associated with what Paul Ricoeur termed the ‘hermeneutics of suspicion’ in Nietzsche, Marx, and Freud, and building on the affect theories of Melanie Klein and Silvan Tomkins, paranoid reading is a tendency that has arguably become synonymous with critique, or critical theory in general: to assume that there is always a ‘behind’, an explanatory/motivational hinterland that, if only unmasked, can not only provide a compelling explanation for the past, but also an efficient strategy for orienting towards the future. Paranoid reading, for instance, characterizes a lot of the critique in and of anthropology, not least of the Writing Culture school, including in the ways the discipline deals with the legacy of its colonial past.
To me, it seems like anthropology in Serbia today is primarily oriented towards a paranoid reading, both in relation to its present (and future) and in relation to its past. This reading of the atmosphere – one of increasing instability and hostility, of the feeling of being ‘under attack’ not only by governments’ neoliberal policies but also by increasingly conservative and reactionary social forces that see any discipline with an openly progressive, egalitarian and inclusive political agenda as leftie woke Satanism, or something – is something it shares with much of the social sciences and humanities internationally. This paranoia, however, is not limited only to those agents or social forces clearly inimical or oppositional to its own project; it extends, sometimes, to proximate and cognate disciplines and forms of life, including sociology, and to different factions or theoretical schools within anthropology, even those that should be programmatically opposed to paranoid styles of inquiry, such as the phenomenological or ontological turn – as witnessed, for instance, by the relatively recent debate between the late David Graeber and Eduardo Viveiros de Castro on ontological alterity.
Of course, in the twenty-five years that have passed since the first edition of Sedgwick’s essay, many species of theory that explicitly diverge from the paranoid style of critique have evolved, not least the ‘postcritical’ turn. But, curiously, when it comes to understanding the conditions of our own existence – that is, the conditions of our own knowledge production – we revert to paranoid readings of not only the social, cultural, and political context, but also of people’s motivations and trajectories. As I argued elsewhere, this analytical gesture reinscribes its own authority by theoretically disavowing it. To paraphrase the title of Sedgwick’s essay, we’re so anti-theoretical that we’re failing to theorize our own inability to stop aspiring to the position of power we believe our discipline, or our predecessors, once occupied, the same power we believe is responsible for our present travails. In other words, we are failing to theorize ambiguity.
My point here is not to chastise anthropology in particular or critical theory in more general terms for failing to live up to political implications of its own ontological commitments (or the other way round?); I have explained at length elsewhere – notably in “Knowing neoliberalism” – why I think this is an impossibility (to summarize, it has to do with the inability to undo the conditions of our own knowledge – to, barely metaphorically, cut our own epistemological branch). Rather, my question is what we could learn if we tried to think of the history and thus future of anthropology, and our position in it, from a reparative, rather than paranoid, position.
This, in itself, is a fraught process; not least because anthropology (including in Serbia) has not been exempt from revelations concerning sexual harassment, and it would not be surprising if many more are yet to come. In the context of re-encounter with past trauma and violence, not least the violence of sexual harassment, it is nothing if not natural to re-examine every bit of the past, but also to endlessly, tirelessly scrutinize the present: was I there? Did I do something? Could I have done something? What if what I did made things worse? From this perspective, it is fully justified to ask what it could, possibly, mean to turn towards a reparative reading – can it even, ever, be justified?
Sedgwick – perhaps not surprisingly – has relatively little to say about what reparative reading entails. From my point of view, reparative reading is the kind of reading that is oriented towards reconstructing the past in a way that does not seek to avoid, erase or deny past traumas, but engages with the narrative so as to afford a care of the self and connection – or reconnection – with past selves, including those that made mistakes or have a lot to answer for. It is, in essence, a profoundly different orientation towards the past as well as the future, one that refuses to reproduce cultures – even if cultures of critique – and to claim that the future will, in some ways, be exactly like the past.
Sedgwick aligns this reorientation with queer temporalities, characterized by a relationship to time that refuses to see it in (usually heteronormatively-coded) generationally reproductive terms: my father’s father did this, who in turn passed it to my father, who passed it to me, just like I will pass it to my children. Or, to frame this in more precisely academic terms: my supervisor(s) did this, so I will do it [in order to become successful/recognized like my academic predecessors], and I will teach my students/successors to do it. Understanding that it can be otherwise, and that we can practise other, including non-generational (non-generative?) and non-reproductive politics of knowledge/academic filiation/intellectual friendship is, I think, one important step in making the discussion about the future, including of scientific discipline, anything other than a vague gesturing towards its ever-receding glorious past.
Of course, as a straight and, in most contexts, cis-passing woman, I am a bit reluctant to claim the label of queerness, especially when speaking in Serbia, an intensely and increasingly institutionally homophobic and compulsorily heterosexual society. However, I hope my queer friends, partners, and colleagues will forgive me for borrowing queerness as a term to signify refusal to embody or conform to diagnostic narratives (neoliberalism, precarity, [post]socialism); refusal or disinvestment from normatively and regulatively prescribed vocabularies of motivation and objects of desire – a permanent (tenured) academic position; a stable and growing income; a permanent relationship culminating in children and a house with a garden (I have a house, but I live alone and it does not have a garden). And, of course, the ultimate betrayal for anyone who has come from “here” and ‘made it’ “over there”: the refusal to perform the role of an academic migrant in a way that would allow to once and for all settle the question of whether everything is better ‘over there’ or ‘here’, and thus vindicate the omnipresent reflexive chauvinism (‘corrupt West’) or, alternatively, autochauvinism (‘corrupt Serbia’).
What I hope to have achieved instead, through this refusal, is to offer a postdisciplinary or at least undisciplined narrative and an example of how to extract sustenance from cultures inimical to your lifeplans or intellectual projects. To quote from Sedgwick:
“The vocabulary for articulating any reader’s reparative motive toward a text or a culture has long been so sappy, aestheticizing, defensive, anti-intellectual, or reactionary that it’s no wonder few critics are willing to describe their acquaintance with such motives. The prohibitive problem, however, has been in the limitations of present theoretical vocabularies rather than in the reparative motive itself. No less acute than a paranoid position, no less realistic, no less attached to a project of survival, and neither less nor more delusional or fantasmatic, the reparative reading position undertakes a different range of affects, ambitions, and risks. What we can best learn from such practices are, perhaps, the many ways selves and communities succeed in extracting sustenance from the objects of a culture—even of a culture whose avowed desire has often been not to sustain them.”
All of the cultures I’ve inhabited have been this to some extent – Serbia for its patriarchy, male-dominated public sphere, or excessive gregarious socialisation, something that sits very uncomfortably with my introversion; England for its horrid anti-immigrant attitude, only marginally (and not always profitably) mediated by my ostensible ’Whiteness’; Denmark for its oppressive conformism; Hungary, where I was admittedly happiest among the plethora of other English-speaking cosmopolitan academics, for failing to provide the institutional home I required (eventually, as is well known, not even for CEU itself). But, in a different way, they have also been incredibly sustaining; I love my friends, many of whom are academic friends (former colleagues) in Serbia; I love the Danish egalitarianism and absolute refusal of excess; and I love England for many things: in no particular order, the most exciting intellectual journey, some great friendships (many of those, I do feel the need to add, with other immigrants), and the most beautiful landscapes, especially in the North-East, where I live now (I also particularly loved New Zealand, but hope to expand on that on a different occasion).
To theorize from a reparative position is to understand that all of these things could be true at the same time. That there is, in other words, no pleasure without pain, that the things that sustain us will, in most cases, also harm us. It is to understand that there is no complete career trajectory, just like there is no position, epistemic or otherwise, from which we could safely and for once answer the question of what the future will be like. It is to refuse to pre-emptively know the future, not least so that we could be surprised.
I’ve finished reading Sally Rooney’s most recent novel, Beautiful World, Where Are You. It turned out to be much better than I expected – I was an early adopter of Conversations with Friends (‘read it – and loved it – before it was cool’), but subsequently found Normal People quite flat – by which I mean I spent most of the first half struggling, but found the very last bits actually quite good. In an intervening visit to The Bound, I also picked up one of Rooney’s short stories, Mr Salary, and read it on the metro back from Whitley Bay.
I became intrigued by the ‘good boy’ characters of both – Simon in Beautiful World, Nathan in Mr Salary. For context (and hopefully without too many spoilers), Simon is the childhood friend-cum-paramour of Eileen, who is the best friend of Alice (BW’s narrator, and Rooney’s likely alter ego); Nathan, the titular character of Mr Salary, is clearly a character study for Simon, and in a similar – avuncular – relationship to the story’s narrator. Both Simon and Nathan are older than their (potential) girlfriends by a sufficient margin to make the relationship illegal or at least slightly predatory when they first meet, but also to hold it out as a realistic and thus increasingly tantalizing promise once the women have grown up a bit. But neither man is a predatory creep; in fact, exactly the opposite. They are kind, understanding, unfailingly supportive, and forever willing to come back to their volatile, indecisive, self-doubting, and often plainly unreliable women.
Who are these fantastic men? Here is an almost perfect reversal of the traditional romance portrayal of gender roles – instead of unreliable, egotistic, unsure-about-their-own-feelings-and-how-to-demonstrate-them guys, we are getting more-or-less the same, but with girls, with the men providing a reliable safe haven from which they can weather their emotional, professional, and sexual storms. This, of course, is not to deny that women can be as indecisive and as fickle as the stereotypical ‘Bad Boys’ of toxic romance; it’s to wonder what this kind of role reversal – even in fantasy, or the para-fantasy para-ethnography that is contemporary literature – does.
On the one hand, men like Simon and Nathan may seem like a godsend to anyone who has ever gone through the cycle of emotional exhaustion connected to relationships with people who are, purely, assholes. (I’ve been exceptionally lucky in this regard, insofar as my encounters with the latter kind were blissfully few; but sufficient to be able to confirm that this kind does, indeed, exist in the wild). I mean, who would not want a man who is reliable, supportive of your professional ambitions, patient, organized, good in bed, and does laundry (yours included)? Someone who could withstand your emotional rollercoasters *and* buy you a ticket home when you needed it – and be there waiting for you? Almost like a personal assistant, just with the emotions involved.
And here, precisely, is the rub. For what these men provide is not a model of a partnership; it’s a model of a parent. The way they relate to the women characters – and, obviously, the narrative device of age difference amplifies this – is less that of a partner and more that of a benevolent older brother or, in an (only slight) paraphrase of Winnicott, a good-enough father.
In Daddy Issues, Katherine Angel argues that feminism never engaged fully with the figure of the father – other than as the absent, distant or mildly (or not so mildly) violent and abusive figure. But somewhere outside the axis between Sylvia Plath’s Daddy and Valerie Solanas’ SCUM manifesto is the need to define exactly what the role of the father is once it is removed from its dual shell of object of hate/object of love. Is there, in fact, a role at all?
I have been thinking about this a lot, not only in relation to the intellectual (and political) problem of relationality in theory/knowledge production practices – what Sara Ahmed so poignantly summarized as ‘can one not be in relation to white men?’ – but also personally. Having grown up effectively without a father (who was also unknown to me in my early childhood), what, exactly, was the Freudian triangle going to be in my case? (No, this does not mean I believe the Electra complex applies literally; if you’re looking to mansplain psychoanalytic theory, I’d strongly urge you to reconsider, given I read Freud at the age of 13 and have read post-Freudians since; I’d also urge you to read the following paragraph and consider how it relates to the legacies of the Anna Freud/Melanie Klein divide, something Adam Phillips writes about.)
In the domain of theory, claims of originality (or originarity, as in coining or discovering something) are nearly always attributed to men, while women’s contributions are almost unfailingly framed in terms of ‘application or elaboration of *his* ideas’ or ‘[minor] contribution to the study’ (I’ve written about this in the cases of Sartre/de Beauvoir and of Robert Merton and Harriet Zuckerman’s ‘Matthew effect’, but other examples abound). As Marilyn Frye points out in “Politics of reality”, the force of genealogy does not necessarily diminish even for those whose criticism of patriarchy extends to refusing anything to do with men altogether; Frye recalls observing many a lesbian separatist still asking to be recognized – intellectually and academically – by the white men ‘forefathers’ who sit on academic panels. The shadow of the father is a long one. For those of us who have chosen to be romantically involved with men, and have chosen to work in the patriarchal, misogynistic institutions that universities surely are, not relating to men at all is not exactly an option.
It is from this perspective that I think we’d benefit from a discussion on how men can be reliable partners without turning into good-enough daddies, because – as welcome and as necessary as this role sometimes is, especially for women whose own fathers were not – it is ultimately not a relationship between two adults. I remember reading an early feminist critique of the Bridget Jones industry that really hit the nail on the head: the problem was not so much Jones’ dedication to all the things ‘60s and ‘70s feminism abhorred – obsession with weight loss, pursuit of ill-advised men (i.e. Daniel Cleaver); it was even more that when ‘Mr Right’ (Mark Darcy, the barely disguised equivalent of Austen’s Mr Darcy) arrives, he still falls for Bridget – despite the utter absence of anything, from elementary competence at her job to the capacity to feed herself in any form that departs from binge eating, to recommend her to a seemingly top-notch human rights attorney. Which really raises the question: what is Mr Darcy seeing in Bridget?
Don’t get me wrong: I am sure that there are men who are attracted to the chaotic, manic-pixie-who-keeps-losing-her-credit-card kind of girl. Regardless of what manifestation or point on the irresponsibility spectrum these women occupy, they certainly play a role for such men – allowing them to feel useful, powerful, respected, perhaps even feeding their saviour complex a bit. But ultimately, playing this role leaves these men entirely outside of the relationship; if the only way they relate to their partners is by reacting (to their moods, their needs, their lives), this ultimately absolves them of equal responsibility for the relationship. Sadly, there is a way to avoid an equal division of the ‘mental load’ even while doing the dishes.
And I am sure this does something for the women in question too; after all, there is nothing wrong in knowing that there *is* going to be someone to pick you up if you go out and there are no taxis to get you back home, someone who will always provide a listening ear and a shoulder to cry on, seemingly completely irrespective of their own needs (Simon is supposed to have a relatively high-profile political job, yet, interestingly, never feels tired when Eileen calls or offers to come over). But what at first seems like a fantasy come true – a reliable man who is not afraid to show his love and admiration – can quickly turn into a somewhat toxic set of interdependencies: why, for instance, learn to drive if someone is always there to pick you up and drop you off? (Honestly: even among the supposedly-super-egalitarian straight partnerships I know, the number of men who drive vastly outstrips that of women.) The point is not to always insist on being a jack-of-all-trades (nor on being the designated driver), as much as to realize that most kinds of freedom (for instance, the freedom to drink when out) embed a whole set of dependencies (for instance, dependence on urban networks of taxis/Ubers, or on kind, self-effacing men-saviours there to pick you up – in the Cars’ slightly creepy formulation, drive you home).
Of course, as Simone de Beauvoir recognized, there is no freedom without dependency. We cannot, simply, will ourselves free without willing the same for others; but, at the same time, we cannot will them to be free, as this turns them into objects. In Ethics of Ambiguity – one of the finest books of existentialist philosophy – de Beauvoir turns this into the main conundrum (thus: source of ambiguity) for how to act ethically. Acknowledging our fundamental reliance on others does not mean we need to remain locked into the same set of interdependencies (e.g., we could build safe and reliable public transport and then we would not have to rely on people to drive us home?), but it also does not mean we need to kick out of them by denying or reversing their force – not least because it, ultimately, does not work.
The idea that gender equality, especially in heterosexual partnerships, benefits from reversing the trope of the uncommitted, eternally unreliable bachelor in a way that tips the balance in the entirely opposite direction (other than for very short periods of time, of course) strikes me as one of the manifestations of the long tail of post- or anti-feminist backlash – admittedly, a mild and certainly less harmful one than, for instance, the idea that feminism means ‘women are better than men’ or that feminists seek to eliminate men from politics, work, or anything else (both, worryingly, have filtered into public discourse). It also strikes me that the long-suffering Sacrificial Men who have politely taken shit from their objects of affection can all too easily be converted into Men’s Rights Activists or incels if and when their long suffering fails to yield results – for instance, when their Manic Pixie leaves with someone with a spine (not a Bad Boy, just a man with boundaries) – or when they realize that the person they have been playing Good Daddy to has finally grown up and left home.
I saw ‘A Night of Knowing Nothing’ tonight, probably the best film I’ve seen this year (alongside The Wheel of Fortune and Fantasy, but they’re completely different genres – I could say ‘A Night of Knowing Nothing is the best political film I saw this year’, but that would take us down the annoying path of ‘what is political’). There was only one other person in the cinema; this may be a depressing reflection of the local audiences’ autofocus (though this autofocus, at least in my experience, did tend to encompass corners of the former Empire), but given my standard response to the lovely people at Tyneside‘s ‘Where would you like to sit?’ – ‘Close to the aisle, as far away from other people’ – I couldn’t complain.
The film is part documentary, part fiction, told from the perspective of an anonymous woman student (who goes by ‘L.’) whose letters document the period of student strikes at the Film and Television Institute of India (FTII), but also, more broadly, the relationship between the ascendance of Modi’s regime and student protests at Jawaharlal Nehru University (JNU) in New Delhi in 2016, as well as related events – including violent attacks by masked mobs on JNU and arrests at Aligarh Muslim University in 2020*.
Where the (scant) reviews are correct is that the film is also about religion, caste, and the (both ‘slow’ and rapid) violence unleashed by supporters of the nationalist (‘Hindutva’) project of the Bharatiya Janata Party (BJP) and its student wing, the Akhil Bharatiya Vidyarthi Parishad (ABVP).
What they don’t mention, however, is that it is also about student (and campus) politics, solidarity, and what to do when your right to protest is literally being crushed (one particularly harrowing scene – at least to anyone who has experienced police violence – consists of CCTV footage of what appear to be uniformed men breaking into the premises of one of the universities and then randomly beating students trying to escape through a small door; according to reports, policemen were on site but did nothing). Many of the names mentioned in the film – both through documentary footage and L’s letters – will end up in prison, some possibly tortured (one of L’s interlocutors says he does not want to talk about it for fear of dissuading other students from protest); one will commit suicide. Yet throughout all of this, what the footage shows are nights of dancing; impassioned speeches; banners and placards that call out the neo-nationalist government and its complicity not only with violence but also with perpetuating poverty, casteism, and Islamophobia. And solidarity, solidarity, solidarity.
This is the message that transpires most clearly throughout the film. The students have managed to connect two things: the role of education in perpetuating class/caste divisions – dismissiveness and abuse towards Dalit students, increases in tuition meant to exclude those whose student bursaries support their families too – and the strengthening of nationalism, or neo-nationalism. That the right-wing rearguard rules through stoking envy and resentment towards the ‘undeserving’ poor (e.g. ‘welfare scroungers’) is not new; that it can use higher education, including initiatives aimed at widening participation, to do this, is. In this sense, Modi’s supporters’ strategy seems to be to co-opt the contempt for ‘lazy’ and ‘privileged’ students (particularly those with state bursaries) and turn it into accusations of ‘anti-nationalism’, which is equated with being critical of any governmental policy that deepens existing social inequalities.
It wouldn’t be very anthropological to draw easy parallels with the UK government’s war on Critical Race Theory, which equally tends to locate racism in attempts to call it out, rather than in the institutions – and policies – that perpetuate it; but the analogy almost presents itself. Where the analogy fails, most obviously, is that students – and academics – in the UK still (but only just) have a broader scope for protest than their Indian counterparts. Of course, the new Bill on Freedom of Speech (Academic Freedom) proposes to eliminate some of that, too. But until it does, it makes sense to remember that rights that are not exercised tend to get lost.
Finally, what struck me about A Night of Knowing Nothing is the remarkable show of solidarity not only from workers, actors, and just (‘normal’) people, but also from students across campuses (it bears remembering that in India these are often universities in different states and thousands of miles away from each other). This was particularly salient in relation to the increasingly localized nature of fights for both pensions and ‘Four Fights’ of union members in UK higher education. Of course, union laws make it mandatory that there is both a local and a national mandate for strike action, and it is true that we express solidarity when cuts are threatened to colleagues in the sector (e.g. Goldsmiths, or Leicester a bit before that). But what I think we do not realize is that that is, eventually, going to happen everywhere – there is no university, no job, and no senior position safe enough. The night of knowing nothing has lasted for too long; it is, perhaps, time to stop pretending.
Btw, if you happen to live in Toon, the film is showing tomorrow (4 May) and on a few other days. Or catch it in your local – you won’t regret it.
*If you’re wondering why you haven’t heard of these, my guess is they were obscured by the pandemic; I say this as someone who both has friends from India and has been following Indian HE quite closely between 2013 and 2016 (though somewhat less since), and I still *barely* recall reading/hearing about any of these.
I’m reading Christine Korsgaard’s ‘Self-Constitution: Agency, Identity, and Integrity‘ (2009) – I’ve found myself increasingly drawn recently to questions of normative political philosophy or ‘ideal theory’, which I’ve previously tended to analytically eschew, I presume as part-pluralism, part-anthropological reflex.
In chapter 2 (‘The Metaphysics of Normativity’), Korsgaard engages with Aristotle’s analysis of objects as an outcome of organizing principles. For instance, what makes a house a house rather than just a ‘heap of stones and mortar and bricks’ is its function of keeping out the weather, and this is also how we should judge the house – a ‘good’ house is one that fulfils this function, a bad house is one that does not, or at least not very well.
This argument, of course, is a well-known one and endlessly discussed in social ontology (at least among the Cambridge Social Ontology crowd, which I still visit). But Korsgaard emphasizes something that has previously completely escaped my attention, which is the implicit argument about the relationship between normativity and knowledge:

Now, it is entirely true that ‘seeing what things do’ is a pretty neat description of my work as a theorist. But there is an equally important one, which is seeing what things can or could do. This means looking at ‘things’ – in my case, usually concepts – and understanding what using them can do, that is, looking at them relationally. (I’m parking the discussion about privileging the visual/observer approach to theory for the time being, as it is both a well-known criticism in e.g. feminist and Indigenous philosophy *and* something other people have written about much better than I ever could.) You are not the same person looking at one kind of social object and another, nor, importantly, is it the same social object ‘unproblematically’ (meaning that yes, it is possible to reach consensus about social objects – e.g. what is a university, or a man, or a woman, or fascism – but it is not possible to reach it without disagreement, the only difference being whether that disagreement is open or suppressed). I’m also parking the discussion about observer effects, indefinitely: if you’re interested in how that theoretical argument looks without butchering theoretical physics, I’ve written about it here.
This also makes the normative element of the argument more difficult, as it requires delving not only into the ‘satisficing’ or ‘fitness’ analysis (a good house is a house that does the job of being a house), but also into the analysis of performative effects (is a good house a house that does its job in a way that eventually turns ‘houseness’ into something bad?). To note, this is distinct from other issues Korsgaard recognizes – e.g. that a house constructed in a place that obscures the neighbours’ view is bad, but not a bad house, as its ‘badness’ is not derived from its being a house, but from its position in space (the ‘where’, not the ‘what’). This analysis may – and I emphasize may – be sufficient for discrete (and Western) ontologies, where it is entirely conceivable for the same house to be positioned somewhere else and thus remain a good house, while no longer being ‘bad’ for the neighbourhood as a whole. But it clearly encounters problems with any kind of relational, environment-based, or contextual ontology (a house is not a house only by virtue of being sufficient to keep out the elements for its inhabitants, but also – and possibly more importantly – by being positioned in a community, and a community that is ‘poisoned’ by a house that blocks everyone’s view is not a good community for houses).
In this sense, it makes sense to ask when what an object does turns into badness for the object itself. That is: what would it mean for a ‘good’ house to be, at the same time, a bad house? Plot spoiler: I believe this is likely true for all social objects. (I’ve written about ambiguity here and also here.) The task of the (social) theorist – what, I think, makes my work social, both in the sense of applying to the domain of interaction between multiple human beings and in the sense of having relevance to someone beyond me – is to figure out what kinds of contexts make one more likely than the other. Under what conditions do mostly good things (like, for instance, academic freedom) become mostly bad things (like, for instance, a form of exclusion)?
I’ve been thinking about this a lot in relation to what constitutes ‘bad’ scholarship (and, I guess, by extension, a bad scholar). Having had the dubious pleasure of encountering people who teach different combinations of neocolonial, right-wing, and anti-feminist ‘scholarship’ over the past couple of years (England, and especially the place where I work, is a trove of surprises in this sense), it strikes me that the key question is under what conditions this kind of work – which universities tend to ignore because it ‘passes’ as scholarship and gives them the veneer of presenting ‘both sides’ – turns the whole idea of scholarship into little more than competition for followers on either of the ‘sides’. This brings me to the question which, I think, should be the source of normativity for academic speech, if anything: when is ‘two-sideism’ destructive to knowledge production as a whole?
This is what Korsgaard says:

During the last #USSstrike, on non-picketing days, I practiced working to contract. Working to contract is part of the broader strategy known as ASOS – action short of a strike – and it means fulfilling your contractual obligations, but no more than that. Together with many other UCU members, I will be moving to ASOS from Thursday. But how does one actually practice ASOS in neoliberal academia?
I am currently paid to work 2.5 days a week. Normally, I am in the office on Thursdays and Fridays, and sometimes half a Monday or Tuesday. The rest of the time, I write and plan my own research, supervise (that’s Cambridgish for ‘teaching’), or attend seminars and reading groups. Last year, I was mostly writing my dissertation; this year, I am mostly filling out research grant and job applications in a panic, for fear of being without a position when my contract ends in August.
Yet I am also, obviously, not ‘working’ only when I do these things. Books that I read are, more often than not, related to what I am writing, teaching, or just thinking about. Often, I will read ‘theory’ books at all times of day (a former partner once raised the issue of the excess of Marx on the bedside table), but the same can apply to science fiction (or any fiction, for that matter). Films I watch will make it into courses. Even time spent on Twitter occasionally yields important insights, including links to articles, events, or just the generic mood of a certain category of people.
I am hardly exceptional in this sense. Most academics work much more than their contracted hours. Estimates vary from 45 to as much as 100 hours/week; regardless of what counts as a ‘realistic’ assessment, the majority of academics report not being able to finish their expected workload within a 37.5-40hr working week. Working on weekends is ‘industry standard’; there is even a dangerous ethic of overwork. Yet increasingly, academics have begun to unite around the unsustainability of a system in which we are increasingly overwhelmed, underpaid, and with mental and other health issues on the rise. This is why rising workloads are one of the key elements of the current wave of UCU strikes. It has also led to the coining of a parallel hashtag: #ExhaustionRebellion. It seems like the culture is slowly beginning to shift.
From Thursday onwards, I will be on ASOS. I look forward to it: being precarious makes not working sometimes almost as exhausting as working. Yet, the problem with the ethic of overwork is not only that it is unsustainable, or that it is directly harmful to the health and well-being of individuals, institutions, and the environment. It is also that it is remarkably resilient: and it is resilient precisely because it relies on some of the things academics value the most.
Marx’s theory of value* tells us that the origins of exploitation in industrial capitalism lie in the fact that workers do not own the means of production; thus, they are forced to sell their labour. Those who own the means of production, on the other hand, are driven by the need to keep capital flowing, for which they need profit. Thus, they are naturally inclined to pay their workers as little as possible, as long as that is sufficient to actually keep them working. For most universities, a steady supply of newly minted graduate students, coupled with seemingly unpalatable working conditions in most other branches of employment, means they are well positioned to drive wages further down (in the UK, pay has fallen by 17.5% in real terms since 2009).
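To put the same point in the standard shorthand of the labour theory of value (this is the textbook formulation, not anything specific to universities): if v stands for the value of labour power – roughly, wages – and s for the surplus value the employer appropriates, then the rate of exploitation is

\[ e = \frac{s}{v}, \]

and the employer’s structural interest lies in pushing v down (or hours up) so that e rises.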
This, however, is where the usefulness of classical Marxist theory stops. It is immediately obvious that many of the conditions of late 19th-century industrial capitalism no longer apply. To begin with, most academics own the most important means of production: their minds. Of course, many academics use and require relatively expensive equipment, or work in teams where skills are relatively distributed. Yet, even in the most collective of research teams and the most collaborative of labs, the one ingredient that is absolutely necessary is precisely human thought. In the social sciences and humanities, this is even more the case: while a lot of the work we do happens in libraries, or in seminars, or through conversations, ultimately – what we know and do rests within us**.
Neither, for that matter, can academics simply be written off as unwitting victims of ‘false consciousness’. Even if the majority could conceivably have been unaware of the direction or speed of the transformation of the sector in the 1990s or the early 2000s, after last year’s industrial action this is certainly no longer the case. Nor is this true only of those who are disproportionately affected by its dual face of exploitation and precarity: even academics on secure contracts and in senior positions increasingly view changes to the sector as harmful not only to their younger colleagues, but to themselves. If nothing else, what the USS strikes achieved was to help the critique of neoliberalism, marketization and precarity migrate from the pages of left-leaning political periodicals and critical theory seminars into mainstream media discourse. Knowing that the current conditions of knowledge production are exploitative, however, does not necessarily translate into knowing what to do about them.
This is why contemporary academic knowledge production is better characterized as extractive or rentier capitalism. Employers, in most cases, do not own – certainly not exclusively – the means of production of knowledge. What they do instead is provide the setting or platform through which knowledge can be valorized, certified, and exchanged; and charge a hefty rent in the process (this is one part of what tuition fees are about). This ‘platform’ can include anything from degrees to learning spaces; from labs and equipment to email servers and libraries. It can also be adjusted, improved, fitted to suit the interests of users (or consumers – in this case, students); this is what endless investment in buildings is about.
The cunning of extractive capitalism lies in the fact that it does not, in fact, require workers to do very much. You are a resource: in industrial capitalism, your body is a resource; in cognitive capitalism, your mind is a resource too. In extractive capitalism, it gets even better: there is almost nothing you do – not a single aspect of your thoughts, feelings, or actions – that the university cannot turn into profit. Reading Marxist theory on the side? It will make it into your courses. Interested in politics? Your awareness of social inequalities will be reflected in your teaching philosophy. Involved in community action? It will be listed in your online profile under ‘public engagement and impact’. It gets better still: even your critique of extractive, neoliberal conditions of knowledge production can be used to generate value for your employer – just make sure it is published in the appropriate journals, and before the REF deadline.
This is the secret to the remarkable resilience of extractive capitalism. It feeds on exactly what academics love most: on the desire to know more, to explore, to learn. This is, possibly, one of the most basic human needs past the point of food, shelter, and warmth. The fact that the system is designed to make access to all of the latter dependent on being exploited for the former speaks, I think, volumes (it also makes The Matrix look like less of a metaphor and more of an early blueprint, with technology just waiting to catch up). This makes ‘working to contract’ quite tricky: even if you pack up and leave your office at 16.38 on the dot, Monday to Friday, your employer will still be monetizing your labour. You are probably, even if unwittingly, helping them do so.
What, then, are we to do? It would obviously be easy to end with a vague call a las barricadas, conveniently positioned so as to boost one’s political cred. Not infrequently, my own work has been read in this way: as if it ‘reminds academics of the necessity of activism’ or (worse) ‘invites concrete action’ (bleurgh). Nothing could be farther from the truth: I absolutely disagree with the idea that critical analysis somehow magically transmigrates into political action. (In fact, why we are prone to mistaking one for the other is one of the key topics of my work, but this is an ASOS post, so I will not be writing about it.) In other words, what you will do – tomorrow, on (or off?) the picket line, in a bit over a week, in the polling booth, in the next few months, when you are asked to join this or that committee or to review a junior colleague’s tenure/promotion folder – is your problem and yours alone. What this post is about, however, is what to do when you’re on ASOS.
Therefore, I want to propose a collective reclaiming of the life of the mind. Too much of our collective capacity – for thinking, for listening, for learning, for teaching – is currently absorbed by institutions that turn it, willy-nilly, into capital. We need to re-learn to draw boundaries. We need thinking, learning, and caring to become independent of the process that turns them into profit. There are many ways to do this – and many have been tried before: workers’ and cooperative universities; social science centres; summer schools; and, last but not least, our own teach-outs and picket line pedagogy. But even when these are not happening, we need to seriously rethink how we use the one resource that universities cannot replace: our own thoughts.
So from Thursday next week, I am going to be reclaiming my own. I will do the things I usually do – read; research; write; teach and supervise students; plan and attend meetings; analyse data; attend seminars; and so on – until 4.40. After that, however, my mind is mine – and mine alone.
*Rest assured that the students I teach get treated to a much more sophisticated version of the labour theory of value (Soc1), together with variations and critiques of Marxism (Soc2), as well as ontological assumptions of heterodox vs. ‘neoclassical’ economics (Econ8). If you are an academic bro, please resist the urge to try to ‘explain’ any of these as you will both waste my time and not like the result. Meanwhile, I strongly encourage you to read the *academic* work I have published on these questions over the past decade, which you can find under Publications.
**This is one of the reasons why some of the most interesting debates about knowledge production today concern ownership, copyright, or legal access. I do not have time to enter into these debates in this post; for a relatively recent take, see here.
This summer, I spent almost a month in Serbia and Montenegro (yes, these are two different countries, despite the New York Times still refusing to acknowledge this). That is about seven times as long as I would normally stay. The two principal reasons are that my mother, who lives in Belgrade, is ill, and that I was planning to get a bit of time to quietly sit and write my thesis on the Adriatic coast of Montenegro. How the latter turned out in light of the knowledge of the former I leave to the imagination (tl;dr: not well). It did, however, give me ample time to reflect on the post-socialist condition, which I haven’t done in a while, and to get outside Belgrade, to which I normally confine my brief visits.
The way in which the perverse necro/bio-politics of post-socialism obtains in my mother’s illness, in the landscape, and in the socio-material fits almost too perfectly into what has for years been the dominant style of writing about places that used to be behind the Iron Curtain (or, in the case of Yugoslavia, on its borders). Social theory’s favourite ruins – the ruins of socialism – are repeatedly re-valorised through being dusted off and resurrected, as yet another alter-world to provide the mirror image to the here and the now (the here and the now, obviously, being capitalism). During the Cold War, the Left had its alter-image in the Soviet Union; now, the antidote to neoliberalism is provided not through the actual ruins of real socialism – that would be a tad too much to handle – but through the re-invention of the potential of socialism to provide, in the tellingly polysemic title of MoMA’s recently opened exhibition on architecture in Yugoslavia, concrete utopias.
Don’t get me wrong: I would love to see the exhibition, and I am sure that it offers much to learn, especially for those who did not have the dubious privilege of having grown up on both sides of socialism. It’s not the absence of nuance that makes me nauseous in encounters with socialist nostalgia: a lot of it, as a form of cultural production, is made by well-meaning people and, in some cases, incredibly well-researched. It’s that resurrecting hipsterified golems of post-socialism serves little purpose other than to underline their ontological status as a source of comparison for the West, cannon-fodder for imaginaries of the world so bereft of hope that it would rather replay its past dreams than face the potential waking nightmare of its future.
It’s precisely this process that leaves them unable to die, much like the ghosts/apparitions/copies in Lem’s (and Tarkovsky’s) Solaris, and in VanderMeer’s Southern Reach trilogy. In VanderMeer’s books, members of the eleventh expedition (or, rather, their copies) who return to the ‘real world’ after exposure to Area X develop cancer and die pretty quickly. Life in post-socialism is very much this: shadows or copies of former people confusedly going about their daily business, or revisiting the places that once made sense to them, which, sometimes, they have to purchase as repackaged ‘post-socialism’; in this sense, the parable of Roadside Picnic/Stalker as the perennial museum of post-communism is really prophetic.
The necropolitical profile of these parts of former Yugoslavia, in fact, is pretty unexceptional. For years, research has shown that rapid privatisation increases mortality, even when controlling for other factors. Obviously, the state still feigns perfunctory care for the elderly, but healthcare is cumbersome, inefficient and, in most cases, barely palliative. Smoking and heavy drinking are de rigueur: in winter, Belgrade cafés and pubs turn into proper smokehouses. Speaking of that, vegetarianism is still often, if benevolently, ridiculed. Fossil fuel extraction is ubiquitous. According to this report from 2014, Serbia had the second highest rate of premature deaths due to air pollution in Europe. And that’s not even getting close to the Thing That Can’t Be Talked About – the environmental effects of the NATO intervention in 1999.
An apt illustration comes as I travel to Western Serbia to give a talk at the anthropology seminar at Petnica Science Centre, where I used to work between 2000 and 2008. Petnica is a unique institution that developed in the 1980s and 1990s as part science camp, part extracurricular interdisciplinary research institute, where electronics researchers would share tables in the canteen with geologists, and physicists would talk (arguably, not always agreeing) to anthropologists. Founded in part by the Young Researchers of Serbia (then Yugoslavia), a forward-looking environmental exploration and protection group, the place used to flaunt its green credentials. Today, it is funded by the state – and fully branded by the Oil Industry of Serbia. The latter is Serbian only in name, having become a subsidiary of the Russian fossil fuel giant Gazpromneft. What could arguably be dubbed Serbia’s future research elite is thus raised in full acceptance of the ubiquity of fossil fuels – not only as a source of energy but, literally, as what runs the facilities they need in order to work.
These researchers can still consider themselves lucky. The other part of the Serbian economy that actually works consists of the factories, or rather production facilities, of multinational companies. In these companies, workers are given 12-hour shifts, banned from unionising, and, as a series of relatively recent reports revealed, issued with adult diapers so as to render toilet breaks unnecessary.
As Elizabeth Povinelli argued, following Achille Mbembe, geontopower – the production of life and nonlife, and the creation of the distinction between them, including what is allowed to live and what is allowed to die – is the primary mode of exercise of power in late liberalism. A less frequently examined way of sustaining the late liberal order is the production of semi-dependent semi-peripheries. Precisely because they are not the world’s slums, and because they are not former colonies, they receive comparatively little attention. Instead, they are mined for resources (human and inhuman). That the interaction between the two regularly produces outcomes guaranteed to deplete the first is of little relevance. The reserves, unlike those of fossil fuels, are almost endless.
The Serbian government does its share in ensuring that the supply of cheap labour never runs out, by launching endless campaigns to stimulate reproduction. It seems to be working: babies are increasingly the ‘it’ accessory in cafés and bars. Officially, stimulating the birth rate is meant to offset the ‘cost’ of pensions, which the IMF insists should not increase. Unofficially, of course, the easiest way to adjust for this is to make sure pensioners are left behind. Much like the current hype about its legacy, the necropolitics of post-socialism operates primarily through foregrounding its Instagrammable elements, and hiding the ugly, non-productive ones.
Much like in VanderMeer’s Area X, knowledge that the border is advancing could be a mixed blessing: as Danowski and Viveiros de Castro argued in a different context, the end of the world comes more easily to those for whom the world has already ended, more than once. Not unlike what Scranton argued in Learning to Die in the Anthropocene – this, perhaps, rather than sanitised dreams of a utopian future, is one thing worth resurrecting from post-socialism.
[Shortened version of this blog post was published on Times Higher Education blog on 14 March under the title ‘USS strike: picket line debates will reenergise scholarship’].
Until recently, Professor Marenbon writes, university strikes in Cambridge were a hardly noticeable affair. Life, he says, went on as usual. The ongoing industrial action that UCU members are engaging in at the UK’s universities has changed all that. Dons, rarely concerned with the affairs of lesser mortals, seem to be up in arms. They are picketing, almost every day, in the wind and the snow; marching; shouting slogans. For Heaven’s sake, some are even dancing. Cambridge, as pointed out on Twitter, has not seen such upheaval since we considered awarding Derrida an honorary degree.
This is possibly the best thing that has happened to UK higher education, at least since the end of the 1990s. Not that there’s much competition: this period, after all, brought us the introduction, then removal, of tuition fee caps; the abolition of maintenance grants; the REF and the TEF; and, as its crowning (though short-lived) glory, the appointment of Toby Young to the Office for Students. Yet, for most of this period, academics’ opposition to these reforms conformed to ‘civilised’ ways of protest: writing a book, giving a lecture, publishing a blog post or an article in Times Higher Education, or, at best, complaining on Twitter. While most would agree that British universities have been under threat for decades, concerted effort to counter these reforms – with a few notable exceptions – remained the province of the people Professor Marenbon calls ‘amiable but over-ideological eccentrics’.
This is how we have truly let down our students. Resistance was left to student protests and occupations. Longer-lasting, transgenerational solidarity was all but absent: at the end of the day, professors retreated to their ivory towers, while precarious academics engaged in activism on the side, amid ever-increasing competition and pressure to land a permanent job. Students picked up the tab: not only when it came to tuition fees, used to finance expensive accommodation blocks designed to attract more (tuition-paying) students, but also when it came to the quality of teaching and learning, increasingly delivered by an underpaid, overworked, and precarious labour force.
This is why the charge that teach-outs of dubious quality are replacing lectures comes across as particularly disingenuous. We are told that ‘although students are denied lectures on philosophy, history or mathematics, the union wants them to show up to “teach-outs” on vital topics such as “How UK policy fuels war and repression in the Middle East” and “Neoliberal Capitalism versus Collective Imaginaries”’. Although this is but one snippet of Cambridge UCU’s programme of teach-outs, the choice is illustrative.
The link between history and the UK’s foreign policy in the Middle East strikes me as obvious. Students in philosophy, politics or economics could do worse than a seminar on the development of neoliberal ideology (the event was initially scheduled as part of the Cambridge seminar in political thought). As for mathematics – anybody who, over the past weeks, has had to engage with the details of actuarial calculations and projections tied to the USS pension scheme has had more than a crash refresher course: I dare say they learned more than they ever hoped they would.
Teach-outs, in this sense, are not a replacement for education “as usual”. They are a way to begin bridging the infamous divide between “town and gown”, both by being held in more open spaces, and by, for instance, discussing how the university’s lucrative development projects are impacting on the regional economy. They are not meant to make up for the shortcomings of higher education: if anything, they render them more visible.
What the strikes have made clear is that academics’ ‘life as usual’ is vice-chancellors’ business as usual. In other words, it is precisely the attitude of studied depoliticisation that has allowed the marketization of higher education to continue. Markets, after all, are presumably ‘apolitical’. Other scholars have expended considerable effort in showing how this assumption has been used to further policies whose results we are now seeing, among other places, in the reform of the pensions system. Rather than repeat their arguments, I would like to end with the words of another philosopher, Hannah Arendt, who understood well the ambiguous relationship between the academia and politics:
‘Very unwelcome truths have emerged from the universities, and very unwelcome judgments have been handed down from the bench time and again; and these institutions, like other refuges of truth, have remained exposed to all the dangers arising from social and political power. Yet the chances for truth to prevail in public are, of course, greatly improved by the mere existence of such places and by the organization of independent, supposedly disinterested scholars associated with them.
This authentically political significance of the Academe is today easily overlooked because of the prominence of its professional schools and the evolution of its natural science divisions, where, unexpectedly, pure research has yielded so many decisive results that have proved vital to the country at large. No one can possibly gainsay the social and technical usefulness of the universities, but this importance is not political. The historical sciences and the humanities, which are supposed to find out, stand guard over, and interpret factual truth and human documents, are politically of greater relevance.’
In this sense, teach-outs, and industrial action in general, are a way for us to recognise our responsibility to protect the university from the undue incursion of political power, while acknowledging that such responsibility is in itself political. At this moment in history, I can think of no service to scholarship greater than that.
The critique of neoliberalism in academia is almost as old as its object. Paradoxically, it is the only element of the ‘old’ academia that seems to be thriving amid steadily worsening conditions: as I’ve argued in this book review, hardly a week goes by without a new book, volume, or collection of articles denouncing the neoliberal onslaught or ‘war’ on universities and, no less frequently, announcing their (untimely) death.
What makes the proliferation of critique of the transformation of universities particularly striking is the relative absence – at least until recently – of sustained modes of resistance to the changes it describes. While the UCU strike in reaction to the changes to the universities’ pension scheme offers some hope, by and large, forms of resistance have much more often taken the form of a book or blog post than that of a strike, demo, or occupation. Relatedly, given the level of agreement among academics about the general direction of these changes, engagement with developing long-term, sustainable alternatives to exploitative modes of knowledge production has been surprisingly scattered.
It was this relationship between the abundance of critique and paucity of political action that initially got me interested in arguments and forms of intellectual positioning in what is increasingly referred to as the ‘[culture] war on universities’. Of course, the question of the relationship between critique and resistance – or knowledge and political action – concerns much more than the future of English higher education, and reaches into the constitutive categories of Western political and social thought (I’ve addressed some of this in this talk). In this post, however, my intention is to focus on its implications for how we can conceive critique in and of neoliberal academia.
Varieties of neoliberalism, varieties of critique?
While critique of neoliberalism in the academia tends to converge around the causes as well as consequences of this transformation, this doesn’t mean that there is no theoretical variation. Marxist critique, for instance, tends to emphasise the changes in working conditions of academic staff, increased exploitation, and growing commodification of knowledge. It usually identifies precarity as the problem that prevents academics from exercising the form of political agency – labour organizing – that is seen as the primary source of potential resistance to these changes.
Poststructuralist critique, most of it drawing on Foucault, tends to focus on the changing status of knowledge, which is increasingly portrayed as a private rather than a public good. The reframing of knowledge in terms of economic growth is further tied to measurement – reduction to a single, unitary, comparable standard – and competition, which is meant to ensure maximum productivity. This also gives rise to mechanisms of constant assessment, such as the TEF and the REF, captured in the phrase ‘audit culture‘. Academics, in this view, become undifferentiated objects of assessment, which is used not only to instill fear but also to keep them in constant competition against each other, in the hope of the eventual conferral of ‘tenure’ or permanent employment, through which they can be constituted as full subjects with political agency.
Last, but not least, the type of critique that can broadly be referred to as ‘new materialist’ shifts the source of political power directly to instruments for measurement and sorting, such as algorithms, metrics, and Big Data. In the neoliberal university, the argument goes, there is no need for anyone to even ‘push the button’; metrics run on their own, with the social world already so imbricated with them that it becomes difficult, if not entirely impossible, to resist. The source of political agency, in this sense, becomes the ‘humanity’ of academics – what Arendt called ‘mere’ and Agamben ‘bare’ life. A significant portion of new materialist critique, in this vein, focuses on emotions and affect in the neoliberal university, as if to underscore the contrast between the lived and felt experiences of academics on the one hand, and the inhumanity of algorithms or their ‘human executioners’ on the other.
Despite possibly divergent theoretical genealogies, these forms of critique seem to move in the same direction. Namely, the object or target of critique becomes increasingly elusive, murky, and de-differentiated: but, strangely enough, so does the subject. As power grows opaque (or, in Foucault’s terms, ‘capillary’), the source of resistance shifts from a relatively defined position or identity (workers or members of the academic profession) into a relatively amorphous concept of humanity, or precarious humanity, as a whole.
Of course, there is nothing particularly original in the observation that neoliberalism has eroded traditional grounds for solidarity, such as union membership. Wendy Brown’s Undoing the Demos and Judith Butler’s Notes towards a performative theory of assembly, for instance, address the possibilities for political agency – including cross-sectional approaches such as that of the Occupy movement – in view of this broader transformation of the ‘public’. Here, however, I would like to engage with the implications of this shift in the specific context of academic resistance.
Nerdish subject? The absent centre of [academic] political ontology
The academic political subject – hence the pun on Žižek – is profoundly haunted by its Cartesian legacy: the distinction between thinking and being, and, by extension, between subject and object. This is hardly surprising: critique is predicated on thinking about the world, which proceeds through ‘apprehending’ the world as distinct from the self; but the self is also predicated on thinking about that world. Though they may have disagreed on many other things, Boltanski and Bourdieu – both feature prominently in my work – converge on the importance of this element for understanding the academic predicament: Bourdieu calls it the scholastic fallacy, Boltanski complex exteriority.
Nowhere is the Cartesian legacy of critique more evident than in its approach to neoliberalism. From Foucault onwards, academic critique has approached neoliberalism as an intellectual project: the product of a ‘thought collective’ or a small group of intellectuals, initially concentrated in the Mont Pelerin Society, from which they went on to ‘conquer’ not only economics departments but also, more importantly, centres of political power. Critique, in other words, projects back onto neoliberalism its own way of coming to terms with the world: knowledge. From here, the Weberian assumption that ideas precede political action is transposed onto forms of resistance: the more we know about how neoliberalism operates, the better we will be able to resist it. This is why, as neoliberalism proliferates, the books, journal articles, etc. that somehow seek to ‘denounce’ it multiply as well.
Speech acts: the lost hyphen
The fundamental notion of critique, in this sense, is (J. L. Austin‘s and Searle’s) notion of speech acts: the assumption that words can have effects. What gets lost in dropping the hyphen in speech(-)acts is a very important bit of the theory of performativity: that is, the conditions under which speech does constitute effective action. This is why Butler, in Performative agency, draws attention to Austin’s emphasis on perlocution: speech-acts that are effective only under certain circumstances. In other words, it’s not enough to exclaim “Universities are not for sale! Education is not a commodity! Students are not consumers!” for this to become the case. For this raises the question: “Who is going to bring this about? What are the conditions under which this can be realized?” In other words: who has the power to act in ways that can make this claim true?
What critique comes up against, then, is thinking its own agency within these conditions, rather than trying to paint them as if they were somehow on the ‘outside’ of critique itself. Butler recognizes this:
“If this sort of world, what we might be compelled to call ‘the bad life’, fails to reflect back my value as a living being, then I must become critical of those categories and structures that produce that form of effacement and inequality. In other words, I cannot affirm my own life without critically evaluating those structures that differentially value life itself [my emphasis]. This practice of critique is one in which my own life is bound up with the objects that I think about” (2015: 199).
In simpler terms: my position as a political subject is predicated on the practice of critique, which entails reflecting on the conditions that make my life difficult (or unbearable). Yet, those conditions are in part what constitutes my capacity to engage in critique in the first place, as the practice of thinking (critically) is, especially in the case of academic critique, inextricably bound up in the practices, institutions, and – not least importantly – economies of academic knowledge production. In formal terms, critique is a form of Russell’s paradox: a set that at the same time both is and is not a member of itself.
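(For readers who want the textbook version – this is just the standard set-theoretic statement of the paradox, not anything specific to critique:

\[ R = \{\, x : x \notin x \,\} \;\Longrightarrow\; \big( R \in R \iff R \notin R \big). \]

The analogy, such as it is: academic critique has to range over the practices of knowledge production while itself being one of those practices, so any attempt to place it firmly ‘inside’ or ‘outside’ the set immediately generates the opposite.)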
Living with (Russell) paradoxes
This is why academic critique of neoliberalism has no problem thinking about governing rationalities, the exploitation of workers in Chinese factories, or VCs’ salaries: practices that it perceives as outside of itself, or in which it can conceive of itself as an object. But it faces serious problems when it comes to thinking itself as a subject – and, even more, acting in this context – as this, at least according to its own standards, means reflecting on all the practices that make it ‘complicit’ in exactly what it aims to expunge, or criticize.
This means coming to terms with the fact that neoliberalism is the Research Excellence Framework, but neoliberalism is also when you discuss ideas for a super-cool collaborative project. Neoliberalism is the requirement to submit all your research outputs to the faculty website, but neoliberalism is also the pride you feel when your most recent article is Tweeted about. Neoliberalism is the incessant corporate emails about ‘wellbeing’, but it is also the craft beer you have with your friends in the pub. This is why, in the seemingly interminable debates about the ‘validity’ of neoliberalism as an analytical term, both sides are right: yes, on the one hand, the term is vague and can seemingly be applied to any manifestation of power, but, on the other, it does cover everything, which means it cannot be avoided either.
This is exactly the sort of ambiguity – the fact that things can be two different things at the same time – that critique in neoliberalism needs to come to terms with. This could possibly help us move beyond the futile iconoclastic gesture of revealing the ‘true nature’ of things, expecting that action will naturally follow from this (Martijn Konings’ Capital and Time has a really good take on the limits of ‘ontological’ critique of neoliberalism). In this sense, if there is something critique can learn from neoliberalism, it is the art of speculation. If economic discourses are performative, then, by definition, critique can be performative too. This means that futures can be created – but the assumption that ‘voice’ is sufficient to create the conditions under which this can be the case needs to be dispensed with.

A serious line of division runs through my household. It does not concern politics, music, or even sports: it concerns the possibility of large-scale collapse of social and political order, which I consider very likely. Specific scenarios aside for the time being, let’s just say we are talking more human-made climate-change-induced breakdown involving possibly protracted and almost certainly lethal conflict over resources, than ‘giant asteroid wipes out Earth’ or ‘rogue AI takes over and destroys humanity’.
Ontological security or epistemic positioning?
It may be tempting to attribute the tendency towards catastrophic predictions to psychological factors rooted in individual histories. My childhood and adolescence took place alongside the multi-stage collapse of the country once known as the Socialist Federal Republic of Yugoslavia. First came the economic crisis, when the failure of ‘shock therapy’ to boost stalling productivity (surprise!) resulted in massive inflation; then social and political disintegration, as the country descended into a series of violent conflicts whose consequences went far beyond the actual front lines; and then actual physical collapse, as Serbia’s long involvement in wars in the region was brought to a halt by the NATO intervention in 1999, which destroyed most of the country’s infrastructure, including parts of Belgrade, where I was living at the time*. It makes sense to assume this results in quite a different sense of ontological security than the one, say, the predictability of a middle-class English childhood would afford.
But does predictability actually work against the capacity to make accurate predictions? This may seem not only contradictory but also counterintuitive – any calculation of risk has to take into account not just the likelihood, but also the nature of the source of threat involved, and thus necessarily draws on the assumption of (some degree of) empirical regularity. However, what about events outside this scope? A recent article by Faulkner, Feduzi and Runde offers a good formalization of this problem (Black Swans and ‘unknown unknowns’) in the context of the (limited) possibility of imagining different outcomes (see table below). Of course, as Beck noted a while ago, the perception of ‘risk’ (as well as, by extension, any other kind of future-oriented thinking) is profoundly social: it depends on ‘calculative devices‘ and procedures employed by networks and institutions of knowledge production (universities, research institutes, think tanks, and the like), as well as on how they are presented in, for instance, literature and the media.

Unknown unknowns
In The Great Derangement (probably the best book I read in 2017), Amitav Ghosh argues that this can explain, for instance, the surprising absence of literary engagement with the problem of climate change. The problem, he claims, is endemic to Western modernity: a linear vision of history cannot conceive of a problem that exceeds its own scale**. This isn’t the case only with ‘really big problems’ such as economic crises, climate change, or wars: it also applies to specific cases such as elections or referendums. Of course, social scientists – especially those qualitatively inclined – tend to emphasise that, at best, we aim to explain events retroactively. Methodological modesty is good (and advisable), but avoiding thinking about the ways in which academic knowledge production is intertwined with the possibility of prediction is useless, for at least two reasons.
One is that, as reflected in the (by now overwrought and overdetermined) crisis of expertise and ‘post-truth’, social researchers increasingly find themselves in situations where they are expected to give authoritative statements about the future direction of events (for instance, about the impact of Brexit). Even if they disavow this form of positioning, the very idea of social science rests on the (no matter how implicit) assumption that at least some mechanisms or classes of objects will exhibit the same characteristics across cases; consequently, the possibility of inference is implied, if not always practised. Secondly, given the scope of the challenges societies face at present, it seems ridiculous to not even attempt to engage with – and, if possible, refine – the capacity to think about how they will develop in the future. While there is quite a bit of research on individual predictive capacity and the way collective reasoning can correct for cognitive bias, most of these models – given that they are usually based on experiments, or simulations – cannot account for the way in which social structures, institutions, and cultures of knowledge production interact with the capacity to theorise, model, and think about the future.
The relationship between social, political, and economic factors, on the one hand, and knowledge (including knowledge about those factors), on the other, has been at the core of my work, including my current PhD. While it may seem minor compared to issues such as wars or revolutions, the future of universities offers a perfect case to study the relationship between epistemic positioning, positionality, and the capacity to make authoritative statements about reality: what Boltanski’s sociology of critique refers to as ‘complex externality’. One of the things it allowed me to realise is that while there is a good tradition of reflecting on positionality (or, in positivist terms, cognitive ‘bias’) in relation to categories such as gender, race, or class, we are still far from successfully theorising something we could call ‘ontological bias’: epistemic attachment to the object of research.
The postdoctoral project I am developing extends this question and aims to understand its implications in the context of generating and disseminating knowledge that can allow us to predict – make more accurate assessments of – the future of complex social phenomena such as global warming or the development of artificial intelligence. This question has, in fact, been informed by my own history, but in a slightly different manner than the one implied by the concept of ontological security.
Legitimation and prediction: the case of former Yugoslavia
The Socialist Federal Republic of Yugoslavia had relatively sophisticated and well-developed networks of social scientists, in which both of my parents were involved***. Yet, of all the philosophers, sociologists, political scientists etc. writing about the future of the Yugoslav federation, only one – to the best of my knowledge – predicted, in eerie detail, the political crisis that would lead to its collapse: Bogdan Denitch, whose Legitimation of a revolution: the Yugoslav case (1976) is, in my opinion, one of the best books about former Yugoslavia ever written.
A Yugoslav-American, Denitch was a professor of sociology at the City University of New York. He was also a family friend, a fact I considered of little significance (having only met him once, when I was four, and my mother and I were spending a part of our summer holiday at his house in Croatia; my only memory of it is being terrified of tortoises roaming freely in the garden), until I began researching the material for my book on education policies and the Yugoslav crisis. In the years that followed (I managed to talk to him again in 2012; he passed away in 2016), I kept coming back to the question: what made Denitch more successful in ‘predicting’ the crisis that would ultimately lead to the dissolution of former Yugoslavia than virtually anyone writing on Yugoslavia at the time?
Denitch had a pretty interesting trajectory. Born in 1929 to Croatian Serb parents, he spent his childhood in a series of countries (including Greece and Egypt), following his diplomat father; in 1946, the family emigrated to the United States (the fact his father was a civil servant in the previous government would have made it impossible for them to continue living in Yugoslavia after the Communist regime, led by Josip Broz Tito, formally took over). There, Denitch (in evident defiance of his upper-middle-class legacy) trained as a factory worker, while studying for a degree in sociology at CUNY. He also joined the Democratic Socialist Alliance – one of the American socialist parties – of which he would remain a member (and later a functionary) for the rest of his life.
In 1968, Denitch was awarded a major research grant to study Yugoslav elites. The project was not without risks: while Yugoslavia was more open to ‘the West’ than other countries in Eastern Europe, visits by international scholars were strictly monitored. My mother recalls receiving a house visit from an agent of the UDBA, the Yugoslav secret police – not quite the KGB but you get the drift – who tried to elicit the confession that Denitch was indeed a CIA agent, and, in the absence of that, the promise that she would occasionally report on him****.
Despite these minor setbacks, the research continued: Legitimation of a revolution is one of its outcomes. In 1973, Denitch was awarded a PhD by Columbia University and started teaching at CUNY, eventually retiring in 1994. His last book, Ethnic nationalism: the tragic death of Yugoslavia, came out in the same year – a reflection on the conflict that was still going on at the time, and whose architecture he had foreseen with such clarity eighteen years earlier (the book is remarkably bereft of “told-you-so”-isms, so warmly recommended for those wishing to learn more about Yugoslavia’s dissolution).
Did personal history, in this sense, have a bearing on one’s epistemic position, and by extension, on the capacity to predict events? One explanation (prevalent in certain versions of popular intellectual history) would be that Denitch’s position as both a Yugoslav and an American would have allowed him to escape the ideological traps other scholars were more likely to fall into. Yugoslavs, presumably, would be at pains to prove socialism was functioning; Americans, on the other hand, perhaps egalitarian in theory but certainly suspicious of Communist revolutions in practice, would be looking to prove it wasn’t, at least not as an economic model. Yet this assumption hardly survives even the lightest empirical interrogation. At least up until the show trials of Praxis philosophers, there was a lively critique of Yugoslav socialism within Yugoslavia itself; despite the mandatory coating of jargon, Yugoslav scholars were quite far from being uniformly bright-eyed and bushy-tailed about socialism. Similarly, quite a few American scholars were very much in favour of the Yugoslav model, eager, if anything, to show that market socialism was possible – that is, that it’s possible to have a relatively progressive social policy and still be able to afford nice things. Herein, I believe, lies the beginning of the answer as to why neither of these groups was able to predict the type or the scale of the crisis that would eventually lead to the dissolution of former Yugoslavia.
Simply put, both groups of scholars depended on Yugoslavia as a source of legitimation of their work, though for different reasons. For Yugoslav scholars, the ‘exceptionality’ of the Yugoslav model was the source of epistemic legitimacy, particularly in the context of international scientific collaboration: their authority was, in part at least, constructed on their identity and positioning as possessors of ‘local’ knowledge (Bockman and Eyal’s excellent analysis of the transnational roots of neoliberalism makes an analogous point in terms of positioning in the context of the collaboration between ‘Eastern’ and ‘Western’ economists). In addition to this, many Yugoslav scholars were born and raised in socialism: while some of them did travel to the West, the opportunities were still scarce and many were subject to ideological pre-screening. In this sense, both their professional and their personal identity depended on the continued existence of Yugoslavia as an object; they could imagine different ways in which it could be transformed, but not really that it could be obliterated.
For scholars from the West, on the other hand, Yugoslavia served as a perfect experiment in mixing capitalism and socialism. Those more on the left saw it as a beacon of hope that socialism need not go hand-in-hand with Stalinist-style repression. Those who were more on the right saw it as proof that limited market exchange can function even in command economies, and deduced (correctly) that the promise of supporting failing economies in exchange for access to future consumer markets could be used as a lever to bring the Eastern Bloc in line with the rest of the capitalist world. If no one foresaw the war, it was because it played no role in either of these epistemic constructs.
This is where Denitch’s background would have afforded a distinct advantage. The fact his parents came from the Serb minority in Croatia meant he never lost sight of the salience of ethnicity as a form of political identification, despite the fact socialism glossed over local nationalisms. His Yugoslav upbringing provided him not only with fluency in the language(s), but a degree of shared cultural references that made it easier to participate in local communities, including those composed of intellectuals. On the other hand, his entire professional and political socialization took place in the States: this meant he was attached to Yugoslavia as a case, but not necessarily as an object. Not only was his childhood spent away from the country; the fact his parents had left Yugoslavia after the regime change at the end of World War II meant that, in a way, for him, Yugoslavia-as-object was already dead. Last, but not least, Denitch was a socialist, but one committed to building socialism ‘at home’. This meant that his investment in the Yugoslav model of socialism was, if anything, practical rather than principled: in other words, he was interested in its actual functioning, not in demonstrating its successes as a marriage of markets and social justice. This epistemic position, in sum, would have provided the combination needed to imagine the scenario of Yugoslav dissolution: a sufficient degree of attachment to be able to look deeply into a problem and understand its possible transformations; and a sufficient degree of detachment to be able to see that the object of knowledge may not be there forever.
Onwards to the…future?
What can we learn from the story? Balancing between attachment and detachment is, I think, one of the key challenges in any practice of knowing the social world. It’s always been there; it cannot be, in any meaningful way, resolved. But I think it will become more and more important as the objects – or ‘problems’ – we engage with grow in complexity and become increasingly central to the definition of humanity as such. Which means we need to be getting better at it.
———————————-
(*) I rarely bring this up as I think it overdramatizes the point – Belgrade was relatively safe, especially compared to other parts of former Yugoslavia, and I had the fortune to never experience the trauma or hardship people in places like Bosnia, Kosovo, or Croatia did.
(**) As Jane Bennett noted in Vibrant Matter, this resonates with Adorno’s notion of non-identity in Negative Dialectics: the object always exceeds the concepts through which we try to know it. We can see object-oriented ontology (e.g. Timothy Morton’s Hyperobjects) as the ontological version of the same argument: the sheer scale of the problem acts as a deterrent to grasping it in its entirety.
(***) This bit lends itself easily to the Bourdieusian “aha!” argument – academics breed academics, etc. The picture, however, is a bit more complex – I didn’t grow up with my father and, until about 16, had a very vague idea of what my mother did for a living.
(****) Legend has it my mother showed the agent the door and told him never to call on her again, prompting my grandmother – her mother – to buy funeral attire, assuming her only daughter would soon be thrown into prison and possibly murdered. Luckily, Yugoslavia was not really the Soviet Union, so this did not come to pass.
[Note: a shorter version of this post was published in Times Higher Education’s online edition, 26 December 2017]
The Government’s most recent proposal to introduce the possibility of two-year (‘accelerated’) degrees has already attracted quite a lot of criticism. One aspect is student debt: given that universities will be allowed to charge up to £2,000 more for these ‘fast-track’ degrees, there are doubts about whether students will be able to afford them. Another concerns the lack of mobility: since the Bologna Process assumes comparability of degrees across European higher education systems, students in courses shorter than three or four years would find it very difficult to participate in Erasmus or other forms of student exchange. Last, but not least, many academics have said the idea of ‘accelerated’ learning is at odds with the nature of academic knowledge, and trivializes or debases the time and effort necessary for critical reflection.
However, perhaps the most curious element of the proposal is its similarity to the Diploma of Higher Education (DipHE), a two-year qualification proposed by Mrs Thatcher at the time when she was Secretary of State for Education and Science. Of course, DipHE had a more vocational character, meant to enable access equally to further education and the labour market. In this sense, it was both a foundation degree and a finishing qualification. But there is no reason to believe those in new two-year programmes would not consider continuing their education through a ‘top-up’ year, especially if the labour market turns out not to be as receptive to their qualification as the proposal seems to hope. So the real question is: why introduce something that serves no obvious purpose – for the students or, for that matter, for the economy – and, furthermore, base it on resurrecting a policy that proved unpopular in 1972 and was abandoned soon after introduction?
One obvious answer is that the Conservative government is desperate for a higher education policy to match Labour’s proposal to abolish tuition fees (despite the fact that, no matter how commendable, abolishing tuition fees is little more than a reversal of measures put in place by the last Labour government). But the case of higher education in Britain is more curious than that. If one sees policy as a set of measures designed to bring about a specific vision of society, Britain never had much of a higher education policy to begin with.
Historically, British universities evolved as highly autonomous units, which meant that the Government felt little need to regulate them until well into the 20th century. Until the 1960s, the University Grants Committee succeeded in maintaining the ‘gentlemanly conversation’ between the universities and the Government. The 1963 report of the Robbins Committee, thus, was to be the first serious step into higher education policy-making. Yet, despite the fact that the Robbins report was more complex than many who cite it approvingly give it credit for, its main contribution was to open the doors of universities to, in the memorable phrase, “all who qualify by ability and attainment”. What it sought to regulate was thus primarily who should access higher education – not necessarily how it should be done, nor, for that matter, what the purpose of this was.
Even the combined pressures of the economic crisis and an uneven rate of expansion in the 1970s and the 1980s did little to orient the government towards a more coherent strategy for higher education. This led Peter Scott to comment in 1982 that “so far as we have in Britain any policy for higher education it is the binary policy…[it] is the nearest thing we have to an authoritative statement about the purposes of higher education”. The ‘watershed’ moment of 1992, abolishing the division between universities and polytechnics, was, in that sense, less of a policy and more of an attempt to undo the previous forays into regulating the sector.
Two major reviews of higher education since Robbins, the Dearing report and the Browne review, represented little more than attempts to deal with the consequences of massification through, first, tying education more closely to the supposed needs of the economy, and, second, introducing tuition fees. The difference between Robbins and subsequent reports in terms of scope of consultation and collected evidence suggests there was little interest in asking serious questions about the strategic direction of higher education, the role of the government, and its relationship to universities. Political responsibility was thus outsourced to ‘the Market’, that rare point of convergence between New Labour and Conservatives – at best a highly abstract aggregate of unreliable data concerning student preferences, and, at worst, utter fiction.
Rather than as a policy in the strict sense of the term, this latest proposal should be seen as another attempt at governing populations – what Michel Foucault called biopolitics. Of course, there is nothing wrong with the fact that people learn at different speeds: anyone who has taught in a higher education institution is more than aware that students have varying learning styles. But the Neo-Darwinian tone of “highly motivated students hungry for a quicker pace of learning” combined with the pseudo-widening-participation pitch of “mature students who have missed out on the chance to go to university as a young person” neither acknowledges this, nor actually engages with the need to enable multiple pathways into higher education. Rather, funneling students through a two-year degree and into the labour market is meant to ensure they swiftly become productive (and consuming) subjects.

Of course, whether the labour market will actually have the need for these ‘accelerated’ subjects, and whether universities will have the capacity to teach them, remains an open question. But the biopolitics of higher education is never about the actual use of degrees or specific forms of learning. As I have shown in my earlier work on vocationalism and ‘education for labour’, this type of political technology is always about social control; in other words, it aims to prevent potentially unruly subjects from channeling their energy into forms of action that could be disruptive of the political order.
Education – in fact, any kind of education policy – is perfect in this sense because it is fundamentally oriented towards the future. It occupies the subject now, but transposes the horizon of expectation into the ever-receding future – future employment, future fulfillment, future happiness. The promise of quicker, that is, accelerated delivery into this future is a particularly insidious form of displacement of political agency: the language of certainty (“when most students are completing their third year of study, an accelerated degree student will be starting work and getting a salary”) is meant to convey that there is a job and salary awaiting, as it were, at the end of the proverbial rainbow.
The problem is not simply that such predictions (or promises) are based on empty rhetoric, rather than any form of objective assessment of the ‘needs’ of the labour market. Rather, it is that the future needs of the labour market are notoriously difficult to assess, and even more so in periods of economic contraction. Two-year degrees, in this sense, are just a way to defer the compounding problems of inequality, unemployment, and social insecurity. Unfortunately, to date, no higher education qualification has proven capable of doing that.

Hardly anyone needs convincing that the university today is in deep crisis. Critics warn that the idea of the University (at least in the form in which it emerged from Western modernity) is endangered, under attack, under fire; that governments or corporations are waging a war against it. Some even pronounce the public university already dead, or at least lying in ruins. The narrative about the causes of the crisis is well known: the shift in public policy towards deregulation and the introduction of market principles – usually known as neoliberalism – has meant the decline of public investment, especially for the social sciences and humanities, the introduction of performance-based funding dependent on quantifiable output, and, of course, tuition fees. This, in turn, led to the rising precarity and insecurity among faculty and students, reflected, among other things, in a mental health crisis. Paradoxically, the only surviving element of the public university that seems to be doing relatively well in all this is critique. But what if the crisis of the university is, in fact, a crisis of imagination?
Don’t worry, this is not one of those posts that try to convince you that capitalism can be wished away by the power of positive thinking. Nor is it going to claim that neoliberalism offers unprecedented opportunities, if only we would be ‘creative’ enough to seize them. The crisis is real, it is felt viscerally by almost everyone in higher education, and – importantly – it is neither exceptional nor unique to universities. Exactly because it cannot be wished away, and exactly because it is deeply intertwined with the structures of the current crisis of capitalism, opposition to the current transformation of universities would need to involve serious thinking about long-term alternatives to current modes of knowledge production. Unfortunately, this is precisely the bit that tends to be missing from a lot of contemporary critique.
Present-day critique of neoliberalism in higher education often takes the form of nostalgic evocation of the glory days when universities were few, and funds for them plentiful. Other problems with this mythical Golden Age aside, what this sort of critique conveniently omits to mention is that the institutions that usually provide the background imagery for these fantastic constructs were both highly selective and highly exclusionary, and that they were built on the back of centuries of colonial exploitation. If it seemed like they conferred a life of relatively carefree privilege on those who studied and worked in them, that is exactly because this is what they were designed to do: cater to the “life of the mind” by excluding all forms of interference, particularly if they took the form of domestic (or any other material) labour, women, or minorities. This tendency is reproduced in Ivory Tower nostalgia as a defensive strategy: the dominant response to what critics tend to claim is the biggest challenge to universities since their founding (which, as they like to remind us, was a long, long time ago) is to stick their heads in the sand and collectively dream back to the time when, as Pink Floyd might put it, grass was greener and lights were brighter.
Ivory Tower nostalgia, however, is just one aspect of this crisis of imagination. A much broader symptom is that contemporary critique seems unable to imagine a world without the university. Since ideas of online disembedded learning were successfully monopolized by technolibertarian utopians, the best most academics seem to be able to come up with is to re-erect the walls of the institution, but make them slightly more porous. It’s as if the U of University and the U of Utopia were somehow magically merged. To extend the oft-cited and oft-misattributed saying, if it seems easier to imagine the end of the world than the end of capitalism, it is nonetheless easier to imagine the end of capitalism than the end of universities.
Why does an institution like the university have such a purchase on (utopian and dystopian) imagination? Thinking about universities is, in most cases, already imbued by the university, so one element pertains to the difficulty of perceiving the conditions of reproduction of one’s own position (this mode of access from the outside, as object-oriented ontologists would put it, or complex externality, as Boltanski does, is something I’m particularly interested in). However, this isn’t the case just with academic critique; fictional accounts of universities or other educational institutions are proliferating, and, in most cases (as I hope to show once I finally get around to writing the book on magical realism and universities), they reproduce the assumption of the value of the institution as such, as well as a lot of associated ideas, as this tweet conveys succinctly:
[embedded tweet]
This is, unfortunately, often the case even with projects whose explicit aim is to subvert existing inequalities in the context of knowledge production, including open, free, and workers’ universities (the Social Science Centre in Lincoln maintains a useful map of these initiatives globally). While these are fantastic initiatives, most either have to ‘piggyback’ on university labour – that is, on the free or voluntary labour of people employed or otherwise paid by universities – or, at least, rely on existing universities for credentialisation. Again, this isn’t to devalue those who invest time, effort, and emotions into such forms of education; rather, it is to flag that thinking about serious, long-term alternatives is necessary, and quickly, at that. This is a theme I spend a lot of time thinking about, and one I hope to make a central topic of my work in the future.
So what are we to do?
There’s an obvious bit of irony in suggesting a panel for a conference in order to discuss how the system is broken, but, in the absence of other forms, I am thinking of putting together a proposal for a workshop for the Sociological Review’s 2018 “Undisciplining: Conversations from the edges” conference. The good news is that the format is supposed to go outside of the ‘orthodox’ confines of panels and presentations, which means we could do something potentially exciting. The tentative title is Thinking about (sustainable?) alternatives to academic knowledge production.
I’m particularly interested in questions such as:
The format would need to be interactive – possibly a blend of on/off-line conversations – and can address the above, or any of the other questions related to thinking about alternatives to current modes of knowledge production.
If you’d like to participate/contribute/discuss ideas, get in touch by the end of October (the conference deadline is 27 November).
[UPDATE: Our panel got accepted! See you at Undisciplining conference, 18-21 June, Newcastle, UK. Watch this space for more news].
[These are my thoughts/notes for the “Practice of Social Theory“, which Mark Carrigan and I are running at the Department of Sociology of the University of Cambridge from 4 to 6 September, 2017].
Revival of theory?
It seems we are witnessing something akin to a revival of theory, or at least of an interest in it. In 2016, the British Journal of Sociology published Swedberg’s “Before theory comes theorizing, or how to make social sciences more interesting”, a longer version of its 2015 Annual public lecture, followed by responses from – among others – Krause, Schneiderhan, Tavory, and Karleheden. A string of recent books – including Matt Dawson’s Social Theory for Alternative Societies, Alex Law’s Social Theory for Today, and Craig Browne’s Critical Social Theory, to name but a few – set out to consider the relevance or contribution of social theory to understanding contemporary social problems. This is in addition to the renewed interest in the biography and contemporary relevance of social-philosophical schools such as Existentialism (1, 2) and the Frankfurt School (1, 2).
To a degree, this revival happens on the back of the challenges posed to the status of theory by the rise of data science, leading Lizardo and Hay to engage in defense of the value and contributions of theory to sociology and international relations, respectively. In broader terms, however, it addresses the question of the status of social sciences – and, by extension, academic knowledge – more generally; and, as such, it brings us back to the justification of expertise, a question of particular relevance in the current political context.
The meaning of theory
Sure enough, theory has many meanings (Abend, 2008), and consequently many forms in which it is practiced. However, one of the characteristics that seem to be shared across the board is that it is part of (under)graduate training, after which it gets bracketed off in the form of “the theory chapter” of dissertations/theses. In this sense, theory is framed as foundational in terms of socialization into a particular discipline, but, at the same time, rarely revisited – at least not explicitly – after the initial demonstration of aptitude. In other words, rather than doing, theory becomes something that is ‘done with’. The exceptions, of course, are those who decide to make theory the centre of their intellectual pursuits; however, “doing theory” in this sense all too often becomes limited to the exegesis of existing texts (what Krause refers to as ‘theory a’ and Abend as ‘theory 4’), which leads to competition among theorists for the best interpretation of “what theorist x really wanted to say”, or, alternatively, the application of existing concepts to new observations or ‘problems’ (‘theory b and c’, in Krause’s terms). Either way, the field of social theory resembles less the groves of Plato’s Academy, and more a zoo in which different species (‘Marxists’, ‘critical realists’, ‘Bourdieusians’, ‘rational-choice theorists’) dwell in their respective enclosures or fight with members of the same species for dominance of a circumscribed domain.

This summer school started from the ambition to change that: to go beyond rivalries or allegiances to specific schools of thought, and think about what doing theory really means. I often told people that wanting to do social theory was a major reason why I decided to do a second PhD; but what was this about? I did not say ‘learn more’ about social theory (my previous education provided a good foundation), ‘teach’ social theory (though supervising students at Cambridge is really good practice for this), read, or even write social theory (though, obviously, this was going to be a major component). While all of these are essential elements of becoming a theorist, the practice of social theory certainly isn’t reducible to them. Here are some of the other aspects I think we need to bear in mind when we discuss the return, importance, or practice of theory.
Theory is performance
This may appear self-evident once the focus shifts to ‘doing’, but we rarely talk about what practicing theory is meant to convey – that is, about theorising as a performative act. Some elements of this are not difficult to establish: doing theory usually means identification with a specific group, or form of professional or disciplinary association. Most professional societies have committees, groups, and specific conference sessions devoted to theory – but that does not mean theory is exclusively practiced within them. In addition to belonging, theory also signifies status. In many disciplines, theoretical work has for years been held in high esteem; the flipside, of course, is that ‘theoretical’ is often taken to mean too abstract or divorced from everyday life, something that became a more pressing problem with the decline of funding for social sciences and the concomitant expectation to make them socially relevant. While the status of theory is a longer (and separate) topic, one that has been discussed at length in the history of sociology and other social sciences, it bears repeating that asserting one’s work as theoretical is always a form of positioning: it serves to define the standing of both the speaker and (sometimes implicitly) other contributors. This brings us to the fact that…
Theory is power
Not everyone gets to be treated as a theorist: it is also a question of recognition, and thus, a question of political (and other) forms of power. ‘Theoretical’ discussions are usually held between men (mostly, though not exclusively, white men); interventions from women, people of colour, and persons outside centres of epistemic power are often interpreted as empirical illustrations, or, at best, contributions to ‘feminist’ or ‘race’ theory*. Raewyn Connell wrote about this in Southern Theory, and initiatives such as Why is my curriculum white? and Decolonizing curriculum in theory and practice have brought it to the forefront of university struggles, but it speaks to the larger point made by Spivak: that the majority of mainstream theory treats the ‘subaltern’ as only empirical or ethnographic illustration of the theories developed in the metropolis.
The problem here is not only (or primarily) that of representation, in the sense in which theory thus generated fails to accurately depict the full scope of social reality, or experiences and ideas of different people who participate in it. The problem is in a fundamentally extractive approach to people and their problems: they exist primarily, if not exclusively, in order to be explained. This leads me to the next point, which is that…
Theory is predictive
A good illustration of this is offered by pundits’ and political commentators’ surprise at events in the last year: the outcome of the Brexit referendum (Leave!), US elections (Donald Trump!), and last but not least, the UK General Election (surge in votes for Corbyn!). Despite differences in how these events are interpreted, they in most cases convey that, as one pundit recently confessed, nobody has a clue about what is going on. Does this mean the rule of experts really is over, and, with it, the need for general theories that explain human action? Two things are worth taking into account.
To begin with, social-scientific theories enter the public sphere in a form that’s not only simplified, but also distilled into ‘soundbites’ or clickbait adapted to the presumed needs and preferences of the audience, usually omitting all the methodological or technical caveats they normally come with. For instance, the results of opinion polls or surveys are taken to present clear predictions, rather than reflections of general statistical tendencies; reliability is rarely discussed. Nor are social scientists always innocent victims of this media spin: some actively work on increasing their visibility or impact, and thus – perhaps unwittingly – contribute to the sensationalisation of social-scientific discourse. Second, and this can’t be put delicately, some of these theories are just not very good. ‘Nudgery’ and ‘wonkery’ often rest on not particularly sophisticated models of human behaviour – which is not to say that they do not work (they can), but rather that the theoretical assumptions underlying these models are rarely accessible to scrutiny.
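To make the point about statistical tendencies a bit more concrete: a poll’s headline figure is an estimate with a margin of error, not a point prediction. Below is a minimal sketch of the standard 95% margin-of-error calculation (my own illustration, with hypothetical poll numbers), just to show how much uncertainty gets dropped from the ‘soundbite’ version:

```python
# Minimal illustration (hypothetical numbers): the margin of error that headline
# reporting of polls usually omits.
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

p_hat, n = 0.52, 1000              # e.g. 52% support among 1,000 respondents
moe = margin_of_error(p_hat, n)    # ~0.031, i.e. roughly +/- 3 percentage points
print(f"{p_hat:.0%} +/- {moe:.1%}") # the 'finding' is an interval, not a point prediction
```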
Of course, it doesn’t take a lot of imagination to figure out why this is the case: it is easier to believe that selling vegetables in attractive packaging can solve the problem of obesity than to invest in long-term policy planning and research on decision-making that has consequences for public health. It is also easier to believe that removing caps on tuition fees will result in universities charging fees distributed normally from lowest to highest, than to bother reading theories of organizational behaviour in different economic and political environments and try to understand how this maps onto the social structure and demographics of a rapidly changing society. In other words: theories are used to inform or predict human behaviour, but often in ways that reinforce existing divisions of power. So, just in case you didn’t see this coming…
Theory is political
All social theories are about constraints, including those that are self-imposed. From Marx to Freud and from Durkheim to Weber (and many non-white, non-male theorists who never made it into ‘the canon’), theories are about what humans can and cannot do; they are about how relatively durable relations (structures) limit and enable how people act (agency). Politics is, fundamentally, about the same thing: things we can and things we cannot change. We may denounce Bismarck’s definition of politics as the art of the possible as insufficiently progressive, but – at the risk of sounding obvious – understanding how (and why) things stay the same is fundamental to understanding how to go about changing them. The history of social theory, among other things, can be read as a story about shifting the boundaries of what was considered fixed and immutable, on the one hand, and constructed – and thus subject to change – on the other.
In this sense, all social theory is fundamentally political. This isn’t to license bickering over different historical materialisms, or to stimulate fantasies – so dear to intellectuals – of ‘speaking truth to power’. Nor should theories be understood as weapons in the ‘war of time’, despite Debord’s poetic formulation: this is but the flipside of intellectuals’ dream of domination, in which their thoughts (i.e. themselves) inspire masses to revolt, usually culminating in their own ascendance to a position of power (thus conveniently cutting out the middleman in ‘speaking truth to power’, as they become the prime bearers of both).
Theory is political in a much simpler sense, in which it is about society and the elements that constitute it. As such, it has to be about understanding what it is that those we think of as society think, want, and do, even – and possibly, especially – when we do not agree with them. Rather than aiming to ‘explain away’ people, or fit their behaviour into pre-defined social models, social theory needs to learn to listen to – to borrow a term from politics – its constituents. This isn’t to argue for a (not particularly innovative) return to grounded theory, or ethnography (despite the fact both are relevant and useful). At the risk of sounding pathetic, perhaps the next step in the development of social theory is to really make it a form of social practice – that is, make it be with the people, rather than about the people. I am not sure what this would entail, or what it would look like; but I am pretty certain it would be a welcome element of building a progressive politics. In this sense, doing social theory could become less of the practice of endlessly revising a blueprint for a social theory zoo, and more of a project of getting out from behind its bars.
*The tendency to interpret women’s interventions as if they are inevitably about ‘feminist theory’ (or, more frequently, as if they always refer to empirical examples) is a trend I have been increasingly noticing since moving into sociology, and definitely want to spend more time studying. This is obviously not to say there aren’t women in the field of social theory, but rather that gender (and race, ethnicity, and age) influence the level of generality at which one’s claims are read, thus reflecting the broader tendency to see universality and Truth as coextensive with the figure of the male and white academic.

[This review of “Democratic problem-solving” (Cruickshank and Sassower eds., 2017) was first published in Social Epistemology Review and Reply Collective, 26 May 2017].
It is a testament to the lasting influence of Karl Popper and Richard Rorty that their work continues to provide inspiration for debates concerning the role and purpose of knowledge, democracy, and intellectuals in society. Alternatively, it is a testament to the recurrence of the problem that continues to lurk under the glossy analytical surface or occasional normative consensus of these debates: the impossibility of reconciling the concepts of liberal and epistemic democracy. Essays collected under the title Democratic Problem-Solving (Cruickshank and Sassower 2017) offer grounds for both assumptions, so this is what my review will focus on.
Boundaries of Rational Discussion
Democratic Problem-Solving is a thorough and comprehensive (if at times seemingly meandering) meditation on the implications of Popper’s and Rorty’s ideas for the social nature of knowledge and truth in the contemporary Angloamerican context. This context is characterised by the combined forces of neoliberalism and populism, growing social inequalities, and what has for a while now been dubbed, perhaps euphemistically, the crisis of democracy. Cruickshank’s (in other contexts almost certainly heretical) opening that questions the tenability of distinctions between Popper and Rorty, then, serves to remind us that both were devoted to the purpose of defining the criteria for and setting the boundaries of rational discussion, seen as the road to problem-solving. Jürgen Habermas, whose name also resonates throughout this volume, elevated communicative rationality to the foundational principle of Western democracies, as the unifying/normalizing ground from which to ensure the participation of the greatest number of members in the public sphere.
Intellectuals were, in this view, positioned as guardians—epistemic police, of sorts—of this discursive space. Popper’s take on epistemic ‘policing’ (see DPS, 42) was to use the standards of scientific inquiry as exemplars for maintaining a high level, and, more importantly, the neutrality, of public debates. Rorty saw it as the minimal instrument that ensured civility without questioning, or at least without implicitly dismissing, others’ cultural premises, or even ontological assumptions. The assumption they and authors in this volume have in common is that rational dialogue is, indeed, both possible and necessary: possible because standards of rationality were shared across humanity, and necessary because it was the best way to ensure consensus around the basic functioning principles of democracy. This also ensured the pairing of knowledge and politics: by rendering visible the normative (or political) commitments of knowledge claims, sociology of knowledge (as Reed shows) contributed to affirming the link between the epistemic and the political. As Agassi’s syllogism succinctly demonstrates, this link quickly morphed from signifying correlation (knowledge and power are related) to causation (the more knowledge, the more power), suggesting that epistemic democracy was if not a precursor, then certainly a correlate of liberal democracy.
This is why Democratic Problem-Solving cannot avoid running up against the issue of public intellectuals (qua epistemic police), and, obviously, their relationship to ‘Other minds’ (communities being policed). In the current political context, however, to the well-exercised questions Sassower raises such as—
should public intellectuals retain their Socratic gadfly motto and remain on the sidelines, or must they become more organically engaged (Gramsci 2011) in the political affairs of their local communities? Can some academics translate their intellectual capital into a socio-political one? Must they be outrageous or only witty when they do so? Do they see themselves as leaders or rather as critics of the leaders they find around them (149)?
—we might need to add the following: “And what if none of this matters?”
After all, differences in vocabularies of debate matter only if access to the debate depends on their convergence to a minimal common denominator. The problem for the guardians of the public sphere today is not whom to include in these debates and how, but rather what to do when those ‘others’ refuse, metaphorically speaking, to share the same table. Populist right-wing politicians have at their disposal the wealth of ‘alternative’ outlets (Breitbart, Fox News, and increasingly, it seems, even the BBC), not to mention ‘fake news’ or the ubiquitous social media. The public sphere, in this sense, resembles less a (however cacophonous) town hall meeting than a series of disparate village tribunals. Of course, as Fraser (1990) noted, fragmentation of the public sphere has been inherent since its inception within the Western bourgeois liberal order.
The problem, however, is less what happens when other modes of arguing emerge and demand to be recognized, and more what happens when they aspire for redistribution of political power that threatens to overturn the very principles that gave rise to them in the first place. We are used to these terms denoting progressive politics, but there is little that prevents them from being appropriated for more problematic ideologies: after all, a substantial portion of the current conservative critique of the ‘culture of political correctness’, especially on campuses in the US, rests on the argument that ‘alternative’ political ideologies have been ‘repressed’, sometimes justifying this through appeals to the freedom of speech.
Dialogic Knowledge
In assuming a relatively benevolent reception of scientific knowledge, then, appeals such as Chis and Cruickshank’s to engage with different publics—whether as academics, intellectuals, workers, or activists—remain faithful to Popper’s normative ideal concerning the relationship between reasoning and decision-making: ‘the people’ would see the truth, if only we were allowed to explain it a bit better. Obviously, in arguing for dialogical, co-produced modes of knowledge, we are disavowing the assumption of a privileged position from which to do so; but, all too often, we let in through the back door the implicit assumption of the normative force of our arguments. It rarely, if ever, occurs to us that those we wish to persuade may have nothing to say to us, may be immune or impervious to our logic, or, worse, that we might not want to argue with them.
For if social studies of science taught us anything, it is that scientific knowledge is, among other things, a culture. An epistemic democracy of the Rortian type would mean that it’s a culture like any other, and thus not automatically entitled to a privileged status among other epistemic cultures, particularly not if its political correlates are weakened—or missing (cf. Hart 2016). Populist politics certainly has no use for critical slow dialogue, but it is increasingly questionable whether it has use for dialogue at all (at the time of writing of this piece, in the period leading up to the 2017 UK General Election, the Prime Minister is refusing to debate the Leader of the Opposition). Sassower’s suggestion that neoliberalism exhibits a penchant for justification may hold promise, but, as Cruickshank and Chis (among others) show on the example of UK higher education, ‘evidence’ can be adjusted to suit a number of policies, and political actors are all too happy to do that.
Does this mean that we should, as Steve Fuller suggested in another SERRC article, see in ‘post-truth’ the STS symmetry principle? I am skeptical. After all, judgments of validity are the privilege of those who can still exert a degree of control over access to the debate. In this context, I believe that questions of epistemic democracy, such as who has the right to make authoritative knowledge claims, in what context, and how, need to, at least temporarily, come second in relation to questions of liberal democracy. This is not to be teary-eyed about liberal democracy: if anything, my political positions lie closer to Cruickshank and Chis’ anarchism. But it is the only system that can—hopefully—be preserved without a massive cost in human lives, and perhaps repurposed so as to make them more bearable.
In this sense, I wish the essays in the volume confronted head-on questions such as whether we should defend epistemic democracy (and what versions of it) if its principles are mutually exclusive with liberal democracy, or, conversely, whether we would uphold liberal democracy if it threatened to suppress epistemic democracy. For the question of standards of public discourse is going to keep coming up, but it may decreasingly have the character of an academic debate, and increasingly concern the possibility of having one at all. This may turn out to be, so to speak, a problem that precedes all other problems. Essays in this volume have opened up important venues for thinking about it, and I look forward to seeing them discussed in the future.
References
Cruickshank, Justin and Raphael Sassower. Democratic Problem Solving: Dialogues in Social Epistemology. London: Rowman & Littlefield, 2017.
Fraser, Nancy. “Rethinking the Public Sphere: A Contribution to the Critique of Actually Existing Democracy.” Social Text 25/26 (1990): 56-80.
Fuller, Steve. “Embrace the Inner Fox: Post-Truth as the STS Symmetry Principle Universalized.” Social Epistemology Review and Reply Collective, December 25, 2016. http://wp.me/p1Bfg0-3nx
Hart, Randle J. “Is a Rortian Sociology Desirable? Will It Help Us Use Words Like ‘Cruelty’?” Humanity and Society, 40, no. 3 (2016): 229-241.
Prologue
One Saturday in late January, I go to the PhD office at the Department of Sociology at the University of Cambridge’s New Museums site (yes, PhD students shouldn’t work on Saturdays, and yes, we do). I swipe my card at the main gate of the building. Nothing happens.
I try again, and again, and still nothing. The sensor stays red. An interaction with a security guard who seems to appear from nowhere conveys there is nothing wrong with my card; apparently, there has been a power outage and the whole system has been reset. A rather distraught-looking man from the Department of History and Philosophy of Science appears around the corner, insisting on being let back inside the building, where he had left a computer on with, he claims, sensitive data. The very amicable security guard apologises. There’s nothing he can do to let us in. His card doesn’t work, either, and the system has to be manually reset from within the computers inside each departmental building.
You mean the building no one can currently access, I ask.
I walk away (after being assured the issue would be resolved on Monday) plotting sci-fi campus novels in which Skynet is not part of a Ministry of Defense, but of a university; rogue algorithms claim GCSE test results; and classes are rescheduled in a way that sends engineering undergrads to colloquia in feminist theory, and vice versa (the distances one’s mind will go to avoid thinking about impending deadlines)*. Regretfully pushing prospective pitches to fiction publishers aside (temporarily)**, I find the incident particularly interesting for the perspective it offers on how we think about the university as an institution: its spatiality, its materiality, its boundaries, and the way its existence relates to these categories – in other words, its social ontology.
War on universities?
Critiques of the current transformation of higher education and research in the UK often frame it as an attack, or ‘war’, on universities (this is where the first part of the title of my thesis comes from). Exaggeration for rhetorical purposes notwithstanding, being ‘under attack’ suggests that it is possible to distinguish the University (and the intellectual world more broadly) from its environment, in this case at least in part populated by forces that threaten its very existence. Notably, this distinction remains almost untouched even in policy narratives (including those that seek to promote public engagement and/or impact) that stress the need for universities to engage with the (‘surrounding’) society, which tend to frame this imperative as ‘going beyond the walls of the Ivory Tower’.
The distinction between universities and society has a long history in the UK: the university’s built environment (buildings, campuses, gates) and rituals (dress, residence requirements/’keeping term’, conventions of language) were developed to reflect the separateness of education from ordinary experience, enshrined in the dichotomies of intellectual vs. manual labour, active life vs. ‘life of the mind’ and, not least, Town vs. Gown. Of course, with the rise of ‘redbrick’, and, later, ‘plateglass’ universities, this distinction became somewhat less pronounced. Rather than in terms of blurring, however, I would like to suggest we need to think of this as a shift in scale: the relationship between ‘Town’ and ‘Gown’, after all, is embedded in the broader framework of distinctions between urban and suburban, urban and rural, regional and national, national and global, and the myriad possible forms of hybridisation between these (recent work by Addie, Keil and Olds, as well as Robertson et al., offers very good insights into issues related to theorising scale in the context of higher education).
Policing the boundaries: relational ontology and ontological (in)security
What I find most interesting, in this setting, is the way in which boundaries between these categories are maintained and negotiated. In sociology, the negotiation of boundaries in academia has been studied in detail by, among others, Michèle Lamont (in How Professors Think, as well as in an overview by Lamont and Molnár), Thomas Gieryn (both in Cultural Boundaries of Science and a few other texts), and Andrew Abbott in The Chaos of Disciplines (and, of course, in sociologically-inclined philosophy of science, including Feyerabend’s Against Method, Lakatos’ work on research programmes, and Kuhn’s on scientific revolutions, before that). Social anthropology has an even longer-standing obsession with boundaries, symbolic as well as material – Mary Douglas’ work, in particular, as well as Augé’s Non-Places, offers a good entry point, converging with sociology on the ground of a neo-Durkheimian reading of the distinction between the sacred and the profane.
My interest in the cultural framing of boundaries goes back to my first PhD, which explored the construal of the category of (romantic) relationship through the delineation of its difference from other types of interpersonal relations. The concept resurfaced in research on public engagement in UK higher education: here, the negotiation of boundaries between ‘inside’ (academics) and ‘outside’ (different audiences), as well as between different groups within the university (e.g. administrators vs. academics), becomes evident through practices of engaging in the dissemination and, sometimes, coproduction of knowledge (some of this is in my contribution to this volume). The thread that runs through these cases is the importance of positioning in relation to a (relatively) specified Other; in other words, a relational ontology.
It is not difficult to see the role of negotiating boundaries between ‘inside’ and ‘outside’ in the concept of ontological security (e.g. Giddens, 1991). Recent work in IR (e.g. Ejdus, 2017) has shifted the focus from Giddens’ emphasis on social relations to the importance of stability of material forms, including buildings. I think we can extend this to universities: in this case, however, it is not (only) the building itself that is ‘at risk’ (this can be observed in the intensified securitisation of campuses, both through material structures such as gates and card-only entrances, and modes of surveillance such as Prevent – see e.g. Gearon, 2017), but also the materiality of the institution itself. While the MOOC hype may have (thankfully) subsided (though not disappeared), there is the ubiquitous social media, which, as quite a few people have argued, tests the salience of the distinction between ‘inside’ and ‘outside’ (I’ve written a bit about digital technologies as mediating the boundary between universities and the ‘outside world’ here, as well as in an upcoming article in the Globalisation, Education, Societies special issue that deals with reassembling knowledge production with/out the university).
Barbarians at the gates
In this context, it should not be surprising that many academics fear digital technologies: anything that tests the material/symbolic boundaries of our own existence is bound to be seen as troubling/dirty/dangerous. This brings to mind Kavafy’s poem (and J.M. Coetzee’s novel) Waiting for the Barbarians, in which an outpost of the Empire prepares for the attack of ‘the barbarians’ – which, in fact, never arrives. The trope of the university as a bulwark against and/or in danger of descending into barbarism has been explored by a number of writers, including Thorstein Veblen and, more recently, Roy Coleman. Regardless of the accuracy or historical stretchability of the trope, what I am most interested in is its use as a simultaneously diagnostic and normative narrative that frames and situates the current transformation of higher education and research.
As the last line of Kavafy’s poem suggests, barbarians represent ‘a kind of solution’: a solution for the otherwise unanswered question of the role and purpose of universities in the 21st century, which began to be asked ever more urgently with the post-war expansion of higher education, only to be shut down by the integration/normalization of the soixante-huitards in what Boltanski and Chiapello have recognised as contemporary capitalism’s almost infinite capacity to appropriate critique. Disentangling this dynamic is key to understanding contemporary clashes and conflicts over the nature of knowledge production. Rather than locating dangers to the university firmly beyond the gates, then, perhaps we could use the current crisis to think about how we perceive, negotiate, and preserve the boundaries between ‘in’ and ‘out’. Until we have a space to do that, I believe we will continue building walls only to realise we have been left on the wrong side.
(*) I have a strong interest in campus novels, both for PhD-related and unrelated reasons, as well as a long-standing interest in Sci-Fi, but with the exception of DeLillo’s White Noise can think of very few works that straddle both genres; would very much appreciate suggestions in this domain!
(**) I have been thinking for a while about a book, a spin-off from my current PhD, that would combine social theory, literature, and critical cultural political economy, drawing on similarities and differences between critical and magical realism to look at universities. This can be taken as a sketch for one of the chapters, so all thoughts and comments are welcome.
[This post was originally published on 03/01/2017 in the Discover Society Special Issue on Digital Futures. I am also working on a longer (article) version of it, which will be uploaded soon].
It is by now commonplace to claim that digital technologies have fundamentally transformed knowledge production. This applies not only to how we create, disseminate, and consume knowledge, but also to who, in this case, counts as ‘we’. Science and technology studies (STS) scholars argue that knowledge is an outcome of coproduction between (human) scientists and the objects of their inquiry; object-oriented ontology and speculative realism go further, rejecting the ontological primacy of humans in the process. For many, it would not be overstretching to say that machines not only process knowledge, but are actively involved in its creation.
What remains somewhat underexplored in this context is the production of critique. Scholars in the social sciences and humanities fear that the changing funding and political landscape of knowledge production will diminish the capacity of their disciplines to engage critically with society, leading to what some have dubbed the ‘crisis’ of the university. Digital technologies are often framed as contributing to this process, speeding up the rate of production, simultaneously multiplying and obfuscating the labour of academics, perhaps even, as Lyotard predicted, displacing it entirely. Tensions between more traditional views of the academic role and new digital technologies are reflected in often heated debates over academics’ use of social media (see, for instance, #seriousacademic on Twitter). Yet, despite polarized opinions, there is little systematic research into the links between the transformation of the conditions of knowledge production and critique.
My work is concerned with the possibility – that is, the epistemological and ontological foundations – of critique, and, more precisely, how academics negotiate it in contemporary (‘neoliberal’) universities. Rather than trying to figure out whether digital technologies are ‘good’ or ‘bad’, I think we need to consider what it is about the way they are framed and used that makes them either. From this perspective, which could be termed the social ontology of critique, we can ask: what is it about ‘the social’ that makes critique possible, and how does it relate to ‘the digital’? How is this relationship constituted, historically and institutionally? Lastly, what does this mean for the future of knowledge production?
Between pre-digital and post-critical
There are a number of ways one can go about studying the relationship between digital technologies and critique in the contemporary context of knowledge production. David Berry and Christian Fuchs, for instance, both use critical theory to think about the digital. Scholars in political science, STS, and the sociology of intellectuals have written on the multiplication of platforms from which scholars can engage with the public, such as Twitter and blogs. In “The Uberfication of the University”, Gary Hall discusses how digital platforms transform the structure of academic labour. This joins a longer thread of discussions about precarity, new publishing landscapes, and what this means for the concept of the ‘public intellectual’.
One of the challenges of theorising this relationship is that it has to be developed out of the very conditions it sets out to criticise. This points to the limitations of viewing ‘critique’ as a defined and bounded practice, or the ‘public intellectual’ as a fixed and separate figure, and trying to observe how either has changed with the introduction of the digital. While the use of social media may be a more recent phenomenon, it is worth recalling that the bourgeois public sphere that gave rise to the practice of critique in its contemporary form was already profoundly mediatised. Whether one thinks of petitions and pamphlets in the Dreyfus affair, or discussions on Twitter and Facebook – there is no critique without an audience, and digital technologies are essential to how we imagine them. In this sense, grounding an analysis of the contemporary relationship between the conditions of knowledge production and critique in the ‘pre-digital’ is similar to grounding it in the post-critical: both are techniques of ‘ejecting’ oneself from the confines of the present situation.
The dismissiveness Adorno and other members of the Frankfurt school could exercise towards mass media, however, is more difficult to parallel in a world in which it is virtually impossible to remain isolated from digital technologies. Today’s critics may, for instance, avoid having a professional profile on Twitter or Facebook, but they are probably still using at least some type of social media in their private lives, not to mention responding to emails, reading articles, and searching and gathering information through online platforms. To this end, one could say that academics publicly criticising social media engage, in fact, in a performative contradiction: their critical stance is predicated on the existence of digital technologies both as objects of critique and main vehicles for its dissemination.
This, I believe, is an important source of the perceived tensions between the concept of critique and digital technologies. Traditionally, critique implies a form of distancing from one’s social environment. This distancing is seen as both spatial and temporal: spatial, in the sense of providing a vantage point from which the critic can observe and (choose to) engage with society; temporal, in the sense of affording shelter from the ‘hustle and bustle’ of everyday life, necessary to stimulate critical reflection. Universities, at least for a good part of the 20th century, were tasked with providing both. Lukács, in his account of the Frankfurt school, satirized this as taking up residence in the ‘Grand Hotel Abyss’: engaging in critique from a position of relative comfort, from which one can stare ‘into nothingness’. Yet, what if the Grand Hotel Abyss has a wifi connection?
Changing temporal frames: beyond the Twitter intellectual?
Some potential perils of the ‘always-on’ culture and contracting temporal frames for critique are reflected in the widely publicized case of Steven Salaita, an internationally recognized scholar in the field of Native American studies and American literature. In 2013, Salaita was offered a tenured position at the University of Illinois. However, in 2014 the Board of Trustees withdrew the offer, citing Salaita’s “incendiary” posts on Twitter as the reason. Salaita is a vocal critic of Israel, and his Tweets at the time concerned the Israeli military offensive in the Gaza Strip; some of the University’s donors found this problematic and pressured the Board to withdraw the offer. Salaita has since appealed the decision and received a settlement from the University of Illinois, but the case – though by no means unique – drew attention to the (im)possibility of separating the personal, the political and the professional on social media.
At the same time, social media can provide venues for practicing critique in ways not confined by the conventions or temporal cycles of academia. The example of Eric Jarosinski, “the rock star philosopher of Twitter”, shows this clearly. Jarosinski is a Germanist whose Tweets contain clever puns on the Frankfurt school, as well as, among others, Hegel and Nietzsche. In 2013, he took himself out of consideration for tenure at the University of Pennsylvania, but continued to compose philosophically inspired Tweets, eventually earning a huge following, as well as columns in two of the largest newspapers in Germany and the Netherlands. Jarosinski’s moniker, #failedintellectual, is a self-ironic reminder that it is possible to succeed whilst deviating from the established routes of intellectual critique.
The different ways in which critique can be performed on Twitter should not, however, distract from the fact that it operates in fundamentally politicized and stratified spaces; digital technologies can render them more accessible, but that does not mean that they are more democratic or offer a better view of ‘the public’. This is particularly worth remembering in the light of recent political events in the UK and the US. Once the initial shock following the US election and the British EU referendum had subsided, many academics (and intellectuals more broadly) took to social media to comment on, evaluate, or explain what had happened. Yet, for the most part, these interventions ended exactly where they began – on social media. This amounts to live Tweeting from the balcony of the Grand Hotel Abyss: the view is good, but the abyss no less gaping for it.
By sticking to critique on social media, intellectuals are, essentially, doing what they have always been good at – engaging with the audiences, and in the ways, they feel comfortable with. To this end, criticizing the ‘alt-right’ on Twitter is not altogether different from criticising it in lecture halls. Of course, no intellectual critique can aspire to address all possible publics, let alone equally. However, it makes sense to think about how the ways in which we imagine our publics influence our capacity to understand the society we live in; and, perhaps more importantly, how they influence our ability to predict – or imagine – its future. In its present form, critique seems far better suited to an idealized Habermasian public sphere than to the political landscape taking shape in the 21st century. Digital technologies can offer an approximation, perhaps even a good simulation, of the former; but that, in and of itself, does not mean that they can solve the problems of the latter.
Jana Bacevic is a PhD researcher at the Department of Sociology at the University of Cambridge. She works on social theory and the politics of knowledge production; her thesis deals with the social, epistemological and ontological foundations of the critique of neoliberalism in higher education and research in the UK. Previously, she was a Marie Curie fellow at the University of Aarhus in Denmark, on the Universities in Knowledge Economies (UNIKE) project. She tweets at @jana_bacevic.

Last month, I attended the symposium on Anxiety and Work in the Accelerated Academy, the second in the Accelerated Academy series that explores the changing scapes of time, work, and productivity in academia. Given that my research is fundamentally concerned with the changing relationships between universities and publics, and the concomitant reframing of the subjectivity, agency, and reflexivity of academics, I naturally found the question of the intersection of academic labour and time relevant. One point in particular stayed with me: in her presentation, Maggie O’Neill from the University of York suggested that anxiety has become the primary structure of feeling in the neoliberal academy. Having found myself, in the period leading up to the workshop, increasingly reflecting on structures of feeling, I was intrigued by the salience of the concept. Is there a place for theoretical concepts such as this in research on the transformations of knowledge production in contemporary capitalism, and, if so, where is it?
All the feels
“Structure of feeling” may well be one of those ideas whose half-life has far outlasted their initial purview. Raymond Williams elaborated it in a brief chapter of Marxism and Literature, contributing to carving out what would become known as the distinctly British take on the relationship between “base” and “superstructure”: cultural studies. In it, he says:
Specific qualitative changes are not assumed to be epiphenomena of changed institutions, formations, and beliefs, or merely secondary evidence of changed social and economic relations between and within classes. At the same time they are from the beginning taken as social experience, rather than as ‘personal’ experience or as the merely superficial or incidental ‘small change’ of society. They are social in two ways that distinguish them from reduced senses of the social as the institutional and the formal: first, in that they are changes of presence (while they are being lived this is obvious; when they have been lived it is still their substantial characteristic); second, in that although they are emergent or pre-emergent, they do not have to await definition, classification, or rationalization before they exert palpable pressures and set effective limits on experience and on action. Such changes can be defined as changes in structures of feeling. (Williams, 1977:130).
Williams thus introduces the structure of feeling as a form of social diagnostic; he posits it against the more durable but also more formal concepts of ‘world-view’ or ‘ideology’. Indeed, the whole chapter is devoted to a critique of the reificatory tendencies of Marxist social analysis: the idea that things (or ideas) must always be ‘finished’, always ‘in the past’, in order for them to be subjected to analytical scrutiny. The concept of the “structure of feeling” is thus invoked in order to keep tabs on social change and capture the perhaps less palpable elements of transformation as they are happening.
Emotions and the scholastic disposition
Over the past years, the discourse of feelings has certainly become more prominent in academia. Just last week, Cambridge’s Festival of Ideas featured a discussion on the topic, framing it within issues of free speech and trigger warnings on campus. While the debate itself has a longer history in the US, it has begun to attract more attention in the UK – most recently in relation to challenging colonial legacies at both Oxford and Cambridge.
Despite the multiple nuances of political context and the complex interrelation between imperialism and higher education, the debate in the media predominantly plays out in dichotomies of ‘thinking’ and ‘feeling’. Opponents tend to pit trigger warnings or the “culture of offence” against the concept of academic freedom, arguing that today’s students are too sensitive and “coddled”, which, in their view, runs against the very purpose of university education. From this perspective, education is about ‘cultivating’ feelings: exercising control over them, submerging them under the strict institutional structures of the intellect.
Feminist scholars, in particular, have extensively criticised this view for its reductionist properties and, not least, its propensity to translate into institutional and disciplinary policies that seek to exclude everything framed as ‘emotional’, bodily, or material (and, by association, ‘feminine’) from academic knowledge production. But the cleavage runs deeper. Research in the social sciences is often framed in the dynamic of ‘closeness’ and ‘distancing’, ‘immersion’ and ‘purification’: one first collects data by aiming to be as close as possible to the social context of the object of research, but then withdraws from it in order to carry out analysis. While approaches such as grounded theory or participatory methods (cl)aim to transcend this boundary, its echoes persist in the structure of presentation of academic knowledge (for instance, the division between data and results), as well as in the temporal organisation of graduate education (for instance, the idea that the road to a PhD includes a period of training in methods and theories, followed by data collection/fieldwork, followed by analysis and the ‘writing up’ of results).
The idea of ‘distanced reflection’ is deeply embedded in the history of academic knowledge production. In Pascalian Meditations, Bourdieu relates it to the concept of skholē – the scholarly disposition – predicated on the distinction between intellectual and manual labour. In other words, in order for reflection to exist, it needed to be separated from the vagaries of everyday existence. One of its radical manifestations is the idea of the university as a monastic community. Oxford and Cambridge, for instance, were explicitly constructed on this model, giving rise to animosities between ‘town’ and ‘gown’: the concerns of ‘lay’ folk were thought to be diametrically opposed to those of the educated. While arguably less prominent in (most) contemporary institutions of knowledge production, the dichotomy is still unproblematically transposed into concepts such as the “university’s contribution to society”, which assume universities are distinct from society, or at least that their interests are radically different from those of “the society” – raising the obvious question of who, in fact, this society is.
Emotions, reason, and critique
Paradoxically, perhaps, one of the strongest reverberations of the idea is to be found in the domain of social critique. At first glance, this sounds counter-intuitive – after all, critical social science should be about abandoning the ‘veneer’ of neutrality and engaging with the world in all of its manifestations. However, establishing the link between social science and critique rests on something that Boltanski, in his critique of Bourdieu’s sociology of domination, calls the metacritical position:
For this reason we shall say that critical theories of domination are metacritical in order. The project of taking society as an object and describing the components of social life or, if you like, its framework, appeals to a thought experiment that consists in positioning oneself outside this framework in order to consider it as a whole. In fact, a framework cannot be grasped from within. From an internal perspective, the framework coincides with reality in its imperious necessity. (Boltanski, 2011:6-7)
Academic critique, in Boltanski’s view, requires assuming a position of exteriority. A ‘simple’ form of exteriority rests on description: it requires ‘translation’ of lived experience (or practices) into categories of text. However, passing the kind of moral judgements critical theory rests on calls for, he argues, a different form of distancing: complex exteriority.
In the case of sociology, which at this level of generality can be regarded as a history of the present, with the result that the observer is part of what she intends to describe, adopting a position of exteriority is far from self-evident… This imaginary exit from the viscosity of the real initially assumes stripping reality of its character of implicit necessity and proceeding as if it were arbitrary (as if it could be other than it is or even not be);
This “exit from the viscosity of the real” (a lovely phrase!) proceeds in two steps. The first takes the form of “control of desire”, that is, procedural distancing from the object of research. The second is the act of judgement by which a social order is ‘ejected’, seen in its totality, and as such evaluated from the outside:
In sociology the possibility of this externalization rests on the existence of a laboratory – that is to say, the employment of protocols and instructions respect for which must constrain the sociologist to control her desires (conscious or unconscious). In the case of theories of domination, the exteriority on which critique is based can be called complex, in the sense that it is established at two different levels. It must first of all be based on an exteriority of the first kind to equip itself with the requisite data to create the picture of the social order that will be submitted to critique. A meta critical theory is in fact necessarily reliant on a descriptive sociology or anthropology. But to be critical, such a theory also needs to furnish itself, in ways that can be explicit to very different degrees, with the means of passing a judgement on the value of the social order being described. (ibid.)
Critique: inside, outside, in-between?
To what degree can this categorisation be applied to the current critique of the conditions of knowledge production in academia? After all, most of those who criticize the neoliberal transformation of higher education and research are academics. It is therefore reasonable to question the degree to which they can lay claim to a position of exteriority. More problematically (or interestingly), however, it is also questionable to what degree a position of exteriority is achievable at all.
Boltanski draws attention to this problem by emphasising the distinction between the cognition – awareness – of ‘ordinary’ actors and that of sociologists (or other social scientists), the latter being, presumably, able to perceive structures of domination that the subjects of their research do not:
Metacritical theories of domination tackle these asymmetries from a particular angle – that of the miscognition by the actors themselves of the exploitation to which they are subject and, above all, of the social conditions that make this exploitation possible and also, as a result, of the means by which they could stop it. That is why they present themselves indivisibly as theories of power, theories of exploitation and theories of knowledge. By this token, they encounter in an especially vexed fashion the issue of the relationship between the knowledge of social reality which is that of ordinary actors, reflexively engaged in practice, and the knowledge of social reality conceived from a reflexivity reliant on forms and instruments of totalization – an issue which is itself at the heart of the tensions out of which the possibility of a social science must be created (Boltanski, 2011:7)
Hotel Academia: you can check out any time you like, but you can never leave?
How does one go about thinking about the transformation of the conditions of knowledge production when one is at the same time reflexively engaged in practice and relying on the reflexivity provided by sociological instruments? Is it at all possible? Feelings of anxiety, to this end, could be provoked exactly by this lack of opportunity to step aside – to disembed oneself from academic life and reflect on it at the leisurely pace of skholē. On the one hand, this certainly has to do with the changing structure and tempo of academic life – acceleration and demands for increased output: in this sense, anxiety is a reaction to changes perceived and felt, the feeling that the ground is no longer stable, like a sense of vertigo. On the other hand, however, this feeling of decentredness could be exactly what contemporary critique calls for.
The challenge, of course, is how to turn this “structure of feeling” into something that has analytical as well as affective power – and can transform the practice itself. Stravinsky’s Rite of Spring, I think, is a wonderful example of this. As a melody, it is fundamentally disquieting: its impact derives primarily from the fact that it disrupted what were, at the time, the expectations of the (musical) genre and, in the process, rewrote them.
In other words, anxiety could be both creative and destructive. This, however, is not some broad call to “embrace anxiety”. There is a clear and pertinent need to understand the way in which the transformations of working conditions – everywhere, and also in the context of knowledge production – are influencing the sense of self and what is commonly referred to as mental health or well-being.
However, in this process, there is no need to externalise anxiety (or other feelings): that is, to frame it as if it were caused by forces outside of, or completely independent from, human influence, including within academia itself (for instance, government policies, or political changes at the supranational level). Conversely, there is no need to completely internalise it, in the sense of ascribing it to the embodied experience of individuals only. If feelings occupy the unstable ‘middle ground’ between institutions and individuals, this is the position from which they will have to be thought. If anxiety is an interpretation of the changes in the structures of knowledge production, its critique cannot but stem from the same position. This position is not ‘outside’, but rather ‘in-between’; insecure and thought-provoking, but no less potent for that.
Which, come to think of it, may be what Williams was trying to say all along.
This poster drew my attention while I was working in the library of Cambridge University a couple of weeks ago:

For a while now, I have been fascinated by the way in which the language of emotions, or affect, has penetrated public discourse. People ‘love’ all sorts of things: the way a film uses interior light, the icing on a cake, their friend’s new hairstyle. They ‘hate’ Donald Trump, the weather, their next-door neighbours’ music. More often than not, conversations involving emotions would not be complete without mentioning online expressions of affect, such as ‘likes’ or ‘loves’ on Facebook or on Twitter.
Of course, the presence of emotions in human communication is nothing new. Even ‘ordinary’ statements – such as, for instance, “it’s going to rain tomorrow” – frequently entail an affective dimension (most people would tend to get at least slightly disappointed at the announcement). Yet, what I find peculiar is that the language of affect is becoming increasingly present not only in communication mediated by non-human entities, but also in communication directed at them. Can you really ‘love’ a library? Or be ‘friends’ with your local coffee place?
This isn’t in any way to concede ground to techno-pessimists who blame social media for ‘declining’ standards in human communication, nor even to express concern over the ways in which affective ‘reaction’ buttons allow the tracking of online behaviour (privacy is always a problem, and ‘unmediated’ communication largely a fiction). Even if face-to-face interaction is qualitatively different from online interaction, there is nothing to support the claim that this makes it inherently more valuable, or, indeed, more ‘real’ (see: “IRL fetish” [i]). It is the social and cultural framing of these emotions, and, especially, the way the social sciences think about it – the social theory of affect, if you wish – that concerns me here.
Fetishism and feeling
So what is different about ‘loving’ your library as opposed to, say, ‘loving’ another human being? One possible way of going about this is to interpret expressions of emotion directed at or through non-human entities as ‘shorthand’ for those aimed at other human beings. The kernel of this idea is contained in Marx’s concept of commodity fetishism: emotion, or affect, directed at an object obscures the all-too-human relationship (in his case, of capital) behind it. In this sense, ‘liking’ your local coffee place would be an expression of appreciation for the people who work there, for the way they make a double macchiato, or just for the times you spent there with friends or other significant others. In human-to-human communication, things would be even more straightforward: generally speaking, ‘liking’ someone’s status updates, photos, or Tweets would signify appreciation of/for the person, agreement with, or general interest in, what they’re saying.
But what if it is actually the inverse? What if, in ‘liking’ something on Facebook or on Twitter, the human-to-human relationship is, in fact, epiphenomenal to the act? The prime currency of online communication is thus the expenditure of (emotional) energy, not the relationship that it may (or may not) establish or signify. In this sense, it is entirely irrelevant whether one is liking an inanimate object (or concept), or a person. Likes or other forms of affective engagement do not constitute any sort of human relationship; the only thing they ‘feed’ is the network itself. The network, at the same time, is not an expression, reflection, or (even) simulation of human relationships: it is the primary structure of feeling.
All hail…
Yuval Noah Harari’s latest book, Homo Deus, puts the issue of emotions at the centre of the discussion of the relationship between humans and AI. In a review in The Guardian, David Runciman writes:
“Human nature will be transformed in the 21st century because intelligence is uncoupling from consciousness. We are not going to build machines any time soon that have feelings like we have feelings: that’s consciousness. Robots won’t be falling in love with each other (which doesn’t mean we are incapable of falling in love with robots). But we have already built machines – vast data-processing networks – that can know our feelings better than we know them ourselves: that’s intelligence. Google – the search engine, not the company – doesn’t have beliefs and desires of its own. It doesn’t care what we search for and it won’t feel hurt by our behaviour. But it can process our behaviour to know what we want before we know it ourselves. That fact has the potential to change what it means to be human.”
On the surface, this makes sense. Algorithms can measure our ‘likes’ and other emotional reactions and combine them into ‘meaningful’ patterns – e.g., correlate them with specific background data (age, gender, location), time of day, etc., and, on the basis of this, predict how we will act (click, shop) in specific situations. However, does this amount to ‘knowledge’? In other words, if machines cannot have feelings – and Harari seems adamant that they cannot – how can they actually ‘know’ them?
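To make the ‘counting’ point concrete, here is a deliberately toy sketch in Python (my own illustration, not Harari’s argument nor anything any platform actually runs; all names and numbers are invented). It tallies a user’s reactions and correlates them with a recorded behaviour, which is roughly what ‘predicting how we will act’ reduces to in its simplest form:

import numpy as np

# Invented data: each row is a user; columns are counts of 'likes' given to
# coffee-shop pages, 'likes' given to library pages, and the user's age.
features = np.array([
    [12, 3, 24],
    [2, 15, 31],
    [8, 1, 22],
    [0, 9, 45],
    [20, 2, 27],
])
# Whether each user clicked on a coffee-shop advert (1 = yes, 0 = no).
clicked_coffee_ad = np.array([1, 0, 1, 0, 1])

# 'Prediction' here is nothing more than correlating counts with behaviour.
labels = ["coffee-shop likes", "library likes", "age"]
for name, column in zip(labels, features.T):
    r = np.corrcoef(column, clicked_coffee_ad)[0, 1]
    print(f"{name}: correlation with clicking = {r:.2f}")

However sophisticated the model built on top of such tallies becomes, its input remains counts of reactions; nothing in it registers what the reaction meant to the person who made it.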
Frege on Facebook
This comes close to a philosophical problem I’ve been trying to get a grip on recently: the Frege-Geach (alternatively, the embedding, or Frege-Geach-Searle) problem. It involves two steps. The first is to claim that there is a qualitative difference between moral and descriptive statements – for instance, between saying “It is wrong to kill” and “It is raining”. Most humans, I believe, would agree with this. The second is to observe that there is no basis in sentence structure alone for claiming this sort of difference, which then leads to the problem of explaining its source – how do we know there is one? In other words, how can it be that moral and descriptive terms have exactly the same sort of semantic properties in complex sentences, even though they have different kinds of meaning? Where does this difference stem from?
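The standard illustration of the embedding worry (my gloss, not part of the original formulation) sets two arguments side by side:

(1) It is wrong to kill. (2) If it is wrong to kill, then it is wrong to pay someone else to kill. (3) Therefore, it is wrong to pay someone else to kill.

(1’) It is raining. (2’) If it is raining, then the streets are wet. (3’) Therefore, the streets are wet.

Both arguments have exactly the same form, and for the first to be valid, “it is wrong to kill” has to mean the same thing when asserted on its own in (1) and when it sits, unasserted, inside the conditional in (2). Whatever distinguishes the moral from the descriptive statement, sentence structure alone will not deliver it.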
The argument can be extended to feelings: how do we know that there is a qualitative difference between statements such as “I love you” and “I eat apples”? Or between loving someone and ‘liking’ an online status? From a formal (syntactic) perspective, there isn’t one. More interestingly, however, there is no reason why machines should not be capable of such a form of expression. In this sense, there is no way to reliably establish that likes coming from a ‘real’ person and from, say, a Twitterbot are qualitatively different. As humans, of course, we would claim to know the difference, or at least to be able to spot it. But machines cannot. There is nothing inherent in the expression of online affect that would allow algorithms to distinguish between, say, the act of ‘loving’ the library and the act of loving a person. Knowledge of emotions, in other words, is not reducible to counting, even if counting takes increasingly sophisticated forms.
How do you know what you do not know?
The problem, however, is that humans do not have superior knowledge of emotions, their own or other people’s. I am not referring to situations in which people are unsure or ‘confused’ about how they feel [ii], but rather to the limited language – forms of expression – available to us. The documentary “One More Time With Feeling”, which I saw last week, engages with this issue in a way I found incredibly resonant. Reflecting on the loss of his son, Nick Cave relates how the words that he or the people around him could use to describe the emotion seemed equally misplaced, maladjusted and superfluous (until the film comes back into circulation, Amanda Palmer’s review, which addresses a similar question, is here) – not because they couldn’t reflect it accurately, but because there was no necessary link between them and the structure of feeling at all.
Clearly, the idea that language does not reflect, but rather constructs – and thus also constrains – human reality is hardly new: Wittgenstein, Lacan, and Rorty (to name but a few) have offered different interpretations of how and why this is the case. What I found particularly poignant about the way Cave frames it in the film is that it questions the whole ontology of emotional expression. It’s not just that language acts as a ‘barrier’ to the expression of grief; it is the idea of the continuity of the ‘self’ supposed to ‘have’ those feelings that’s shattered as well.
Love’s labour’s lost (?): between practice and theory
This brings back some of my fieldwork experiences from 2007 and 2008, when I was doing a PhD in anthropology, writing on the concept of romantic relationships. Whereas most of my ‘informants’ – research participants – could engage in lengthy elaboration of the criteria they use in choosing (‘romantic’) partners (as well as, frequently, the reasons why they wouldn’t designate someone as a partner), when it came to emotions their narratives could frequently be reduced to one word: love (it wasn’t for lack of expressive skills: most were highly educated). It was framed as a binary phenomenon: either there or not there. At the time, I was more interested in the way their (elaborated) narratives reflected or coded markers of social inequality – for instance, class or status. Recently, however, I have been going back more to their inability (or unwillingness) to elaborate on the emotion that supposedly underpins, or at least buttresses, those choices.
Theoretical language is not immune to these limitations. For instance, whereas the social sciences have made significant steps in deconstructing notions such as ‘man’, ‘woman’, ‘happiness’, and ‘family’, we are still miles away from seriously examining concepts such as ‘love’, ‘hate’, or ‘fear’. Moira Weigel’s and Eva Illouz’s work are welcome exceptions to the rule: Weigel uses the feminist concept of emotional labour to show how the responsibility for maintaining relationships tends to be unequally distributed between men and women, and Illouz demonstrates how modern notions of dating come to define the subjectivity and agency of persons in ways conducive to the reproduction of capitalism. Yet, while both do a great job in highlighting the social aspects of love, they avoid engaging with its ontological basis. This leaves the back door open for an old-school dualism that either assumes there is an (a- or pre-social?) ‘basis’ to human emotions, which can be exploited or ‘harvested’ through relationships of power; or, conversely, that all emotional expression is defined by language, and thus that its social construction is the only thing worth studying. It’s almost as if ‘love’ is the last construct left standing, and we’re all too afraid to disenchant it.
For a relational ontology
A relational ontology of human emotions could, in principle, aspire to de-throne this nominalist (or, possibly worse, truth-proceduralist) notion of love in favour of one that sees it as a by-product of relationality. This isn’t to claim that ‘love’ is epiphenomenal: to the degree to which it is framed as a motivating force, it becomes part and parcel of the relationship itself. However, not seeing it as central to this inquiry would hopefully allow us to work on the diversification of the language of emotions. Instead of using a single marker (even one as polysemic as ‘love’) for the relationship with one’s library and with one’s significant other, we could start thinking about the ways in which they are (or are not) the same thing. This isn’t, of course, to sanctify ‘live’ human-to-human emotion: I am certain that people can feel ‘love’ for pets, places, or deceased loved ones. Yet, calling it all ‘love’ and leaving it at that is a pretty shoddy way of going about feelings.
Furthermore, a relational ontology of human emotions would mean treating all relationships as unique. This isn’t, to be clear, a pseudoanarchist attempt to deny standards of or responsibility for (inter)personal decency; and even less a default glorification of long-lasting relationships. Most relationships change over time (as do people inside them), and this frequently means they can no longer exist; some relationships cannot coexist with other relationships; some relationships are detrimental to those involved in them, which hopefully means they cease to exist. Equally, some relationships are superficial, trivial, or barely worth a mention. However, this does not make them, analytically speaking, any less special.
This also means they cannot be reduced to the same standard, nor measured against each other. This, of course, runs against one of capitalism’s dearly held assumptions: that all humans are comparable and, thus, mutually replaceable. This assumption is vital not only for the reproduction of labour power, but also, for instance, for the practice of dating [iii], whether online or offline. Moving towards a relational concept of emotions would allow us to challenge this notion. In this sense, ‘loving’ a library is problematic not because the library is not a human being, but because ‘love’, just like other human concepts, is a relatively bad proxy. Contrary to what pop songs would have us believe, it’s never the answer, and, quite possibly, not even the question.
Some Twitter wisdom for the end….
————————————————————————–
[i] Thanks go to Mark Carrigan who sent this to me.
[ii] While I am very interested in the question of self-knowledge (or self-ignorance), for some reason, I never found this particular aspect of the question analytically or personally intriguing.
[iii] Over the past couple of years, I’ve had numerous discussions on the topic of dating with friends and colleagues, but also with acquaintances and (almost) strangers (the combination of having a theoretical interest in the topic and not being in a relationship seems to be particularly conducive to becoming involved in such conversations, regardless of whether one wants it or not). I feel compelled to say that my critique of dating (and the concomitant refusal to engage in it, at least as far as its dominant social forms go) does not, in any way, imply a criticism of people who do. There is quite a long list of people whom I should thank for helping me clarify this, but instead I promise to write another, longer post on the topic, as well as, finally, to develop that app :).
[This post originally appeared on the Sociological Review blog on 3 August, 2016].
Why do we need academic celebrities? In this post, I would like to extend the discussion of academic celebrities from the focus on these intellectuals’ strategies, or ‘acts of positioning’, to what makes them possible in the first place, in the sense of Kant’s ‘conditions of possibility’. In other words, I want to situate the conversation within the broader framework of a critical cultural political economy. This is based on the belief that, if we want to develop an understanding of knowledge production that is truly relational, we need to analyse not only what public intellectuals or ‘academic celebrities’ do, but also what makes, maintains, and, sometimes, breaks their wider appeal, including – not least importantly – our own fascination with them.
To begin with, an obvious point is that academic stardom necessitates a transnational audience and a global market for intellectual products. As Peter Walsh argues, academic publishers play an important role in creating and maintaining such a market; Mark Carrigan and Eliran Bar-El remind us that celebrities like Giddens or Žižek are very good at cultivating relationships with that side of the industry. However, in order for publishers to operate at even a minimal profit, someone needs to buy the product. Simply put, public intellectuals necessitate a public.
While intellectual elites have always been to some degree transnational, two trends associated with late modernity are, in this sense, of paramount importance. One is the expansion and internationalization of higher education; the other is the supremacy of English as the language of global academic communication, coupled with the growing digitalization of the process and products of intellectual labour. Despite the fact that access to knowledge still remains largely inequitable, these trends have contributed to the creation of an expanded potential ‘customer base’. And yet – just as in the case of MOOCs – the availability or accessibility of a product is not sufficient to explain (or guarantee) interest in it. Regardless of whether someone can read Giddens’ books in English, or is able to watch Žižek’s RSA talk online, their arguments, presumably, still need to resonate: in other words, there must be something that people derive from them. What could this be?
In ‘The Existentialist Moment’, Patrick Baert suggests that the global popularity of existentialism can be explained by the success with which Sartre (and the other philosophers who came to be identified with it, such as de Beauvoir and Camus) connected core concepts of existentialist philosophy, such as choice and responsibility, to the concerns of post-WWII France. To some degree, this analysis could be applied to contemporary academic celebrities – Giddens and Bauman wrote about the problems of late or liquid modernity, and Žižek frequently comments on the contradictions and failures of liberal democracy. It is not difficult to see how they would strike a chord with the concerns of a liberal, educated, Western audience. Yet, just as in the case of Sartre, this doesn’t mean their arguments are always presented in the most palatable manner: Žižek’s writing is complex to the point of obscurantism, and Bauman is no stranger to ‘thick description’. Of the three, Giddens’ work is probably the most accessible, although this might have more to do with good editing and academic English’s predilection for short sentences than with the simplicity of the ideas themselves. Either way, it could be argued that reading their work requires a relatively advanced understanding of the core concepts of social theory and philosophy, and the patience to plough through at times arcane language – all at seemingly no or very little direct benefit to the audience.
I want to argue that the appeal of star academics has very little to do with their ideas or the ways in which they are framed, and more to do with the combination of the charismatic authority they exude and the feeling of belonging, or shared understanding, that the consumption of their ideas provides. Like Weber’s priests and magicians, star academics offer a public performance of the transfiguration of abstract ideas into concrete diagnoses of social evils. They offer an interpretation of the travails of late moderns – instability, job insecurity, surveillance, etc. – and, at the same time, the promise that there is something in the very act of intellectual reflection, or the work of social critique, that allows one to achieve a degree of distance from their immediate impact. What academic celebrities thus provide is – even if temporary – (re)‘enchantment’ of a world in which the production of knowledge, so long reserved for the small elite of the ‘initiated’, has become increasingly ‘profaned’, both through the massification of higher education and through the requirement to make the stages of its production, as well as its outcomes, measurable and accountable to the public.
For the ‘common’ (read: Western, left-leaning, highly educated) person, the consumption of these celebrities’ ideas offers something akin to the combination of a music festival and a mindfulness retreat: an opportunity to commune with the ‘like-minded’ and take home a piece of hope, if not for salvation, then at least for temporary exemption from the grind of neoliberal capitalism. Reflection is, after all, as Marx taught us, the privilege of the leisurely; engaging in collective acts of reflection thus equals belonging to (or at least affinity with) ‘the priesthood of the intellect’. As Bourdieu noted in his reading of Weber’s sociology of religion, the laity expect of religion “not only justifications of their existence that can offer them deliverance from the existential anguish of contingency or abandonment, [but] justification of their existence as occupants of a particular position in the social structure”. Thus, Giddens’ or Žižek’s books become the structural or cultural equivalent of the Bible (or the Qur’an, or any religious text): not many people know what is actually in them, even fewer can get the oblique references, but everyone will want one on the bookshelf – not necessarily for what they say, but because of what having them signifies.
This helps explain why people flock to hear Žižek or, for instance, Yanis Varoufakis, another leftist star intellectual. In public performances, their ideas are distilled to the point of simplicity and conveniently latched onto something the public can relate to. At the Subversive Festival in Zagreb, Croatia, in 2013, for instance, Žižek propounded the idea of ‘love’ as a political act. Nothing new, one would say – but who in the audience would not want to believe their crush has the potential to turn into an act of political subversion? Therefore, these intellectuals’ utterances represent ‘speech acts’ in quite a literal sense of the term: not because they are truly (or consequentially) performative, but because they offer the public the illusion that listening (to them) and speaking (about their work) represent, in themselves, a political act.
From this perspective, the mixture of admiration, envy and resentment with which these celebrities are treated in the academic establishment is a reflection of their evangelical status. Those who admire them quarrel about the ‘correct’ interpretation of their works and vie for the status of the nominal successor, which would, of course, also feature ritualistic patricide – which may be the reason why, although surrounded by followers, so few academic celebrities actually elect one. Those who envy them monitor their rise to fame in the hope of emulating it one day. Those who resent them, finally, tend to criticize their work for intellectual ‘baseness’, an argument that is in itself predicated on the distinction between academic (and thus ‘sacred’) and popular, ‘common’ knowledge.
Many are, of course, shocked when their idols turn out not to be ‘original’ thinkers channeling divine wisdom, but plagiarists or serial repeaters. Yet, there is very little to be surprised by; academic celebrities, after all, are creatures of flesh and blood. Discovering their humanity and thus their ultimate fallibility – in other words, the fact that they cheat, copy, rely on unverified information, etc. – reminds us that, in the final instance, knowledge production is work like any other. In other words, it reminds us of our own mortality. And yet, acknowledging it may be a necessary step in dismantling the structures of rigid, masculine, God-like authority that still permeate academia. In this regard, it makes sense to kill your idols.