Night(mare) in Michaelmas*: or, an academic Halloween tale

Halloween, as the tradition goes, is the time when the curtain between the two worlds opens. Of course, in anthropology you learn that this is not a tradition at all – all traditions are invented; it just depends how long ago. This Halloween, however, I would like to tell you a story about boundaries between worlds, and about those who stand, simultaneously, on both sides.

1. Straw (wo)men

Scarecrow, effigy, straw man: they are remarkably similar. Made of dried grass, leaves, and branches, sometimes dressed in rags, but rarely with recognizable personal characteristics. Personalizing is the province of Voodoo dolls, or those who use them, dark magic, and violence, which can sometimes be serious and political. Yet they are all unmistakably human: in this sense, they serve to attune us to the ordinariness – the unremarkability – of everyday violence.

Scarecrows stand on ‘our’ side, and guard our world – that is, the world that relies on agricultural production – against ‘theirs’ (that of crows, other birds, and non-human animals: they are, we are told, enemies). The sympathy and even pity we feel for scarecrows (witness The Wizard of Oz) shields us from the knowledge that scarecrows bear the brunt of the violence we do to Others, and to other worlds. We made them the objects of crows’ fear and hatred, so that they protect us from what we do not want to acknowledge: that our well-being, and our food, comes only at the cost of destroying others’.

Effigies are less unambiguously ‘ours’. Regardless of whether they are remnants of *actual* human sacrifice (the evidence for this is somewhat thin), they belong both to ‘their’ world and to ‘ours’. ‘Theirs’ is the non-human world of fire, ash, and whatever remains once human artifices burn down. ‘Ours’ is the world of ritual, collectivity, of the safe reinstatement of order. Effigies are thus simultaneously dead and alive. We construct them, but not to keep the violence – of Others, and towards Others, as with scarecrows – at bay; we construct them in order to restrain and absorb the violence directed towards our own kind. When we burn effigies, we aim to destroy what is evil, rotten, and polluting amongst ourselves. This is why effigies are such a threatening political symbol: they always herald violence in our midst.

Straw men, by contrast, are neither scarecrows nor effigies: we construct them so that we may – selfishly – live. A ‘straw man’ argument is one we use in order to make it easier to win. We do not engage with actual critique, or possible shortfalls, of our own reasoning: instead, we construct an imaginary opponent to make ourselves appear stronger. This is why it makes no sense to fear straw men, though there are good reasons to be suspicious of those who fashion them all too often. They do not cross boundaries between worlds: they belong fully, and exclusively, to this one.

Straw men are not the stuff of horror. Similarly, there is no reason to fear the scarecrow, unless you are a crow. Effigies, however, are different.

2. Face(mask) to face(mask)

Universities in the UK insist on face-to-face teaching, despite the legal challenge from the University and College Union, protests from individual academics, and by now overwhelming evidence that there is no way to make classrooms fully ‘Covid-secure’. The justification for this has usually taken the form ‘students expect *some* face-to-face teaching’. This, I believe, means university leadership fears that students (or, more likely, their parents, possibly encouraged by the OfS and/or The Daily Mail) would request tuition fee reimbursements if all teaching were to shift online. A more coherent interpretation of the stubborn insistence on f2f teaching is that shifting teaching online would mean many students would elect not to live in student accommodation. Student accommodation, in turn, is a major source of profit (and employment) not only for universities, but also for private landlords, businesses, and different kinds of services in cities that happen to have a significant student population.

In essence, then, f2f teaching serves to secure two sources of income, both disproportionately benefitting the propertied class. In this sense, it is completely irrelevant who teaches face-to-face or, indeed, what is taught. This is obvious from the logic of guaranteeing face-to-face provision in all disciplines, not only those that might have a demonstrable need for some degree of physical co-presence (I’m thinking of those that use laboratories, or work with physical material). The content, the delivery, and, above all, the rationale for maintaining face-to-face teaching remain unjustified. “They” (students?) expect to see “us” (teachers?) in flesh, blood, and, of course, facemask – which, we hope, will prevent airborne coronavirus particles from infecting us, and thus keep us from getting ill, suffering the consequences, and potentially dying.

That this kind of risk would be an acceptable price for perfunctorily parading behind Perspex screens can only seem odd if we believe that what is involved in face-to-face teaching is us as human beings and individuals. But it is not: when we walk into the classroom, we are not individual academics, teachers, thinkers, writers, or whatever else we may be. We are the ‘face’ of ‘face-to-face’ teaching. We are the effigies.

3. On institutional violence

On Monday, I am teaching a seminar in social theory. Under ‘normal’ circumstances, this would mean leading small group discussions on activities and readings that students have engaged with. Under these circumstances, it will mean groups of socially distanced students trying to discuss the readings while struggling to hear each other through face masks. Given that I struggle to communicate ‘oat milk flat white’ from behind a mask, I have serious doubts that I will manage to convey particularly sophisticated insights into social theory.

But this does not matter: I am not there as a lecturer, as a human being, as a theorist. I am there to sublimate the violence that we are all complicit in. This violence concerns not only the systematic exposure to harm created by the refusal to acknowledge the risks of cramming human beings unnecessarily into closed spaces during the pandemic of an airborne disease, but also forms of violence specific to higher education: the sporadic violence of the curriculum, still overwhelmingly white, male, and colonial (incidentally, I am teaching exactly such a session). More importantly, it includes the violence that we tacitly accept when we overlook the fact that ‘our’ universities subsist on student fees, and that fees are themselves products of violence. The capital that fees depend on is either a product of exploitation in the past, or of student debt, and thus of exploitation in the future.

When I walk into the classroom on Monday, I will want my students to remember that every lecturer stands on the boundary between two worlds, simultaneously dead and alive. Sure, we all hope everyone makes it out of there alive, but that’s not the point: the point is how close to the boundary we get. When I walk into the classroom on Monday, I will remind my students that what they see is not me, but the effigy constructed to obscure the violence of the intersection between academic and financial capital. When I walk into the classroom on Monday, I will want my students to know that the boundary between two worlds is very, very thin, and not only on Halloween.

* Michaelmas, for those who do not know, is the name of the autumn (first) term of the academic year at Oxford, Cambridge, and, incidentally, Durham.

Women and space

Mum in space

Recently, I saw two portrayals of women* in space: Proxima, starring Eva Green as the female member of a crew of astronauts training for the first mission to Mars; and Away, starring Hilary Swank as the commander of the crew on the first mission to Mars (disclaimer: I have only seen the first two episodes of Away, so I’m not sure what happens in the rest of the series). Both would have been on my to-watch list even under normal circumstances: I grew up on science fiction, and, like any woman who, in Rebecca West’s unsurpassed formulation, expresses opinions that distinguish her from a doormat, I have spent a fair bit of time thinking about gender, achievement, and leadership. This time, however, an event coloured my perception of both: my mum’s death in October.

My mother was 80; she died of complications related to metastatic cancer, which had started as breast cancer but had by that point spread to her liver. She had dealt with cancer intermittently since 2009: she had had a double mastectomy, and repeated chemotherapy/radiation at relatively regular intervals since – in 2011, 2015, 2018 and, finally, 2020 – the last round stopping shortly after it started, once it became evident that it could not reverse the course of Mum’s illness and was, effectively, making it worse.

As anyone living with this kind of illness knows, it is always a long game of predicting and testing, of waiting to see when the next one will come up; it is possible that the cancer that eventually killed my mum was missed because of flaky screening in November, or because of delays at the height of the pandemic. What matters is that, by the time they discovered it during a regular screening in June, it was already too late.

What matters is that, because living with this sort of illness entails living in segments of time between two appointments, two screenings, two test results, we had, in a way, expected this. We had time to prepare. My mother had time to prepare. I had time to prepare. What also matters is that I was able to travel, to leave the country in time to see my mother still alive, despite the fact that at that time the Home Office had been sitting on my Tier 2 visa application since the start of July, and on the request to expedite it on compassionate grounds for three weeks. This matters, because many other women are not so lucky as to have the determination to call the Home Office visa processing centre three times, the cultural capital to contact their MP when it seemed like time was running out, or, for that matter, an MP (also a woman) who took on the case. It matters, because I was able to be there for the last two weeks of my mother’s life. I was there when she died.

But this essay isn’t about me, or my mum. It’s about women, and the stars.

Women and the stars

Every story about the stars is, in essence, a story of departure from Earth, and thus a story of separation, and thus a story of leaving, and of what’s left behind. This doesn’t mean that these themes need to be parsed via the tired dichotomy of the ‘masculine-proactive-transcendent’ principle pitted against the ‘feminine-grounded-immanent’ one, but they often are, and both Proxima and Away play out this tension.

For those who have not seen either, Proxima and Away are both about women travelling into space. Proxima’s Sarah (Eva Green) is the French member of the international crew of astronauts spending a year at the International Space Station in preparation for the first mission to Mars; Away’s Emma (Hilary Swank) is the commander of the crew on the first mission to Mars itself.

The central tension develops along two vectors: the characters’ relationships to their male partners (Sarah’s ex, Thomas; Emma’s Matt), and their relationships to their daughters – Sarah’s Stella, and Emma’s Alexis (‘Lex’). While the relationship to the partners is not irrelevant, it is obvious that the mother-daughter relationship is central to the plot. Neither is it accidental that both children – only children, at that – are girls: in this sense, the characters’ relationship to their daughters is not only a relationship to the next generation of women, it is also a relationship to their ‘little’ selves. The daughters’ desire for their mothers to return – or to stay, to never leave for the stars – is thus also a reflection of the mothers’ own desire to give up, to stay in the comfort of the ground, the Earth, the safe (if suffocating) embrace of family relations and gender roles, in which ‘She’ is primarily, after all, a Mother.

It is interesting that both characters, in Proxima and in Away, find similar ‘solutions’ – or workarounds – for this central tension. In Proxima, Sarah leaves her daughter, but betrays her own commitment by violating pre-flight quarantine regulations, sneaking out the night before departure to take her daughter to see the rocket up close. In Away, Emma decides to return from the pre-flight lunar base after her husband has a heart attack, only to be persuaded to stay, both by the (slowly recovering) husband and, more importantly, by their daughter, who – at the last minute – realizes the importance of the mission and says she wants her mum to stay on it, rather than return to Earth. The guilt both women feel over ‘abandoning’ their daughters (and thus their own traditional roles) is thus compensated or resolved by inspiring the next generation of women to ‘look at the stars’: to aim higher, and to prioritize transcendence at the cost of immanence, even when the price is pain.

We might scoff at the simple(ish) juxtaposition of Earth and the stars, but the essence of that tension is still there, no matter how we choose to frame it. It is the basic tension explored in Simone de Beauvoir’s existentialist philosophy – the tension between being-for-oneself and being-for-others. It’s the unforgiving push and pull that leads so many women to take on disproportionate amounts of emotional, care, and organizational labour. It’s a tension you can’t resolve, no matter how queer, trans, or childless you are. Even outside of ‘traditional’ gender roles, women are still judged first and foremost on their ability to form and retain relationships; research on women leaders, for instance, shows they are required to consistently demonstrate a ‘collective’ spirit of the sort not expected of their male counterparts.

A particularly brutal version of this tension presented itself in the months before my mother died. I was stuck in England, unable to leave before my Tier 2 visa was approved, and her condition was getting worse. The Home Office was already behind its 8-week timeframe due to the pandemic; the official guidance – confirmed by the University – was that, if I chose to leave the country before the decision had been made, not only would I automatically forfeit my application, I would also be banned for a year from re-entering the country, and for a further year from re-applying for the same sort of permit. In essence, this meant choosing between my job, which I love, and my mother, whom I also loved.

Luckily, I never had to make this choice; after a lot of intervention, my visa came through, and I was able to travel. I am not sure what kind of decision I would have made.

Mum and daughter in space

I saw Proxima in August, shortly after moving from Cambridge to Durham to start my job at the University. It was only the second time I was able to cry after having learned of my mum’s most recent, and final, diagnosis. I saw Away after returning from the funeral in early October, having acquired a Netflix account in a vague attempt at ‘self-care’ that didn’t involve reading analytic philosophy.

My mum saw neither, and I am not sure if she would have recognized herself in them. Hers was a generation of transcendence, buttressed by post-war recovery and socialism’s early successes in eradicating gender inequality. She introduced me to science fiction, but it was primarily Arthur C. Clarke, Isaac Asimov, and Stanisław Lem, my mum having no problems recognizing herself in the characters of Dave Bowman, Hari Seldon, or Rohan from ‘The Invincible’. Of course, as I was growing up, neither did I: it was only after I had already reached a relatively advanced career stage – and, it warrants mentioning, in particular after I began living full-time in the UK – that I started realizing how resolute the steel grip of patriarchy is in trying to make sure we never reach for the stars.

My mother famously said that she never considered herself a feminist, but had led a feminist life. By this, she meant that she had an exceptional career, including a range of leadership positions – first in research, then in political advising, and finally in diplomacy – and that she had a child – me – as a single mother, without a partner involved. What she didn’t stop to think about was that, throughout this process, she had the support not only of two loving parents (both of my grandparents had already retired when I was born), but also of socialist housing, childcare, and education policies. I would point this fact out to her on the rare occasions when she would bring up her one remaining regret: that I chose not to have children. Though certainly aided by the fact that I never felt the desire to, this decision was buttressed by my belief (and observation) that, no matter how dedicated, egalitarian, etc. etc. a partner may be, it is always mothers who end up carrying the greater burden of childcare, organization, and planning. I hope that she, in the end, understood this decision.

In one of the loveliest messages** I got after my mum died, a friend wrote that he believed my mum was now a star watching over me. As much as I would like to think that, if anything, the experience of death has resolutely convinced me that there is no ‘thereafter’ – no space, place, or plane where we go after we die.

But I’m still watching the stars.  

This image is, sadly, a pun that’s untranslatable into English; sorry.

* For avoidance of doubt, trans women are women

**Throughout the period, I’ve received absolutely stellar messages of love and support. Among these, it warrants saying, quite a few came from men, but those that came from women were exceptional in striking the balance between giving me space to think my own thoughts and sit with my own grief, while also making sure I knew I could rely on their support if I wanted to. This kind of balance, I think, comes partly out of having to always negotiate being-for-oneself and being-for-others, but there is a massive lesson in solidarity right there.

The King’s Two(ish) Bodies

Contemporary societies, as we know, rest on calculation. From the establishment of statistics, which was essential to the construction of the modern state, to double-entry bookkeeping as the key accounting technique for ‘rationalizing’ capitalism and colonial trade, the capacity to express quality (or qualities, to be more precise) through numbers is at the core of the modern world.

From a sociological perspective, this capacity involves a set of connected operations. One is valuation, the social process through which entities (things, beings) come to (be) count(ed); the other is commensuration, or the establishment of equivalence: what counts as or for what, and under what circumstances. Marion Fourcade specifies three steps in this process: nominalization, the establishment of ‘essence’ (properties); cardinalization, the establishment of quantity (magnitude); and ordinalization, the establishment of relative position (e.g. position on a scale defined by distance from other values). While, as Mauss has demonstrated, none of these processes are unique to contemporary capitalism – barter, for instance, involves both cardinalization and commensuration – they are both amplified by and central to the operation of global economies.
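To see how these operations hang together, here is a minimal sketch in code – my own toy illustration, not Fourcade’s formalism, with invented entities and numbers:

```python
# A toy model of quantification (all names and values invented for illustration).
from dataclasses import dataclass

@dataclass
class Entity:
    kind: str     # nominalization: the entity is named as a kind of thing
    value: float  # cardinalization: the entity is assigned a magnitude

def ordinalize(entities):
    """Ordinalization: derive relative positions from magnitudes."""
    ranked = sorted(entities, key=lambda e: e.value, reverse=True)
    return {e.kind: rank for rank, e in enumerate(ranked, start=1)}

# Commensuration: once everything carries a magnitude on the same scale,
# otherwise unlike things become comparable – and exchangeable.
goods = [Entity("grain", 12.0), Entity("cattle", 30.0), Entity("labour", 8.5)]
print(ordinalize(goods))  # {'cattle': 1, 'grain': 2, 'labour': 3}
```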

Given how central the establishment of equivalence is to contemporary capitalism, it is not a little surprising that we seem so palpably bad at it. How else to explain the fact that, on the day when 980 people died from Coronavirus, the majority of UK media focused on the fact that Boris Johnson was recovering in hospital, reporting in excruciating detail the films he would be watching? While some joked that such excessive concern for the health of the (secular) leader was reminiscent of the doctrine of ‘The King’s Two Bodies’, others seized the metaphor and ran with it – unironically.

Briefly (and somewhat reductively – please go somewhere else if you want to quibble, political theory bros), the ‘King’s Two Bodies’ is a concept in political theology by which the state is composed of two ‘corporeal’ entities – the ‘body politic’ (the population) and the ‘body natural’ (the ruler)*. This principle allows the succession of political power even after the death of the ruler, reflected in the pronouncement ‘The King is Dead, Long Live the King’. From this perspective, the claim that 980 < 1 may seem justified. Yet there is something troubling about this, even beyond basic principles of decency. Is there a number large enough to disturb this balance? Is it irrelevant whose lives those are?

Formally, most liberal democratic societies forbid the operation of a principle of equivalence that values some human beings as lesser than others. This is most clearly expressed in universal suffrage, where one person (or, more specifically, one political subject) equals one vote; on the global level, it is reflected in the principle of human rights, which asserts that all humans have a certain set of fundamental and inalienable rights simply as a consequence of being human. All members of the set ‘human’ have equal value, just by being members of that set: in Badiou’s terms, they ‘count for one’.

Yet, liberal democratic societies also regularly violate these principles. Sometimes, unproblematically so: for instance, we limit the political and some other rights of children and young people until they become of ‘legal age’, which is usually the age at which they can vote; until that point, they count as ‘less than one’. Sometimes, however, the consequences of differential valuation of human beings are much darker. Take, for instance, the migrants who are regularly left to drown in the Mediterranean or treated as less-than-human in detention centres; or the NHS doctors and nurses – especially BAME doctors and nurses – whose exposure to Coronavirus gets less coverage than that of politicians, celebrities, or royalty. In the political ontology of contemporary Britain, some lives are clearly worth less than others.

The most troubling implication of the principle by which the body of the ruler is worth more than a thousand (ten thousand? forty thousand?) of ‘his’ subjects, then, is not its ‘throwback’ to mediaeval political theology: it is its meaning for politics here and now. The King’s Two Bodies, after all, is a doctrine of equivalence: the totality of the body politic (the state) is worth as much as the body of the ruler. The underlying operation is 1 = 1. This is horribly disproportionate, but it is an equivalence nonetheless: both the ruler and the population, in this sense, ‘count for one’. From this perspective, the death of a sizeable portion of that population cannot be irrelevant: if the body politic is somewhat diminished, the doctrine of the King’s Two Bodies suggests that the power of the ‘ruler’ is somewhat diminished too. By implication, the political ontology of the British state currently rests not on the principle of equivalence, but on a zero-sum game: losses in population do not diminish the power of the ruler, but rather enlarge it. And that is a dangerous, dangerous form of political ontology.
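Put schematically – a toy formalization of the contrast (my notation, not part of the original doctrine), with \(V\) standing for political ‘worth’:

\[ \text{equivalence:}\quad V_{\text{ruler}} = V_{\text{body politic}}, \qquad \Delta V_{\text{body politic}} < 0 \;\Rightarrow\; \Delta V_{\text{ruler}} < 0 \]

\[ \text{zero-sum:}\quad V_{\text{ruler}} + V_{\text{body politic}} = C, \qquad \Delta V_{\text{ruler}} = -\,\Delta V_{\text{body politic}} > 0 \]

On the first reading, losses in the population diminish the ruler; on the second, they accrue to him.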

*Hobbes’ Leviathan is often seen as the perfect depiction of this principle; it is possible to quibble with this reading, but the cover image for this post – here’s the credit to its creator on Twitter – is certainly the best possible reflection on the shift in contemporary forms of political power in the aftermath of the Covid-19 pandemic.

Never let a serious virus go to waste: solidarity in times of the Corona

[Please note that nothing in this post is a replacement for public health advice: if in doubt, refer to official guidelines].

I’m not going to bang on about the neoliberal origins of the current crisis. To anyone remotely observant, it is obvious that pandemics are more likely to spread quickly in a globalized world, and that decades of underfunding public health services create systems that are unable to cope when one, like the current Covid-19, hits. I’ll leave such conclusions to sufficiently white, British, and hyphenated writers in The Guardian; I’ve written about neoliberalism elsewhere, and a whole host of other people have too. But there is another reason why crises like these are almost a godsend for the kind of authoritarian neoliberalism that seems to be dominant today.

Self-isolation is a useful public health strategy, especially in the first phases of trying to stem the spread of a disease, but a nation of people boarded up in their homes, staring suspiciously at anyone who seems ‘foreign’ or ‘an outsider’, with contact with the ‘outside world’ reduced to television (hello, BBC!) or social media, is a perfect breeding ground for fear, hate, and control. In other words, the neoliberal dream of ‘no such thing as society – only individual men, women, and their families’, made flesh. In this sort of environment, not only do paranoia, misinformation, and mistrust abound, it becomes very difficult to maintain progressive movements or ideas. This post, therefore, is intended as a sort of checklist on how to keep some form of social solidarity going under a possibly prolonged period of social isolation*.

It is a work in progress, and I didn’t have time to edit and proofread it, which means it is probably going to change. Feel free to adapt and share as necessary.

  1. Maintain social networks: build new ones, and reinforce the old.

Maintain networks and links with people whenever safe. You can spend time with people while keeping a decent distance, and obviously staying at home if you do develop symptoms. If mobility or public transport are limited, try to connect with people from the neighbourhood. Ask your neighbours if they need something. Use technology and social media to reach out to people. Text your friends. Call them on Skype: face-to-face contact, even if you are not physically in the same space, is really important.

Set up mutual aid networks (current link for Cambridge here). You can help distribute food (see more below), skills, and care – from childminding to helping those who are less able to provide for themselves. If you are worried, wear a mask and keep a safe distance. Meet in open spaces. Spring is coming, at least in the Northern hemisphere. This is what parks and community gardens are for. Remember public spaces? Those.

  2. Develop alternative networks of provision and supply chains. SHARE THEM.

I know this doesn’t come naturally to people in highly consumerist societies, but think very carefully about your actual needs, and about possible replacements. Most shortages are outcomes of the combination of inadequate planning and the (surprise!) failure of ‘markets’ to ‘self-regulate’. Not having enough to eat is not the same scale of crisis as not being able to get exactly the brand of beer you prefer. Think about those who may need help with provisions: from simple things like helping the elderly or disabled people reach something on the upper shelf of a supermarket, to those who will inevitably be too ill to go out. Ask them if they need anything. Offer to make a meal and share it with them.

If possible, develop alternative means of providing food and other necessities. Grow vegetables or herbs; borrow and repair items (not that you shouldn’t be doing that anyway). Many products that are bought ready-made can be assembled from common household items. Vinegar (white, 5%), for instance, is a relatively reliable disinfectant (this doesn’t mean you should use it on an operating table, but you can use it in the kitchen – listen to doctors, not to Tesco ads; remember your Chemistry lessons). So is vodka, but I didn’t tell you that. And FFS, stop hoarding toilet paper.

  3. Keep busy.

In the first stages, you may be thinking: ‘Lovely! I’ll get to watch all of those Netflix documentaries!’ However, as the experience of people in self-isolation with relatively little to do – think long-distance sailors, monks, and nuns – shows, you will get bored and listless. A limited range of mobility and/or actual illness will make it worse. It is actually very, very important to maintain at least a minimal amount of daily discipline. Don’t just think ‘oh, I’ll just read and maybe go out for a walk’. Make a schedule for yourself, and for your loved ones. Stick to it.

It is very likely that schools and universities are going to shut down, or at least shift most instruction online. This may sound like the least of your worries, but it is incredibly important to keep some form of education going – for yourself, and for others. The immediate reason is that it keeps people occupied; the more distant one is that educational contexts are also opportunities for discussing thoughts and feelings, which may otherwise be scarce. It is also an opportunity to think about education outside of the institutional framework. When my school closed early in the spring of 1999, our literature teacher kept up weekly seminars, which were completely voluntary. Best of all, they allowed us to read and discuss books that were not on the syllabus.

Obviously, we will have to think about ways to create meaningful discussions and forms of interaction in a mixture of online and offline environments, but it should not stop at technical innovation. That narrative about developing education that is not about the needs of the market? This is your opportunity to build it.

  4. Keep active.

Self-isolation does not mean you have to turn into a couch potato (trust me, there is rarely such a wealth of solitude as that experienced on a long walk in nature). Keep moving – it’s fine to go out if you’re feeling healthy, just avoid enclosed and crowded spaces. Apparently, swimming pools are still OK, but even if you do not swim, there are many forms of physical activity you can enjoy outdoors – from running and cycling to, for instance, doing Tai Chi or yoga out in the open, weather permitting. And walk, walk, walk. If you are unsure of your health or level of fitness, take shorter walks first. Go with a friend or in small groups. Take water and a snack. Stay safe.

The museums, galleries, cinemas, or shopping malls may be closed, but that doesn’t mean there’s nothing to see. Look around. Take a map and explore your local area. Learn the names of plants, birds, or local places. There is a multitude of lovely books on how to do this – from Solnit’s A Field Guide to Getting Lost to Odell’s How to Do Nothing, not to mention endless resources on- and offline on local history, wildlife, or geology.

  5. Do not give up politics.

In this sort of moment, politics can rightly feel like a luxury. When you are increasingly reliant on the Government for medical care or emergency rations, criticizing it may seem ill-advised. This is one of the reasons dictators love crises. Crises stifle dissent. Sometimes, this is aided by the designation of a powerful external (or internal) enemy; sometimes, the enemy is invisible – like a virus, or the economic crisis. Unlike wars, however, which tend to – at least in the long run – provoke resistance, invisible sources of the crisis, especially when connected to health, can make it much more difficult to sustain any sort of political challenge.

This is why it is incredibly important to keep connecting, discussing, and supporting each other in small and big ways. Make sure that you include those who are most vulnerable, who are most likely to be excluded from state care (that includes migrants, rough sleepers, and some people with long-term mental health problems or other illnesses). Remember, building solidarity and alternative networks is not only vital for the community to survive, but will also help you organize more efficiently in the future. Trust me, these skills will come in handy.

Stay safe.

 

*You are probably wondering what makes me qualified to write about these things. I have grown a bit tired of the fact that, as an Eastern European woman, I constantly have to justify my epistemic authority, but this time it does actually have to do with the famous Where (Do) I Come From. During the 1999 NATO bombing of Serbia, most public services were closed, and there were shortages and a curfew. I was part of the opposition to the Serbian regime, which put me (and many other people) in the slightly odd situation of being opposed to what the regime had been doing (that is, waging war for close to a decade at that point against different parts of former Yugoslavia, which was also the ostensible cause of the NATO intervention) but, obviously, also not being very happy about being bombed. Obviously, many things from that period are not scalable: I was 18. It was socialism. A lot of today’s technology wasn’t there (for instance, I remember listening to the long sound of the dial-up modem whenever the air-raid sirens went off – it was easier to connect, as most people went offline and into bomb shelters). But some are. So use as necessary.

Why you’re never working to contract

During the last #USSstrike, on non-picketing days, I practiced working to contract. Working to contract is part of the broader strategy known as ASOS – action short of a strike – and it means fulfilling your contractual obligations, but not more than that. Together with many other UCU members, I will be moving to ASOS from Thursday. But how does one actually practice ASOS in the neoliberal academia?

 

I am currently paid to work 2.5 days a week. Normally, I am in the office on Thursdays and Fridays, and sometimes half a Monday or Tuesday. The rest of the time, I write and plan my own research, supervise (that’s Cambridgish for ‘teaching’), or attend seminars and reading groups. Last year, I was mostly writing my dissertation; this year, I am mostly filling out research grant and job applications in a panic, for fear of being without a position when my contract ends in August.

Yet I am also, obviously, not ‘working’ only when I do these things. The books I read are, more often than not, related to what I am writing, teaching, or just thinking about. Often, I will read ‘theory’ books at all times of day (a former partner once raised the issue of the excess of Marx on the bedside table), but the same can apply to science fiction (or any fiction, for that matter). Films I watch will make it into courses. Even time spent on Twitter occasionally yields important insights, including links to articles, events, or just the generic mood of a certain category of people.

I am hardly exceptional in this sense. Most academics work much more than their contracted hours. Estimates vary from 45 to as much as 100 hours per week; regardless of what counts as a ‘realistic’ assessment, the majority of academics report not being able to finish their expected workload within a 37.5-40-hour working week. Working on weekends is ‘industry standard’; there is even a dangerous ethic of overwork. Yet academics have increasingly begun to unite around the unsustainability of a system in which we feel overwhelmed and underpaid, with mental and other health issues on the rise. This is why rising workloads are one of the key elements of the current wave of UCU strikes; it has also led to the coining of a parallel hashtag, #ExhaustionRebellion. It seems like the culture is slowly beginning to shift.

From Thursday onwards, I will be on ASOS. I look forward to it: being precarious sometimes makes not working almost as exhausting as working. Yet the problem with the ethic of overwork is not only that it is unsustainable, or that it is directly harmful to the health and well-being of individuals, institutions, and the environment. It is also that it is remarkably resilient: and it is resilient precisely because it relies on some of the things academics value the most.

Marx’s theory of value* tells us that the origins of exploitation in industrial capitalism lie in the fact that workers do not have ownership over the means of production; thus, they are forced to sell their labour. Those who own the means of production, on the other hand, are driven by the need to keep capital flowing, for which they need profit. Thus, they are naturally inclined to pay their workers as little as possible, as long as that is sufficient to actually keep them working. For most universities, a steady supply of newly minted graduate students, coupled with seemingly unpalatable working conditions in most other branches of employment, means they are well positioned to drive wages further down (in the UK, 17.5% in real terms since 2009).

This, however, is where the usefulness of classical Marxist theory stops. It is immediately obvious that many of the conditions of late 19th-century industrial capitalism no longer apply. To begin with, most academics own the most important means of production: their minds. Of course, many academics use and require relatively expensive equipment, or work in teams where skills are relatively distributed. Yet even in the most collective of research teams and the most collaborative of labs, the one ingredient that is absolutely necessary is precisely human thought. In the social sciences and humanities, this is even more the case: while a lot of the work we do happens in libraries, or in seminars, or through conversations, ultimately, what we know and do rests within us**.

Neither, for that matter, can academics simply be written off as unwitting victims of ‘false consciousness’. Even if the majority could conceivably have been unaware of the direction or speed of the transformation of the sector in the 1990s or early 2000s, after last year’s industrial action this is certainly no longer the case. Nor is this true only of those who are disproportionately affected by its dual face of exploitation and precarity: even academics on secure contracts and in senior positions increasingly view changes to the sector as harmful not only to their younger colleagues, but to themselves. If nothing else, what the USS strikes achieved was to help the critique of neoliberalism, marketization, and precarity migrate from the pages of left-leaning political periodicals and critical theory seminars into mainstream media discourse. Knowing that current conditions of knowledge production are exploitative, however, does not necessarily translate into knowing what to do about them.

This is why contemporary academic knowledge production is better characterized as extractive or rentier capitalism. Employers, in most cases, do not own – certainly not exclusively – the means of production of knowledge. What they do instead is provide the setting or platform through which knowledge can be valorized, certified, and exchanged; and charge a hefty rent in the process (this is one part of what tuition fees are about). This ‘platform’ can include anything from degrees to learning spaces; from labs and equipment to email servers and libraries. It can also be adjusted, improved, fitted to suit the interests of users (or consumers – in this case, students); this is what endless investment in buildings is about.

The cunning of extractive capitalism lies in the fact that it does not, in fact, require workers to do very much. You are a resource: in industrial capitalism, your body is a resource; in cognitive capitalism, your mind is a resource too. In extractive capitalism, it gets even better: there is almost nothing you do – not a single aspect of your thoughts, feelings, or actions – that the university cannot turn into profit. Reading Marxist theory on the side? It will make it into your courses. Interested in politics? Your awareness of social inequalities will be reflected in your teaching philosophy. Involved in community action? It will be listed in your online profile under ‘public engagement and impact’. It gets better still: even your critique of extractive, neoliberal conditions of knowledge production can be used to generate value for your employer – just make sure it is published in the appropriate journals, and before the REF deadline.

This is the secret to the remarkable resilience of extractive capitalism. It feeds on exactly what academics love most: on the desire to know more, to explore, to learn. This is, possibly, one of the most basic human needs past the point of food, shelter, and warmth. The fact that the system is designed to make access to all of the latter dependent on being exploited for the former speaks, I think, volumes (it also makes The Matrix look like less of a metaphor and more of an early blueprint, with technology just waiting to catch up). This makes ‘working to contract’ quite tricky: even if you pack up and leave your office at 16.38 on the dot, Monday to Friday, your employer will still be monetizing your labour. You are probably, even if unwittingly, helping them do so.

What, then, are we to do? It would obviously be easy to end with a vague call a las barricadas, conveniently positioned so as to boost one’s political cred. Not infrequently, my own work has been read in this way: as if it ‘reminds academics of the necessity of activism’ or (worse) ‘invites to concrete action’ (bleurgh). Nothing could be farther from the truth: I absolutely disagree with the idea that critical analysis somehow magically transmigrates into political action. (In fact, why we are prone to mistaking one for the other is one of the key topics of my work, but this is an ASOS post, so I will not be writing about it.) In other words, what you will do – tomorrow, on (or off?) the picket line, in a bit over a week, in the polling booth, in the next few months, when you are asked to join this or that committee or to review a junior colleague’s tenure/promotion folder – is your problem and yours alone. What this post is about, however, is what to do when you’re on ASOS.

Therefore, I want to propose a collective reclaiming of the life of the mind. Too much of our collective capacity – for thinking, for listening, for learning, for teaching – is currently absorbed by institutions that turn it, willy-nilly, into capital. We need to re-learn to draw boundaries. We need thinking, learning, and caring to become independent of the process that turns them into profit. There are many ways to do this – and many have been tried before: workers’ and cooperative universities; social science centres; summer schools; and, last but not least, our own teach-outs and picket-line pedagogy. But even when these are not happening, we need to seriously rethink how we use the one resource that universities cannot replace: our own thoughts.

So from Thursday next week, I am going to be reclaiming my own. I will do the things I usually do – read; research; write; teach and supervise students; plan and attend meetings; analyse data; attend seminars; and so on – until 4.40. After that, however, my mind is mine – and mine alone.

 

*Rest assured that the students I teach get treated to a much more sophisticated version of the labour theory of value (Soc1), together with variations and critiques of Marxism (Soc2), as well as ontological assumptions of heterodox vs. ‘neoclassical’ economics (Econ8). If you are an academic bro, please resist the urge to try to ‘explain’ any of these as you will both waste my time and not like the result. Meanwhile, I strongly encourage you to read the *academic* work I have published on these questions over the past decade, which you can find under Publications.

**This is one of the reasons why some of the most interesting debates about knowledge production today concern ownership, copyright, or legal access. I do not have time to enter into these debates in this post; for a relatively recent take, see here.

Knowing neoliberalism

(This is a companion/‘explainer’ piece to my article, ‘Knowing Neoliberalism‘, published in July 2019 in Social Epistemology. While it does include a few excerpts from the article, if using it, please cite and refer to the original publication. The very end of this post explains why.)

What does it mean to ‘know’ neoliberalism?

What does it mean to know something from within that something? This question formed the starting point of my (recently defended) PhD thesis; ‘Knowing neoliberalism’ summarizes some of its key points. The main argument of the article is epistemological — that is, it is concerned with the conditions (and possibilities, and limitations) of (human) knowledge — in particular when that knowledge is produced and mediated through (social) institutions and networks (which, as some of us would argue, it always is). More specifically, it is interested in a special case of such knowledge — that is, in what happens when we produce knowledge about the conditions of the production of our own knowledge (in this sense, it’s not ‘about universities’ any more than, say, Bourdieu’s work was ‘about universities’, and it’s not ‘on education’ any more than Latour’s was on geology or mining. Sorry to disappoint).

The question itself, of course, is not new – it appears, in various guises, throughout the history of Western philosophy, particularly in the second half of the 20th century with the rise (and institutionalisation) of different forms of theory that earned the epithet ‘critical’ (including the eponymous work of philosophers associated with the Frankfurt School, but also other branches of Marxism, feminism, postcolonial studies, and so on). My own theoretical ‘entry points’ came from a longer engagement with Bourdieu’s work on sociological reflexivity and Boltanski’s work on critique, mediated through Arendt’s analysis of the dichotomy between thinking and acting and De Beauvoir’s ethics of ambiguity; a bit more about that here. However, the critique of neoliberalism that originated in universities in the UK and the US in the last two decades – including intellectual interventions I analysed in the thesis – lends itself as a particularly interesting case to explore this question.

Why study the critique of neoliberalism?

  • Critique of neoliberalism in the academia is an enormously productive genre. The number of books, journal articles, special issues, not to mention ‘grey’ academic literature such as reviews or blogs (in the ‘Anglosphere’ alone), has grown exponentially since the mid-2000s. Originating in anthropological studies of ‘audit culture’, the genre now includes at least one dedicated book series (Palgrave’s ‘Critical University Studies’, which I’ve mentioned in this book review), as well as people dedicated to establishing ‘critical university studies‘ as a field of its own (for the avoidance of doubt, I do not associate my work with this strand, and while I find the delineation of academic ‘fields’ interesting as a sociological phenomenon, I have serious doubts about the value and validity of field proliferation — which I’ve shared in many amicable discussions with colleagues in the network). At the start of my research, I referred to this as the paradox of the proliferation of critique and the relative absence of resistance; the article, in part, tries to explain this paradox by examining what happens if and when we frame neoliberalism as an object of knowledge — or, in formal terms, an epistemic object.
  • This genre of critique is, and has been, highly influential: the tropes of the ‘death’ of the university or the ‘assault’ on the academia are regularly reproduced in and through intellectual interventions (both within and outside of the university ‘proper’), including far beyond academic neoliberalism’s ‘native’ contexts (Australia, UK, US, New Zealand). Authors who present this kind of critique, while most frequently coming from (or being employed at) Anglophone universities in the ‘Global North’, are often invited to speak to audiences in the ‘Global South’. Some of this, obviously, has to do with the lasting influence of colonial networks and hierarchies of ‘global’ knowledge production, and, in particular, with the durability of ‘White’ theory. But it illustrates the broader point that the production of critique needs to be studied from the same perspective as the production of any sort of knowledge – rather than as, somehow, exempt from it. My work takes Boltanski’s critique of ‘critical sociology’ as a starting point, but extends it towards a different epistemic position:

Boltanski primarily took issue with what he believed was the unjustified reduction of critical properties of ‘lay actors’ in Bourdieu’s critical sociology. However, I start from the assumption that professional producers of knowledge are not immune to the epistemic biases to which they suspect their research subjects to be susceptible…what happens when we take forms and techniques of sociological knowledge – including those we label ‘critical’ and ‘reflexive’ – to be part and parcel of, rather than opposed to or in any way separate from, the same social factors that we assume are shaping epistemic dispositions of our research subjects? In this sense, recognising that forms of knowledge produced in and through academic structures, even if and when they address issues of exploitation and social (in)justice, are not necessarily devoid of power relations and epistemic biases, seems a necessary step in situating epistemology in present-day debates about neoliberalism. (KN, p. 4)

  • This, at the same time, is what most of the sources I analysed in my thesis have in common: by and large, they locate sources of power – including neoliberal power – always outside of their own scope of influence. As I’ve pointed out in my earlier work, this means ‘universities’ – which, in practice, often means ‘us’, academics – are almost always portrayed as being on the receiving end of these changes. Not only is this profoundly unsociological (literally every single take on human agency in the past 50-odd years, from Foucault through to Latour and from Giddens through to Archer, recognizes that ‘we’ – including as epistemic agents – have some degree of influence over what happens); it is also profoundly unpolitical, as it outsources agency to variously conceived ‘others’ (as I’ve argued here) while avoiding the tricky elements of our own participation in the process. This is not to repeat the tired dichotomy of complicity vs. resistance, which is another not particularly innovative reading of the problem. What the article asks, instead, is: what kind of ‘purpose’ does the systematic avoidance of questions of ambiguity and ambivalence serve?

What does it aim to achieve?

The objective of the article is not, by the way, to say that existing forms of critique (including other contributions to the special issue) are ‘bad’, or that they can somehow be ‘improved’. Least of all is it to say that, if we just ‘corrected’ our theoretical (epistemological, conceptual) lens, we would finally be able to ‘defeat neoliberalism’. The article, in fact, argues the very opposite: that as long as we assume that ‘knowing’ neoliberalism will somehow translate into ‘doing away’ with neoliberalism, we remain committed to the (epistemologically and sociologically very limited) assumption that knowledge automatically translates into action.

(…) [the] politically soothing, yet epistemically limited assumption that knowledge automatically translates into action…not only omit(s) to engage with precisely the political, economic, and social elements of the production of knowledge elaborated above, [but] eschews questions of ambiguity and ambivalence generated by these contradictions…examples such as doctors who smoke, environmentalists who fly around the world, and critics of academic capitalism who nonetheless participate in the ‘academic rat race’ (Berliner 2016) remind us that knowledge of the negative effects of specific forms of behaviour is not sufficient to make them go away (KN, p. 10)

(If it did, there would be no critics of neoliberalism who exploit their junior colleagues, critics of sexism who nonetheless reproduce gendered stereotypes and dichotomies, or critics of academic hierarchy who evaluate other people on the basis of their future ‘networking’ potential. And yet, here we are).

What is it about?

The article approaches ‘neoliberalism’ from several angles:

Ontological: What is neoliberalism? It is quite common to see neoliberalism as an epistemic project. Yet does the fact that neoliberalism changes the nature of the production of knowledge, and even what counts as knowledge – and, eventually, becomes itself a subject of knowledge – give us grounds to infer that the way to ‘deal’ with neoliberalism is to frame it as an object (of knowledge)? Is the way to ‘destroy’ neoliberalism to ‘know’ it better? Does treating neoliberalism as an ideology – that is, as something that the masses can be ‘enlightened’ about – translate into the possibility of wielding political power against it?

(Plot spoiler: my answer to the above questions is no).

Epistemological: What does this mean for ways we can go about knowing neoliberalism (or, for that matter, any element of ‘the social’)? My work, which is predominantly in social theory and sociology of knowledge (no, I don’t work ‘on education’ and my research is not ‘about universities’), in many ways overlaps substantially with social epistemology – the study of the way social factors (regardless of how we conceive of them) shape the capacity to make knowledge claims. In this context, I am particularly interested in how they influence reflexivity, as the capacity to make knowledge claims about our own knowledge – including knowledge of ‘the social’. Enter neoliberalism.

What kind of epistemic position are we occupying when we produce an account of the neoliberal conditions of knowledge production in academia? Is one acting more like the ‘epistemic exemplar’ (Cruickshank 2010) of a ‘sociologist’, or a ‘lay subject’ engaged in practice? What does this tell us about the way in which we are able to conceive of the conditions of the production of our own knowledge about those conditions? (KN, p. 4)

(Yes, I know this is a bit ‘meta’, but that’s how I like it).

Sociological: How do specific conditions of our own production of knowledge about neoliberalism influence this? As a sociologist of knowledge, I am particularly interested in relations of power and privilege reproduced through institutions of knowledge production. As my work on the ‘moral economy’ of Open Access with Chris Muellerleile argued, the production of any type of knowledge cannot be analysed as external to its conditions, including when the knowledge aims to be about those conditions.

‘Knowing neoliberalism’ extends this line of argument by claiming that we need to engage seriously with the political economy of critique. It suggests some of the places where we could look for clues: for instance, the political economy of publishing. The same goes for networks of power and privilege: whose knowledge is seen as ‘translatable’ and ‘citeable’, and whose can be treated as an empirical illustration:

Neoliberalism offers an overarching diagnostic that can be applied to a variety of geographical and political contexts, on different scales. Whose knowledge is seen as central and ‘translatable’ in these networks is not independent from inequalities rooted in colonial exploitation, maintaining a ‘knowledge hierarchy’ between the Global North and the Global South…these forms of interaction reproduce what Connell (2007, 2014) has dubbed ‘metropolitan science’: sites and knowledge producers in the ‘periphery’ are framed as sources of ‘empirical’, ‘embodied’, and ‘lived’ resistance, while the production of theory, by and large, remains the work of intellectuals (still predominantly White and male) situated in prestigious universities in the UK and the US. (KN, p. 9)

This, incidentally, is the only part of the article that deals with ‘higher education’. It is very short.

Political: What does this mean for the different sorts of political agency (and actorhood) that can (and do) take place under neoliberalism? What happens when we assume that (more) knowledge leads to (more) action (apart from a slew of often well-intended but misconceived policies, some of which I’ve analysed in my book, ‘From Class to Identity’)? The article argues that effecting a cognitive slippage between the two parts of Marx’s Eleventh Thesis – that is, assuming that interpreting the world will itself lead to changing it – is what contributes to the ‘paradox’ of the overproduction of critique. In other words, we become more and more invested in ‘knowing’ neoliberalism – e.g. producing books and articles – and less invested in doing something about it. This, obviously, is neither a zero-sum game (nor should it be) nor an old-fashioned call on academics to drop their laptops and start mounting barricades; rather, it is a reminder that acting as if there were an automatic link between knowledge of neoliberalism and resistance to neoliberalism tends to leave the latter in its place.

(Actually, maybe it is a call to start mounting barricades, just in case).

Moral: Is there an ethically correct, or more just, way of ‘knowing’ neoliberalism? Does answering these questions enable us to generate better knowledge? My work – especially the part that engages with the pragmatic sociology of critique – is particularly interested in the moral framing and justification of specific types of knowledge claims. Rather than aiming to provide the ‘true’ way forward, the article asks what kinds of ideas of ‘good’ and ‘just’ are invoked or assumed through critique. What kind of moral stance does ‘gnossification’ entail? To steal the title of this conference: when does explaining become ‘explaining away’ – and, in particular, what is the relationship between ‘knowing’ something and framing our own moral responsibility in relation to it?

The full answer to the last question, unfortunately, will take more than one publication. The partial answer the article hints at is that, while having a ‘correct’ way of ‘knowing’ neoliberalism will not ‘do away’ with neoliberalism, we can and should invest in more just and ethical ways of ‘knowing’ altogether. It should hardly need repeating that the evidence of widespread sexual harassment in academia, not to mention deeply entrenched casual sexism, racism, ableism, ethnocentrism, and xenophobia, suggests ‘we’ (as academics) are not as morally impeccable as we like to think we are. Thing is, no-one is. The article hopes to have made a small contribution towards giving us the tools to understand why, and how, this is the case.

I hope you enjoy the article!

——————————————————-

P.S. One of the rather straightforward implications of the article is that we need to come to terms with the multiple reasons for why we do the work we do. Correspondingly, I thought I’d share a few of the reasons that inspired this ‘companion’ post. When I first started writing/blogging/Tweeting about the ‘paradox’ of neoliberalism and critique in 2015, this line of inquiry wasn’t very popular: most accounts smoothly reproduced the ‘evil neoliberalism vs. poor us little academics’ narrative. The same was true of most people I met in workshops, conferences, and other contexts (I went to quite a few as part of my fieldwork).

In the past few years, however, more analyses seem to converge with mine on quite a few analytical and theoretical points. My initial surprise at the fact that they seemed not to engage directly with any of these arguments – in fact, their authors were occasionally very happy to recite them back at me, without acknowledgement, attribution, or citation – was somewhat clarified through reading the work on gendered citation practices. At the same time, it provided a very handy illustration of exactly the type of paradox described here: namely, while most academics are quick to decry the precarity and ‘awful’ culture of exploitation in academia, almost as many are equally quick to ‘cite up’, or act strategically in ways that reproduce precisely these inequalities.

The other ‘handy’ way of appropriating other people’s work is to reduce the scope of their arguments, ideally representing them as an empirical illustration with limited purchase in a specific domain (‘higher education’, ‘gender’, ‘religion’), while hijacking the broader theoretical point for yourself (I have heard a number of other people – most often, obviously, women and people of colour – describe a very similar thing happening to them).

This post is thus a way of clarifying exactly what the argument of the article is, in, I hope, language that is simple enough even if you’re not keen on social ontology, social epistemology, social theory, or, actually, anything social (couldn’t blame you).

P.P.S. In the meantime, I’ve also started writing an article on how precisely these forms of ‘epistemic positioning’ are used to limit and constrain the knowledge claims of ‘others’ (women, minorities, etc.) in academia: if you have any examples you would like to share, I’m keen to hear them!

Existing while female

 

Space

 

The most threatening spectacle to the patriarchy is a woman staring into space.

I do not mean in the metaphorical sense, as in a woman doing astronomy or astrophysics (or maths or philosophy), though all of these help, too. Just plainly sitting, looking into some vague mid-point of the horizon, for stretches of time.

I perform this little ‘experiment’ at least once a week (more often, if possible; I like staring into space). I wholeheartedly recommend it. There are a few simple rules:

  • You can look at the passers-by (a.k.a. ‘people-watching’), but try to avoid eye contact longer than a few seconds: people should not feel that they are particular objects of attention.
  • If you are sitting in a café or a restaurant, you can have a drink, ideally a tea or coffee. That’s not to say you shouldn’t enjoy your Martini cocktails or glasses of Chardonnay, but images of women cradling tall glasses of the alcoholic drink of choice have been very successfully appropriated by both capitalism and patriarchy, for distinct though compatible purposes.
  • Don’t look at your phone. If you must check the time or messages, that’s fine, but don’t start staring at it, texting, or browsing.
  • Don’t read (a book, a magazine, a newspaper). If you have a particularly interesting or important thought, feel free to scribble it down, but don’t bury your gaze behind a notebook, book, or laptop.

 

Try doing this for an hour.

What this ‘experiment’ achieves is that it renders visible the simple fact of existing. As a woman. Even worse, it renders visible the process of thinking. Simultaneously inhabiting an inner space (thinking) and public space (sitting), while doing little else to justify your existence.

NOT thinking-while-minding-children, as in ‘oh isn’t it admirrrrable that she manages being both an academic and a mom’.

NOT any other form of ‘thinking on our feet’ that, as Isabelle Stengers and Vinciane Despret (and Virginia Woolf) noted, was the constitutive condition for most thinking done by women throughout history.

The important thing is to claim space to think, unapologetically and in public.

Depending on place and context, this usually produces at least one of the following reactions:

  • Waiting staff, especially if male, will become increasingly attentive, repeatedly inquiring whether (a) I am alright, (b) everything was alright, or (c) I would like anything else (yes, even if they are not trying to get you to leave; and yes, I have sat in the same place with friends, and this didn’t happen)
  • Men will try to catch my eye
  • Random strangers will start repeatedly glancing and sometimes staring in my direction.

 

I don’t think my experience in this regard is particularly exceptional. Yes, there are many places where women couldn’t even dream of sitting alone in public without risking things much worse than uncomfortable stares (I don’t advise attempting this experiment in such places). Yes, there are places where staring into a book/laptop/phone, ideally with headphones on, is the only way to avoid being approached, chatted up, or harassed by men. Yet, even in wealthy, white, urban, middle-class, ‘liberal’ contexts, women who display signs of being afflicted by ‘the life of the mind’ are still somehow suspect. For what this signals is that it is, actually, possible for women to have an inner life not defined by relation to men – if not to particular men, then at least to men in the abstract.

 

Relations

‘Is it possible to not be in relation to white men?’, asks Sara Ahmed, in a brilliant essay on intellectual genealogies and institutional racism. The short answer is yes, of course, but not as long as men are in charge of drawing the family tree. Philosophy is a clear example. Two of my favourite philosophers, De Beauvoir and Arendt, are routinely positioned in relation to, respectively, Sartre and Heidegger (and, in Arendt’s case, to a lesser degree, Jaspers). While, in the case of De Beauvoir, this could be, to a degree, justified – after all, they were intellectual and writing partners for most of Sartre’s life – the narrative is hardly balanced: it is always Simone who is seen in relation to Jean-Paul, not the other way round*.

In a bit of an ironic twist, De Beauvoir’s argument in The Second Sex that a woman exists only in relation to a man seems to have been adopted as a stylistic prescription for narrating intellectual history (I recently downloaded an episode of In Our Time on De Beauvoir only to discover, in frustration, that it repeats exactly this pattern). Another example is the philosopher G.E.M. Anscombe, whose work is almost exclusively described in terms of her interpretation of Wittgenstein (she was also married to the philosopher Peter Geach, which doesn’t help). A great deal of Anscombe’s writing does not deal with Wittgenstein, but that is, somehow, passed over, at least in non-specialist circles. What also gets passed over is that, in any intellectual partnership or friendship, ideas flow in both directions. In this case, the honesty and generosity of women’s acknowledgments (and occasional overstatements) of intellectual debt tend to be taken as evidence of the incompleteness of female thinking; as if there couldn’t, possibly, be a thought in their ‘pretty heads’ that had not been placed there by a man.

Anscombe, incidentally, had a predilection for staring at things in public. Here’s an excerpt from the Introduction to Vol. 2 of her collected philosophical papers, Metaphysics and the Philosophy of Mind:

“The other central philosophical topic which I got hooked on without realising it was philosophy, was perception (…) For years I would spend time in cafés, for instance, staring at objects saying to myself: ‘I see a packet. But what do I really see? How can I say that I see here anything more than a yellow expanse?’” (1981: viii).

 

But Wittgenstein, sure.

 

Nature

Nature abhors a vacuum, if by ‘nature’ we mean the rationalisation of patriarchy, and if by ‘vacuum’ we mean the horrifying prospect of women occupied by their own interiority, irrespective of how mundane or elevated its contents. In Jane Austen’s novels, young women are regularly reminded that they should seem usefully occupied – embroidering, reading (but not too much, and ideally out loud, for everyone’s enjoyment), playing an instrument, singing – whenever young gentlemen come for a visit. The underlying message is that young gentlemen are not going to want to marry ‘idle’ women. The only justification for women’s existence, of course, is their value as (future) wives, and thus their reproductive capital: everything else – including forms of internal life that do not serve this purpose – is worthless.

Clearly, one should expect things to improve once women are no longer reduced to men’s property, or the function of wives and mothers. Clearly, they haven’t. In Motherhood, Sheila Heti offers a brilliant diagnosis of how the very question of having children bears down differently on women:

It suddenly seemed like a huge conspiracy to keep women in their thirties—when you finally have some brains and some skills and experience—from doing anything useful with them at all. It is hard to when such a large portion of your mind, at any given time, is preoccupied with the possibility—a question that didn’t seem to preoccupy the drunken men at all (2018: 98).

Rebecca Solnit points out the same problem in The Mother of All Questions: no matter what a woman does, she is still evaluated in relation to her performance as a reproductive engine. One of the messages of the insidious ‘lean-in’ kind of feminism is that it’s OK to not be a wife and a mother, as long as you are remarkably successful as a businesswoman, a political leader, or an author. Obviously, ‘ideally’, both. This keeps women stressed, overworked, and so predictably willing to tolerate absolutely horrendous working conditions (hello, academia) and partnerships. Men can be mediocre and still successful (again, hello, academia); women, in order to succeed, have to be outstanding. Worse, they have to keep proving their outstandingness; ‘pure’ existence is never enough.

To refuse this – to refuse to justify one’s existence through a retrospective or prospective contribution to either particular men (wife of, mother of, daughter of), their institutions (corporation, family, country), or the vaguely defined ‘humankind’ (which, more often than not, is an extrapolation of these categories) – is thus to challenge the washed-out but seemingly undying assumption that a woman is a somehow less worthy version of a man. It is to subvert the myth that shaped and constrained so many, from Austen’s characters to Woolf’s Shakespeare’s sister: that to exist a woman has to be useful; that inhabiting an interiority is to be performed in secret (that is, away from the eyes of the patriarchy); that, ultimately, women’s existence needs to be justified. If not by providing sex, childbearing, and domestic labour, then at least indirectly, by consuming stuff and services that rely on the underpaid (including domestic) labour of other women, from fashion to iPhones and from babysitting to nail salons. Sometimes, if necessary, also by writing Big Books: but only so they can be used by men who see in them the reflection of their own (imagined) glory.

 

Death

Heti recounts another story, about her maternal grandmother, Magda, imprisoned in a concentration camp during WWII. One day, Nazi soldiers came to the women’s barracks and asked for volunteers to help with cooking, cleaning and scrubbing in the officers’ kitchen. Magda stepped forward; as Heti writes, ‘they all did’. Magda was not selected; she was lucky, as it soon transpired that those women were not taken to the kitchen, but rather raped by the officers and then killed.

 

I lingered over the sentence ‘they all did’ for a long time. What would it mean for more women to not volunteer? To not accept endlessly proving one’s own usefulness, in cover letters, job interviews, student feedback forms? To simply exist, in space?

 

I think I’ll just sit and think about it for a while.


(The photo is by the British photographer Hannah Starkey, who has a particular penchant for capturing women inhabiting their own interiority. Thank you to my partner, who first introduced me to her work – the slight irony being that he interrupted me in precisely one such moment of contemplation to tell me this).


*I used to make a point of asking the students taking Social Theory to change ‘Sartre’s partner Simone de Beauvoir’ in their essays to ‘de Beauvoir’s partner Jean-Paul Sartre’ and see if it begins to read differently.

Area Y: The Necropolitics of Post-Socialism

This summer, I spent almost a month in Serbia and Montenegro (yes, these are two different countries, despite the New York Times still refusing to acknowledge this). This is about seven times as long as I would normally stay. The two principal reasons are that my mother, who lives in Belgrade, is ill, and that I was planning to get a bit of time to quietly sit and write my thesis on the Adriatic coast of Montenegro. How the latter turned out in light of the knowledge of the former I leave to the imagination (tl;dr: not well). It did, however, give me ample time to reflect on the post-socialist condition, which I haven’t done in a while, and to get outside Belgrade, to which I normally confine my brief visits.

The way in which the perverse necro/bio-politics of post-socialism obtains in my mother’s illness, in the landscape, and in the socio-material fits almost too perfectly into what has for years been the dominant style of writing about places that used to be behind the Iron Curtain (or, in the case of Yugoslavia, on its borders). Social theory’s favourite ruins – the ruins of socialism – are repeatedly re-valorised through being dusted off and resurrected as yet another alter-world to provide the mirror image to the here and now (the here and now, obviously, being capitalism). During the Cold War, the Left had its alter-image in the Soviet Union; now, the antidote to neoliberalism is provided not through the actual ruins of real socialism – that would be a tad too much to handle – but through the re-invention of the potential of socialism to provide, in the tellingly polysemic title of MoMA’s recently opened exhibition on architecture in Yugoslavia, concrete utopias.

Don’t get me wrong: I would love to see the exhibition, and I am sure that it offers much to learn, especially for those who did not have the dubious privilege of having grown up on both sides of socialism. It’s not the absence of nuance that makes me nauseous in encounters with socialist nostalgia: a lot of it, as a form of cultural production, is made by well-meaning people and, in some cases, incredibly well researched. It’s that resurrecting the hipsterified golems of post-socialism serves little purpose other than to underline their ontological status as a source of comparison for the West – cannon fodder for the imaginaries of a world so bereft of hope that it would rather replay its past dreams than face the potential waking nightmare of its future.

It’s precisely this process that leaves them unable to die, much like the ghosts/apparitions/copies in Lem’s (and Tarkovsky’s) Solaris, and in VanderMeer’s Southern Reach trilogy. In VanderMeer’s books, members of the eleventh expedition (or, rather, their copies) who return to the ‘real world’ after exposure to Area X develop cancer and die pretty quickly. Life in post-socialism is very much this: shadows or copies of former people confusedly going about their daily business, or revisiting the places that once made sense to them – which, sometimes, they have to purchase as repackaged ‘post-socialism’. In this sense, the parable of Roadside Picnic/Stalker as the perennial museum of post-communism is truly prophetic.

The necropolitical profile of these parts of former Yugoslavia, in fact, is pretty unexceptional. For years, research has shown that rapid privatisation increases mortality, even when controlling for other factors. Obviously, the state still feigns perfunctory care for the elderly, but healthcare is cumbersome, inefficient and, in most cases, barely palliative. Smoking and heavy drinking are de rigueur: in winter, Belgrade cafés and pubs turn into proper smokehouses. Speaking of which, vegetarianism is still often, if benevolently, ridiculed. Fossil fuel extraction is ubiquitous. According to this report from 2014, Serbia had the second-highest rate of premature deaths due to air pollution in Europe. And that is before we even get to the Thing That Can’t Be Talked About – the environmental effects of the NATO intervention in 1999.

An apt illustration comes as I travel to Western Serbia to give a talk at the anthropology seminar at Petnica Science Centre, where I used to work between 2000 and 2008. Petnica is a unique institution that developed in the 1980s and 1990s as part science camp, part extracurricular interdisciplinary research institute, where electronics researchers would share tables in the canteen with geologists, and physicists would talk (arguably, not always agreeing) to anthropologists. Founded in part by the Young Researchers of Serbia (then Yugoslavia), a forward-looking environmental exploration and protection group, the place used to flaunt its green credentials. Today, it is funded by the state – and fully branded by the Oil Industry of Serbia. The latter is Serbian only in name, having become a subsidiary of the Russian fossil fuel giant Gazpromneft. What could arguably be dubbed Serbia’s future research elite is thus raised in full acceptance of the ubiquity of fossil fuels – not only as a source of energy but, literally, as what runs the facilities they need in order to work.

These researchers can still consider themselves lucky. The other part of the Serbian economy that is actually working is the factories – or, rather, production facilities – of multinational companies. In these companies, workers are given 12-hour shifts, banned from unionising, and, as a series of relatively recent reports revealed, issued with adult diapers so as to render toilet breaks unnecessary.

As Elizabeth Povinelli argued, following Achille Mbembe, geontopower – the production of life and nonlife, and the creation of the distinction between them, including what is allowed to live and what is allowed to die – is the primary mode of exercise of power in late liberalism. A less frequently examined way of sustaining the late liberal order is the production of semi-dependent semi-peripheries. Precisely because they are not the world’s slums, and because they are not former colonies, they receive comparatively little attention. Instead, they are mined for resources (human and inhuman). That the interaction between the two regularly produces outcomes guaranteed to deplete the first is of little relevance. The reserves, unlike those of fossil fuels, are almost endless.

The Serbian government does its share in ensuring that the supply of cheap labour never runs out, by launching endless campaigns to stimulate reproduction. It seems to be working: babies are increasingly the ‘it’ accessory in cafés and bars. Officially, stimulating the birth rate is meant to offset the ‘cost’ of pensions, which the IMF insists should not increase. Unofficially, of course, the easiest way to adjust for this is to make sure pensioners are left behind. Much like the current hype about its legacy, the necropolitics of post-socialism operates primarily by foregrounding its Instagrammable elements, and hiding the ugly, non-productive ones.

Much like in VanderMeer’s Area X, knowledge that the border is advancing can be a mixed blessing: as Danowski and Viveiros de Castro argued in a different context, the end of the world comes more easily to those for whom the world has already ended, more than once. Not unlike what Scranton argued in Learning to Die in the Anthropocene, this – rather than sanitised dreams of a utopian future – is perhaps the one thing worth resurrecting from post-socialism.

Writing our way out of neoliberalism? For an ecology of publishing

[This blog post is written in preparation for the panel Thinking knowledge production without the university that I am organising at the Sociological Review’s conference Undisciplining: conversations from the edges, Newcastle, Gateshead, 18-21 June 2018. Reflections from other participants are here. I am planning to expand on this part during and after the conference, so questions and comments welcome!]

What kind of writing and publishing practices might support knowledge that is not embedded in the neoliberal university? I’ve been interested in this question for a long while, in part because it is a really tough one. For academics – certainly for academics in the social sciences and humanities – writing and publishing is, ultimately, what we do. Of course, our work frequently also involves teaching – or, as those with a love for neat terminologies like to call it, ‘knowledge transmission’ – as well as different forms of its communication or presentation, which we (sometimes performatively) refer to as ‘public engagement’. Even those, however, often rely on, or at least lead to, the production of written text of some sort: textbooks, academic blogs. This is no surprise: the modern Western academic tradition is highly reliant on the written word. Obviously, in this sense, the questions and problems of writing/publishing and its relationship with knowledge practices are both older and much broader than the contemporary economy of knowledge production, which we tend to refer to as neoliberal. They may also outlast it, if, indeed, we can imagine the end of neoliberalism. Precisely for this reason, however, it makes sense to think about how we might reconstruct writing and publishing practices in ways that weaken, rather than contribute to, the reproduction of neoliberal modes of knowledge production.

The difficulty of thinking outside the current framework becomes apparent when we try to imagine the form these practices could take. While there are many publications that do not directly contribute to the publishing industry – blogs, zines, and open-access, collaborative, non-paywalled articles all come to mind – they all too easily become embedded in the same dynamic. As a result, they are either eschewed because ‘they do not count’, or else they are made to count (become countable) by being reinserted into processes of valorisation via the proxy of ‘impact’. As I’ve argued in this article (written with my former colleague from the UNIKE (Universities in the knowledge economy) project, economic geographer Chris Muellerleile), even forms of knowledge production that explicitly seek to ‘disrupt’ such modes, such as Open Access or publish-first/review-later platforms, often rest on assumptions – even if implicit – that can feed into the logic of evaluation and competition. This is not to say that restricting access to scientific publications is in any way desirable. However, we need to accept that opening access (under certain circumstances, for certain parts of the population) does not in and of itself do much to ‘disrupt’ the broader political and economic system in which knowledge is embedded.

Publish or…publish 

Unsurprisingly, the hypocrisy of the current system disproportionately affects early career and precarious scholars. ‘Succeeding’ in academia – i.e. escaping precarity – hinges on publishing in recognised formats and outlets: this means, almost exclusively, peer-reviewed journals in one’s discipline, and books. The process is itself costly and risky. Turnover times can be ridiculously long: a chapter for an edited volume that I wrote in July 2015 was finally published last month, presumably because other – more senior, obviously – contributors took much longer. The chapter deals with a case from 2014, which makes the three-year lag between its accepted version and publication problematic for all sorts of reasons. On the other hand, even when good and relatively timely, the process of peer review can be soul-crushing for junior scholars (see: Reviewer No. 2). Obviously, if this always resulted in a better final version of the article, we could argue it was worthwhile. However, while some peer reviewers offer constructive feedback that really improves the publication, this is not always the case. Increasingly, because peer review takes time and effort, it is kicked down the academic ladder, so it becomes a question of who can afford to review – or, equally (if not more) often, who cannot afford to say no to a review.

In other words, just like other aspects of academic knowledge production, the reviewing and publishing process is plagued by stark inequalities. ‘Big names’ or star professors can get away with only perfunctory – if any – peer review; a series of clear cases of plagiarism or self-plagiarism, not to mention a string of recent books with bombastic titles that read like barely edited transcripts of undergraduate seminars (there are plenty around), is a testament to this. Just in case, many of these ‘Trump academics’ keep their own journals or book series as a side hustle, where familiarity with the editorial board is often the easiest path to publication.

What does this all lead to? The net result is the proliferation of academic publications of all sorts – what some scholars have dubbed the shift from an economy of scarcity to one of abundance. However, more is not necessarily better: while it’s difficult (if not entirely useless) to speak of scholarly publications in universal terms, as a frequently (mis-)cited piece of research argued, most academic articles are read and cited by very few people. It’s quite common for academics to complain that they can’t keep up with the scholarly production in their field, even when narrowed down to a very tight disciplinary specialism. Some of this, obviously, has to do with the changing structure of academic labour, in particular the increasing load of administration and the endless rounds of research evaluation and grant application writing, which siphon away time for reading. But some of it has to do with the simple fact that there is so much more published stuff around: scholars compete over who is going to get more ‘out there’, and sooner. As a result, people rarely take the time to read others’ work carefully, especially if it is outside their narrow specialism or discipline. Careful reading is substituted with sycophantic shout-outs via Twitter, or with book reviews that are often thinly veiled self-serving praise, revealing more about the reviewer’s career plans than about the actual publication being reviewed.

For an ecology of knowledge production 

So, how is it possible to work against all this? Given that the purpose of this panel is to start thinking about actual solutions, rather than repeat tired complaints about the nature of knowledge production in the neoliberal academia, I am going to put forward two concrete proposals: one on the level of conceptual – not to say ‘behavioural’ – change; the other on the level of institutions, or organisations. The first is a commitment to, simply, publish less. Much as in environmental pollution, where solutions such as recycling, ‘natural’ materials, and free and ethical trading are far less effective at minimising CO2 emissions than simply reducing consumption (and production), in writing and publishing we could progressively divest from the idea that the goal is to produce as much as possible, and to put it ‘out there’ as quickly as possible. To be clear, this isn’t a thinly veiled plea for ‘slow’ scholarship. Some disciplines or topics clearly call for a quicker turnover – one can think of analyses of current affairs, or of environmental or political science. Others require time, especially where there is value in observing how trends develop over a longer period. Recognising these divergent temporal cycles of knowledge production not only supports the dignity of the academic profession, but also acknowledges that knowledge production happens outside academia, and should not – need not – depend on being recognised or rewarded within it. As long as the system rewards output, the rate of output will tend to increase: in this sense, competition can be seen not so much as an outcome as a byproduct of our desire to ‘populate’ the world with the fruits of our labour. Publishing less, in this sense, is not so much a performative act as the first step in divesting from the incessant competitive logic that permeates both the academia and the world ‘outside’ of it.


Publishers play a very important role in this ecology of knowledge production. Much has been made of so-called ‘predatory’ journals and publishers, clearly seeking even a marginal profit; the less often mentioned flipside is that almost all publishing is to some degree ‘predatory’, in the sense that editors seek out authors whose work they believe can sell – that is, sell for a profit that goes to the publisher, and sometimes the editors, while authors can, at best, hope for an occasional drip of royalties (unless, again, they are star/Trump academics, in which case they can aspire to hefty book advances). Given the way in which the imperative to publish is ingrained in the dynamics of academic career progression – and, one might argue, in the academic psyche – it is no surprise that multiple publishing platforms, often of dubious quality, thrive in this landscape.

Instead of this, we could aim for a combination of publishing cooperatives – perhaps embedded in professional societies – and a small number of established journals, which could serve as platforms or hubs for a variety of formats, from blogs to full-blown monographs. These journals would have an established, publicly known, and well-funded board of reviewers and editors. Combined, these principles could enable publishing to serve multiple purposes, communities, and formats, without reproducing the harmful hierarchies embedded in competitive, market-oriented models. It seems to me that the Sociological Review, which is organising this conference, could be moving towards this model. Another journal with multiple formats and an online forum is the Social Epistemology Review and Reply Collective. I am sure there are others that could serve as blueprints for this new ecology of knowledge production; perhaps, together, we can start thinking about how to build it.

Life or business as usual? Lessons of the USS strike

[A shortened version of this blog post was published on the Times Higher Education blog on 14 March under the title ‘USS strike: picket line debates will reenergise scholarship’.]

 

Until recently, Professor Marenbon writes, university strikes in Cambridge were a hardly noticeable affair. Life, he says, went on as usual. The ongoing industrial action that UCU members are engaged in at UK universities has changed all that. Dons, rarely concerned with the affairs of lesser mortals, seem to be up in arms. They are picketing, almost every day, in the wind and the snow; marching; shouting slogans. For Heaven’s sake, some are even dancing. Cambridge, as someone pointed out on Twitter, has not seen such upheaval since we considered awarding Derrida an honorary degree.

This is possibly the best thing that has happened to UK higher education, at least since the end of the 1990s. Not that there’s much competition: this period, after all, brought us the introduction, then removal, of tuition fee caps; the abolition of maintenance grants; the REF and the TEF; and, as its crowning (though short-lived) glory, the appointment of Toby Young to the Office for Students. Yet, for most of this period, academics’ opposition to these reforms conformed to ‘civilised’ ways of protest: writing a book, giving a lecture, publishing a blog post or an article in Times Higher Education, or, at best, complaining on Twitter. While most would agree that British universities have been under threat for decades, concerted effort to counter these reforms – with a few notable exceptions – remained the province of the people Professor Marenbon calls ‘amiable but over-ideological eccentrics’.

This is how we have truly let down our students. Resistance was left to student protests and occupations. Longer-lasting, transgenerational solidarity was all but absent: at the end of the day, professors retreated to their ivory towers, while precarious academics engaged in activism on the side, amid ever-increasing competition and pressure to land a permanent job. Students picked up the tab: not only when it came to tuition fees, used to finance expensive accommodation blocks designed to attract more (tuition-paying) students, but also when it came to the quality of teaching and learning, increasingly delivered by an underpaid, overworked, and precarious labour force.

This is why the charge that teach-outs of dubious quality are replacing lectures comes across as particularly disingenuous. We are told that ‘although students are denied lectures on philosophy, history or mathematics, the union wants them to show up to “teach-outs” on vital topics such as “How UK policy fuels war and repression in the Middle East” and “Neoliberal Capitalism versus Collective Imaginaries”’. Although this is but one snippet of Cambridge UCU’s programme of teach-outs, the choice is illustrative.

The link between history and the UK’s foreign policy in the Middle East strikes me as obvious. Students in philosophy, politics or economics could do worse than a seminar on the development of neoliberal ideology (the event was initially scheduled as part of the Cambridge seminar in political thought). As for mathematics – anybody who, over the past weeks, has had to engage with the details of the actuarial calculations and projections tied to the USS pension scheme has had more than a crash refresher course: I dare say they learned more than they ever hoped they would.

Teach-outs, in this sense, are not a replacement for education “as usual”. They are a way to begin bridging the infamous divide between “town and gown”, both by being held in more open spaces, and by, for instance, discussing how the university’s lucrative development projects are impacting on the regional economy. They are not meant to make up for the shortcomings of higher education: if anything, they render them more visible.

What the strikes have made clear is that academics’ ‘life as usual’ is vice-chancellors’ business as usual. In other words, it is precisely the attitude of studied depoliticisation that has allowed the marketisation of higher education to continue. Markets, after all, are presumably ‘apolitical’. Other scholars have expended considerable effort in showing how this assumption has been used to further the policies whose results we are now seeing, among other places, in the reform of the pension system. Rather than repeat their arguments, I would like to end with the words of another philosopher, Hannah Arendt, who understood well the ambiguous relationship between academia and politics:

 

‘Very unwelcome truths have emerged from the universities, and very unwelcome judgments have been handed down from the bench time and again; and these institutions, like other refuges of truth, have remained exposed to all the dangers arising from social and political power. Yet the chances for truth to prevail in public are, of course, greatly improved by the mere existence of such places and by the organization of independent, supposedly disinterested scholars associated with them.

This authentically political significance of the Academe is today easily overlooked because of the prominence of its professional schools and the evolution of its natural science divisions, where, unexpectedly, pure research has yielded so many decisive results that have proved vital to the country at large. No one can possibly gainsay the social and technical usefulness of the universities, but this importance is not political. The historical sciences and the humanities, which are supposed to find out, stand guard over, and interpret factual truth and human documents, are politically of greater relevance.’

In this sense, teach-outs, and industrial action in general, are a way for us to recognise our responsibility to protect the university from the undue incursion of political power, while acknowledging that such responsibility is in itself political. At this moment in history, I can think of no greater service to scholarship.

I am a precarious, foreign, early career researcher. Why should I be striking?

 

OK, I’ll admit the title is a bit of clickbait. I’ve never had a moment of doubt about strikes. However, in the past few weeks, as the UCU strike over pensions has drawn nearer, I’ve had a series of conversations in which colleagues, friends, or just acquaintances have raised some of the concerns reflected in, though not exhausted by, this title. So I’ve decided to write up a short post answering some of these questions, mostly so I could get out of people’s Facebook and Twitter timelines. This isn’t meant to convince you, and even less is it any form of official or legal advice: at the end of the day, exercising your rights is your choice. Here are some of mine.

I am precariously employed: I can’t really afford to lose the pay.

This is a very serious concern, especially for those who have no other source of income or savings (and that’s quite a few of us). The UCU has set up a solidarity fund to help in such cases; quite a few local organisations have as well, and, from what I understand, early career/precarious researchers should have priority in applying to these. Even taking this into account, this is by no means a small sacrifice to make, but the current pension reform means that, in the long run, you would be losing much more than the pay that could be docked.

But I am not even a member of the Union!

Your right to strike is not dependent on your membership in a(ny) union. That being said, if you would like the Union to represent/help you, it makes sense to join the Union. Actually, it makes sense to join the Union anyway. Why are you not a member of the Union? Join the Union. Here, have a uni(c)o(r)n.

(Yes, I know it’s the worst pun ever.)


I am afraid of pissing off my supervisor/boss, and I rely on their good will/recommendation letters/support for future jobs.

There’s a high chance your supervisor is striking – after all, their pensions are on the line as well. Even if they are not, it is possible that if you calmly explain why you feel this is important, and why you think you should show solidarity with your colleagues, they will see your point (and maybe even join you). Should this not be the case, they have no legal way of preventing you from exercising your basic employment right, one that is part of your contract (which, presumably, they will have read!).

In terms of future recommendations: if you really think your supervisor is evaluating your research on the basis of whether you show up at the office, and not on the basis of your commitment, results, or potential, perhaps it’s time to have a chat with them. Remember, exercising the right to strike is not meant to harm your project, your colleagues, or your supervisor: it is meant to show disagreement with a decision that affects you, was taken in your name, but over which you most likely had little or no say. Few supervisors would dispute your right to do that.

I’ll be able to strike when I’m more senior/securely employed.

The UK abolished ‘tenure’ about thirty years ago, so no one’s job is completely safe. Of course, this doesn’t mean there are no differences in status, but, unfortunately, experience suggests that job security does not directly correlate with willingness to be critical of the institution you work in. Anyway, look at the senior academics around you. Either they are striking – in which case they will certainly support your right to do the same – or they are not, which suggests there is no reason to believe you will either if, and when, you get to their career stage.

Remember, this is why precarity exists: employers benefit from insecure/casual contracts exactly because they provide a reserve army of (cheap) labour in case the permanently employed decide to strike. Which is exactly what is happening now. Don’t let them get away with it.

I don’t want to let my students down.

This obviously applies primarily to those of us who are teaching and/or supervising, but I think there is a broader point to be made: students are not children. Universities dispensed with in loco parentis in the 1970s. It’s fine to feel a duty of care for your students, but it also makes sense to recognize that they are capable of making decisions for themselves – for instance, whom they will invite to give a public lecture, how they will vote, or how they will interpret the fact that their lecturers are on strike (here’s a good example from Goldsmiths). Which is not to say you shouldn’t explain to them exactly why you are striking. Even better, invite them to help you organize, or come to, one of the teach-outs.

Think about it this way: next week, you can teach them one of the following: (a) how to stand up for their rights and show solidarity, or (b) how to read Shakespeare (sorry, English lit scholars, this came to mind first). You’ve got (according to employers’ calculations) 351 days in a year to do the latter. Will you use your chance to do the former?

I won’t even get to a pension; why should I fight for the benefits of entitled, securely employed academics?

If you are an employee of a pre-1992 university in the UK, chances are you are enrolled in the USS. This means you are accruing some pension through the system, and thus the proposed changes affect you. The less time you’ve been in the system – that is, the shorter the period you’ve been employed – the more of a difference they make. Remember, the entitled academics you are talking about have accrued most of their pension under the old system; paradoxically, you are set to lose much more than they are.

I feel this struggle is really about the privilege of white male dons, and does not address the deeper structural inequalities I experience.

It’s true that the struggle is primarily about pensions, and it’s true that the majority of people who have benefited from the system so far are traditionally privileged. This reflects the deeper inequalities of UK higher education and, in particular, its employment structure. My experience is a bit of a mixed bag: I am a woman and an ethnic minority, but I am also white and middle-class, so I clearly can’t speak for everyone – but I think this is precisely why it’s important to be present at the strike. We need to make sure it doesn’t remain about white men only, and that it becomes obvious that higher education in England rests not on the traditional idea of a ‘professor’, but on the work of many, often precariously employed, early career researchers, women, minorities, non-binary people, and, yes, foreigners.

Speaking of that – I’m a foreigner, why should I care?

This one is the most difficult for me to relate to, not only because my work has been in and on the UK for quite a while, but because, frankly, I’ve never felt like not-a-foreigner, no matter where I lived, and I have always thought solidarity is international or it is nothing. But here’s my attempt at a more pragmatic argument: this is where you work, so this is where you exercise your rights as a worker. You may obviously have a lot of other, non-local concerns – family and friends in different countries, causes (or fieldwork sites) on other continents, and so on – but none of that should preclude you from being actively involved in something that concerns your rights, here and now. After all, if you can show solidarity with Palestinian children or Yemeni refugees, you can show solidarity with people working in the same industry, who share many of your concerns.

There is a related serious issue concerning those on Tier 2 visas – UCU offers some guidance here; in a nutshell, you are most likely safe as long as you don’t intend to be absent without leave (i.e. consent from your employer) for many more consecutive days during the rest of the year.

There are so many problems with higher education, this seems like a very minor fight!

True. Fighting for pensions is not going to stop the neoliberalisation of HE or the precarisation of the academic workforce per se.

Yet, imagine the longer-term potential of an action like this. You will have met other (precarious) colleagues – especially outside your discipline/field – on picket lines and at teach-outs; you will have learnt how to effectively organize actions that bring together different groups and different concerns; and, not least importantly, you will have shown your employer how crucial to teaching and research people like you really are. Now, that’s something that could come in handy in future struggles, don’t you think?

The paradox of resistance: critique, neoliberalism, and the limits of performativity

The critique of neoliberalism in academia is almost as old as its object. Paradoxically, it is the only element of the ‘old’ academia that seems to be thriving amid steadily worsening conditions: as I’ve argued in this book review, hardly a week goes by without a new book, volume, or collection of articles denouncing the neoliberal onslaught or ‘war’ on universities and, no less frequently, announcing their (untimely) death.

What makes the proliferation of critique of the transformation of universities particularly striking is the relative absence – at least until recently – of sustained modes of resistance to the changes it describes. While the UCU strike in reaction to the changes to the universities’ pension scheme offers some hope, resistance has, by and large, much more often taken the form of a book or blog post than of a strike, demo, or occupation. Relatedly, given the level of agreement among academics about the general direction of these changes, engagement with developing long-term, sustainable alternatives to exploitative modes of knowledge production has been surprisingly scattered.

It was this relationship between the abundance of critique and the paucity of political action that initially got me interested in arguments and forms of intellectual positioning in what is increasingly referred to as the ‘[culture] war on universities’. Of course, the question of the relationship between critique and resistance – or knowledge and political action – concerns much more than the future of English higher education, and reaches into the constitutive categories of Western political and social thought (I’ve addressed some of this in this talk). In this post, however, my intention is to focus on its implications for how we can conceive of critique in and of the neoliberal academia.

Varieties of neoliberalism, varieties of critique?

While critique of neoliberalism in academia tends to converge around both the causes and the consequences of this transformation, this doesn’t mean there is no theoretical variation. Marxist critique, for instance, tends to emphasise the changes in the working conditions of academic staff, increased exploitation, and the growing commodification of knowledge. It usually identifies precarity as the problem that prevents academics from exercising the form of political agency – labour organizing – that is seen as the primary source of potential resistance to these changes.

Poststructuralist critique, most of it drawing on Foucault, tends to focus on the changing status of knowledge, which is increasingly portrayed as a private rather than a public good. The reframing of knowledge in terms of economic growth is further tied to measurement – reduction to a single, unitary, comparable standard – and competition, which is meant to ensure maximum productivity. This also gives rise to mechanisms of constant assessment, such as the TEF and the REF, captured in the phrase ‘audit culture’. Academics, in this view, become undifferentiated objects of assessment, which is used not only to instil fear but also to keep them in constant competition against each other, in the hope of the eventual conferral of ‘tenure’ or permanent employment, through which they can be constituted as full subjects with political agency.

Last, but not least, the type of critique that can broadly be referred to as ‘new materialist’ shifts the source of political power directly to instruments of measurement and sorting, such as algorithms, metrics, and Big Data. In the neoliberal university, the argument goes, there is no need for anyone to even ‘push the button’: metrics run on their own, with the social world already so imbricated with them that it becomes difficult, if not entirely impossible, to resist. The source of political agency, in this sense, becomes the ‘humanity’ of academics – what Arendt called ‘mere’ and Agamben ‘bare’ life. A significant portion of new materialist critique, in this vein, focuses on emotions and affect in the neoliberal university, as if to underscore the contrast between the lived and felt experiences of academics, on the one hand, and the inhumanity of algorithms or their ‘human executioners’, on the other.

Despite their possibly divergent theoretical genealogies, these forms of critique seem to move in the same direction. Namely, the object or target of critique becomes increasingly elusive, murky, and de-differentiated; but, strangely enough, so does the subject. As power grows opaque (or, in Foucault’s terms, ‘capillary’), the source of resistance shifts from a relatively defined position or identity (workers, or members of the academic profession) to a relatively amorphous concept of humanity, or precarious humanity, as a whole.

Of course, there is nothing particularly original in the observation that neoliberalism has eroded traditional grounds for solidarity, such as union membership. Wendy Brown’s Undoing the Demos and Judith Butler’s Notes towards a performative theory of assembly, for instance, address the possibilities for political agency – including cross-sectional approaches such as that of the Occupy movement – in view of this broader transformation of the ‘public’. Here, however, I would like to engage with the implications of this shift in the specific context of academic resistance.

Nerdish subject? The absent centre of [academic] political ontology

The academic political subject – hence the pun on Žižek – is profoundly haunted by its Cartesian legacy: the distinction between thinking and being and, by extension, between subject and object. This is hardly surprising: critique is predicated on thinking about the world, which proceeds through ‘apprehending’ the world as distinct from the self; but the self is also predicated on thinking about that world. Though they may have disagreed on many other things, Boltanski and Bourdieu – both of whom feature prominently in my work – converge on the importance of this element for understanding the academic predicament: Bourdieu calls it the scholastic fallacy, Boltanski complex exteriority.

Nowhere is the Cartesian legacy of critique more evident than in its approach to neoliberalism. From Foucault onwards, academic critique has approached neoliberalism as an intellectual project: the product of a ‘thought collective’, or a small group of intellectuals, initially concentrated in the Mont Pelerin Society, from which they went on to ‘conquer’ not only economics departments but also, more importantly, centres of political power. Critique, in other words, projects back onto neoliberalism its own way of coming to terms with the world: knowledge. From here, the Weberian assumption that ideas precede political action is transposed onto forms of resistance: the more we know about how neoliberalism operates, the better we will be able to resist it. This is why, as neoliberalism proliferates, the books, journal articles, etc. that seek to ‘denounce’ it multiply as well.

Speech acts: the lost hyphen

The fundamental notion of critique, in this sense, is (J.L. Austin’s and Searle’s) notion of speech acts: the assumption that words can have effects. What gets lost in dropping the hyphen in speech(-)acts is a very important bit of the theory of performativity: namely, the conditions under which speech does constitute effective action. This is why Butler, in Performative Agency, draws attention to Austin’s emphasis on perlocution: speech-acts that are effective only under certain circumstances. In other words, it’s not enough to exclaim “Universities are not for sale! Education is not a commodity! Students are not consumers!” for this to become the case. For this raises the question: “Who is going to bring this about? What are the conditions under which this can be realized?” In other words: who has the power to act in ways that can make this claim true?

What critique comes up against, thus, is thinking its own agency within these conditions, rather than trying to paint them as if they were somehow on the ‘outside’ of critique itself. Butler recognizes this:

“If this sort of world, what we might be compelled to call ‘the bad life’, fails to reflect back my value as a living being, then I must become critical of those categories and structures that produce that form of effacement and inequality. In other words, I cannot affirm my own life without critically evaluating those structures that differentially value life itself [my emphasis]. This practice of critique is one in which my own life is bound up with the objects that I think about” (2015: 199).

In simpler terms: my position as a political subject is predicated on the practice of critique, which entails reflecting on the conditions that make my life difficult (or unbearable). Yet those conditions are in part what constitutes my capacity to engage in critique in the first place, since the practice of thinking (critically) is, especially in the case of academic critique, inextricably bound up with the practices, institutions, and – not least importantly – economies of academic knowledge production. In formal terms, critique is a form of Russell’s paradox: a set that at the same time both is and is not a member of itself.
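(For those who like their paradoxes in notation: what follows is just the standard set-theoretic formulation the analogy borrows, a sketch rather than anything from the article itself. Define R as the set of all sets that are not members of themselves,

\[ R = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad R \in R \iff R \notin R \]

so asking whether R belongs to itself yields a contradiction either way – much as critique both belongs to, and claims to stand outside, the conditions it criticises.)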

Living with (Russell) paradoxes

This is why academic critique of neoliberalism has no problem thinking about governing rationalities, the exploitation of workers in Chinese factories, or VCs’ salaries: practices it perceives as outside of itself, or in which it can conceive of itself as an object. But it faces serious problems when it comes to thinking itself as a subject – and, even more, acting in this context – as this, at least according to its own standards, means reflecting on all the practices that make it ‘complicit’ in exactly what it aims to expunge, or criticize.

This means coming to terms with the fact that neoliberalism is the Research Excellence Framework, but neoliberalism is also when you discuss ideas for a super-cool collaborative project. Neoliberalism is the requirement to submit all your research outputs to the faculty website, but neoliberalism is also the pride you feel when your most recent article gets tweeted about. Neoliberalism is the incessant corporate emails about ‘wellbeing’, but it is also the craft beer you have with your friends in the pub. This is why, in the seemingly interminable debates about the ‘validity’ of neoliberalism as an analytical term, both sides are right: yes, on the one hand, the term is vague and can seemingly be applied to any manifestation of power; but, on the other, it does cover everything, which means it cannot be avoided either.

This is exactly the sort of ambiguity – the fact that things can be two different things at the same time – that critique in neoliberalism needs to come to terms with. This could possibly help us move beyond the futile iconoclastic gesture of revealing the ‘true nature’ of things, expecting that action will naturally follow from this (Martijn Konings’ Capital and Time has a really good take on the limits of ‘ontological’ critique of neoliberalism). In this sense, if there is something critique can learn from neoliberalism, it is the art of speculation. If economic discourses are performative, then, by definition, critique can be performative too. This means that futures can be created – but the assumption that ‘voice’ is sufficient to create the conditions under which this can be the case needs to be dispensed with.


Is there such a thing as ‘centrist’ higher education policy?

[Image: Object-oriented representation of my research, Cambridge, December 2017]

This Thursday, I was at the Institute of Education in London, at the launch of David Willetts’ new book, A University Education. The book is another contribution to what I have argued constitutes a veritable ‘boom’ in writing on the fate and future of higher education; my research is concerned, among other things, with the theoretical and political question of the relationship between this genre of critique and the social conditions of its production. However, this is not the only reason why I found it interesting: rather, it is because it sets out what may become the Conservatives’ future policy for higher education. In broader terms, it’s an attempt to carve a political middle ground between Labour’s (supposedly ‘radical’) proposal for the abolition of fees, and the clear PR/political disaster that unmitigated marketisation of higher education has turned out to be. Differently put: it’s the higher education manifesto for what should presumably be the ‘middle’ of the UK’s political spectrum.

The book

Critics of the transformation of UK higher education would probably be inclined to dismiss the book with a simple “Ah, Willetts: fees”. On the other hand, it has received a series of predominantly laudatory reviews – some of them, arguably, from people who know, or have worked in, the same sector as the author. Among the things the reviewers commend are the book’s impressive historical scope, as well as the added value of its ‘peppering’ with anecdotes from Willetts’ time as Minister for Universities and Science. There is substance to both: the anecdotes are sometimes straightforwardly funny, and the historical bits well researched, duly referencing notable predecessors from Kingsley Amis, through C.P. Snow and F.R. Leavis, to Halsey’s “Decline of Donnish Dominion” (though, as James Wilsdon remarked at the event, less so the more recent critics, such as Andrew McGettigan). Yet what clearly stood out to me, on first reading, is that both the historical and the personal parts of the narrative are there to support the main argument: that market competition is, and was, the way to ‘solve’ the problems of higher education (and, to some degree, of society in general); and that the government is uniquely capable of instituting such a market.

The development of higher education in Britain, in this sense, is told as the story of a slow movement against the monopoly (or duopoly) of Oxford and Cambridge, and their selective, elitist model. Willetts recounts the struggle to establish what he (in a not particularly oblique invocation) refers to as ‘challenger’ institutions, from the colleges that would become part of the University of London in the 19th century, all the way to Robbins and his own time in government. Fees, loans, and income-contingent repayment are, in this sense, presented as a way to solve the problem of expansion: in other words, their purpose was to make university education both more accessible (as admission no longer depends on inherited privilege) and fairer (as the cost is defrayed not by all taxpayers but only by those who benefit directly from university education, and whose earnings reflect it).

Competition, competition, competition

Those familiar with the political economy of higher education will probably have no problem locating these ideas in the neoliberal playbook: competition is necessary to prevent the forming of monopolies, but the government needs to ensure competition actually happens, and this is why it needs to regulate the sector – albeit from a distance. I unfortunately have no space to get into this argument here; other authors, over the course of the last two decades, have engaged with the various assumptions that underpin it. What I would like to turn to instead is the role that the presumably monopolistic ‘nature’ of universities plays in the argument.

Now, engaging with the critique of Oxford and Cambridge is tricky, as it risks being interpreted (often, rightly) as a thinly veiled apology for their elitism. As a sociologist of higher education with first-hand experience of both, I have always been – and vocally so – far from an uncritical endorsement of either. Yet, as Priyamvada Gopal noted not long ago, Oxbridge-bashing in itself constitutes an empty ritual that cannot replace serious engagement with social inequalities. In this sense, one of the reasons why English universities are hierarchical, elitist, and prone to reproducing accumulated privilege is that they are a reflection of their society: unequal, elitist, and fascinated with accumulated privilege (witness the obsession with the Royal Family). Of course, no one is blind to the role which institutions of higher education, and in particular elite universities, play in this. But thinking that ‘solving’ the problem of elite universities is going to solve society’s ills is, at best, an overestimation of their power, and at worst a category error.

Framing competition as a way to solve problems of inequality is, unfortunately, one of the cases where the treatment may be worse than the disease. British universities have shown a stubborn tendency to reproduce existing hierarchies no matter what attempts were made to challenge them – the abolition of differences between universities and polytechnics in 1992; the introduction of rankings and league tables; competitive research funding. The market, in this sense, acts not as “the great leveller” but rather as yet another way of instituting hierarchical relationships, except that the mechanisms of reproduction are channelled away from professional (or professorial, in this case) control and towards the government, or, better still, towards supposedly independent and impartial regulatory bodies.

Of course, in comparison with Toby Young’s ‘progressive’ eugenics and rape jokes, Willetts’ take on higher education sounds rather sensible. His critique of early specialisation is well placed; he addresses head-on the problem of equitable distribution; and, as the reviews never tire of mentioning, he really knows universities. In other words: he sounds like one of us. Much like Andrew Adonis, on (presumably) the other side of the political spectrum, who took issue with vice-chancellors’ pay – one of the rare issues on which the opinion of academics is virtually undivided. But what makes these ideas “centrist” is not so much their actual content – as with stopping Brexit, there is hardly anything wrong with the ideas themselves – as the fact that they seek to frame everything else as ‘radical’ or unacceptable.

What ‘everything else’ stands for in the case of higher education, however, is rather interesting. On the right-hand side, we have the elitism and high selectivity associated with Oxford and Cambridge. OK, one might say, good riddance! On the left, however, we have the abolition of tuition fees. Not quite the same, one may be inclined to note.

There ain’t gonna be any middle anymore

Unfortunately, the only thing that makes the idea of abolishing tuition so ‘radical’ in England is its highly stratified social structure. It is worth remembering that, among OECD countries, the UK has one of the lowest levels of public, and one of the highest levels of private, expenditure on higher education as a percentage of GDP. This means that the cost of higher education is disproportionately underwritten by individuals and their families. In lay terms, it means that public money that could be supporting higher education is spent elsewhere. But it also means something much more problematic, at least judging from the interpretation of the graph recently published by Branko Milanovic.

Let’s assume that the ‘private’ cost of higher education in the UK is currently mostly underwritten by the middle classes (this makes sense both in terms of who goes to university, and who pays for it). If the trends Milanovic analyses continue, not only is the income of the middle classes likely to stagnate; especially in the UK, given the economic effects of Brexit, it is likely to decline. This has serious consequences for the private financing of higher education. In one scenario, this means more loans, more student debt, and the creation of a growing army of indebted precarious workers. In another, to borrow from Pearl Jam, there ain’t gonna be any middle anymore: the middle-class families who could afford to pay for their children’s higher education will become a minority.

This is why there is no ‘centrist’ higher education policy. Any approach to higher education that does not first address longer-term social inequalities is unlikely to work; in periods of economic contraction, such as the one Britain is facing, it is even prone to backfire. Education policies, fundamentally, can do two things: one is to change how things are; the other is to make sure they stay the same. Arguing for a ‘sensible’ solution usually ends up doing the latter.


Between legitimation and imagination: epistemic attachment, ontological bias, and thinking about the future

[Image: Some swans are…grey (Cambridge, August 2017)]

A serious line of division runs through my household. It does not concern politics, music, or even sports: it concerns the possibility of a large-scale collapse of social and political order, which I consider very likely. Leaving specific scenarios aside for the time being, let’s just say we are talking more a human-made, climate-change-induced breakdown, involving possibly protracted and almost certainly lethal conflict over resources, than ‘giant asteroid wipes out Earth’ or ‘rogue AI takes over and destroys humanity’.

Ontological security or epistemic positioning?

It may be tempting to attribute the tendency towards catastrophic predictions to psychological factors rooted in individual histories. My childhood and adolescence took place alongside the multi-stage collapse of the country once known as the Socialist Federal Republic of Yugoslavia. First came the economic crisis, when the failure of ‘shock therapy’ to boost stalling productivity (surprise!) resulted in massive inflation; then social and political disintegration, as the country descended into a series of violent conflicts whose consequences went far beyond the actual front lines; and then actual physical collapse, as Serbia’s long involvement in wars in the region was brought to a halt by the NATO intervention in 1999, which destroyed most of the country’s infrastructure, including parts of Belgrade, where I was living at the time*. It makes sense to assume that this results in quite a different sense of ontological security than the one that, say, the predictability of a middle-class English childhood would afford.

But does predictability actually work against the capacity to make accurate predictions? This may seem not only contradictory but also counterintuitive – any calculation of risk has to take into account not just the likelihood but also the nature of the source of threat involved, and thus necessarily draws on the assumption of (some degree of) empirical regularity. But what about events outside this scope? A recent article by Faulkner, Feduzi and Runde offers a good formalization of this problem (Black Swans and ‘unknown unknowns’) in the context of the (limited) possibility of imagining different outcomes (see table below). Of course, as Beck noted a while ago, the perception of ‘risk’ (as well as, by extension, any other kind of future-oriented thinking) is profoundly social: it depends on ‘calculative devices‘ and procedures employed by networks and institutions of knowledge production (universities, research institutes, think tanks, and the like), as well as on how they are presented in, for instance, literature and the media.

[Table: Faulkner, Feduzi and Runde, ‘Unknowns, Black Swans and the risk/uncertainty distinction’, Cambridge Journal of Economics 41(5), August 2017, 1279-1302]

Unknown unknowns

In The Great Derangement (probably the best book I’ve read in 2017), Amitav Ghosh argues that this can explain, for instance, the surprising absence of literary engagement with the problem of climate change. The problem, he claims, is endemic to Western modernity: a linear vision of history cannot conceive of a problem that exceeds its own scale**. This isn’t the case only with ‘really big problems’ such as economic crises, climate change, or wars: it also applies to specific cases such as elections or referendums. Of course, social scientists – especially those qualitatively inclined – tend to emphasise that, at best, we aim to explain events retroactively. Methodological modesty is good (and advisable), but avoiding thinking about the ways in which academic knowledge production is intertwined with the possibility of prediction is a dead end, for at least two reasons.

One is that, as reflected in the (by now overwrought and overdetermined) crisis of expertise and ‘post-truth’, social researchers increasingly find themselves in situations where they are expected to give authoritative statements about the future direction of events (for instance, about the impact of Brexit). Even if they disavow this form of positioning, the very idea of social science rests on the (no matter how implicit) assumption that at least some mechanisms, classes, or objects will exhibit the same characteristics across cases; consequently, the possibility of inference is implied, if not always practised. Secondly, given the scope of the challenges societies face at present, it seems ridiculous not to even attempt to engage with – and, if possible, refine – the capacity to think about how they will develop in the future. While there is quite a bit of research on individual predictive capacity and on the way collective reasoning can correct for cognitive bias, most of these models – given that they are usually based on experiments or simulations – cannot account for the way in which social structures, institutions, and cultures of knowledge production interact with the capacity to theorise, model, and think about the future.
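
To make the limitation concrete, here is a minimal sketch (mine, not from any of the studies mentioned) of the kind of toy simulation such models rest on: averaging many individual forecasts washes out idiosyncratic noise, but a bias shared across the whole group (the statistical stand-in for socially structured positioning) survives aggregation intact.

```python
import random

random.seed(42)

TRUE_VALUE = 100.0      # the quantity everyone is trying to forecast
SHARED_BIAS = 15.0      # assumption: a bias common to the whole group
N_FORECASTERS = 1000

# Each forecast = truth + shared bias + individual (idiosyncratic) noise.
estimates = [
    TRUE_VALUE + SHARED_BIAS + random.gauss(0, 20)
    for _ in range(N_FORECASTERS)
]

crowd_average = sum(estimates) / len(estimates)

print(f"True value:    {TRUE_VALUE:.1f}")
print(f"Crowd average: {crowd_average:.1f}")  # ends up near 115, not 100
# The 'wisdom of the crowd' has removed the noise, not the shared bias.
```

The point of the sketch is precisely what it leaves out: it can say nothing about where the shared bias comes from, which is the question the paragraph above is after.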

The relationship between social, political, and economic factors, on the one hand, and knowledge (including knowledge about those factors), on the other, has been at the core of my work, including my current PhD. While it may seem minor compared to issues such as wars or revolutions, the future of universities offers a perfect case to study the relationship between epistemic positioning, positionality, and the capacity to make authoritative statements about reality: what Boltanski’s sociology of critique refers to as ‘complex externality’. One of the things it allowed me to realise is that while there is a good tradition of reflecting on positionality (or, in positivist terms, cognitive ‘bias’) in relation to categories such as gender, race, or class, we are still far from successfully theorising something we could call ‘ontological bias’: epistemic attachment to the object of research.

The postdoctoral project I am developing extends this question and aims to understand its implications in the context of generating and disseminating knowledge that can allow us to predict – make more accurate assessments of – the future of complex social phenomena such as global warming or the development of artificial intelligence. This question has, in fact, been informed by my own history, but in a slightly different manner than the one implied by the concept of ontological security.

Legitimation and prediction: the case of former Yugoslavia

The Socialist Federal Republic of Yugoslavia had relatively sophisticated and well-developed networks of social scientists, in which both of my parents were involved***. Yet, of all the philosophers, sociologists, political scientists, etc. writing about the future of the Yugoslav federation, only one – to the best of my knowledge – predicted, in eerie detail, the political crisis that would lead to its collapse: Bogdan Denitch, whose Legitimation of a revolution: the Yugoslav case (1976) is, in my opinion, one of the best books about former Yugoslavia ever written.

A Yugoslav-American, Denitch was a professor of sociology at the City University of New York. He was also a family friend, a fact I considered of little significance (having only met him once, when I was four, and my mother and I were spending a part of our summer holiday at his house in Croatia; my only memory of it is being terrified of tortoises roaming freely in the garden), until I began researching the material for my book on education policies and the Yugoslav crisis. In the years that followed (I managed to talk to him again in 2012; he passed away in 2016), I kept coming back to the question: what made Denitch more successful in ‘predicting’ the crisis that would ultimately lead to the dissolution of former Yugoslavia than virtually anyone writing on Yugoslavia at the time?

Denitch had a pretty interesting trajectory. Born in 1929 to Croatian Serb parents, he spent his childhood in a series of countries (including Greece and Egypt), following his diplomat father; in 1946, the family emigrated to the United States (the fact that his father had been a civil servant in the previous government would have made it impossible for them to continue living in Yugoslavia after the Communist regime, led by Josip Broz Tito, formally took over). There, Denitch (in evident defiance of his upper-middle-class legacy) trained as a factory worker, while studying for a degree in sociology at CUNY. He also joined the Democratic Socialist Alliance – one of the American socialist parties – of which he would remain a member (and later functionary) for the rest of his life.

In 1968, Denitch was awarded a major research grant to study Yugoslav elites. The project was not without risks: while Yugoslavia was more open to ‘the West’ than other countries in Eastern Europe, visits by international scholars were strictly monitored. My mother recalls receiving a house visit from an agent of the UDBA, the Yugoslav secret police – not quite the KGB but you get the drift – who tried to elicit the confession that Denitch was indeed a CIA agent, and, in the absence of that, the promise that she would occasionally report on him****.

Despite these minor setbacks, the research continued: Legitimation of a revolution is one of its outcomes. In 1973, Denitch was awarded a PhD by Columbia University and started teaching at CUNY, eventually retiring in 1994. His last book, Ethnic nationalism: the tragic death of Yugoslavia, came out in the same year: a reflection on the conflict that was still going on at the time, and whose architecture he had foreseen with such clarity eighteen years earlier (the book is remarkably bereft of “told-you-so”-isms, and warmly recommended to those wishing to learn more about Yugoslavia’s dissolution).

Did personal history, in this sense, have a bearing on one’s epistemic position and, by extension, on the capacity to predict events? One explanation (prevalent in certain versions of popular intellectual history) would be that Denitch’s position as both a Yugoslav and an American allowed him to escape the ideological traps other scholars were more likely to fall into. Yugoslavs, presumably, would be at pains to prove socialism was functioning; Americans, on the other hand, perhaps egalitarian in theory but certainly suspicious of Communist revolutions in practice, would be looking to prove it wasn’t, at least not as an economic model. Yet this assumption hardly withstands even the lightest empirical scrutiny. At least up until the show trials of the Praxis philosophers, there was a lively critique of Yugoslav socialism within Yugoslavia itself; despite the mandatory coating of jargon, Yugoslav scholars were quite far from being uniformly bright-eyed and bushy-tailed about socialism. Similarly, quite a few American scholars were very much in favour of the Yugoslav model, eager, if anything, to show that market socialism was possible – that is, that it is possible to have a relatively progressive social policy and still be able to afford nice things. Herein, I believe, lies the beginning of the answer as to why neither of these groups was able to predict the type or the scale of the crisis that would eventually lead to the dissolution of former Yugoslavia.

Simply put, both groups of scholars depended on Yugoslavia as a source of legitimation of their work, though for different reasons. For Yugoslav scholars, the ‘exceptionality’ of the Yugoslav model was the source of epistemic legitimacy, particularly in the context of international scientific collaboration: their authority was, in part at least, constructed on their identity and positioning as possessors of ‘local’ knowledge (Bockman and Eyal’s excellent analysis of the transnational roots of neoliberalism makes an analogous point about positioning in the context of the collaboration between ‘Eastern’ and ‘Western’ economists). In addition to this, many Yugoslav scholars were born and raised in socialism: while some of them did travel to the West, the opportunities were still scarce and many were subject to ideological pre-screening. In this sense, both their professional and their personal identity depended on the continued existence of Yugoslavia as an object; they could imagine different ways in which it could be transformed, but not really that it could be obliterated.

For scholars from the West, on the other hand, Yugoslavia served as a perfect experiment in mixing capitalism and socialism. Those more on the left saw it as a beacon of hope that socialism need not go hand-in-hand with Stalinist-style repression. Those who were more on the right saw it as proof that limited market exchange can function even in command economies, and deduced (correctly) that the promise of supporting failing economies in exchange for access to future consumer markets could be used as a lever to bring the Eastern Bloc in line with the rest of the capitalist world. If no one foresaw the war, it was because it played no role in either of these epistemic constructs.

This is where Denitch’s background would have afforded a distinct advantage. The fact his parents came from a Serb minority in Croatia meant he never lost sight of the salience of ethnicity as a form of political identification, despite the fact socialism glossed over local nationalisms. His Yugoslav upbringing provided him not only with fluency in the language(s), but a degree of shared cultural references that made it easier to participate in local communities, including those composed of intellectuals. On the other hand, his entire professional and political socialization took place in the States: this meant he was attached to Yugoslavia as a case, but not necessarily as an object. Not only was his childhood spent away from the country; the fact his parents had left Yugoslavia after the regime change at the end of World War II meant that, in a way, for him, Yugoslavia-as-object was already dead. Last, but not least, Denitch was a socialist, but one committed to building socialism ‘at home’. This means that his investment in the Yugoslav model of socialism was, if anything, practical rather than principled: in other words, he was interested in its actual functioning, not in demonstrating its successes as a marriage of markets and social justice. This epistemic position, in sum, would have provided the combination needed to imagine the scenario of Yugoslav dissolution: a sufficient degree of attachment to be able to look deeply into a problem and understand its possible transformations; and a sufficient degree of detachment to be able to see that the object of knowledge may not be there forever.

Onwards to the…future?

What can we learn from the story? Balancing between attachment and detachment is, I think, one of the key challenges in any practice of knowing the social world. It’s always been there; it cannot be, in any meaningful way, resolved. But I think it will become more and more important as the objects – or ‘problems’ – we engage with grow in complexity and become increasingly central to the definition of humanity as such. Which means we need to be getting better at it.


———————————-

(*) I rarely bring this up as I think it overdramatizes the point – Belgrade was relatively safe, especially compared to other parts of former Yugoslavia, and I had the fortune to never experience the trauma or hardship people in places like Bosnia, Kosovo, or Croatia did.

(**) As Jane Bennett noted in Vibrant Matter, this resonates with Adorno’s notion of non-identity in Negative Dialectics: a concept always exceeds our capacity to know it. We can see object-oriented ontology (e.g. Timothy Morton’s Hyperobjects) as the ontological version of the same argument: the sheer size of the problem acts as a deterrent to grasping it in its entirety.

(***) This bit lends itself easily to the Bourdieusian “aha!” argument – academics breed academics, etc. The picture, however, is a bit more complex – I didn’t grow up with my father and, until about 16, had a very vague idea of what my mother did for a living.

(****) Legend has it my mother showed the agent the door and told him never to call on her again, prompting my grandmother – her mother – to buy funeral attire, assuming her only daughter would soon be thrown into prison and possibly murdered. Luckily, Yugoslavia was not really the Soviet Union, so this did not come to pass.

The biopolitics of higher education, or: what’s the problem with two-year degrees?

[Note: a shorter version of this post was published in Times Higher Education’s online edition, 26 December 2017]

The Government’s most recent proposal to introduce the possibility of two-year (‘accelerated’) degrees has already attracted quite a lot of criticism. One aspect is student debt: given that universities will be allowed to charge up to £2,000 more for these ‘fast-track’ degrees, there are doubts about how students will be able to afford them. Another concerns the lack of mobility: since the Bologna Process assumes the comparability of degrees across European higher education systems, students in courses shorter than three or four years would find it very difficult to participate in Erasmus or other forms of student exchange. Last, but not least, many academics have said the idea of ‘accelerated’ learning is at odds with the nature of academic knowledge, and trivializes or debases the time and effort necessary for critical reflection.

However, perhaps the most curious element of the proposal is its similarity to the Diploma of Higher Education (DipHE), a two-year qualification proposed by Mrs Thatcher at the time when she was Secretary of State for Education and Science. Of course, the DipHE had a more vocational character, meant to enable access equally to further education and to the labour market. In this sense, it was both a foundation degree and a finishing qualification. But there is no reason to believe that those in the new two-year programmes would not consider continuing their education through a ‘top-up’ year, especially if the labour market turns out not to be as receptive to their qualification as the proposal seems to hope. So the real question is: why introduce something that serves no obvious purpose – for students or, for that matter, for the economy – and, furthermore, base it on resurrecting a policy that proved unpopular in 1972 and was abandoned soon after its introduction?

One obvious answer is that the Conservative government is desperate for a higher education policy to match Labour’s proposal to abolish tuition fees (despite the fact that, no matter how commendable, abolishing tuition fees is little more than a reversal of measures put in place by the last Labour government). But the case of higher education in Britain is more curious than that. If one sees policy as a set of measures designed to bring about a specific vision of society, Britain never had much of a higher education policy to begin with.

Historically, British universities evolved as highly autonomous units, which meant that the Government felt little need to regulate them until well into the 20th century. Until the 1960s, the University Grants Committee succeeded in maintaining the ‘gentlemanly conversation’ between the universities and the Government. The 1963 report of the Robbins Committee, thus, was the first serious step into higher education policy-making. Yet, despite the fact that the Robbins report was more complex than many who cite it approvingly give it credit for, its main contribution was to open the doors of universities to, in the memorable phrase, “all who qualify by ability and attainment”. What it sought to regulate was thus primarily who should access higher education – not necessarily how this should be done, nor, for that matter, what the purpose of it all was.

Even the combined pressures of the economic crisis and an uneven rate of expansion in the 1970s and the 1980s did little to orient the government towards a more coherent strategy for higher education. This led Peter Scott to comment in 1982 that “so far as we have in Britain any policy for higher education it is the binary policy…[it] is the nearest thing we have to an authoritative statement about the purposes of higher education”. The ‘watershed’ moment of 1992, abolishing the division between universities and polytechnics, was, in that sense, less a policy and more an attempt to undo the previous forays into regulating the sector.

Two major reviews of higher education since Robbins, the Dearing report and the Browne review, represented little more than attempts to deal with the consequences of massification through, first, tying education more closely to the supposed needs of the economy, and, second, introducing tuition fees. The difference between Robbins and subsequent reports in terms of scope of consultation and collected evidence suggests there was little interest in asking serious questions about the strategic direction of higher education, the role of the government, and its relationship to universities. Political responsibility was thus outsourced to ‘the Market’, that rare point of convergence between New Labour and Conservatives – at best a highly abstract aggregate of unreliable data concerning student preferences, and, at worst, utter fiction.

Rather than as a policy in the strict sense of the term, this latest proposal should be seen as another attempt at governing populations: what Michel Foucault called biopolitics. Of course, there is nothing wrong with the fact that people learn at different speeds: anyone who has taught in a higher education institution is more than aware that students have varying learning styles. But the Neo-Darwinian tone of “highly motivated students hungry for a quicker pace of learning”, combined with the pseudo-widening-participation pitch of “mature students who have missed out on the chance to go to university as a young person”, neither acknowledges this nor actually engages with the need to enable multiple pathways into higher education. Rather, funnelling students through a two-year degree and into the labour market is meant to ensure they swiftly become productive (and consuming) subjects.


[Image: People’s History Museum, Manchester]

Of course, whether the labour market will actually have the need for these ‘accelerated’ subjects, and whether universities will have the capacity to teach them, remains an open question. But the biopolitics of higher education is never about the actual use of degrees or specific forms of learning. As I have shown in my earlier work on vocationalism and education for labour, this type of political technology is always about social control; in other words, it aims to prevent potentially unruly subjects from channelling their energy into forms of action that could be disruptive of the political order.

Education – in fact, any kind of education policy – is perfect in this sense because it is fundamentally oriented towards the future. It occupies the subject now, but transposes the horizon of expectation into the ever-receding future – future employment, future fulfilment, future happiness. The promise of quicker, that is, accelerated delivery into this future is a particularly insidious form of displacement of political agency: the language of certainty (“when most students are completing their third year of study, an accelerated degree student will be starting work and getting a salary”) is meant to convey that there is a job and salary awaiting, as it were, at the end of the proverbial rainbow.

The problem is not simply that such predictions (or promises) are based on empty rhetoric rather than on any form of objective assessment of the ‘needs’ of the labour market. Rather, it is that the future needs of the labour market are notoriously difficult to assess, and even more so in periods of economic contraction. Two-year degrees, in this sense, are just a way to defer the compounding problems of inequality, unemployment, and social insecurity. Unfortunately, to date, no higher education qualification has proven capable of doing that.

If on a winter’s night a government: a tale of universities and the state with some reference to present circumstances

Imagine you were a government. I am not saying imagine you were THE government, or any particular government; interpretations are beyond the scope of this story. For the sake of illustration, let’s say you are the government of Cimmeria, the fictional country in Italo Calvino’s If on a winter’s night a traveler...

I’m not saying you – the reader – should necessarily identify with this government. But I was trained as an anthropologist; this means I think it’s important to understand why people – and institutions – act in particular contexts the way that they do. So, for the sake of the story, let’s pretend we are the government of Cimmeria.

Imagine you, the Cimmerian government, are intent on doing something really, really stupid, with possibly detrimental consequences. Imagine you are aware that there is no chance you can get away with this and still hold on to power. Somehow, however, you’re still hanging on, and it’s in your interest to go on doing that for as long as possible, until you come up with something better.

There is one problem. Incidentally, sometime in your long past, you developed places where people can learn, talk, and – among many other things – reflect critically on what you are doing. Let’s, for the sake of the story, call these places universities. Of course, universities are not the only places where people can criticise what you are doing. But they are plentiful, and people in them are many, and vocal. So it’s in your interest to make sure these places don’t stir trouble.

At this point, we require a little historical digression.

How did we get so many universities in the first place?

Initially, it wasn’t you who developed universities at all; they mostly started on their own. But you tolerated them, then grew to like them, and even started a programme of patronage. At times, you struggled with the church – churches, in fact – over influence on universities. Then you got yourself a Church, so you didn’t have to fight any longer.

Universities educated the people you could trust to rule with you: not all of them specialized in the art of government, of course, but all were skilled in polite conversation and, above all, understood the division of power in Cimmeria. You trusted these people so much that, even when you had to set up an institution to mediate your power – the Parliament – you gave them special representation.* Even when this institution had to set up a further body to mediate its relationship with the universities – the University Grants Committee, later to become the funding councils – these discussions were frequently described as an ‘in-house conversation’.

Some time later, you extended this favour to more people. You thought that, since education made them more fit to rule with you, the more educated they were, the more they should see the value of your actions. The form you extended was a cheaper, more practical version of that education: obviously, not everyone was fit to rule. Eventually, however, even these institutions started conforming to the original model, a curious phenomenon known as ‘academic drift’. You thought this was strange, but since they seemed intent on emulating each other, you did away with the binary model and brought in the Market. That’ll sort them out, you thought.

You occasionally asked them to work for you. You were always surprised, even hurt, when you found out they didn’t want to. You thought they were ridiculous, spoiled, ungrateful. Yet you carried on. They didn’t really matter.

Over the years, their numbers grew. Every once in a while, they would throw some sort of a fuss. They were very political. You didn’t really care; at the end of the day, all their students went on to become decent, tax-paying subjects, leaving days of rioting safely behind.

Until, one day, there were no more jobs. There was no more safety. Remember, you had cocked up, badly. Now you’ve got all of these educated people, disappointed, and angry, exactly at the time you need it least. You’ve got 99 problems but, by golly, you want academia not to be one.

So, if on a winter’s night a government should think about how to keep universities at bay while driving the country further into disarray…

Obviously, your first task is to make sure they are silent. God forbid all of those educated people should start holding you to account, especially at the same time! Historically, there are a few techniques at your disposal, but they don’t seem to fit very well. Rounding academics up and shipping them off to gulags seems a bit excessive. Throwing them in prison is bound not to prove popular – after all, you’re not Turkey. In fact, you’re so intent on communicating that you are not Turkey that you campaigned for leaving the Cimmeropean Union on the (fabricated) pretext that Turkey is about to join it.

Luckily, there is a strategy more effective than silencing. The exact opposite: making sure they talk. Not about the Brexit-shaped elephant in the room, of course; not about how you are systematically depriving the poor and the vulnerable of any source of support. Certainly not, by any chance, about how you have absolutely no strategy, idea, or, for that matter, procedural skill for the most important political transition of the last half-century, which Cimmeria is about to undergo. No, you have something much better at your disposal: make them talk about themselves.

One of the sure-fire ways to get them to focus on what happens within universities (rather than outside) is to point to the enemy within their own ranks. Their own management seems like the ideal object for this. Not that anyone likes their bosses anyway, but the problem here is particularly exacerbated by the fact that their bosses are overpaid, and some academics underpaid. Not all, of course; many academics get very decent sums. Yet questions of money or material security are traditionally snubbed in academia. For a set of convoluted historical and cultural reasons that we unfortunately do not have time to go into here, academics like to pretend they work for love, rather than money – so much so that, when neophytes are recruited, they often do work for meagre sums, and can go on doing so for years. Resilience is seen as a sign of value; there is more than a nod to Weber’s analysis of the doctrine of predestination here. This, of course, does not apply only to universities, but to capitalism as a whole: but then again, universities have always been integrated into capitalism. They, however, like to imagine they are not. Because of this, the easiest way to keep them busy is to make them believe that they can get rid of capitalism by purging its representatives (ideally, some that embody the most hateful elements – e.g. Big Pharma) from the university. It is exactly by convincing them that capitalism can be expunged by getting rid of a person, a position, or even a salary figure, that you ensure it remains alive and well (you like capitalism, also for a set of historical reasons we cannot go into at this point).

The other way to keep them occupied is to poke at the principles of university autonomy and academic freedom. You know these principles well; you defined them and enshrined them in law, not necessarily because you trusted universities (you did, but not for too long), but because you knew that they would forever be a reminder to scholars that their very independence from the state is predicated on their dependence on the state. Now, obviously, you do not want to poke at these principles too much: as mentioned above, such gestures tend not to be very popular. However, they are so effective that even a superficially threatening act is guaranteed to get academics up in arms. A clumsily written, badly (or: ideally) timed letter, for instance. An injunction to ‘protect free speech’ can go a very long way. Even better, on top of all that, you’ve got Prevent, which doubles as an actual tool for securitization and surveillance, making sure academics are focused on what’s going on inside, rather than looking outside.

They often criticize you. They say you do not understand how universities work. Truth is, you don’t. You don’t have to; you never cared about the process, only about the outcome.

What you do understand, however, is politics – the subtle art of making people do what you want them to, or, in the absence of that, making sure they do not do something that could really unsettle you. Like organize. Or strike. Oops.

* The constituency of Combined English Universities existed until 1950.

Why is it more difficult to imagine the end of universities than the end of capitalism, or: is the crisis of the university in fact a crisis of imagination?

[Image: Graffiti at the back of a chair in a lecture theatre at Goldsmiths, University of London, October 2017]

Hardly anyone needs convincing that the university today is in deep crisis. Critics warn that the idea of the University (at least in the form in which it emerged from Western modernity) is endangered, under attack, under fire; that governments or corporations are waging a war against it. Some even pronounce the public university already dead, or at least lying in ruins. The narrative about the causes of the crisis is well known: the shift in public policy towards deregulation and the introduction of market principles – usually known as neoliberalism – meant the decline of public investment, especially for the social sciences and humanities, the introduction of performance-based funding dependent on quantifiable output, and, of course, tuition fees. This, in turn, led to rising precarity and insecurity among faculty and students, reflected, among other things, in a mental health crisis. Paradoxically, the only surviving element of the public university that seems to be doing relatively well in all this is critique. But what if the crisis of the university is, in fact, a crisis of imagination?

Don’t worry, this is not one of those posts that try to convince you that capitalism can be wished away by the power of positive thinking. Nor is it going to claim that neoliberalism offers unprecedented opportunities, if only we would be ‘creative’ enough to seize them. The crisis is real, it is felt viscerally by almost everyone in higher education, and – importantly – it is neither exceptional nor unique to universities. Exactly because it cannot be wished away, and exactly because it is deeply intertwined with the structures of the current crisis of capitalism, opposition to the ongoing transformation of universities needs to involve serious thinking about long-term alternatives to existing modes of knowledge production. Unfortunately, this is precisely the bit that tends to be missing from a lot of contemporary critique.

Present-day critique of neoliberalism in higher education often takes the form of a nostalgic evocation of the glory days when universities were few, and funds for them plentiful. Other problems with this mythical Golden Age aside, what this sort of critique conveniently omits to mention is that the institutions that usually provide the background imagery for these fantastic constructs were both highly selective and highly exclusionary, and that they were built on the back of centuries of colonial exploitation. If they seemed to confer a life of relatively carefree privilege on those who studied and worked in them, that is exactly because this is what they were designed to do: cater to the “life of the mind” by excluding all forms of interference, particularly if these took the form of domestic (or any other material) labour, women, or minorities. This tendency is reproduced in Ivory Tower nostalgia as a defensive strategy: the dominant response to what critics tend to claim is the biggest challenge to universities since their founding (which, as they like to remind us, was a long, long time ago) is to stick one’s head in the sand and collectively dream back to a time when, as Pink Floyd might put it, grass was greener and lights were brighter.

Ivory Tower nostalgia, however, is just one aspect of this crisis of imagination. A much broader symptom is that contemporary critique seems unable to imagine a world without the university. Since ideas of online disembedded learning were successfully monopolized by technolibertarian utopians, the best most academics seem to be able to come up with is to re-erect the walls of the institution, but make them slightly more porous. It’s as if the U of University and the U of Utopia were somehow magically merged. To extend the oft-cited and oft-misattributed saying, if it seems easier to imagine the end of the world than the end of capitalism, it is nonetheless easier to imagine the end of capitalism than the end of universities.

Why does an institution like the university have such purchase on (utopian and dystopian) imagination? Thinking about universities is, in most cases, already imbued with the university, so one element pertains to the difficulty of perceiving the conditions of reproduction of one’s own position (this mode of access from the outside, as object-oriented ontologists would put it, or complex externality, as Boltanski does, is something I’m particularly interested in). However, this isn’t the case just with academic critique; fictional accounts of universities and other educational institutions are proliferating, and, in most cases (as I hope to show once I finally get around to writing the book on magical realism and universities), they reproduce the assumption of the value of the institution as such, as well as a lot of associated ideas, as this tweet conveys succinctly:

[Screenshot of a tweet]

This is, unfortunately, often the case even with projects whose explicit aim is to subvert existing inequalities in the context of knowledge production, including open, free, and workers’ universities (the Social Science Centre in Lincoln maintains a useful map of these initiatives globally). While these are fantastic initiatives, most either have to ‘piggyback’ on university labour – that is, on the free or voluntary labour of people employed or otherwise paid by universities – or, at least, rely on existing universities for credentialisation. Again, this isn’t to devalue those who invest time, effort, and emotion into such forms of education; rather, it is to flag that thinking about serious, long-term alternatives is necessary, and quickly at that. This is a theme I spend a lot of time thinking about, and one I hope to make one of the central topics of my work in the future.


So what are we to do?

There’s an obvious bit of irony in suggesting a panel for a conference in order to discuss how the system is broken but, in the absence of other forms, I am thinking of putting together a proposal for a workshop at the Sociological Review’s 2018 “Undisciplining: Conversations from the edges” conference. The good news is that the format is supposed to go outside the ‘orthodox’ confines of panels and presentations, which means we could do something potentially exciting. The tentative title is Thinking about (sustainable?) alternatives to academic knowledge production.

I’m particularly interested in questions such as:

  • Qualifications and credentials: can we imagine a society where universities do not hold a monopoly on credentials? What would this look like?
  • Knowledge work: can we conceive of knowledge production (teaching and research) not only ‘outside of’, but without the university? What would this look like?
  • Financing: what other modes of funding for knowledge production are conceivable? Is there a form of public funding that does not involve universities (e.g., through an academic workers’ cooperative – Mondragon University in Spain is one example – or a guild)? What would be the implications of this, and how would it be regulated?
  • Built environment/space: can we think of knowledge not confined to specific buildings or an institution? What would this look like – how would it be organised? What would be the consequences for learning, teaching and research?

The format would need to be interactive – possibly a blend of on- and off-line conversations – and could address the above, or any other questions related to thinking about alternatives to current modes of knowledge production.

If you’d like to participate/contribute/discuss ideas, get in touch by the end of October (the conference deadline is 27 November).

[UPDATE: Our panel got accepted! See you at the Undisciplining conference, 18-21 June, Newcastle, UK. Watch this space for more news.]

A fridge of one’s own

[Image: A treatise on the education of women, 1740. Museum of European Students, Bologna]

A woman needs a fridge of her own if she is to write theory. In fact, I’d wager a woman needs a fridge of her own if she is to write pretty much anything, but since what I am writing at the moment is (mostly) theory, let’s assume that it can serve as a metaphor for intellectual labour more broadly.

In her famous injunction to undergraduates at Girton College, Cambridge (the first residential college for women that offered education to degree level), Virginia Woolf stated that a woman needed two things in order to write: a room of her own, and a small independent income (Woolf settled on £500 a year; as this website helpfully informed me, this would be £29,593 in today’s terms). In addition to the room and the income, a woman who wants to write, I want to argue, also needs a fridge. Not a shelf or two in a fridge in a kitchen in a shared house or at the end of the staircase; a proper fridge of her own. Let me explain.

The immateriality of intellect

Woolf’s broader point in A Room of One’s Own is that intellectual freedom and creativity require the absence of material constraints. In and of itself, this argument is not particularly exceptional: attempts to define the nature of intellectual labour have almost unfailingly centred on its rootedness in leisure – skholē – as the opportunity for peaceful contemplation, away from the vagaries of everyday existence. For the ancient Greeks, contemplation was opposed to the political (as in the everyday life of the polis): what we today think of as the ‘private’ was not even a candidate, being the domain of women and slaves, neither of whom were considered proper citizens. For Marx, it was the opposite of material labour, with its sweat, noise, and capitalist exploitation. But underpinning it all was the private sphere – that amorphous construct that, as feminist scholars pointed out, includes the domestic and affective labour of care, cleaning, cooking, and, yes, the very act of biological reproduction. The capacity to distance oneself from these kinds of concerns thus became the sine qua non of scholarly reflection, particularly in the case of theōria, held to be contemplation in its pure(st) form. After all, to paraphrase Kant, it is difficult to ponder the sublime from too close.

This thread runs from Plato and Aristotle through Marx to Arendt, who made it the gist of her analysis of the distinction between vita activa and vita contemplativa; and onwards to Bourdieu, who zeroed in on ‘scholastic reason’ (raison scolastique) as the source of Homo Academicus’ disposition to project the categories of scholarship – skholē – onto everyday life. I am particularly interested in the social framing of this distinction, given that I think it underpins a lot of contemporary discussions on the role of universities. But regardless of whether we treat it as a virtue, a methodological caveat, or an interesting research problem, detachment from the material persists as the distinctive marker of the academic enterprise.


What about today?

So I think we can benefit from thinking about what would be the best way to achieve this absolution from the material for women who are trying to write today. One solution, obviously, would be to outsource the cooking and cleaning to a centralised service – like, for instance, College halls and cafeterias. This way, one would have all the time to write: away with the vile fridge! (It was rather unseemly anyway, poised as it was in the middle of one’s room.) Yet outsourcing domestic labour means we are potentially depriving other people of the opportunity to develop their own modes of contemplation. If we take into account that the majority of global domestic labour is performed by women, perfecting our scholarship would most likely come off the back of another Shakespeare’s (or, for consistency’s sake, let’s say Marx’s) sister. So, let’s keep the fridge, at least for the time being.

But wait, you will say, what about eating out – in restaurants and such? It’s fine you want to do away with outsourced domestic labour, but surely you wouldn’t scrap the entire catering industry! After all, it’s a booming sector of the economy (and we all know economic growth is good), and it employs so many people (often precariously and in not very nice conditions, but we are prone to ignore that during happy hour). Also, to be honest, it’s so nice to have food prepared by other people. After all, isn’t that what Simone de Beauvoir did, sitting, drinking and smoking (and presumably also eating) in cafés all day? This doesn’t necessarily mean we would need to do away with the fridge, but a shelf in a shared one would suffice – just enough to keep a bit of milk, some butter and eggs, fruit, perhaps even a bottle of rosé? Here, however, we face the economic reality of the present. Let’s do a short calculation.


£500 a year gets you very far…or not

The £29,593 Woolf proposes as a sufficient independent income comes from an inheritance. Those of us who are less fortunate and are entering the field of theory today can hope to obtain one of many scholarships. Mine is currently £13,900 a year (untaxed); ESRC-funded students get a bit more, £14,000. This means we fall well short of today’s equivalent of the £500-a-year sum Woolf suggested to the students at Girton. Starting from £14,000, and assuming that roughly £2,000 annually goes on things such as clothes, books, cosmetics, and ‘incidentals’ – for instance, travel to see one’s family or medical costs (non-EU students are subject to something called the Immigration Health Surcharge, paid upfront at the point of application for a student visa, which varies between £150 and £200 per year, but doesn’t cover dental treatment, prescriptions, or eye tests – so much for “NHS tourism”) – this leaves us with roughly £1,000 per month. Out of this, accommodation costs anything between £400 and £700, depending on bills, council tax, etc. – for a “room of one’s own”, that is, a room in a shared house or college accommodation – that, you’ve guessed it, almost inevitably comes with a shared fridge.

So the money that's left is supposed to cover eating in cafés, perhaps even an occasional glass of wine (it's important to socialise with other writers, or just watch the world go by). Assuming we have £450 a month left after paying rent and bills, this leaves us with a bit less than £15 per day. This suffices for about one and a half meals daily in most cheap high street eateries, provided you do not eat a lot, do not drink, and never have tea or coffee. Ever. Even at colleges, where food is subsidised, this would be barely enough. Remember: this means you never go out for a drink with friends or to the cinema, never buy presents, never pay for services: in short, it makes for a relatively boring and constrained life. This could make writing, unless you're Emily Dickinson, somewhat difficult. Luckily, you have the Internet – that is, if it's included in your bills. And you pray your computer does not break down.
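
For those who prefer their misery quantified, here is the same back-of-the-envelope calculation as a minimal sketch in Python. Everything uses the figures just quoted; the £550 for rent and bills is my assumption, taken as a mid-point of the £400–700 range above.

```python
# A minimal sketch of the stipend arithmetic above (all figures in GBP;
# illustrative numbers from the text, not official rates).

annual_stipend = 14_000      # ESRC-level scholarship, tax-free
annual_incidentals = 2_000   # clothes, books, travel to family, medical costs

monthly_budget = (annual_stipend - annual_incidentals) / 12
rent_and_bills = 550         # assumed mid-point of the quoted £400-700 range
left_over = monthly_budget - rent_and_bills
per_day = left_over / 30     # rough 30-day month

print(f"Monthly budget: £{monthly_budget:.0f}")    # ~£1000
print(f"After rent and bills: £{left_over:.0f}")   # ~£450
print(f"Per day: £{per_day:.2f}")                  # ~£15
```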

Well, you can always work, you say. If the money you're given is not enough to provide the sort of lifestyle you want, go earn more! But there's a catch. If you are in full-time education, you are only allowed to work part-time. If you are a foreign national, there are additional constraints. This means the amount of money you can earn is usually quite limited. And there are tradeoffs. You know all those part-time jobs that pay a lot, offer stability and future career progression, and that everyone is flocking towards? Neither do I. If you ever wondered where the seemingly inexhaustible supply of cheap labour at universities – sessional lecturers, administrative assistants, event managers, servers etc. – comes from, look around you: more likely than not, it's hungry graduate students.

 

The poverty of student life

Increasingly, this is not in the Steve Jobs "stay hungry" sense. As I've argued recently, "staying hungry" has quite a different tone when, instead of a temporary excursion into relative deprivation (seen as part of the 'character building' education is supposed to be about), it reflects the very real threat of struggling to make ends meet well after graduation. Given the state of the economy and graduate debt, that is a threat faced by a growing proportion of young people (and, no surprise, women are much more likely to end up in precarious employment). Of course, you could always argue that many people have it much worse: you are (relatively) young, well educated, and likely have more cultural and social capital than the average person. Sure, you can get by. But remember – this isn't about making it from one day to the next. What you're trying to do is write. Contemplate. Comprehend the beauty (and, sometimes, ugliness) of the world in its entirety. Not wonder whether you'll be able to afford the electricity bill.

This is why a woman needs to have her own fridge. If you want access to healthy, cheap food, you need to be able to buy it in greater quantities, so you don't have to go to the supermarket every other day, and store it at home, so you can prepare it quickly and conveniently, as well as plan ahead. For the record, by healthy I do not mean quinoa waffles, duck eggs and shiitake mushrooms (not that there's anything wrong with any of these, though I've never tried duck eggs). I mean the sort of food that keeps you full whilst not racking up your medical expenses further down the line. For this you need a fridge. Not half a vegetable drawer among opened cans of lager that some bro you happen to share a house with forgot to throw away months ago, but an actual fridge. Of your own. It doesn't matter if it comes with a full kitchen – you can always share a stove, wait your turn for the microwave, and cooking (and eating) together can be a very pleasurable way of spending time. But keep your fridge.

 

Emotional labour

But, you will protest, what about women who live with partners? Surely we want to share fridges with our loved ones! Well, good for you, go ahead. But you may want to make sure that it's not always you remembering to buy the milk, not always you supplying fresh fruit and vegetables, not always you throwing away the food whose use-by date has long expired. That it doesn't mean you pay half the household bills, but still do more than half the work. For, whether we like it or not, research shows that in heterosexual partnerships women still perform the greater portion of domestic labour, not to mention the mental load of designing, organising, and dividing tasks. And yes, this impacts your ability to write. It's damn difficult to follow a line of thought if you need to stop five times to take the laundry out, empty the bins, close the windows because it's just started raining, pick up the mail that came through the door, and add tea to the shopping list – not even mentioning what happens if you have children on top of all this.

So no, a fridge cannot – and will not – solve the problem of gender inequality in academia, let alone gender inequality on a more general level (after all, academics are very, very privileged). What it can do, though, is rebalance the score by reminding us that cooking, cleaning, and cutting up food are elements of life as much as citing, cross-referencing, and critique. It can begin to destroy, once and for all, the gendered (and classed) assumption that contemplation happens above and beyond the material, and that all reminders of its bodily manifestations – for instance, that we still need to eat whilst thinking – should be, if not abolished entirely, then at least expelled beyond the margins of awareness: to communal kitchens, restaurants, kebab vans, anywhere they do not disturb the sacred space of the intellect. So keep your income, get a room, and put a fridge in it. Then start writing.

 

The poverty of student experience

“Be young and shut up”, poster from the demos in France in May 1968, Museum of students, Bologna, Italy, November 2012

 

 

One of my favourite texts from back when I was writing my Master's thesis is the Situationist International's On The Poverty of Student Life (De la misère en milieu étudiant). Written in 1966 and distributed in 10,000 copies at the official ceremony marking the start of the new academic year at the University of Strasbourg, it provoked an outcry and a swift reaction from the university authorities, who closed down UNEF, the student union that printed it. Today, it is recognized as one of the texts that both diagnosed and helped precipitate the conditions that eventually led to the famous 1968 student rebellions in France. This is how it begins:

 

“We might very well say, and no one would disagree with us, that the student is the most universally despised creature in France, apart from the priest and the policeman. The licensed and impotent opponents of capitalism repress the obvious–that what is wrong with the students is also what is wrong with them. They convert their unconscious contempt into a blind enthusiasm. The radical intelligentsia prostrates itself before the so-called ‘rise of the student’ and the declining bureaucracies of the Left bid noisily for his moral and material support.

There are reasons for this sudden enthusiasm, but they are all provided by the present form of capitalism, in its overdeveloped state. We shall use this pamphlet for denunciation. We shall expose these reasons one by one, on the principle that the end of alienation is only reached by the straight and narrow path of alienation itself.

Up to now, studies of student life have ignored the essential issue. The surveys and analyses have all been psychological or sociological or economic: in other words, academic exercises, content with the false categories of one specialization or another. None of them can achieve what is most needed–a view of modern society as a whole.”

 

This diagnosis remains largely relevant today: most discussions of tuition fees avoid tackling the bigger question – the purpose of education and its role in society – beyond invoking the standard slogans of either economic development or social justice and fairness. However, neither the clarity of its analysis nor its resonance with contemporary issues is the main reason why I believe the Situationist pamphlet is worth reading. Instead, I would like to draw attention to one of its underlying assumptions, reflected in the broader cultural imaginary of the 'misery' of student existence, life and social position, and then contrast it with current trends in the provision of student 'experience'. Last, I want to bring this conversation to the question of tuition fees, which has recently regained prominence in England but has been in the background of higher education policy discussions – both in the UK and globally – for at least the last 30 years, and then use it to reflect on the changing role of higher education more generally.

The misery of student life?

There was a time when being a student really was an exercise in misery. Stories of dank rooms, odd jobs, and scraping by on half a baguette and half a pack of cigarettes used to be the staple of 'the student experience'. Nor were such stories limited to France; I often hear colleagues in the UK complain about not being able to stand cider, having drunk way too much of the cheap stuff as undergrads. All of this, as the adage went, was in preparation for a better life to come: stories of nights spent drinking cheap cider only make sense if they are told from a position in which one can afford, if not exactly Dom Pérignon, then at least decent craft beer.

In fact, these stories are most often told in senior common rooms, at alumni gala dinners, or cheerful reunions of former uni classmates, appropriately decked out in suits. In them, poverty is framed as a rite of passage, serving to justify one’s privileged social and professional position: instituting a myth of meritocracy (look how much I suffered in order to get to where I am now!) as well as the myth of disinterestedness in the material, creature-comforts side of life (I cared about perfecting my intellect so much I was prepared to lead a life of [relative] material deprivation!).

These stories do more than establish the privilege and shared social identity of those who tell them, however. They also support the figure of 'the student' as healthy, able-bodied, and – most of all – with little to focus on besides learning. After all, in order to endure between three and eight years on packets of noodle soup, cheap booze, and no sleep, you need to be young, relatively fit, and without caring duties: staying up all night drinking Strongbow and discussing Schopenhauer is kind-of-less-likely if you've got to take kids to school or go to work in the morning. This automatically excludes most mature and part-time students; to say nothing of the fact that negotiating campus sociality is still more difficult if (for cultural, religious, health or other reasons) you do not drink or do drugs. But, most importantly, it reinforces the idea that scarcity is a choice; the 'student experience', in this myth, is a form of poverty tourism, or a bootcamp from which you emerge strengthened and ready to assume your (obviously advantageous) position in life. This, clearly, excludes everyone without a guaranteed position in the social and economic elite. Poverty is not a rite de passage for those who stay poor throughout their lives, and there is no glory in recalling the days of drinking cheap cider if, ten years down the line, you doubt you'll be able to afford much better. Increasingly, however, that is all of us.

The Situationists recognized the connection between the 'poverty of student life' and generalised poverty back in 1966:

 

“At least in consciousness, the student can exist apart from the official truths of ‘economic life’. But for very simple reasons: looked at economically, student life is a hard one. In our ‘society of abundance’, he is still a pauper. 80% of students come from income groups well above the working class, yet 90% have less money than the meanest laborer. Student poverty is an anachronism, a throw-back from an earlier age of capitalism; it does not share in the new poverties of the spectacular societies; it has yet to attain the new poverty of the new proletariat.”

 

This brings us to the misery of student experience here and now. For the romanticisation of the poverty of student life makes sense only if that poverty is chosen, and temporary. Just like the graduate premium, it is predicated on the idea that you are ‘suffering’ now, in order to benefit later. And, of course, in the era of precarity, unemployment, and what David Graeber famously dubbed ‘bullshit jobs’, it no longer holds.

 

The gilded cage of student experience

 

Of course, a university degree, in principle, still means your chances on the job market are better than those of someone without one. But this data obscures the bigger picture, which is that the proportion of bullshit jobs is increasing: it's not that a university degree guarantees fantastic employment opportunities, it's that not having one means falling out of the competition for anything but the bottom of the job ladder. Most importantly, talk of the graduate premium often fails to take into account the degree to which higher education is still a proxy for something else entirely: class. The effect of a university degree on employment and quality of life is thus a compound of education, social background, cultural capital, race, gender, age etc., rather than an automatic effect of enduring three to eight years of exam taking, excessive drinking, and excruciating anxiety.

Perhaps surprisingly, one of the most visible reflections of the changing socio-economic structure of student existence is the growth of high-end or luxury student housing, and the associated focus on 'student experience'. Of course, in most cases universities and property developers do this in order to cater to foreign, 'overseas' fee-paying students, who are often quite openly framed as the institution's main source of income (it is particularly interesting to observe otherwise staunch critics of 'marketization' and defenders of the 'public' status of the university unashamedly treat such students, or their parents, as cash cows or, at the very least, consumers). But, to a hardly lesser degree, it also reflects a (still implicit) recognition that studying no longer guarantees a good and well-paid job. In other words, if you're not necessarily going to have a better life after university, you may as well live in decent conditions while you're there.

The replacement of dank bedsits and instant noodles with ensuite rooms and gluten-free granola, then, is not ‘selling out’ the ideals of education in order to pander to the ‘Snowflake’ generation, as some conservative authors have argued. It is a reflection of a broader socio-economic shift related to the quality of life and life chances, as well as the breaking of the assumption of a direct (if not necessarily causal) link between education, employment, and status. In this sense, Labour’s plan to abolish tuition fees is a good start, but it does not solve the greater question of poverty and precarity, both of which will increasingly impact even those who have previously been relatively shielded from the effects of the crumbling economy – graduates.

 

Beyond fees

 

Even with no tuition fees, students will either need loans to cover living costs, or – unless they rely on their parents (and here we are stuck in the vicious cycle of class reproduction) – engage in bullshit work (at least until there is an actual effort to integrate part-time study with decent jobs, something the Open University used to do well). In the same vein, a graduate tax only makes sense if the highly educated on the whole actually earn much more than the rest of the population (see an interesting discussion here) – which, if current trends continue, is hardly going to be the case. In the meantime, the graduate premium reflects less the actual 'earning power' a degree brings and more the further slide into poverty of those without degrees, coupled with the increasing wealth of those in top-tier jobs, who are hardly representative of graduates as a whole (in fact, they usually come from a small number of institutions and, again, from relatively privileged social backgrounds).

 

Addressing tuition fees in isolation, then, does little to counter the compound effects of deindustrialization, financialization, and growing public debt. This is not to say that it isn't a solution – it's certainly preferable to accruing a lifetime of debt – but it speaks to the need to integrate education policy into broader questions of economic and social justice, rather than treat it as a temporary fix for rapid social, technological and demographic change. Meanwhile, we could do something really radical, like, I dunno, tax the rich? Just a thought.

 

Theory as practice: for a politics of social theory, or how to get out of the theory zoo

 

[These are my thoughts/notes for the "Practice of Social Theory" summer school, which Mark Carrigan and I are running at the Department of Sociology of the University of Cambridge from 4 to 6 September, 2017.]

 

Revival of theory?

 

It seems we are witnessing something akin to a revival of theory, or at least of an interest in it. In 2016, the British Journal of Sociology published Swedberg's "Before theory comes theorizing, or how to make social sciences more interesting", a longer version of its 2015 Annual Public Lecture, followed by responses from – among others – Krause, Schneiderhan, Tavory, and Karleheden. A string of recent books – including Matt Dawson's Social Theory for Alternative Societies, Alex Law's Social Theory for Today, and Craig Browne's Critical Social Theory, to name but a few – set out to consider the relevance or contribution of social theory to understanding contemporary social problems. This is in addition to the renewed interest in the biography and contemporary relevance of social-philosophical schools such as Existentialism (1, 2) and the Frankfurt School [1, 2].

To a degree, this revival happens on the back of the challenges posed to the status of theory by the rise of data science, leading Lizardo and Hay to mount defences of the value and contribution of theory to sociology and international relations, respectively. In broader terms, however, it addresses the status of the social sciences – and, by extension, academic knowledge – more generally; and, as such, it brings us back to the justification of expertise, a question of particular relevance in the current political context.

The meaning of theory

Sure enough, theory has many meanings (Abend, 2008), and consequently many forms in which it is practiced. One characteristic that seems to be shared across the board, however, is that theory is part of (under)graduate training, after which it gets bracketed off in the form of "the theory chapter" of dissertations/theses. In this sense, theory is framed as foundational to socialization into a particular discipline, but, at the same time, rarely revisited – at least not explicitly – after the initial demonstration of aptitude. In other words, rather than something one keeps doing, theory becomes something one is 'done with'. The exceptions, of course, are those who decide to make theory the centre of their intellectual pursuits; however, "doing theory" in this sense all too often becomes limited either to the exegesis of existing texts (what Krause refers to as 'theory a' and Abend as 'theory 4'), which leads to competition among theorists for the best interpretation of "what theorist x really wanted to say", or to the application of existing concepts to new observations or 'problems' ('theory b and c', in Krause's terms). Either way, the field of social theory resembles less the groves of Plato's Academy, and more a zoo in which different species ('Marxists', 'critical realists', 'Bourdieusians', 'rational-choice theorists') dwell in their respective enclosures or fight with members of the same species for dominance of a circumscribed domain.

 

Competitive behaviour among social theorists

 

This summer school started from the ambition to change that: to go beyond rivalries or allegiances to specific schools of thought, and think about what doing theory really means. I often told people that wanting to do social theory was a major reason why I decided to do a second PhD; but what was this about? I did not say ‘learn more’ about social theory (my previous education provided a good foundation), ‘teach’ social theory (though supervising students at Cambridge is really good practice for this), read, or even write social theory (though, obviously, this was going to be a major component). While all of these are essential elements of becoming a theorist, the practice of social theory certainly isn’t reducible to them. Here are some of the other aspects I think we need to bear in mind when we discuss the return, importance, or practice of theory.

Theory is performance

This may appear self-evident once the focus shifts to 'doing', but we rarely talk about what practicing theory is meant to convey – that is, about theorising as a performative act. Some elements of this are not difficult to establish: doing theory usually means identification with a specific group, or a form of professional or disciplinary association. Most professional societies have committees, groups, and specific conference sessions devoted to theory – but that does not mean theory is exclusively practiced within them. In addition to belonging, theory also signifies status. In many disciplines, theoretical work has for years been held in high esteem; the flipside, of course, is that 'theoretical' is often taken to mean too abstract or divorced from everyday life, something that became a more pressing problem with the decline of funding for the social sciences and the concomitant expectation to make them socially relevant. While the status of theory is a longer (and separate) topic, one that has been discussed at length in the history of sociology and other social sciences, it bears repeating that asserting one's work as theoretical is always a form of positioning: it serves to define the standing of both the speaker and (sometimes implicitly) other contributors. This brings to mind that…

Theory is power

Not everyone gets to be treated as a theorist: this is also a question of recognition, and thus a question of political (and other) forms of power. 'Theoretical' discussions are usually held between men (mostly, though not exclusively, white men); interventions from women, people of colour, and persons outside centres of epistemic power are often interpreted as empirical illustrations or, at best, contributions to 'feminist' or 'race' theory*. Raewyn Connell wrote about this in Southern Theory, and initiatives such as Why is my curriculum white? and Decolonizing curriculum in theory and practice have brought it to the forefront of university struggles, but it speaks to the larger point made by Spivak: the majority of mainstream theory treats the 'subaltern' as mere empirical or ethnographic illustration of theories developed in the metropolis.

The problem here is not only (or primarily) one of representation, in the sense that theory thus generated fails to accurately depict the full scope of social reality, or the experiences and ideas of the different people who participate in it. The problem is a fundamentally extractive approach to people and their problems: they exist primarily, if not exclusively, in order to be explained. This leads me to the next point, which is that…

Theory is predictive

A good illustration of this is offered by pundits' and political commentators' surprise at the events of the last year: the outcome of the Brexit referendum (Leave!), the US elections (Donald Trump!), and, last but not least, the UK General Election (a surge in votes for Corbyn!). Despite differences in how these events are interpreted, they mostly convey that, as one pundit recently confessed, nobody has a clue about what is going on. Does this mean the rule of experts really is over, and, with it, the need for general theories that explain human action? Two things are worth taking into account.

To begin with, social-scientific theories enter the public sphere in a form that is not only simplified, but also distilled into 'soundbites' or clickbait adapted to the presumed needs and preferences of the audience, usually omitting all the methodological or technical caveats they normally come with. For instance, the results of opinion polls or surveys are taken to present clear predictions, rather than reflections of general statistical tendencies; reliability is rarely discussed. Nor are social scientists always innocent victims of this media spin: some actively work on increasing their visibility or impact, and thus – perhaps unwittingly – contribute to the sensationalisation of social-scientific discourse. Second, and this can't be put delicately, some of these theories are just not very good. 'Nudgery' and 'wonkery' often rest on not particularly sophisticated models of human behaviour; which is not to say that they do not work – they can – but rather that the theoretical assumptions underlying these models are rarely accessible to scrutiny.

Of course, it doesn't take a lot of imagination to figure out why this is the case: it is easier to believe that selling vegetables in attractive packaging can solve the problem of obesity than to invest in long-term policy planning and research on decision-making that has consequences for public health. It is also easier to believe that removing caps on tuition fees will result in universities charging fees normally distributed from lowest to highest, than to bother reading theories of organizational behaviour in different economic and political environments and try to understand how this maps onto the social structure and demographics of a rapidly changing society. In other words: theories are used to inform or predict human behaviour, but often in ways that reinforce existing divisions of power. So, just in case you didn't see this coming…

Theory is political

All social theories are about constraints, including those that are self-imposed. From Marx to Freud and from Durkheim to Weber (and the many non-white, non-male theorists who never made it into 'the canon'), theories are about what humans can and cannot do; they are about how relatively durable relations (structures) limit and enable how humans act (agency). Politics is, fundamentally, about the same thing: things we can and things we cannot change. We may denounce Bismarck's definition of politics as the art of the possible as insufficiently progressive, but – at the risk of sounding obvious – understanding how (and why) things stay the same is fundamental to understanding how to go about changing them. The history of social theory, among other things, can be read as a story about shifting the boundary between what was considered fixed and immutable, on the one hand, and what was constructed – and thus subject to change – on the other.

In this sense, all social theory is fundamentally political. This isn't to license bickering over different historical materialisms, or to stimulate fantasies – so dear to intellectuals – of 'speaking truth to power'. Nor should theories be understood as weapons in the 'war of time', despite Debord's poetic formulation: this is but the flipside of intellectuals' dream of domination, in which their thoughts (i.e. themselves) inspire the masses to revolt, usually culminating in their own ascendance to a position of power (thus conveniently cutting out the middleman in 'speaking truth to power', as they become the prime bearers of both).

Theory is political in a much simpler sense, in which it is about society and the elements that constitute it. As such, it has to be about understanding what it is that those we think of as society think, want, and do, even – and possibly especially – when we do not agree with them. Rather than aiming to 'explain away' people, or to fit their behaviour into pre-defined social models, social theory needs to learn to listen to – to borrow a term from politics – its constituents. This isn't to argue for a (not particularly innovative) return to grounded theory, or to ethnography (despite the fact that both are relevant and useful). At the risk of sounding pathetic, perhaps the next step in the development of social theory is to really make it a form of social practice – that is, to make it be with the people, rather than about the people. I am not sure what this would entail, or what it would look like; but I am pretty certain it would be a welcome element of building a progressive politics. In this sense, doing social theory could become less the practice of endlessly revising a blueprint for a social theory zoo, and more a project of getting out from behind its bars.

 

 

*The tendency to interpret women’s interventions as if they are inevitably about ‘feminist theory’ (or, more frequently, as if they always refer to empirical examples) is a trend I have been increasingly noticing since moving into sociology, and definitely want to spend more time studying. This is obviously not to say there aren’t women in the field of social theory, but rather that gender (and race, ethnicity, and age) influence the level of generality at which one’s claims are read, thus reflecting the broader tendency to see universality and Truth as coextensive with the figure of the male and white academic.

 

 

Universities, neoliberalisation, and the (im)possibility of critique

On the last Friday in April, I was at a conference entitled Universities, neoliberalisation and (in)equality at Goldsmiths, University of London. It was a one-day event featuring presentations and interventions from academics who work on understanding, and criticising, the transformation of working conditions in neoliberal academia. Besides sharing these concerns, attending such events is part of my research: I, in fact, study the critique of neoliberalism in UK higher education.

Why study critique, you may ask? At the present moment, it may appear more urgent to study the processes of transformation themselves, especially so that we can figure out what can be done about them. This, however, is precisely the reason: critique is essential to how we understand social processes, in part because it entails a social diagnostic – it tells us what is wrong – and in part because it allows us to conceptualise our own agency – what is to be done – about this. However, the link between the two is not necessarily straightforward: it is not as if you first read some Marx, and then go and start a revolution. Some would argue that the reading of Marx (what we usually think of as consciousness-raising) is an essential part of the process, but there are many variables that intervene between awareness of the unfairness of certain conditions – say, knowing that part-time, low-paid teaching work is exploitative – and actually doing something about those conditions, such as organising an occupation. In addition, as virtually everyone from the Frankfurt School onwards has noted, linking these two aspects is complicated by the context of mass consumerism, mass media, and – I would add – mass education. Still, the assumption of an almost direct (what Archer dubbed a 'hydraulic') link between knowledge and action haunts the concept of critique, both as theory and as practice.

In the opening remarks to the conference, Vik Loveday zeroed in on precisely this, asking: why is it that there seems to be a burgeoning of critique, but very little resistance? And a burgeoning it is indeed: despite it being my job, even I have trouble keeping up with the veritable explosion of writing that seeks to analyse, explain, or simply mourn the seemingly inevitable capitulation of universities in the face of neoliberalism. By way of illustration, the Palgrave series in "Critical University Studies" boasts eleven new titles, all published in 2016-7; and this is but one publisher, in the English language only.

What can explain this relative proliferation of critique alongside a relative paucity of resistance? The question forms the crux of my thesis: less, however, as an invocation of the need to resist, and more as a querying of the relationship between knowledge – especially in the form of critique, including academic critique – and political agency (I see political agency on a broader spectrum than the seemingly inexhaustible dichotomy between 'compliance' and 'resistance', but that is another story).

So here's a preliminary hypothesis (H, if you wish): the link between critique and resistance is mediated by the existence of, and one's position in, the academic hierarchy. Two presentations I had the opportunity to hear at the conference were very informative in this regard: the first was Loveday's analysis of academics' experience of anxiety, the other Neyland and Milyaeva's research on the experiences of REF panelists. While there is a shared concern among academics about the neoliberalisation of higher education, what struck me was the pronounced difference in the degree to which the two groups express doubts about their own worth, future, and relevance as academics (in colloquial parlance, 'impostor syndrome'). While junior* and relatively precarious academics seem to experience high levels of anxiety in relation to their value as academics, senior* academics who sit on REF panels experience it far less. The difference? Level of seniority and position in decision-making.

Well, you may say, this is obvious – the more established academics are, the more confident they are going to be. However, what varies with seniority is not just confidence and trust in one's own judgements: it's the sense of entitlement, the degree to which you feel you deserve to be there (Loveday writes about the classed aspects of the sense of entitlement here). I once overheard someone call it the Business Class Test: the moment you start justifying to yourself flying business class on work trips (unless you're very old, ill, or incapacitated) is the moment you will have convinced yourself you deserve it. The issue, however, is not how this impacts travel practices: it's the effect that the differential sense of entitlement has on the relationship between critique and resistance.

So here's another hypothesis (h1, if you wish). The more precarious your position, the more likely you are to perceive your working conditions as unfair – and, thus, to be critical of the structure of academic hierarchy that enables them. Yet, at the same time, the more junior you are, the more risk voicing that critique – that is, translating it into action – entails. Junior academics often point out that they have to shut up and go on 'playing the game': churning out publications (because REF), applying for external funding (because grant capture), and teaching ever-growing numbers of students (because students generate income for the institution). Thus, junior academics may well know everything that is wrong with academia, but will go on conforming to it in ways that reproduce exactly the conditions they are critical of.

What happens once one ascends to the coveted castle of permanent employment/tenure and membership in research evaluation panels and appointment committees? Well, I've only ever been tenure-track for a relatively short period of time (having left the job before I found myself justifying flying business class), but here's an assumption based on anecdotal evidence and other people's data (h2): you still grin and bear it. You do not, under any circumstances, stop participating in the academic 'game' – with the added catch that now you actually believe you deserved your position in it. I'm not saying senior academics are blind to the biases and social inequalities reflected in the academic hierarchy: what I am saying is that it is difficult, if not altogether impossible, to simultaneously be aware of it and continue participating in it (there's a nod to Sartre's notion of 'bad faith' here, but I unfortunately do not have the time to get into that now). Ever heard a professor stand up at a public lecture or committee meeting and say "I recognize that I owe my being here to the combined fortunes of inherited social capital, [white] male privilege, and the fact that English is my native language"? Me neither. If anything, there are disavowals of social privilege ("I come from a working class background"), which, admirable as they may be, unfortunately only serve to justify the hierarchical nature of academia and its selection procedures ("I definitely deserve to be here, because look at all the odds I had to beat in order to get here in the first place").

In practice, this leads to the following. Senior academics stay inside the system and, if they are critical, believe they are working against the system – for instance, by fighting for their discipline, protecting junior colleagues, or trying to make academia that little bit more diverse. In the longer run, however, their participation keeps the system going – the equivalent of carbon offsetting your business class flight: sure, it may help plant trees in Guinea-Bissau, but it does not change the fact that you are flying in the first place. Junior academics, on the other hand, contribute through their competition for positions inside the system – believing that if only they teach enough (perform low-paid work), publish enough (contribute to abundance), or are visible enough (perform the unpaid labour of networking on social media, through conferences etc.), they will escape precarity, and then they can really be critical (there's a nod to Berlant's cruel optimism here that I also, unfortunately, cannot expand on). Except that, of course, they end up in the position of senior academics, with an added layer of entitlement (because they fought so hard) and an added layer of fear (because no job is really safe in neoliberalism). Thus, while everyone knows everything is wrong, everyone still plays along. This 'gamification' of research, which seems to be the new mot du jour in academia, becomes a stand-in term for the moral economy of justifying one's own position while participating in the reproduction of the conditions that contribute to its instability.

Cui bono critique, in this regard? It depends. If critique is divorced from its capacity to incite political action, there is no reason why it cannot be appropriated – and, correspondingly, commodified – within the broader framework of neoliberal capitalism. It has already been pointed out that critique sells – and, perhaps less obviously, the critique of neoliberal academia does too. Even if the ever-expanding number of publications on the crisis of the university do not 'sell' in the narrow sense of the term, they still contribute to the symbolic economy by accruing prestige (and citation counts!) for their authors. In other words: the critique of neoliberalism in academia can become part and parcel of the very processes it sets out to criticise. There is nothing, absolutely nothing, in the content, act, or performance of critique itself that renders it automatically subversive or dangerous to 'the system'. Sorry. (If you want to blame me for being a killjoy, note that Boltanski and Chiapello observed long ago, in The New Spirit of Capitalism, that contemporary capitalism grew through the appropriation of the 1968 artistic critique.)

Does this mean critique has, as Latour famously suggested, 'run out of steam'? If we take the steam engine as a metaphor for the industrial revolution, then the answer may well be yes, and good riddance. Along with other Messianic visions, this may speed up the departure of the Enlightenment's legacy of pastoral power, reflected – imperfectly, yet unmistakably – in the figure of the (organic or avant-garde) 'public' intellectual, destined, as he is (for it is always a he), to lead the 'masses' to their ultimate salvation. What we may want to do instead is examine what promise critique (with a small c) holds – especially in the age of post-truth, post-facts, Donald Trump, and so on. In this, I am fully in agreement with Latour that it is important to keep tabs on the difference between matters of fact and matters of concern; and, perhaps most disturbingly, to think about whether we want to stake our claim to defining the latter on the monopoly on producing the former.

For getting rid of the veneer of entitlement to critique does not in any way mean abandoning the project of critical examination altogether – but it does, very much so, mean re-examining the positions and perspectives from which that critique is made. This is the reason why I believe it is so important to focus on the foundations of epistemic authority, including those predicated on the assumed difference between 'lay' and academic forms of reflexivity (I'm writing up a paper on this – meanwhile, my presentation on the topic from this year's BSA conference is here). In other words, in addition to analysing threats to critical scholarship that are unequivocally positioned as coming from 'the outside', we need to examine what it is about 'the inside' – and, particularly, about the boundaries between 'out' and 'in' – that helps perpetuate the status quo. Often, this is the most difficult task of all.

Here’s a comic for the end. In case you don’t know it already, it’s Pearls Before Swine, by the brilliant Stephan Pastis. This should at least brighten your day.

P.S. People often ask me what my recommendations would be. I'm reluctant to give any – academia is broken, and I am not sure whether fixing it in this form makes any sense. But here are a few preliminary thoughts:

(a) Stop fetishising the difference between 'inside' and 'outside'. 'Leaving' academia is still framed as some epic sort of failure, which amplifies both the readiness of the precarious workforce to endure truly abominable working conditions just in order to stay "in", and the anxiety and other mental health issues arising from the possibility of falling "out". Most people with higher education should be able to do well and thrive in all sorts of jobs; if we didn't frame tenure as a life-or-death achievement, perhaps fewer would agree to suffer for years in the hope of attaining it.

(b) Fight for decent working conditions for contingent faculty. Not everyone needs to have tenure if working part-time (or moving in and out of academia) is an acceptable career choice that offers a liveable income and a level of social support. This would also help those who want to have children or, god forbid, engage in activities other than the rat race for academic positions.

(c) This doesn't get emphasised enough, but one of the reasons people vie for positions in academia is that it at least offers a degree of intellectual satisfaction, in opposition to the ever-growing number of what Graeber has termed 'bullshit jobs'. So one way of making working conditions in academia more decent is to make working conditions outside academia more decent – and, perhaps, to decentralise somewhat the monopoly on knowledge work that academia holds. Not, however, on the neoliberal outsourcing/'creative hubs' model, which unfortunately mostly serves to generate value for existing centres while further depleting the peripheries.

* By "junior" and "senior" I obviously do not mean biological age, but rather status – I am intentionally avoiding denominators such as 'ECRs' etc., since I think someone can be in a precarious position whilst not being exactly at the start of their career and, conversely, someone can be a very early career researcher but have the kind of social capital, security, and recognition normally associated with 'later' career stages.

Zygmunt Bauman and the sociologies of end times

[This post was originally published at the Sociological Review blog’s Special Issue on Zygmunt Bauman, 13 April 2017]

“Morality, as it were, is a functional prerequisite of a world with an in-built finality and irreversibility of choices. Postmodern culture does not know of such a world.”

Zygmunt Bauman, Sociology and postmodernity

Getting reacquainted with Bauman’s 1988 essay “Sociology and postmodernity”, I accidentally misread the first word of this quote as “mortality”. In the context of the writing of this piece, it would be easy to interpret this as a Freudian slip – yet, as slips often do, it betrays a deeper unease. If it is true that morality is a functional prerequisite of a finite world, it is even truer that such a world calls for mortality – the ultimate human experience of irreversibility. In the context of trans- and post-humanism, as well as the growing awareness of the fact that the world, as the place inhabited (and inhabitable) by human beings, can end, what can Bauman teach us about both?

In Sociology and postmodernity, Bauman assumes a position at the crossroads of two historical (social, cultural) periods: modernity and postmodernity. Turning away from the past to look towards the future, he offers thoughts on what a sociology adapted to the study of the postmodern condition would look like. Instead of a "postmodern sociology" – a mimetic representation of (even if a pragmatic response to) postmodernity – he argues for a sociology that attempts to give a comprehensive account of the "aggregate of aspects" that cohere into a new, consumer society: the sociology of postmodernity. This form of account eschews treating the new as a deterioration, or aberration, of the old, and instead aims to come to terms with the system whose contours Bauman would go on to develop in his later work: a system characterised by a plurality of possible worlds, and not necessarily a way to reconcile them.

The point in time at which he writes lends itself fortuitously to the argument of the essay. Not only did Legislators and interpreters, in which he reframes intellectuals as translators between different cultural worlds, come out a year earlier; the publication of Sociology and postmodernity briefly precedes 1989, the year that would indeed usher in a wholly new period in the history of Europe, including in Bauman's native Poland.

On the one hand, he takes the long view back to post-war Europe, built, as it was, on the legacy of the Holocaust as a pathology of modernity, and on two approaches to preventing its repetition – market liberalism and political freedoms in the West, and planned economies and more restrictive political regimes in the Central and Eastern parts of the continent. On the other, he engages with some of the dilemmas for the study of society that the approaching fall of the Berlin Wall and the eventual unification of those two hitherto separated worlds would open up. In this sense, Bauman really has the privilege of a two-facing version of Benjamin's Angel of History. This probably helped him recognize the false dichotomy of consumer freedom and dictatorship over needs, which, as he stated, was quickly becoming the only imaginable alternative to the system – at least insofar as the imagination in question was that of the system itself.

The present point of view is not all that dissimilar from the one in which Bauman was writing. We regularly encounter pronouncements of the end of a whole host of things, among them history, the classical division of labour, standards of objectivity in reporting, nation-states, even – or so we hope – capitalism itself. While some of Bauman's fears concerning postmodernity may, from the present perspective, seem overstated or even straightforwardly ridiculous, we are inhabiting a world of many posts – post-liberal, post-truth, post-human. Many think this calls for a rethinking of how sociology can adapt itself to these new conditions: for instance, in a recent issue of the International Sociological Association's Global Dialogue, Leslie Sklair considers what a new radical sociology, developed in response to the collapse of global capitalism, would look like.

It is as if sociology and the zeitgeist were involved in some weird pas de deux: changes in any domain of life (technology, political regime, legislation) almost instantaneously trigger calls for, if not the invention of new paradigms and approaches to its study, then at least a serious reconsideration of old ones.

I would like to suggest that one source of the continued appeal of this – what Mike Savage brilliantly summarised as epochal theorising – is not so much the heralding of the new as the promise that there is an end to the present state of affairs. In order for a new 'epoch' to succeed, the old one needs to end. What Bauman warns about in the passage cited at the beginning is that in a world without finality – without death – there can be no morality. Or, in T.S. Eliot's lines from Burnt Norton: if all time is eternally present, all time is irredeemable. What we may read as Bauman's fear, therefore, is not that worlds as we know them can (and will) end: it is that, whatever name we give to the present condition, it may go on reproducing itself forever. In other words, it is a vision of the future that looks just like the present, only there is more of it.

Which is worse? It is hard to tell. A rarely discussed side of epochal theorising is that it imagines a world in which the social sciences still have a role to play, if nothing else in providing a theoretical framing of, or empirically informed running commentary on, its demise, and thus offers salvation from the existential anxiety of the present. The 'ontological turn' – from object-oriented ontology, to new materialisms, to post-humanism – reflects, in my view, the same tendency. If objects 'exist' in the same way as we do, if matter 'matters' in the same way (if not to the same degree) in which, for instance, black lives matter, this provides temporary respite from the confines of our choices. Expanding the concept of agency to involve non-human actors may seem a more complicated model of social change, but at least it absolves humans of the unique burden of historical responsibility – including responsibility for the fate of the world.

The human (re)discovery of the world thus conveys less a newfound awareness of the importance of the lived environment than the desire to escape the solitude of thinking about the human (as Dawson also notes, all too human) condition. The fear of relativism brought about by the postmodern 'plurality' of worlds appears to have been preferable to the possibility that there is, after all, just the one world. If the latter is the case, the only escape from it lies, to borrow from Hamlet, in the country from whose bourn no traveller returns: in other words, in death.

This impasse is perhaps felt most strongly in sociology and anthropology, because excursions into other worlds have been both the gist of their method and the foundation of their critical potential (including their self-critique, which has focused on how these two elements combine in the construction of epistemic authority). The figure of the traveller to other worlds was more pronounced in the case of anthropology, at least at the time when it developed as the study of exotic societies on the fringes of colonial empires, but sociology is no stranger to visitation either: its others, and their worlds, are delineated by the sometimes less tangible boundaries of class, gender, race, or simply epistemic privilege. Bauman was among the theorists who recognized the vital importance of this figure in the construction of the foundations of European modernity, and he was thus also sensitive to its transformations in the context of postmodernity – exemplified, as he argued, in the contemporary human's ambiguous position: between "a perfect tourist" and a "vagabond beyond remedy".

In this sense, the awareness that every journey has an end can inform the practice of social theory in ways that go beyond the need to pronounce new beginnings. Rather than using eulogies in order to produce more of the same thing – more articles, more commentary, more symposia, more academic prestige – perhaps we can see them as an opportunity to reflect on the always-unfinished trajectory of human existence, including our existence as scholars, and the responsibility that it entails. The challenge, in this case, is to resist the attractive prospect of escaping the current condition by ‘exit’ into another period, or another world – postmodern, post-truth, post-human, whatever – and remember that, no matter how many diverse and wonderful entities they may be populated with, these worlds are also human, all too human. This can serve as a reminder that, as Bauman wrote in his famous essay on heroes and victims of postmodernity, “Our life struggles dissolve, on the contrary, in that unbearable lightness of being. We never know for sure when to laugh and when to cry. And there is hardly a moment in life to say without dark premonitions: ‘I have arrived’”.

Boundaries and barbarians: ontological (in)security and the [cyber?] war on universities

Prologue

One Saturday in late January, I go to the PhD office at the Department of Sociology at the University of Cambridge’s New Museums site (yes, PhD students shouldn’t work on Saturdays, and yes, we do). I swipe my card at the main gate of the building. Nothing happens.

I try again, and again, and still nothing. The sensor stays red. An interaction with a security guard who seems to appear from nowhere conveys that there is nothing wrong with my card; apparently, there has been a power outage and the whole system has been reset. A rather distraught-looking man from the Department of History and Philosophy of Science appears around the corner, insisting on being let back inside the building, where he has left a computer on with, he claims, sensitive data. The very amicable security guard apologises. There's nothing he can do to let us in. His card doesn't work either, and the system has to be manually reset from the computers inside each departmental building.

You mean the building no one can currently access, I ask.

I walk away (after being assured the issue will be resolved on Monday) plotting sci-fi campus novels in which Skynet is part not of a Ministry of Defense but of a university; rogue algorithms claim GCSE test results; and classes are rescheduled in a way that sends engineering undergrads to colloquia in feminist theory, and vice versa (the distances one's mind will go to avoid thinking about impending deadlines)*. Regretfully pushing prospective pitches to fiction publishers aside (temporarily)**, I find the incident particularly interesting for the perspective it offers on how we think about the university as an institution: its spatiality, its materiality, its boundaries, and the way its existence relates to these categories – in other words, its social ontology.

War on universities?

Critiques of the current transformation of higher education and research in the UK often frame it as an attack, or 'war', on universities (this is where the first part of the title of my thesis comes from). Exaggeration for rhetorical purposes notwithstanding, being 'under attack' suggests that it is possible to distinguish the University (and the intellectual world more broadly) from its environment, in this case at least in part populated by forces that threaten its very existence. Notably, this distinction remains almost untouched even in policy narratives (including those that seek to promote public engagement and/or impact) that stress the need for universities to engage with the ('surrounding') society, and which tend to frame this imperative as 'going beyond the walls of the Ivory Tower'.

The distinction between universities and society has a long history in the UK: the university's built environment (buildings, campuses, gates) and rituals (dress, residence requirements/'keeping term', conventions of language) were developed to reflect the separateness of education from ordinary experience, enshrined in the dichotomies of intellectual vs. manual labour, active life vs. 'life of the mind' and, not least, Town vs. Gown. Of course, with the rise of 'redbrick' and, later, 'plateglass' universities, this distinction became somewhat less pronounced. Rather than in terms of blurring, however, I would like to suggest we think of this as a shift in scale: the relationship between 'Town' and 'Gown', after all, is embedded in a broader framework of distinctions between urban and suburban, urban and rural, regional and national, national and global, and the myriad possible forms of hybridisation between these (recent work by Addie, Keil and Olds, as well as by Robertson et al., offers very good insights into issues related to theorising scale in the context of higher education).

Policing the boundaries: relational ontology and ontological (in)security

What I find most interesting, in this setting, is the way in which the boundaries between these categories are maintained and negotiated. In sociology, the negotiation of boundaries in academia has been studied in detail by, among others, Michèle Lamont (in How Professors Think, as well as in an overview co-written with Molnár), Thomas Gieryn (in Cultural Boundaries of Science and a few other texts), and Andrew Abbott (in The Chaos of Disciplines) – and, of course, before that, in sociologically-inclined philosophy of science, including Feyerabend’s Against Method, Lakatos’ work on research programmes, and Kuhn’s on scientific revolutions. Social anthropology has an even longer-standing obsession with boundaries, symbolic as well as material – Mary Douglas’ work in particular, as well as Augé’s Non-Places, offers a good entry point, converging with sociology on the ground of a neo-Durkheimian reading of the distinction between the sacred and the profane.

My interest in the cultural framing of boundaries goes back to my first PhD, which explored the construal of the category of the (romantic) relationship through the delineation of its difference from other types of interpersonal relations. The concept resurfaced in my research on public engagement in UK higher education: here, the negotiation of boundaries between ‘inside’ (academics) and ‘outside’ (different audiences), as well as between different groups within the university (e.g. administrators vs. academics), becomes evident through practices of engaging in the dissemination and, sometimes, coproduction of knowledge (some of this is in my contribution to this volume). The thread that runs through these cases is the importance of positioning in relation to a (relatively) specified Other; in other words, a relational ontology.

It is not difficult to see the role of negotiating boundaries between ‘inside’ and ‘outside’ in the concept of ontological security (e.g. Giddens, 1991). Recent work in IR (e.g. Ejdus, 2017) has shifted the focus from Giddens’ emphasis on social relations to the importance of the stability of material forms, including buildings. I think we can extend this to universities: in this case, however, it is not (only) the building itself that is ‘at risk’ (this can be observed in the intensified securitisation of campuses, both through material structures such as gates and card-only entrances, and through modes of surveillance such as Prevent – see e.g. Gearon, 2017), but also the materiality of the institution itself. While the MOOC hype may have (thankfully) subsided (though not disappeared), there are the ubiquitous social media, which, as quite a few people have argued, test the salience of the distinction between ‘inside’ and ‘outside’ (I’ve written a bit about digital technologies as mediating the boundary between universities and the ‘outside world’ here, as well as in an upcoming article in a Globalisation, Societies and Education special issue that deals with reassembling knowledge production with/out the university).

Barbarians at the gates

In this context, it should not be surprising that many academics fear digital technologies: anything that tests the material/symbolic boundaries of our own existence is bound to be seen as troubling/dirty/dangerous. This brings to mind Cavafy’s poem (and J.M. Coetzee’s novel) Waiting for the Barbarians, in which an outpost of the Empire prepares for an attack by ‘the barbarians’ – an attack that, in fact, never arrives. The trope of the university as a bulwark against barbarism, and/or in danger of descending into it, has been explored by a number of writers, including Thorstein Veblen and, more recently, Roy Coleman. Regardless of the accuracy or historical stretchability of the trope, what I am most interested in is its use as a simultaneously diagnostic and normative narrative that frames and situates the current transformation of higher education and research.

As the last line of Cavafy’s poem suggests, the barbarians represent ‘a kind of solution’: a solution to the otherwise unanswered question of the role and purpose of universities in the 21st century, a question which began to be asked ever more urgently with the post-war expansion of higher education, only to be shut down by the integration/normalization of the soixante-huitards – an instance of what Boltanski and Chiapello have recognised as contemporary capitalism’s almost infinite capacity to appropriate critique. Disentangling this dynamic is key to understanding contemporary clashes and conflicts over the nature of knowledge production. Rather than locating dangers to the university firmly beyond the gates, then, perhaps we could use the current crisis to think about how we perceive, negotiate, and preserve the boundaries between ‘in’ and ‘out’. Until we have a space to do that, I believe we will continue building walls only to realise we have been left on the wrong side.

(*) I have a strong interest in campus novels, both for PhD-related and unrelated reasons, as well as a long-standing interest in sci-fi, but with the exception of DeLillo’s White Noise I can think of very few works that straddle both genres; I would very much appreciate suggestions in this domain!

(**) I have been thinking for a while about a book, a spin-off from my current PhD, that would combine social theory, literature, and critical cultural political economy, drawing on the similarities and differences between critical and magical realism to look at universities. This can be taken as a sketch for one of its chapters, so all thoughts and comments are welcome.

On ‘Denial’: or, the uncanny similarity between the Holocaust and mansplaining


Last week, I finally got around to seeing Denial. It has many qualities and a few shortcomings – its attempt at hyperrealism straddles both – but I would like to focus on an aspect that most reviews I have read so far seem to have missed. In other words: mansplaining.

Brief contextualization. Lest I be accused of equating the Holocaust and mansplaining (I am not – similarity does not denote equivalence): my work deals with issues of expertise, fact, and public intellectualism, and I have always found the Irving case interesting, for a variety of reasons (incidentally, I was also at Oxford during the famous event at the Oxford Union). At the same time, like, I suppose, every woman in academia and beyond with more agency than a doormat, I have, over the past year, become embroiled in countless arguments about what mansplaining is; whether it is really so widespread; whether it is done only by men (and what to call it when it is perpetrated by those who are not men); and, of course, that pseudo-liberal what-passes-as-an-attempt at outmaneuvering the issue: whether using the term ‘mansplaining’ blames men as a group and is as such essentialising and oppressive, just like the discourses ‘we’ (feminists conveniently grouped under one umbrella) seek to condemn (otherwise known as a tu quoque argument).

Besides logical flaws, what many of these attacks seem to have in common with the one David Irving launched on Deborah Lipstadt (and which Holocaust deniers routinely use) is the focus on evidence: how do we know that mansplaining occurs, and is not just some fabrication of a bunch of conceited females looking to get ahead despite their obvious lack of qualifications? Other uncanny similarities between the arguments of Holocaust deniers and those who question the existence of mansplaining temporarily aside, one of the indisputable qualities of Denial is that it provides multiple examples of what mansplaining looks like. It is, of course, a film, despite being based on a true story. Rather than being a downside, this allows for a concentrated portrayal of the practice – for those doubting its verisimilitude, I strongly recommend watching the film and deciding for yourself whether it resembles real-life situations. For those who do not, voilà: a handy cinematic case to present to those who prefer to plead ignorance as to what mansplaining ‘actually’ entails.

To begin with, the case portrayed in the film is a par excellence instance of mansplaining: after all, it is about a self-educated (male) historian who sues an academic historian (a woman) because she does not accept his ‘interpretation’ of World War II (namely, that the Holocaust did not happen) and, furthermore, dares to call him out on it. In the case (and the film), he sets out to explain to the (of course, male) judge and to the public that Lipstadt (played by Rachel Weisz) is wrong and, furthermore, that her critique has seriously damaged his career (the underlying assumption being that he is entitled to lucrative publishing deals, while she, clearly, has to earn hers – exacerbated by his mockery of the fact that she sells books, whereas his, by contrast, are free). This ‘talking over’, and the attempt to make it all about him (remember, he sues her), are brilliantly captured in the opening, when Irving (played by Timothy Spall) visits Lipstadt’s public talk and openly challenges her in the Q&A, ignoring her repeated refusal to engage with his arguments. Yet it would be a mistake to locate the trope of mansplaining only in the Irving–Lipstadt relation. On the contrary – just like the real thing – it is at its most insidious when it comes from those who are, as it were, ‘on our side’.

A good example is the first meeting of the defence team, where Lipstadt is introduced to the people working with her legal counsel, the famous Anthony Julius (Andrew Scott). There is a single woman on Julius’ team: Laura (Caren Pistorius), who, we are told, is a paralegal. Despite it being her first case, she seems to have developed a viable strategy – or at least so we are told by her boss, who, after announcing Laura’s brilliant contribution to the case, continues to talk over her – that is, to explain her thoughts without giving her an opportunity to explain them herself. In this sense, what at first seems like an act of mentoring support – passing the baton and crediting a junior staff member – becomes a classic act in which a man takes it upon himself to interpret the professional intervention of a female colleague, appropriating it in the process.

Cases of professional mansplaining abound throughout the film: in multiple scenes, lawyers explain the Holocaust, as well as the concept of denial, to Lipstadt, despite her meek protests that she “has actually written a book about it”. Obvious irony aside, this serves as a potent reminder that women have to invoke professional credentials not to be recognized as experts, but simply to be recognized as equally valid participants in debate. By contrast, when it comes to the only difference in qualifications in the film that plays against Lipstadt – knowledge of the British legal system – Weisz’s character conveniently remains a mixture of ignorance and naïveté couched in Americanism. One would be forgiven for assuming that long-term involvement in a libel case, especially one that carries so much emotional and professional weight, would have prompted a university professor to get acquainted with at least the basic rules of the legal system in which the case was processed; but then, of course, that would have stripped the male characters of the opportunity to shine the light of their knowledge against her supposed ignorance.

Of course, emotional involvement is, in the film, presented as a clear disadvantage when it comes to the case. While Lipstadt first assumes she will testify, and then repeatedly asks to be allowed to, her legal team insists she would be too emotional a witness. The assumption that having an emotional reaction (even one that is quite expected – it is, after all, the Holocaust we are talking about) and a cold, hard approach to ‘facts’ are mutually exclusive is played out succinctly in the scenes that take place at Auschwitz. While Lipstadt, clearly shaken (as anyone, Jewish or not, is bound to be when standing at the site of such mass slaughter), asks the party to show respect for the victims, the head barrister Richard Rampton (Tom Wilkinson) is focused on calmly gathering evidence. The value of this only becomes obvious in the courtroom, where he delivers his coup de grâce: his calm pacing around the perimeter of Auschwitz II-Birkenau (which makes him arrive late and upsets everyone, Lipstadt in particular) was actually a way of measuring the distance between the SS barracks and the gas chambers, allowing him to disprove Irving’s assertion that the gas chambers were built as air raid shelters, and thus to tilt the whole case in favour of the defence.

The mansplaining triumph, however, happens even before this Sherlockian turn, in the scene in which Rampton visits Lipstadt in her hotel room (uninvited, unannounced) in order to, yet again, convince her that she should not testify or engage with Irving in any form. After he gently (patronisingly) persuades her that “What feels best isn’t necessarily what works best” (!), she, emotionally moved, agrees to “pass her conscience” to him – that is, to a man. By doing this, she abandons not only her own voice, but also the possibility of speaking for Holocaust survivors – the one survivor who appears as a character in the film also, poignantly, being a woman. In Lipstadt’s concession that silence is better because it “leads to victory”, it is not difficult to read the paradoxical (pseudo)pragmatic assertion that openly challenging male privilege works, in fact, against gender equality, because it provokes a counterreaction. Initially protesting her own silencing, Lipstadt comes to accept what her character in the script dubs “self-denial” as the only way to beat those who deny the Holocaust.

Self-denial: for instance, denying yourself food for fear of getting ‘fat’ (and thus unattractive for the male gaze); denying yourself fun for fear of being labeled easy or promiscuous (and thus undesirable as a long-term partner); denying yourself time alone for fear of being seen as selfish or uncaring (and thus, clearly, unfit for a relationship). Silence: for instance, letting men speak first for fear of being seen as pushy (and thus too challenging); for instance, not speaking up when other women are oppressed, for fear of being seen as too confrontational (and thus, of course, difficult); for instance, not reporting sexual harassment, for fear of retribution, shame, isolation (self-explanatory). In celebrating ‘self-denial’, the film, then, patently reinscribes the stereotype of the patient, silent female.

Obviously, there is value in refusing to engage with outrageous liars; equally, there are issues that should remain beyond discussion – whether the Holocaust happened being one of them. Yet selective silencing masquerading as strategy – note that Lipstadt is not allowed to speak (not even to the media), while Rampton communicates his contempt for Irving by not looking at him (thus denying him the ‘honour’ of the male gaze) – too often serves to reproduce the structural inequalities that can persist even under a legal system that purports to be egalitarian.

Most interestingly, the fact that a film that is manifestly about mansplaining manages to reproduce quite a few mansplaining tropes (and, I would argue, not always in a self-referential or ironic manner) serves as a poignant reminder of how deeply the ‘splaining complex is embedded not only in politics or academia, but also in cultural representations. This is something we need to remain acutely aware of in the age of ‘post-truth’ or ‘post-facts’. If resistance to lying politicians and media is going to take the form of the (re)assertion of one indisputable truth, and the concomitant legitimation of those who claim to know it – strangely enough, most often white, privileged men – then we had better think of alternatives, and quickly.

Against academic labour: foraging in the wildlands of digital capitalism

Central Park, NYC, November 2013

I am reading a book called “The Slow Professor: Challenging the Culture of Speed in the Academy”, by two Canadian professors, Maggie Berg and Barbara Seeber. Published earlier in 2016, to (mostly) wide critical acclaim, it critiques the changing conditions of knowledge production in academia, in particular those associated with the expectation to produce more, and at faster rates (also known as ‘acceleration‘). As an antidote, the Slow Professor Manifesto appended to the Preface suggests, faculty should resist the corporatisation of the university by adopting the principles of the Slow Movement (as in Slow Food etc.) in their professional practices.

While the book is interesting, the argument is not particularly exceptional in the context of the expanding genre of diagnoses of the ‘end’ or ‘crisis’ of the Western university. The origins of the genre can be traced to Bill Readings’ 1996 ‘The University in Ruins’ (though, of course, one could always stretch the lineage back to 1918 and Veblen’s ‘The Higher Learning in America’; predecessors in Britain include E.P. Thompson’s ‘Warwick University Ltd’ (1970) and Halsey’s ‘Decline of Donnish Dominion’ (1992)). Among contemporary representatives of the genre are Nussbaum’s ‘Not for Profit: Why Democracy Needs the Humanities’ (2010), Collini’s ‘What Are Universities For?’ (2012), and Giroux’s ‘Neoliberal Attack on Higher Education’ (2013), to name but a few. In other words, there is no shortage of works documenting how the transformation of the conditions of academic labour fundamentally threatens the role and function of universities in Western societies – and, by extension, the survival of those societies themselves.

I would like to say straight away that I do not, for a single moment, dispute or doubt the toll that the transformation of the conditions of academic labour is taking on those who are employed at universities. Having spent the past twelve years researching the politics of academic knowledge, and most of those years working in higher education in a number of different countries, I have encountered hardly a single academic or student who was not pressured, threatened, or at the very least insecure about their future employment. What I want to argue, instead, is that a critique of the transformation of knowledge production that focuses on academic labour is no longer sufficient. Concomitantly, neither is the critique of time – as in labour time.

In lieu of labour, I suggest we could think of what academics do as foraging. By this I do not in any way mean to trivialize union struggles that focus on working conditions for faculty or on the position of students; these are, and will continue to be, very important, and I have always been proud to support them. Unfortunately, however, they cannot capture the way knowledge production has already changed. This is not only due to the growing academic ‘precariat’ (or ‘cognitariat’): while the absence of stable or full-time employment has informed both analyses and specific forms of political action on both sides of the Atlantic, these still frame the problem as fundamentally dependent on academic labour. While this may, for the time being, represent a good strategy in the political sense, it creates a set of potential contradictions in the conceptual one.

For one, labour implies the concept of use: Marx’s labour theory of value postulates that this is what allows it to be exchanged for something (money, favours). Yet we academics are often the first to point out that a lot of knowledge is not directly useful: for every paradigmatic scientist in a white lab coat who cures cancer, there is an equally paradigmatic bookworm reading 18th-century poetry (bear with me, it’s that time of the year when clichés abound). Trying to measure their value by the same, or even a similar, standard risks slipping into the pathologies of impact or, worse, vague statements about the necessity of the social sciences and humanities for democracy, freedom, and human rights (despite my personal sympathy for the latter argument, it warrants mentioning that the link between democratic regimes and academic freedom is historically contingent, rather than causal).

Second, framing what academics do as labour makes it very difficult to avoid embracing some form of measurement of output. This isn’t always a matter of quantity: one can also measure the quality of publications (e.g. by rating them in relation to the impact factors of the journals they were published in). Often, however, the ideas of productivity and excellence go hand in hand. This contributes to the proliferation of academic writing – not all of which is exceptional, to say the very least – and, in turn, creates incentives to produce both more and ‘better’ (‘slow’ academia is underpinned by the argument that taking more time makes for better writing).

This also points to why the critique of the conditions of knowledge production is so focused on the notion of time. As long as creating knowledge is primarily defined as a form of labour, it depends on socially and culturally defined cycles of production and consumption. Advocating ‘slowness’, then, does not amount to a critique of the centrality of time to capitalist production: it just asks for more of it.

The concept of foraging, by contrast, is embedded in a different temporal cycle: seasonal, rather than annual or REF-able. This isn’t some neo-primitivist glorification of the supposed forms of sustenance of humanity’s forebears before the (inevitable) fall from grace; it is, rather, a more precise description of how knowledge works. To this end, we could say most academics forage anyway: they collect bits and scraps of ideas and information, and turn them into something that can be consumed (if only by other academics). Some academics will discover new ‘edible’ things, either by trial and error or by learning from (surveying) the population that lives in the area, and introduce them to other academics. Often, however, this amounts not so much to creating something entirely new or original as to recombining existing flavours. This is why it is not abundance as such, but diversity, that determines how interesting an environment a university, city, or region will become.

However, unlike labour, foraging is not ‘naturally’ given to the creation of surplus: while foraged food can be stored, most of it is collected and prepared more or less in relation to the needs of those who eat it. Similarly, it is by default somewhat undisciplined: foragers must keep an eye out for plants and other foodstuffs that may be useful to them. This does not mean that foraging does not rely on tradition, or that it is not susceptible to prejudice – people will often ignore, or attribute negative properties to, foods they are unfamiliar with, much like academics ignore or fear disciplines or approaches that do not form part of their ‘tribe’ or school of thought.

As appealing as it may sound, foraging is not a romanticized or, worse, sterile vision of what academics do. Some academics, indeed, labour. Some, perhaps, even invent. But increasing numbers are actually foraging: hunting for bits and pieces, some of which can be exchanged for other things – money, prestige – thus allowing them to survive another winter. This isn’t easy: in the vast digital landscape, knowing how to spot ideas and thoughts that will have traction – and especially those that can be exchanged – requires continued focus and perseverance, as well as a lot of previously accumulated knowledge. Making a mistake can be deadly: perhaps not in the literal sense, but certainly as far as reputation is concerned.

So, workers of all lands, happy New Year, and spare a thought for the foragers in the wildlands of digital capitalism.

We are all postliberals now: teaching Popper in the era of post-truth politics

Adelaide, South Australia, December 2014

Late in the morning after the US election, I am sitting down to read student essays for the course on social theory I’m supervising. This part of the course covers the work of Popper, Kuhn, Lakatos, and Feyerabend, and its application in the social sciences. The essay question is: do theories need to be falsifiable, and how do we choose between competing theories if they aren’t? The first part is a standard essay question; I added the second a bit more than a week ago, interested to see how students would think about criteria of verification in the absence of an overarching regime of truth.

This is one of my favourite topics in the philosophy of science. When I was a student at the University of Belgrade, feeling increasingly out of place in a post-truth, intensely ethnographic, yet anti-representationalist anthropology, the Popper-Kuhn debate in Criticism and the Growth of Knowledge held the promise that, beyond the classification of elements of the material culture of the Western Balkans, lurked bigger questions of the politics and sociology of knowledge (paradoxically, this may be why it took me so long to realize I actually wanted to do sociology).

I was Popper-primed well before that, though: the principle of falsification is integral to the practice of parliamentary-style academic debating, in which the task of the opposing team(s) is to ‘disprove’ the motion. In the UK, this practice is usually associated with debating societies such as the Oxford and Cambridge Unions, but it is widespread in the US as well as in the rest of the world; during my undergraduate studies, I was an active member of the Yugoslav (now Serbian) Universities Debating Network, known as Open Communication. Furthermore, Popper’s political ideas – especially those in The Open Society and Its Enemies – formed the ideological core of the Open Society Foundation, founded by the billionaire George Soros to promote democracy and civil society in Central and Eastern Europe.

In addition to debating societies, the Open Society Foundation supported and funded a large part of civil society activism in Serbia. At the time, most of it was conceived as opposition to the regime of Slobodan Milošević, a onetime banker turned politician who ascended to power in the wake of the dissolution of the Socialist Federal Republic of Yugoslavia. Milošević played a major role in the conflicts in its former republics, simultaneously plunging Serbia ever deeper into an economic and political crisis exacerbated by international isolation and sanctions, culminating in the NATO intervention in 1999. Milošević’s rule ended in a coup following a disputed election in 2000.

I had been part of the opposition from the earliest moment conceivable, skipping classes in secondary school to go to anti-government demos in 1996 and 1997. The day of the coup – 5 October 2000 – should have been my first day at university, but, together with most students and staff, I was at what would turn out to be the final public protest, which ended in the storming of the Parliament. I swallowed quite a bit of tear gas, twice in situations I did not expect to get out of alive (or at the very least unharmed), but somehow made it to a friend’s house, where, together with her mom and grandma, we sat in the living room and watched one of Serbia’s hitherto banned TV and radio stations – the then-oppositional B92 – come back on air. This is when we knew it was over.

Sixteen years and a little more than a month later, I am reading students’ essays on truth and falsehood in science. This, by comparison, is a breeze, and it is always exciting to read different takes on the issue. Of course, over the course of my undergraduate studies, my own appreciation of Popper was replaced by excitement at the discovery of Kuhn – and the concomitant realization of the inertia of social structures which, just like normal science, are incredibly slow to change – and then succeeded by light perplexity at Lakatos (research programmes seemed equal parts reassuring and inherently volatile – not unlike political coalitions). At the end, obviously, came infatuation with Feyerabend: like every self-respecting former liberal, I reckoned myself a methodological (and not only methodological) anarchist.

Unsurprisingly, most of the essays I read exhibit the same trajectory. Popper is, quite obviously, passé: his critique of Marxism (and other forms of historicism) not particularly useful, his falsificationism too strict a criterion for demarcation; his association with the ideologues of neoliberalism probably did not help much either.

Except that… this is what Popper has to say:

It is undoubtedly true that we have a more direct knowledge of the ‘inside of the human atom’ than we have of physical atoms; but this knowledge is intuitive. In other words, we certainly use our knowledge of ourselves in order to frame hypotheses about some other people, or about all people. But these hypotheses must be tested, they must be submitted to the method of selection by elimination.

(The Poverty of Historicism, 127)

Our knowledge of ourselves: for instance, our knowledge that we could never, ever, elect a racist, misogynist reality TV star as president of one of the world’s superpowers. That we would never vote to leave the European Union – which, like all supranational entities, has its flaws, but look at how much it invests in our infrastructure. Surely – as Popper would argue – we are rational animals: and rational animals would not do anything that puts them in unnecessary danger.

Of course, we are correct. The problem, however, is that we have forgotten the second part of Popper’s claim: we use knowledge of ourselves to frame hypotheses about other people. For instance: since we understand that a rich businessman is not likely to introduce economic policies that harm the elite, the poor would never vote for him. For instance: since we remember the victims of Nazism and fascism, everyone must understand how frail the liberal consensus in Europe is.

This is why academia was “shocked” by Trump’s victory, just as it was shocked by the outcome of the Brexit referendum. This is also the key to the question of why polls “failed” to predict either outcome. Perhaps we were too focused on extrapolating our assumptions to other people, and not enough on checking whether they hold.

By failing to understand that the world is not composed of left-leaning liberals with a predilection for social justice, we commit, time and again, what Bourdieu termed the scholastic fallacy: the propensity to attribute the categories of our own thinking to those we study. Alternatively, and much worse, we deny them common standards of rationality: voters whose political choices differ from ours are then cast as uneducated, deluded, or suffering from false consciousness. And even if they’re not, they must be a small minority, right?

Well, as far as hypotheses are concerned, that one has definitely failed. Maybe it’s time we started considering alternatives.

One more time with [structures of] feeling: anxiety, labour, and social critique in/of neoliberal academia

Florence, April 2013

Last month, I attended the symposium on Anxiety and Work in the Accelerated Academy, the second event in the Accelerated Academy series, which explores the changing scapes of time, work, and productivity in academia. Given that my research is fundamentally concerned with the changing relationships between universities and publics, and the concomitant reframing of the subjectivity, agency, and reflexivity of academics, I naturally found the question of the intersection of academic labour and time relevant. One particular bit resonated for a long time: in her presentation, Maggie O’Neill from the University of York suggested that anxiety has become the primary structure of feeling in neoliberal academia. Having found myself, in the period leading up to the workshop, increasingly reflecting on structures of feeling, I was intrigued by the salience of the concept. Is there a place for theoretical concepts such as this in research on the transformations of knowledge production in contemporary capitalism – and if so, where?

All the feels

“Structure of feeling” may well be one of those ideas whose half-life has far exceeded their initial purview. Raymond Williams developed it in a brief chapter of Marxism and Literature, contributing to carving out what would become known as the distinctly British take on the relationship between “base” and “superstructure”: cultural studies. In it, he says:

Specific qualitative changes are not assumed to be epiphenomena of changed institutions, formations, and beliefs, or merely secondary evidence of changed social and economic relations between and within classes. At the same time they are from the beginning taken as social experience, rather than as ‘personal’ experience or as the merely superficial or incidental ‘small change’ of society. They are social in two ways that distinguish them from reduced senses of the social as the institutional and the formal: first, in that they are changes of presence (while they are being lived this is obvious; when they have been lived it is still their substantial characteristic); second, in that although they are emergent or pre-emergent, they do not have to await definition, classification, or rationalization before they exert palpable pressures and set effective limits on experience and on action. Such changes can be defined as changes in structures of feeling. (Williams, 1977:130).

Williams thus introduces the structure of feeling as a form of social diagnostic, positing it against the more durable, but also more formal, concepts of ‘world-view’ or ‘ideology’. Indeed, the whole chapter is devoted to a critique of the reificatory tendencies of Marxist social analysis: the idea that things (or ideas) must always be ‘finished’, always ‘in the past’, before they can be subjected to analytical scrutiny. The concept of “structure of feeling” is thus invoked in order to keep tabs on social change, and to capture the perhaps less palpable elements of transformation as they are happening.

Emotions and the scholastic disposition

Over the past few years, the discourse of feelings has certainly become more prominent in academia. Just last week, Cambridge’s Festival of Ideas featured a discussion on the topic, framed within issues of free speech and trigger warnings on campus. While the debate itself has a longer history in the US, it has begun to attract more attention in the UK – most recently in relation to the challenging of colonial legacies at both Oxford and Cambridge.

Despite the many nuances of political context and the complex interrelation between imperialism and higher education, the debate in the media predominantly plays out in dichotomies of ‘thinking’ and ‘feeling’. Opponents tend to pit trigger warnings, or the “culture of offence”, against the concept of academic freedom, arguing that today’s students are too sensitive and “coddled”, which, in their view, runs against the very purpose of university education. From this perspective, education is about ‘cultivating’ feelings: exercising control over them, submerging them under the strict institutional structures of the intellect.

Feminist scholars, in particular, have extensively criticised this view for its reductionist properties and, not least, its propensity to translate into institutional and disciplinary policies that seek to exclude everything framed as ‘emotional’, bodily, or material (and, by association, ‘feminine’) from academic knowledge production. But the cleavage runs deeper. Research in social sciences is often framed in the dynamic of ‘closeness’ and ‘distancing’, ‘immersion’ and ‘purification’: one first collects data by aiming to be as close as possible to the social context of the object of research, but then withdraws from it in order to carry out analysis. While approaches such as grounded theory or participatory methods (cl)aim to transcend this boundary, its echoes persist in the structure of presentation of academic knowledge (for instance, the division between data and results), as well as the temporal organisation of graduate education (for instance, the idea that the road to PhD includes a period of training in methods and theories, followed by data collection/fieldwork, followed by analysis and the ‘writing up’ of results).

The idea of ‘distanced reflection’ is deeply embedded in the history of academic knowledge production. In Pascalian Meditations, Bourdieu relates it to the concept of skholē – the scholastic disposition – predicated on the distinction between intellectual and manual labour. In other words, for reflection to exist, it needed to be separated from the vagaries of everyday existence. One of its radical manifestations is the idea of the university as a monastic community. Oxford and Cambridge, for instance, were explicitly constructed on this model, giving rise to animosities between ‘town’ and ‘gown’: the concerns of the ‘lay’ folk were thought to be diametrically opposed to those of the educated. While arguably less prominent in (most) contemporary institutions of knowledge production, the dichotomy is still unproblematically transposed into concepts such as the “university’s contribution to society”, which assume universities are distinct from society, or at least that their interests are radically different from those of “the society” – raising obvious questions about who, in fact, this society is.

Emotions, reason, and critique

Paradoxically, perhaps, one of the strongest reverberations of the idea is to be found in the domain of social critique. At first, this sounds counter-intuitive – after all, critical social science should be about abandoning the ‘veneer’ of neutrality and engaging with the world in all of its manifestations. However, establishing the link between social science and critique rests on what Boltanski, in his critique of Bourdieu’s sociology of domination, calls the metacritical position:

For this reason we shall say that critical theories of domination are metacritical in order. The project of taking society as an object and describing the components of social life or, if you like, its framework, appeals to a thought experiment that consists in positioning oneself outside this framework in order to consider it as a whole. In fact, a framework cannot be grasped from within. From an internal perspective, the framework coincides with reality in its imperious necessity. (Boltanski, 2011:6-7)

Academic critique, in Boltanski’s view, requires assuming a position of exteriority. A ‘simple’ form of exteriority rests on description: it requires the ‘translation’ of lived experience (or practices) into the categories of a text. However, passing the kind of moral judgements on which critical theory rests calls for, he argues, a different form of distancing: complex exteriority.

In the case of sociology, which at this level of generality can be regarded as a history of the present, with the result that the observer is part of what she intends to describe, adopting a position of exteriority is far from self-evident… This imaginary exit from the viscosity of the real initially assumes stripping reality of its character of implicit necessity and proceeding as if it were arbitrary (as if it could be other than it is or even not be);

This “exit from the viscosity of the real” (a lovely phrase!) proceeds in two steps. The first takes the form of “control of desire”, that is, procedural distancing from the object of research. The second is the act of judgement by which a social order is ‘ejected’, seen in its totality, and as such evaluated from the outside:

In sociology the possibility of this externalization rests on the existence of a laboratory – that is to say, the employment of protocols and instructions respect for which must constrain the sociologist to control her desires (conscious or unconscious). In the case of theories of domination, the exteriority on which critique is based can be called complex, in the sense that it is established at two different levels. It must first of all be based on an exteriority of the first kind to equip itself with the requisite data to create the picture of the social order that will be submitted to critique. A metacritical theory is in fact necessarily reliant on a descriptive sociology or anthropology. But to be critical, such a theory also needs to furnish itself, in ways that can be explicit to very different degrees, with the means of passing a judgement on the value of the social order being described. (ibid.)

Critique: inside, outside, in-between?

To what degree can this categorisation be applied to the current critique of the conditions of knowledge production in academia? After all, most of those who criticize the neoliberal transformation of higher education and research are academics. It is therefore reasonable to question the degree to which they can lay claim to a position of exteriority. More problematically (or interestingly), it is also questionable to what degree a position of exteriority is achievable at all.

Boltanski draws attention to this problem by emphasising the distinction between the cognition – awareness – of ‘ordinary’ actors and that of sociologists (or other social scientists), the latter presumably able to perceive structures of domination that the subjects of their research cannot:

Metacritical theories of domination tackle these asymmetries from a particular angle – that of the miscognition by the actors themselves of the exploitation to which they are subject and, above all, of the social conditions that make this exploitation possible and also, as a result, of the means by which they could stop it. That is why they present themselves indivisibly as theories of power, theories of exploitation and theories of knowledge. By this token, they encounter in an especially vexed fashion the issue of the relationship between the knowledge of social reality which is that of ordinary actors, reflexively engaged in practice, and the knowledge of social reality conceived from a reflexivity reliant on forms and instruments of totalization – an issue which is itself at the heart of the tensions out of which the possibility of a social science must be created (Boltanski, 2011:7)

Hotel Academia: you can check out any time you like, but you can never leave?

How does one go about thinking about the transformation of the conditions of knowledge production when one is, at the same time, reflexively engaged in practice and relying on the reflexivity provided by sociological instruments? Is it at all possible? The feeling of anxiety, to this end, could be provoked exactly by this lack of opportunity to step aside – to disembed oneself from academic life and reflect on it at the leisurely pace of skholē. On the one hand, this certainly has to do with the changing structure and tempo of academic life – acceleration and the demand for increased output: in this sense, anxiety is a reaction to changes perceived and felt, the feeling that the ground is no longer stable – a sense of vertigo. On the other hand, however, this feeling of decentredness could be exactly what contemporary critique calls for.

The challenge, of course, is how to turn this “structure of feeling” into something that has analytical as well as affective power – and can transform the practice itself. Stravinsky’s Rite of Spring, I think, is a wonderful example of this. As music, it is fundamentally disquieting: its impact derives primarily from the fact that it disrupted what were, at the time, the expectations of the (musical) genre – and, in the process, rewrote them.

In other words, anxiety could be both creative and destructive. This, however, is not some broad call to “embrace anxiety”. There is a clear and pertinent need to understand the way in which the transformations of working conditions – everywhere, and also in the context of knowledge production – are influencing the sense of self and what is commonly referred to as mental health or well-being.

However, in this process, there is no need to externalise anxiety (or other feelings): that is, to frame it as if caused by forces outside of, or completely independent from, human influence, including within academia itself (for instance, government policies, or political changes at the supranational level). Conversely, there is no need to completely internalise it, in the sense of ascribing it to the embodied experience of individuals alone. If feelings occupy the unstable ‘middle ground’ between institutions and individuals, this is the position from which they will have to be thought. If anxiety is an interpretation of the changes in the structures of knowledge production, its critique cannot but stem from the same position. This position is not ‘outside’ but ‘in-between’: insecure and thought-provoking, but no less potent for that.

Which, come to think of it, may be what Williams was trying to say all along.

All the feels

This poster drew my attention while I was working in the library of Cambridge University a couple of weeks ago:

[image: library poster]

For a while now, I have been fascinated by the way in which the language of emotions, or affect, has penetrated public discourse. People ‘love’ all sorts of things: the way a film uses interior light, the icing on a cake, their friend’s new hairstyle. They ‘hate’ Donald Trump, the weather, the next-door neighbours’ music. More often than not, conversations involving emotions would not be complete without mentioning online expressions of affect, such as ‘likes’ or ‘loves’ on Facebook or Twitter.

Of course, the presence of emotions in human communication is nothing new. Even ‘ordinary’ statements – such as, for instance, “it’s going to rain tomorrow” – frequently entail an affective dimension (most people would tend to be at least slightly disappointed at the announcement). Yet what I find peculiar is that the language of affect is becoming increasingly present not only in communication mediated by non-human entities, but also in relation to non-human entities themselves. Can you really ‘love’ a library? Or be ‘friends’ with your local coffee place?

This isn’t in any way to concede ground to techno-pessimists who blame social media for ‘declining’ standards in human communication, nor even to express concern over the ways in which affective ‘reaction’ buttons allow the tracking of online behaviour (privacy is always a problem, and ‘unmediated’ communication largely a fiction). Even if face-to-face interaction is qualitatively different from online interaction, there is nothing to support the claim that this makes it inherently more valuable or, indeed, more ‘real’ (see: “IRL fetish” [i]). It is the social and cultural framing of these emotions and, especially, the way the social sciences think about them – the social theory of affect, if you wish – that concerns me here.

Fetishism and feeling

So what is different about ‘loving’ your library as opposed to, say, ‘loving’ another human being? One possible approach is to interpret expressions of emotion directed at, or through, non-human entities as ‘shorthand’ for those aimed at other human beings. The kernel of this idea is contained in Marx’s concept of commodity fetishism: emotion, or affect, directed at an object obscures the all-too-human (in his case, capital) relationship behind it. In this sense, ‘liking’ your local coffee place would be an expression of appreciation for the people who work there, for the way they make a double macchiato, or just for the times you have spent there with friends or other significant others. In human-to-human communication, things would be even more straightforward: generally speaking, ‘liking’ someone’s status updates, photos, or Tweets would signify appreciation of/for the person, agreement with, or general interest in, what they are saying.

But what if it is actually the inverse? What if, in ‘liking’ something on Facebook or on Twitter, the human-to-human relationship is, in fact, epiphenomenal to the act? The prime currency of online communication is thus the expenditure of (emotional) energy, not the relationship that it may (or may not) establish or signify. In this sense, it is entirely irrelevant whether one is liking an inanimate object (or concept), or a person. Likes or other forms of affective engagement do not constitute any sort of human relationship; the only thing they ‘feed’ is the network itself. The network, at the same time, is not an expression, reflection, or (even) simulation of human relationships: it is the primary structure of feeling.

All hail…

Yuval Noah Harari’s latest book, Homo Deus, puts the issue of emotions at the centre of the discussion of the relationship between humans and AI. In a review in The Guardian, David Runciman writes:

“Human nature will be transformed in the 21st century because intelligence is uncoupling from consciousness. We are not going to build machines any time soon that have feelings like we have feelings: that’s consciousness. Robots won’t be falling in love with each other (which doesn’t mean we are incapable of falling in love with robots). But we have already built machines – vast data-processing networks – that can know our feelings better than we know them ourselves: that’s intelligence. Google – the search engine, not the company – doesn’t have beliefs and desires of its own. It doesn’t care what we search for and it won’t feel hurt by our behaviour. But it can process our behaviour to know what we want before we know it ourselves. That fact has the potential to change what it means to be human.”

On the surface, this makes sense. Algorithms can measure our ‘likes’ and other emotional reactions and combine them into ‘meaningful’ patterns – e.g. correlate them with specific background data (age, gender, location), time of day, etc. – and, on this basis, predict how we will act (click, shop) in specific situations. However, does this amount to ‘knowledge’? In other words, if machines cannot have feelings – and Harari seems adamant that they cannot – how can they actually ‘know’ them?
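
To make this concrete, here is a deliberately crude sketch of the kind of prediction Runciman describes – written in Python, with invented data, invented features, and invented thresholds; nothing below resembles any platform’s actual pipeline, it simply shows what ‘knowing our feelings’ amounts to at the level of computation:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Invented behavioural records: [likes last week, age, hour of day browsing]
    X = np.column_stack([
        rng.poisson(20, 500),
        rng.integers(18, 70, 500),
        rng.integers(0, 24, 500),
    ])
    # Invented 'ground truth': like-happy evening users tend to click the ad.
    y = ((X[:, 0] > 18) & (X[:, 2] >= 18)).astype(int)

    model = LogisticRegression().fit(X, y)

    # The model 'predicts' a click from surface correlations alone; it has
    # no access to what, if anything, any of those likes felt like.
    print(model.predict_proba([[30, 25, 23]])[0, 1])

Everything the model ‘knows’ here is a correlation between counts: the affective content of the behaviour never enters the computation.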

Frege on Facebook

This comes close to a philosophical problem I’ve been trying to get a grip on recently: the Frege-Geach (alternatively, the embedding, or Frege-Geach-Searle) problem. It comprises two steps. The first is to claim that there is a qualitative difference between moral and descriptive statements – for instance, between saying “It is wrong to kill” and “It is raining”. Most humans, I believe, would agree with this. The second is to observe that there is no basis for claiming this sort of difference from sentence structure alone: “If it is wrong to kill, then it is wrong to pay someone to kill” embeds a moral clause in exactly the way “If it is raining, the ground will be wet” embeds a descriptive one. This leads to the problem of explaining the source of the difference – how do we know there is one? In other words, how can moral and descriptive terms have exactly the same sort of semantic properties in complex sentences, even though they have different kinds of meaning? Where does this difference stem from?

The argument can be extended to feelings: how do we know that there is a qualitative difference between statements such as “I love you” and “I eat apples”? Or between loving someone and ‘liking’ an online status? From a formal (syntactic) perspective, there isn’t one. More interestingly, however, there is no reason why machines should not be capable of such forms of expression: there is no way to reliably establish that likes coming from a ‘real’ person and, say, a Twitterbot are qualitatively different. As humans, of course, we would claim to know the difference, or at least to be able to spot it. But machines cannot. There is nothing inherent in the expression of online affect that would allow algorithms to distinguish between, say, the act of ‘loving’ the library and the act of loving a person. Knowledge of emotions, in other words, is not reducible to counting, even if counting takes increasingly sophisticated forms.
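
The same point can be made with an almost embarrassingly simple sketch (again Python, again purely illustrative – no real NLP system is this naive, but the formal problem persists in far more sophisticated ones):

    # Two utterances that differ enormously in what they mean,
    # and not at all in their surface form: subject, verb, object.
    sentences = ["I love the library", "I love my partner"]

    for s in sentences:
        tokens = s.split()
        shape = (tokens[0], tokens[1], "OBJECT")
        print(s, "->", shape)

    # Both reduce to ('I', 'love', 'OBJECT'): nothing in the form marks one
    # object as a building and the other as a person, let alone what the
    # 'love' in each case amounts to.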

How do you know what you do not know?

The problem, however, is that humans do not have superior knowledge of emotions, their own or other people’s. I am not referring to situations in which people are unsure or ‘confused’ about how they feel [ii], but rather to the limited language – the forms of expression – available to us. The documentary “One More Time With Feeling”, which I saw last week, engages with this issue in a way I found incredibly resonant. Reflecting on the loss of his son, Nick Cave relates how the words that he, or the people around him, could use to describe the emotions seemed equally misplaced, maladjusted, and superfluous (until the film comes back into circulation, Amanda Palmer’s review, which addresses a similar question, is here) – not because they could not reflect the emotions accurately, but because there was no necessary link between them and the structure of feeling at all.

Clearly, the idea that language does not reflect, but rather constructs – and thus also constrains – human reality is hardly new: Wittgenstein, Lacan, and Rorty (to name but a few) have offered different interpretations of how and why this is the case. What I found particularly poignant about the way Cave frames it in the film is that it questions the whole ontology of emotional expression. It’s not just that language acts as a ‘barrier’ to the expression of grief; it is the idea of the continuity of the ‘self’ supposed to ‘have’ those feelings that’s shattered as well.

Love’s labour’s lost (?): between practice and theory

This brings back some of my fieldwork experiences from 2007 and 2008, when I was doing a PhD in anthropology, writing on the concept of romantic relationships. Whereas most of my ‘informants’ – research participants – could engage in lengthy elaboration of the criteria they use in choosing (‘romantic’) partners (as well as, frequently, the reasons why they wouldn’t designate someone as a partner), when it came to emotions their narratives could frequently be reduced to one word: love (it wasn’t for lack of expressive skills: most were highly educated). It was framed as a binary phenomenon: either there or not there. At the time, I was more interested in the way their (elaborated) narratives reflected or coded markers of social inequality – for instance, class or status. Recently, however, I have been going back more to their inability (or unwillingness) to elaborate on the emotion that supposedly underpins, or at least buttresses, those choices.

Theoretical language is not immune to these limitations. For instance, whereas the social sciences have made significant steps in deconstructing notions such as ‘man’, ‘woman’, ‘happiness’, and ‘family’, we are still miles away from seriously examining concepts such as ‘love’, ‘hate’, or ‘fear’. Moira Weigel’s and Eva Illouz’s work are welcome exceptions to the rule: Weigel uses the feminist concept of emotional labour to show how the responsibility for maintaining relationships tends to be unequally distributed between men and women, while Illouz demonstrates how modern notions of dating come to define the subjectivity and agency of persons in ways conducive to the reproduction of capitalism. Yet, while both do a great job of highlighting the social aspects of love, they avoid engaging with its ontological basis. This leaves the back door open for an old-school dualism that assumes either that there is an (a- or pre-social?) ‘basis’ to human emotions, which can be exploited or ‘harvested’ through relationships of power, or, conversely, that all emotional expression is defined by language, and thus that its social construction is the only thing worth studying. It is almost as if ‘love’ is the last construct left standing, and we are all too afraid to disenchant it.

For a relational ontology

A relational ontology of human emotions could, in principle, aspire to dethrone this nominalist (or, possibly worse, truth-proceduralist) notion of love in favour of one that sees it as a by-product of relationality. This isn’t to claim that ‘love’ is epiphenomenal: to the degree to which it is framed as a motivating force, it becomes part and parcel of the relationship itself. However, not seeing it as central to this inquiry would hopefully allow us to work on diversifying the language of emotions. Instead of using a single marker (even one as polysemic as ‘love’) for the relationship with one’s library and with one’s significant other, we could start thinking about the ways in which they are (or are not) the same thing. This isn’t, of course, to sanctify ‘live’ human-to-human emotion: I am certain that people can feel ‘love’ for pets, places, or the deceased. Yet calling it all ‘love’ and leaving it at that is a pretty shoddy way of going about feelings.

Furthermore, a relational ontology of human emotions would mean treating all relationships as unique. This isn’t, to be clear, a pseudo-anarchist attempt to deny standards of, or responsibility for, (inter)personal decency; and even less a default glorification of long-lasting relationships. Most relationships change over time (as do the people inside them), and this frequently means they can no longer exist; some relationships cannot coexist with other relationships; some relationships are detrimental to those involved in them, which hopefully means they cease to exist. Equally, some relationships are superficial, trivial, or barely worth a mention. However, this does not make them, analytically speaking, any less special.

This also means they cannot be reduced to the same standard, nor measured against each other. This, of course, runs against one of capitalism’s dearly held assumptions: that all humans are comparable and, thus, mutually replaceable. This assumption is vital not only for the reproduction of labour power, but also, for instance, for the practice of dating [iii], whether online or offline. Moving towards a relational concept of emotions would allow us to challenge this notion. In this sense, ‘loving’ a library is problematic not because the library is not a human being, but because ‘love’, just like other human concepts, is a relatively bad proxy. Contrary to what pop songs would have us believe, it’s never an answer, and, quite possibly, not the question either.

Some Twitter wisdom for the end…

————————————————————————–

[i] Thanks go to Mark Carrigan who sent this to me.

[ii] While I am very interested in the question of self-knowledge (or self-ignorance), for some reason, I never found this particular aspect of the question analytically or personally intriguing.

[iii] Over the past couple of years, I’ve had numerous discussions on the topic of dating with friends, colleagues, but also acquaintances and (almost) strangers (the combination of having a theoretical interest in the topic and not being in a relationship seems to be particularly conducive to becoming involved in such conversations, regardless of whether one wants to or not). I feel compelled to say that my critique of dating (and the concomitant refusal to engage in it, at least as far as its dominant social forms go) does not, in any way, imply a criticism of people who do. There is quite a long list of people whom I should thank for helping me clarify this, but instead I promise to write another, longer post on the topic, as well as, finally, develop that app :).

Out of place? On Pokémon, foxes, and critical cultural political economy

[Banner image: Isle of Wight, August 2016]

Last week, I attended the Second International Conference in Cultural Political Economy, organised by the Centre for Globalisation, Education and Social Futures at the University of Bristol. It was through working with Susan Robertson and other folk at the Graduate School of Education, where I had spent parts of 2014 and 2015 as a research fellow, that I was first introduced to cultural political economy.

The inaugural conference last year took place in Lancaster, so it was a great opportunity to both meet other people working within this paradigm and do a bit of hiking in the Lake District. This year, I was particularly glad to be in Bristol – the city that, to a great degree, comes closest to ‘home’, and where – having spent the majority of those two years not really living anywhere – I felt I kind of belonged. The conference’s theme – “Putting culture in its place” – held, for me, in this sense, a double meaning: it was both about critically assessing the concept of culture in cultural political economy, and about being in a particular place from which to engage in doing just that.

Cultural political economy (CPE) unifies (or hybridises) approaches from cultural studies with those from (Marxist) political economy, in order to address the challenges of the growing complexity (and possible incommensurability, or what Jessop refers to as in/compossibility) of the elements of global capitalism. Of course, as Andrew Sayer pointed out, the ‘cultural’ streak in political economy can be traced all the way back to Marx, if not indeed to Aristotle. Developing it as a distinct approach, then, needs to be understood both genealogically – as a way to reconcile two strong traditions in British sociology – and politically, inasmuch as it aspires to make up for what some authors have described as cultural studies’ earlier disregard of the economic, without, at the same time, reverting to the old dichotomies of base/superstructure.

It would be equal parts wrong, pretentious, and not particularly useful to speak of “the” way of doing cultural political economy; in fact, one of its strongest points, in my view, is that it has so far successfully eschewed the theoretical and institutional ossification that seems to be an inevitable corollary of having (or building) ‘disciples’ (in both senses: as students, and as followers of a particular disciplinary approach). What it emphasises, however, is the interrelationship between the ‘cultural’ (as identities, materialities, civilisations, or, in Jessop and Sum’s – to date the most elaborate – view, processes of meaning-making), the political, and the economic, whilst avoiding reducing any one of them to another. Studying how these interact over time, then, can help us understand how specific configurations (or ‘imaginaries’) of capitalism – for instance, competitiveness and the knowledge-based economy – come into being.

My relationship to CPE is somewhat ambiguous. CPE is grounded in the ontology of critical realism, which, ceteris paribus, comes closest to my own views of reality [*]. Furthermore, having spent a good portion of the past ten years researching knowledge production in a variety of regional and historical contexts, the observation that factors we call ‘cultural’ play a role in each makes sense to me, both intuitively and analytically. On the other hand, being trained in anthropology means I am highly suspicious of the reifying and exclusionary potential of concepts such as ‘culture’ and, especially, ‘civilisation’ (in ways which, I would like to think, go beyond the (self-)righteousness immanent in many of their critiques on the Left). Last but not least, despite a strong sense of solidarity with a number of identity-based causes, my experience of working in post-conflict environments has led me to believe that the politics of identity almost inevitably fails to be progressive.[†]

For these reasons, the presentation I gave at the conference was aimed at clarifying the different uses of the concept of ‘culture’ (and, to a lesser degree, ‘civilisation’) in cultural political economy, and at discussing their political implications. To begin with, it might make sense to put culture through the 5W1H of journalistic inquiry. What is culture (or, what is its ontology)? Who is it – in other words, when we say that ‘culture does things’, how do we define agency? Where is it – how does it extend in space, and how do we know where its boundaries are? When is it – what is its temporal dimension, and why does it seem easiest to define when it has either already passed or is at least ‘in decline’, a label that seems to attach particularly easily to Western civilisation? How is it (applied as an analytical concept)? This last bit is particularly relevant, as ‘culture’ sometimes appears in social research as a cause, sometimes as a mediating force (in positivist terms, an ‘intervening variable’), and sometimes as an outcome, or consequence. Of course, the standard response is that it is, in fact, all of these; but instead of foreclosing the debate, this just opens up the question of WHY: if culture is indeed everything (or can be everything), what is its value as an analytical term?

A useful metaphor for thinking about the different meanings of ‘culture’ could be the game of Pokémon Go. It figures equally as an entity (in the case of Pokémon, the entities are largely fictional, but this is of lesser importance – so are many entities we identify as culturally significant, for instance deities); as a system of rules and relationships (for instance, those governing the game, as well as online and offline relationships between players); as a cause of behaviour (in positivist terms, an independent variable); and as an indicator (for instance, Pokémon Go is taken as a sign of globalization, alienation, revolution [in gaming], etc.). The photos in the presentation reflect some of these uses, and they are from Bristol: the first is a Pikachu caught in Castle Park (no, not mine :)); the second is from an event in July, when Bristol Zoo was forced to close because too many people turned up for a Pokémon lure party. This brings in the political economy of the game; however, just as in CPE, the ‘lifeworld’ of Pokémon Go cannot be reduced to it, despite the fact that it would not exist without it. So, when we go ‘hunting’ for culture, where should we look?

Clarifying the epistemic uses of the concept of culture serves not only to prevent treating culture as what Archer has referred to as ‘epiphenomenal’, or what Rojek & Urry have (in a brilliantly scathing review) characterised as ‘decorative’, but primarily to avoid what Woolgar & Pawluch dubbed ‘ontological gerrymandering’. Ontological gerrymandering refers to conceptual sliding in definitions of social problems, and consists of “making problematic the truth status of certain states of affairs selected for analysis and explanation, while backgrounding or minimizing the possibility that the same problems apply to assumptions upon which the analysis depends. (…) Some areas are portrayed as ripe for ontological doubt and others portrayed as (at least temporarily) immune to doubt”[‡].

In the worst of cases, ‘culture’ lends itself to exactly this sort of use – at one moment almost an ‘afterthought’ of the more foundational processes related to politics and the economy; at another foundational itself, at the very root of the transformations we see in everyday life; and, at yet other moments, mediating, as if a ‘lens’ that refracts reality. Of course, the different concepts and uses of the term have been dissected and discussed at length in social theory; however, in research, just as in practice, ‘culture’ frequently resurfaces as a black box that can be conveniently proffered to explain whatever cannot be attributed (or reduced) to other factors.

This is important not only for theoretical but also, and possibly more, for political reasons. Culture is often seen as a space of freedom, of expression and experimentation. The line from which I borrow the title of my talk – “When I hear the word culture” – is an example of a right-wing reaction to exactly that sort of concept. Variously misattributed to Göring, Goebbels, or even Hitler, the line actually comes from Schlageter, a play by Hanns Johst, written in Germany in 1933, which celebrates Nazi ideology. At some point, one of the characters breaks into a longish rant on why he hates the concept of culture – he sees it as ‘lofty’, ‘idealistic’, and in many ways distant from what he perceives to be ‘real struggles’, guns and ammo – which is why it crescendos in the famous “When I hear the word culture, I release the safety on my Browning”. This idea of ‘culture’ as fundamentally opposed to the vagaries of material existence has informed many anti-intellectualist movements; but, equally importantly, it has also penetrated the reaction to them, resulting in an often unreflexive glorification of ‘folk’ poetry, drama, or art as almost instantaneously effective expressions of resistance to anti-intellectualism.

Yet, in contemporary political discourse, the concept of culture has been appropriated by the left and the right alike: witness the ‘culture wars’ in the US, or the more recent use of the term to describe social divisions in the UK. Rather than disappearing, political struggles, I believe, will be increasingly framed in terms of culture. The ‘burkini ban’ in France is one case. Some societies deal with cultural diversity differently, at least on the face of it. New Zealand, where I did a part of my research, is a bicultural society. Its universities are founded on the explicit recognition of the concept of mātauranga Māori, which implies the existence of fundamentally culturally different epistemologies. This, of course, raises a number of other interesting issues; but they are issues we should be prepared to face.

While we are becoming better at dealing with culture and with the economy, translating these insights into the political remains a challenge. An obvious case where we are failing at this is knowledge production itself – cultural political economy is very well suited to analysing the transformation of universities under neoliberalism, yet it leaves us none the wiser – or more effective – about how to tackle these challenges in ways that provide a lasting political alternative.

——-

Later that evening, I go see two of my closest friends from Bristol. Walking back to the flat where I’m staying – right between Clifton and Stokes Croft – I run across a fox. Foxes are not particularly exceptional in Bristol, but I still remember my first encounter with one, as I was walking across Cotham Side in 2014: I thought it was a large cat at first, and it was only the tail that gave it away. Having grown up in a highly urbanised environment, I cannot help but see encounters with wildlife as somewhat magical. They are, to me, visitors from another world, creatures temporarily inhabiting the same plane of existence, but subject to different motivations and rules of behaviour: in other words, completely alien. This particular night, this particular fox crosses the road and goes through the gates of Cotham School, which I find so patently symbolic that I am reluctant to share it for fear of being accused of peddling clichés.

And this, of course, marks the return of culture en pleine force. As a concept, it is constructed in opposition to ‘nature’; as a practice, its primary role is to draw boundaries – between the sacred and the profane, between the living and the dead, the civilised and the wild. I know – from my training in anthropology, if nothing else – that my fascination with this particular encounter stems from the feeling of it being ‘out of place’: foxes in Bristol are magical because they transgress boundaries – in this case, between ‘cultured’, human worlds and ‘nature’, the outer world.

I walk on, and right around St. Matthew’s church, there is another one. This one stops, actually, and looks at me. “Hey”, I say, “Hello, fox”. It waits for about six seconds, and then slowly turns around and disappears through the hedge.

I wish I could say that there was sense in that stare, or that I was able to attribute purpose to it. There was none, and this is what made it so poignant. The ultimate indecipherability of its gaze made me realise that I was as much out of place as the fox was. From its point of view, I was as immaterial and as transgressive as it was from mine: a creature from another realm, temporarily inhabiting the same plane, but ultimately of no interest. And there it was, condensed in one moment: what it means to be human, what it means to be somewhere, what it means to belong – and the fragility, precariousness, and eternal incertitude that comes with it.

[*] In truth, I’m still planning to write a book that hybridises magical realism with critical realism, but this is not the place to elaborate on that particular project.

[†] I’ve written a bit on the particular intersection of class- and identity-based projects in From Class to Identity; the rich literature on liberalism, multiculturalism, and the politics of recognition is impossible to summarise here, but the Stanford Encyclopedia of Philosophy has a decent overview under the entry “Identity Politics”.

[‡] I am grateful to Federico Brandmayr who initially drew my attention to this article.

Do we need academic celebrities?

[This post originally appeared on the Sociological Review blog on 3 August, 2016].

Why do we need academic celebrities? In this post, I would like to extend the discussion of academic celebrities from the focus on these intellectuals’ strategies, or ‘acts of positioning’, to what makes them possible in the first place, in the sense of Kant’s ‘conditions of possibility’. In other words, I want to frame the conversation within the broader framework of a critical cultural political economy. This is based on the belief that, if we want to develop an understanding of knowledge production that is truly relational, we need to analyse not only what public intellectuals or ‘academic celebrities’ do, but also what makes, maintains, and sometimes breaks their wider appeal – including, not least importantly, our own fascination with them.

To begin with, an obvious point is that academic stardom necessitates a transnational audience, and a global market for intellectual products. As Peter Walsh argues, academic publishers play an important role in creating and maintaining such a market; Mark Carrigan and Eliran Bar-El remind us that celebrities like Giddens or Žižek are very good at cultivating relationships with that side of the industry. However, in order for publishers to operate at an even minimal profit, someone needs to buy the product. Simply put, public intellectuals necessitate a public.

While intellectual elites have always been to some degree transnational, two trends associated with late modernity are, in this sense, of paramount importance. One is the expansion and internationalization of higher education; the other is the supremacy of English as the language of global academic communication, coupled with the growing digitalization of the processes and products of intellectual labour. Although access to knowledge remains largely inequitable, these trends have contributed to the creation of an expanded potential ‘customer base’. And yet – just as in the case of MOOCs – the availability or accessibility of a product is not sufficient to explain (or guarantee) interest in it. Regardless of whether someone can read Giddens’ books in English, or is able to watch Žižek’s RSA talk online, their arguments, presumably, still need to resonate: in other words, there must be something that people derive from them. What could this be?

In ‘The Existentialist Moment’, Patrick Baert suggests that the global popularity of existentialism can be explained by the success of Sartre (and of other philosophers who came to be identified with it, such as de Beauvoir and Camus) in connecting core concepts of existentialist philosophy, such as choice and responsibility, to the concerns of post-WWII France. To some degree, this analysis could be applied to contemporary academic celebrities – Giddens and Bauman wrote about the problems of late or liquid modernity, and Žižek frequently comments on the contradictions and failures of liberal democracy. It is not difficult to see how they would strike a chord with the concerns of a liberal, educated, Western audience. Yet, just as in the case of Sartre, this doesn’t mean their arguments are always presented in the most palatable manner: Žižek’s writing is complex to the point of obscurantism, and Bauman is no stranger to ‘thick description’. Of the three, Giddens’ work is probably the most accessible, although this might have more to do with good editing and academic English’s predilection for short sentences than with the simplicity of the ideas themselves. Either way, it could be argued that reading their work requires a relatively advanced understanding of the core concepts of social theory and philosophy, and the patience to plough through at times arcane language – all at seemingly little or no direct benefit to the audience.

I want to argue that the appeal of star academics has very little to do with their ideas or the ways in which they are framed, and more to do with the combination of the charismatic authority they exude and the feeling of belonging, or shared understanding, that the consumption of their ideas provides. Similarly to Weber’s priests and magicians, star academics offer a public performance of the transfiguration of abstract ideas into concrete diagnoses of social evils. They offer an interpretation of the travails of late moderns – instability, job insecurity, surveillance, etc. – and, at the same time, the promise that there is something in the very act of intellectual reflection, or the work of social critique, that allows one to achieve a degree of distance from their immediate impact. What academic celebrities thus provide is an – even if temporary – (re)‘enchantment’ of a world in which the production of knowledge, so long reserved for the small elite of the ‘initiated’, has become increasingly ‘profaned’, both through the massification of higher education and through the requirement to make the stages of its production, as well as its outcomes, measurable and accountable to the public.

For the ‘common’ (read: Western, left-leaning, highly educated) person, the consumption of these celebrities’ ideas offers something akin to a combination of a music festival and a mindfulness retreat: an opportunity to commune with the ‘like-minded’ and take home a piece of hope, if not for salvation, then at least for temporary exemption from the grind of neoliberal capitalism. Reflection is, after all, as Marx taught us, the privilege of the leisurely; engaging in collective acts of reflection thus equals belonging to (or at least affinity with) ‘the priesthood of the intellect’. As Bourdieu noted in his reading of Weber’s sociology of religion, the laity expect of religion “not only justifications of their existence that can offer them deliverance from the existential anguish of contingency or abandonment, [but] justification of their existence as occupants of a particular position in the social structure”. Thus, Giddens’ or Žižek’s books become the structural or cultural equivalent of the Bible (or the Qur’an, or any religious text): not many people know what is actually in them, even fewer can get the oblique references, but everyone will want one on the bookshelf – not necessarily for what they say, but because of what having them signifies.

This helps explain why people flock to hear Žižek or, for instance, Yanis Varoufakis, another leftist star intellectual. In public performances, their ideas are distilled to the point of simplicity, and conveniently latched onto something the public can relate to. At the Subversive Festival in Zagreb, Croatia in 2013, for instance, Žižek propounded the concept of ‘love’ as a political act. Nothing new, one would say – but who in the audience would not want to believe their crush has the potential to turn into an act of political subversion? These intellectuals’ utterances therefore represent ‘speech acts’ in quite a literal sense of the term: not because they are truly (or consequentially) performative, but because they offer the public the illusion that listening (to them) and speaking (about their work) represents, in itself, a political act.

From this perspective, the mixture of admiration, envy, and resentment with which these celebrities are treated in the academic establishment reflects their evangelical status. Those who admire them quarrel about the ‘correct’ interpretation of their works and vie for the status of nominal successor – a succession that would, of course, also feature ritualistic patricide, which may be why, although surrounded by followers, so few academic celebrities actually elect one. Those who envy them monitor their rise to fame in the hope of emulating it one day. Those who resent them, finally, tend to criticize their work for intellectual ‘baseness’ – an argument that is itself predicated on the distinction between academic (and thus ‘sacred’) and popular, ‘common’ knowledge.

Many are, of course, shocked when their idols turn out not to be ‘original’ thinkers channeling divine wisdom, but plagiarists or serial repeaters. Yet, there is very little to be surprised by; academic celebrities, after all, are creatures of flesh and blood. Discovering their humanity and thus their ultimate fallibility – in other words, the fact that they cheat, copy, rely on unverified information, etc. – reminds us that, in the final instance, knowledge production is work like any other. In other words, it reminds us of our own mortality. And yet, acknowledging it may be a necessary step in dismantling the structures of rigid, masculine, God-like authority that still permeate academia. In this regard, it makes sense to kill your idols.