Tár, or the (im)possibility of female genius

“One is not born a genius, one becomes a genius,” wrote Simone de Beauvoir in The Second Sex; “and the feminine situation has up to the present rendered this becoming practically impossible.”

Of course, the fact that the book, and its author, are much better known for the other quote on processual/relational ontology – “one is not born a woman, one becomes a woman” – is a self-fulfilling prophecy of the first. A statement about geniuses cannot be a statement about women. A woman writing about geniuses must, in fact, be writing about women. And because women cannot be geniuses, she cannot be writing about geniuses. Nor can she be one herself.

I saw Tár, Todd Field’s lauded drama about the (fictional) first woman conductor of the Berlin Philharmonic Orchestra, earlier this year (most of this blog post was written before the Oscars and reviews). There were many reasons why I was poised to love it: the plot/premise, the scenario, the music (obviously), the visuals (and let’s be honest, Cate Blanchett could probably play a Christmas tree and be brilliant). All the same, it ended up riling me for its unabashed exploitation of most stereotypes in the women x ambition box. Of course the lead character (Lydia Tár, played by Blanchett) is cold, narcissistic, and calculating; of course she is a lesbian; of course she is ruthless towards long-term collaborators and exploitative of junior assistants; of course she is dismissive of identity politics; and of course she is, also, a sexual predator. What we perceive in this equation is that a woman who desires – and attains – power will inevitably end up reproducing exactly the behaviours that define men in those roles, down to the very stereotype of the Weinstein-like ogre. What is it that makes directors unable to imagine a woman with a modicum of talent, determination, or (shhh) ambition as anything other than a monster – or alternatively, as a man, and thus by definition a ‘monster’?

To be fair, this move only repeats what institutions tend to do with women geniuses: they typecast them; make sure that their contributions are confined to strictly delimited domains; and penalize those who depart from the boundaries of prescribed stereotypical ‘feminine’ behaviour (fickle, insecure, borderline ‘hysterical’; or soft, motherly, caring; or ‘girlbossing’ in a way that combines the volume of the first with the protective urges of the second). Often, as in Tár, by literally dragging them off the stage.

The sad thing is that it does not have to be this way. The opening scene of Tár stands in stark contrast to the closing one in this regard. In the opening scene, a (staged) interview with Adam Gopnik, Lydia Tár takes the stage in a way that resists, refuses, and downplays gendered stereotypes. Her demeanor is neither masculine nor feminine; her authority is not negotiated, forced to prove itself, endlessly demonstrated. She handles the interview with an equanimity that does not try to impress, convince, cajole, or amuse; nor to charm, outwit, or patronize. In fact, she does not try at all. She approaches the interviewer from a position of intellectual equality, a position that, in my experience, relatively few men can comfortably handle. But of course, this has to turn out to be a pretense. There is no way to exist as a woman in the competitive world of classical music – or, for that matter, anywhere else – without paying heed to gendered stereotypes.

A particularly poignant (and, I thought, very successful) depiction of this is in the audition scene, in which Olga – the cellist whose career Tár will help and who will eventually become the object of her predation – plays behind a screen. Screening off performers during auditions (‘blind auditions’) was, by the way, initially introduced to challenge gender bias in hiring musicians to major orchestras – to resounding (sorry) success, making it 50% more likely women would be hired. But Tár recognizes the cellist by her shoes (quite stereotypically feminine shoes, by the way). The implication is that even ‘blind’ auditions are not really blind. You can be either a ‘woman’ (like Olga, young, bold, straight, and feminine); or a ‘man’ (like Lydia, masculine, lesbian, and without scruples). There is no outside, and there is no without.

As entertaining as it is to engage in cultural criticism of stereotypical gendered depiction in cinema, one question from Tár remains. Is there a way to perform authority and expertise in a gender-neutral way? If so, what would it be?

People often tell me I perform authority in a distinctly non-(stereotypically)-feminine way; this both is and is not a surprise. It is a surprise because I am still occasionally shocked by the degree to which intellectual environments in the UK, and in particular those that are traditionally academic, are structurally, relationally, and casually misogynist, even in contexts supposedly explicitly designed to counter it. It is not a surprise, on the other hand, as I was raised by women who did not desire to please and men who were more than comfortable with women’s intellects, but also, I think, because the education system I grew up in had no problems accepting and integrating these intellects. I attribute this to the competitive streak of Communist education – after all, the Soviets sent the first woman into space. But being (at the point of conception, not reception, sadly) bereft of gendered constraints when it comes to intellect does not solve the other part of the equation. If power is also, always, violence, is there a way to perform power that does not ultimately involve hurting others?

This, I think, is the challenge that any woman – or, for that matter, anyone in a position of power who does not automatically benefit from male privilege – must consider. As Dr Autumn Asher BlackDeer brilliantly summarized it recently, decolonization (or any other kind of diversification) is not about replacing one set of oppressors with another – that is, simply having more diverse oppressors. Yet, all too frequently, this kind of work – willingly or not – becomes appropriated and used in exactly these ways.

Working in institutions of knowledge production, and especially working both on and within multiple intersecting structures of oppression – gender, ethnicity/race, ability, nationality, class, you name it – makes these challenges, for me, present on a daily basis in both theoretical and practical work. One of the things I try to teach my students is that, in situations of injustice, it is all too appealing to react to perceived slight or offence by turning it inside out, by perpetuating violence in turn. If we are wronged, it becomes easy to attribute blame and mete out punishment. But real intellectual fortitude lies in resisting this impulse. Not in some meek turning-the-other-cheek kind of way, but in realizing that handing down violence will only, ever, perpetuate the cycle of violence. It is breaking – or, failing that, breaking out of – this cycle we must work towards.

As we do, however, we are faced with another kind of problem. This is something Lauren Berlant explicitly addressed in one of their best texts ever, Feminism and the Institutions of Intimacy: most people in and around institutions of knowledge production find authority appealing. This, of course, does not mean that all intellectual authority lends itself automatically to objectification (on either side), but it does and will happen. Some of this, I think, is very comprehensively addressed in Amia Srinivasan’s The Right to Sex; some of it is usefully dispensed with by Berlant, who argues against seeing pedagogical relations as indexical for transference (or the other way around?). But, as important as these insights are, questions of knowledge – and thus questions of authority – are not limited to questions of pedagogy. Rather, they are related to the very relational nature of knowledge production itself.

For any woman who is an intellectual, then, the challenge rests in walking the very thin line between seduction and reduction – that is, the degree to which intellectual work (an argument, a book, a work of art) has to seduce, but in turn risks being reduced to an act of seduction (the more successful it is, the more likely this will happen). Virginie Despentes’ King Kong Theory, which I’m reading at the moment (shout out to Phlox Books in London where I bought it), is a case in point. Despentes argues against reducing women’s voices to ‘experience’, or to women as epistemic object (well, OK, the latter formulation is mine). Yet, in the reception of the book, it is often Despentes herself – her clothes, her mannerisms, her history, her sexuality – that takes centre stage.

Come to think of it, this version of ‘damned if you do, damned if you don’t’ applies to all women’s performances: how many times have I heard people say they find, for instance, Judith Butler’s or Lauren Berlant’s arguments or language “too complex” or “too difficult”, only to reduce them, on the occasions when they do make an effort to engage, to being “about gender” or “about sexuality” (it hardly warrants mentioning that the same people are likely to diligently plod through Heidegger, Sartre or Foucault without batting an eyelid and, speaking of sexuality, without reducing Foucault’s work on power to it). The implication, of course, is that writers or thinkers who are not men have the obligation to persuade, to enchant readers/consumers into thinking their argument is worth giving time to.

This is something I’ve often observed in how people relate to the arguments of women and nonbinary intellectuals: “They did not manage to convince me” or “Well, let’s see if she can get away with it”. The problem is not just the casualized use of pronouns (note how men thinkers retain their proper names: Sartre, Foucault, but women slip into being a “she”). It’s the expectation that it is their (her) job to convince you, to lure you. Because, of course, your time is more valuable than hers, and of course, there are all these other men you would/should be reading instead, so why bother? It is not the slightest bit surprising that this kind of intellectual habit lends itself too easily to epistemic positioning that leads to epistemic erasure, but also that it becomes all too easily perpetuated by everyone, including those who claim to care about such things.

One of the things I hope I managed to convey in the Ethics of Ambiguity reading group I ran at the end of 2022 and beginning of 2023 is to not read intellectuals who are not white men in this way. To not sit back with your arms folded and let “her” convince you. Simone Weil, another genius – and a woman – wrote that attention is the primary quality of love we can give to each other. The quality of intellectual attention we give to pieces we read has to be the same to count as anything but a narrow, self-aggrandizing gesture. In other words, a commitment to equality means nothing without a commitment to equality of intellectual attention, and the constant practice and reflection required to sustain and improve it.

Enjoyed this? Try https://journals.sagepub.com/doi/10.1177/00113921211057609

and https://www.thephilosopher1923.org/post/philosophy-herself

You can’t ever go back home again

At the start of December, I took the boat from Newcastle to Amsterdam. I was in Amsterdam for a conference, but it is also true I used to spend a lot of time in Amsterdam – Holland in general – both for private reasons and for work, between 2010 and 2016. Then, after a while, I took a train to Berlin. Then another, sleeper train, to Budapest. Then, a bus to Belgrade.

To wake up in Eastern Europe is to wake up in a context in which history has always already happened. To state this, of course, is a cliché; thinking, and writing, about Eastern Europe is always already infused with clichés. Those of us who come from this part of the world – what Maria Tumarkin marks so aptly as “Eastern European elsewheres” – know. In England, we exist only as shadow projections of a self, not even important enough to be former victims/subjects of the Empire. We are born into the world where we are the Other, so we learn to think, talk, and write of ourselves as the Other. Simone de Beauvoir wrote about this; Frantz Fanon wrote about this too.

To wake up in Berlin is to already wake up in Eastern Europe. This is where it used to begin. To wake up in Berlin is to know that we are always already living in the aftermath of a separation. In Eastern Europe, you know the world was never whole.

I was eight when the Berlin Wall fell. I remember watching it on TV. Not long after, I remember watching a very long session of the Yugoslav League of Communists (perhaps this is where my obsession with watching Parliament TV comes from?). It seemed to go on forever. My grandfather seemed agitated. My dad – whom I only saw rarely – said “Don’t worry, Slovenia will never secede from Yugoslavia”. “Oh, I think it will”, I said*.

When you ask “Are you going home for Christmas?”, you mean Belgrade. To you, Belgrade is a place of clubs and pubs, of cheap beer and abundant grilled meat**. To me, Belgrade is a long dreadful winter, smells of car fumes and something polluting (coal?) used for fuel. Belgrade is waves of refugees and endless war I felt powerless to stop, despite joining the first anti-regime protest in 1992 (at the age of 11), organizing my class to join one in 1996 (which almost got me kicked out of school, not for the last time), and inhaling oceans of tear gas when the regime actually fell, in 2000.

Belgrade is briefly hoping things would get better, then seeing your Prime Minister assassinated in 2003; seeing looting in the streets of Belgrade after Kosovo declared independence in 2008, and, while watching the latter on YouTube from England, deciding that maybe there was nowhere to return to. Nowadays, Belgrade is a haven of crony capitalism equally indebted to Russian money, Gulf real estate, and Chinese fossil fuel exploitation that makes its air one of the most polluted in the world. So no, Belgrade never felt like home.

Budapest did, though.

It may seem weird that the place I felt most at home is a place where I barely spent three years. My CV will testify that I lived in Budapest between 2010 and 2013, first as a visiting fellow, then as an adjunct professor at the Central European University (CEU). I don’t have a drop of Hungarian blood (not that I know of, at least, though with the Balkans you can never tell). My command of the language was, at best, perfunctory; CEU is an American university and its official language is English. Among my friends – most of whom were East-Central European – we spoke English; some of us had other languages in common, but English it was, and still is. And while this group of friends did include some people who would be described as ‘locals’ – that is, Budapest-born and raised – we were, all of us, outsiders, brought together by something that was more than chance and a shared understanding of what it meant to be part of the city***.

Of course, the CV will say that what brought us together was the fact that we were all affiliated with CEU. But CEU is no longer in Budapest; since 2020, it has relocated to Vienna, forced out by the Hungarian regime’s increasingly relentless campaign against anything that smacks of ‘progressivism’ (are you listening, fellow UK academics?). Almost all of my friends had left before that, just like I did. In 2012, increasingly skeptical about my chances of acquiring a permanent position in Western academia with a PhD that said ‘University of Belgrade’ (imagine, it’s not about merit), I applied to do a second PhD at Cambridge. I was on the verge of accepting the offer when I also landed that most coveted of academic premia, a Marie Curie postdoc position attached to an offer of a permanent – tenured – position, in Denmark****.

Other friends also left. For jobs. For partners’ jobs. For parenthood. For politics. In academia, this is what you did. You swallowed and moved on. Your CV was your life, not its reflection.

So no, there is no longer a home I can return to.

And yet, once there, it comes back. First as a few casually squeezed out words to the Hungarian conductors on the night train from Berlin, then, as a vocabulary of 200+ items that, though rarely used, enabled me to navigate the city, its subways, markets, and occasionally even public services (the high point of my Hungarian fluency was being able to follow – and even part-translate – the Semmelweis Museum curator’s talk! :)). Massolit, the bookshop which also exists in Krakow, which I’ve visited on a goodbye-to-Eastern-Europe trip from Budapest via Prague and Krakow to Ukraine (in 2013, right before the annexation). Gerlóczy utca, home of the French restaurant in which I once left a massive tip for a pianist who played so beautifully that I was happy to be squeezed at the smallest table, right next to the coat stand. Most, which means ‘bridge’ in Serbian (and Bosnian, and Croatian) and ‘now’ in Hungarian. In Belgrade, I now sometimes rely on Google Maps to get around; in Budapest, the map of the city is buried so deep in my mental compass that I end up wherever I am supposed to be going.

This is what makes the city your own. Flow, like the Danube, massive as it meanders between the city’s two halves, which do not exactly make a whole. Like that book by psychologist Mihaly Csikszentmihalyi, which is a Hungarian name, btw. Like my academic writing, which, uncoupled from the strictures of British university term, flows.

Budapest has changed, but the old and the new overlay in ways that make it impossible not to remember. Like the ‘twin’ cities of Besźel and Ul Qoma in the fictional universe of China Miéville’s The City and the City (the universe was, of course, modelled on Berlin, but Besźel is Budapest out and out, save for the sea), the memory and its present overlap in distinct patterns that we are trained not to see. Being in one precludes being in the other. But there are rumours of a third city, Orciny, one that predates both. Believing in Orciny is considered a crime, though. There cannot be a place where the past and the future are equally within touching distance. Right?

CEU, granted, is no longer there as an institution; though the building (and the library) remains, most of its services, students, and staff are now in Vienna. I don’t even dare go into the campus; the last time I was there, in 2017, I gave a keynote about how universities mediate disagreement. The green coffee shop with the perennially grim-faced person behind the counter, the one where we went to get good coffee before Espresso Embassy opened, is no longer there. But Espresso Embassy still stands; bigger. Now, of course, there are places to get good coffee everywhere: Budapest is literally overrun by them. The best I pick up is from the Australian coffee shop, which predates my move. Their shop front celebrates their 10th anniversary. Soon, it will be 10 years since I left Budapest.

Home: the word used to fill me with dread. “When are you going home?”, they would ask in Denmark, perhaps to signify the expectation I would be going to Belgrade for the winter break, perhaps to reflect the idea that all immigrants are, fundamentally, guests. “I live here”, I used to respond. “This is my home”. On bad days, I’d add some of the combo of information I used to point out just how far from assumed identities I was: I don’t celebrate Christmas (I’m atheist, for census purposes); if I did, it would be on a different date (Orthodox Christian holidays in Serbia observe the Julian calendar, which is 13 days behind the Gregorian); thanks, I’ll be going to India (I did, in fact, go to India including over the Christmas holidays the first year I lived in Denmark, though not exactly in order to spite everyone). But above and beyond all this, there was a simpler, flatter line: home is not where you return, it’s the place you never left.

In Always Coming Home, another SF novel about finding the places we (n)ever left, Ursula K. Le Guin retraces a past from the point of view of a speculative future. This future is one in which the world – in fact, multiple worlds – have failed. Like Eastern Europe, it is a sequence of apocalypses whose relationship can only be discovered through a combination of anthropology and archaeology, but one that knows space and its materiality exist only as we have already left them behind; we cannot dig forwards, as it were.

Am I doing the same, now? Am I coming home to find out why I have left? Or did I return from the future to find out I have, in fact, never left?

Towards the end of The City and the City, the main character, Tyador Borlú, gets apprehended by the secret police monitoring – and punishing – instances of trespass (Breach) between two cities, the two worlds. But then he is taken out by one of the Breach – Ashil – and led through the city in a way that allows him to finally see them not as distinct, but as parts of a whole.

Everything I had been unseeing now jostled into sudden close-up. Sound and smell came in: the calls of Besźel; the ringing of its clocktowers; the clattering and old metal percussion of the trams; the chimney smell; the old smells; they came in a tide with the spice and Illitan yells of Ul Qoma, the clatter of a militsya copter, the gunning of German cars. The colours of Ul Qoma light and plastic window displays no longer effaced the ochres and stone of its neighbour, my home.

‘Where are you?’ Ashil said. He spoke so only I could hear. ‘I . . .’

‘Are you in Besźel or Ul Qoma?’

‘. . . Neither. I’m in Breach.’ ‘You’re with me here.’

We moved through a crosshatched morning crowd. ‘In Breach. No one knows if they’re seeing you or unseeing you. Don’t creep. You’re not in neither: you’re in both.’

He tapped my chest. ‘Breathe.’

(Loc. 3944)

Breathe.

*Maybe this is where the tendency not to be overtly impressed by the authority of men comes from (or authority in general, given my father was a professor of sociology and I was, at that point, nine years old, and also right).

** Which I also do not benefit from, as I do not eat meat.

*** Some years later, I will understand that this is why the opening lines of the Alexandria Quartet always resonated so much.

**** How I ended up doing a second PhD at Cambridge after all and relocating to England permanently is a different story, one that I part-told here.

On reparative reading and critique in/of anthropology: postdisciplinary perspectives on discipline-hopping

*This is a more-or-less unedited text of the plenary (keynote) address to the international conference ‘Anthropology of the future/The Future of Anthropology‘, hosted by the Institute of Ethnography of the Serbian Academy of Sciences and Arts, in Viminacium, 8-9 September 2022. If citing, please refer to as Bacevic, J. [Title]. Keynote address, [Conference].

Hi all. It’s odd to be addressing you at a conference entitled ‘Anthropology of the Future/The Future of Anthropology’, as I feel like an outsider for several reasons. Most notably, I am not an anthropologist. This is despite the fact that I have a PhD in anthropology, from the University of Belgrade, awarded in 2008. What I mean is that I do not identify as an anthropologist, I do not work in a department or institute of anthropology, nor do I publish in anthropology journals. In fact, I went so far in the opposite direction that I got another PhD, in sociology, from the University of Cambridge. I work at a department of sociology at Durham University, a university in the north-east of England that looks remarkably like Oxford and Cambridge. So I am an outsider in two senses: I am not an anthropologist, and I no longer live, reside, or work in Serbia. However, between 2004 and 2007 I taught at the Department of Ethnology and Anthropology of the University of Belgrade, and also briefly worked at the Institute that is organizing this very conference, as part of a research stipend awarded by the Serbian Ministry of Science to young, promising, scientific talent. Between 2005 and 2007, and then again briefly in 2008-9, I was the Programme Leader for Anthropology in Petnica Science Centre. I don’t think it would be an exaggeration to say that I was, once, anthropology’s future; and anthropology was mine. So what happened since?

By undertaking a retelling of a disciplinary transition – what would in common parlance be dubbed ‘career change’ or ‘reorientation’ – my intention is not to engage in autoethnography, but to offer a reparative reading. I borrow the concept of reparative reading from the late theorist Eve Kosofsky Sedgwick’s essay entitled “On paranoid reading and reparative reading, or: You’re so paranoid, you probably think this essay is about you”, first published in 1997 and then, with edits, in 2003; I will say more about its content and key concepts shortly.

For the time being, however, I would like to note that this disinclination towards autoethnography was one of the major reasons why I left anthropology; it was matched by the desire to do theory, by which I mean the possibility of deriving mid-range generalizations about human behaviour that could aspire not to be merely local – by which I mean, not to apply only to the cases studied. This, as we know, is not particularly popular in anthropology. This particular brand of ethnographic realism was explicitly targeted for critique during anthropology’s postmodern turn. On the other hand, Theory in anthropology itself had relatively little to commend it, all too easily and too often developing into a totalizing master-narrative of the early evolutionism or, for that matter, its late 20th- and early 21st-century correlates, including what is usually referred to as cognitive psychology, a ‘refresh’ of evolutionary theory I had the opportunity to encounter during my fellowship at the University of Oxford (2007-8). So there were, certainly, a few reasons to be suspicious of theory in anthropology.

For someone theoretically inclined, then, one option was to flee into another discipline. Doing a PhD in philosophy in the UK is a path only open to people who have undergraduate degrees in philosophy (and I, despite a significant proportion of my undergrad coursework going into philosophy, had not), which is why a lot of the most interesting work in philosophy in the UK happens – or at least used to happen – in other departments, including literature and language studies, the Classics, gender studies, or social sciences like sociology and geography. I chose to work with those theorists who had found their institutional homes in sociology; I found a mentor at the University of Cambridge, and the rest is history (by which I mean I went on to a postdoctoral research fellowship at Cambridge and then on to a permanent position at Durham).

Or that, at any rate, is one story. Another story would tell you that I got my PhD in 2008, the year when the economic crisis hit, and job markets collapsed alongside several other markets. On a slightly precarious footing, freshly back from Oxford, I decided to start doing policy research and advising in an area I had been researching before: education policies, in particular as part of processes of negotiation of multiple political identities and reconciliation in post-conflict societies. Something that had hitherto been a passion, politics, soon became a bona fide object of scholarly interest, so I spent the subsequent few years developing a dual career, eventually a rather high-profile one, as, on the one hand, policy advisor in the area of postconflict higher education, and, on the other, visiting (adjunct) lecturer at the Central European University in Budapest, after doing a brief research fellowship in its institute of advanced study. But because I was not educated as a political scientist – I did not, in other words, have a degree in political science; anthropology was closer to ‘humanities’ and my research was too ‘qualitative’ (this is despite the fact I taught myself basic statistics as well as relatively advanced data analysis) – I could not aspire to a permanent job there. So I started looking for routes out, eventually securing a postdoc position (a rather prestigious Marie Curie, and a tenure-track one) in Denmark.

I did not like Denmark very much, and my boss in this job – otherwise one of the foremost critics of the rise of audit culture in higher education – turned out to be a bully, so I spent most of my time at my two fieldwork destinations, the University of Bristol, UK, and the University of Auckland, New Zealand. I left after two years, taking up an offer of a funded PhD at Cambridge I had previously turned down. Another story would tell you that I was disappointed with the level of corruption and nepotism in Serbian academia and so decided to leave. Another, attached with disturbing frequency to women scholars, would tell you that, being involved in an international relationship, I naturally sought to move somewhere I could settle down with my partner, even if that meant abandoning the tenured position I had at Singidunum University in Serbia (this reading is, by the way, so prominent and so unquestioned that after I announced I had got the Marie Curie postdoc and would be moving to Denmark several people commented “Oh, that makes sense, isn’t your partner from somewhere out there” – despite the fact my partner was Dutch).

Yet another story, of course, would join the precarity narrative with the migration/exile and decoloniality narrative, stipulating that as someone who was aspiring to do theory I (naturally) had to move to the (former) colonial centre, given that theory is, as we know, produced in the ‘centre’ whereas countries of the (semi)periphery are only ever tasked with providing ‘examples’, ‘case-‘, or, at best, regional or area studies. And so on and so on, as one of the few people who have managed to trade their regional academic capital for a global (read: Global North/-driven and -defined) one, Slavoj Žižek, would say.

The point here is not to engage in a demonstration of multifocality by showing all these stories could be, and in a certain register, are true. It is also not to point out that any personal life-story or institutional trajectory can be viewed from multiple (possibly mutually irreconcilable) registers, and that we pick a narrative depending on occasion, location, and collocutor. Sociologists have produced thorough analyses of how CVs, ‘career paths’ or trajectories in academia are narratively constructed so as to establish a relatively seamless sequence that adheres to, but also, obviously, by virtue of doing that, reproduces ideas and concepts of ‘success’ (and failure; see also ‘CV of failures‘). Rather, it is to observe something interesting: all these stories, no matter how multifocal or multivocal, also posit master narratives of social forces – forces like neoliberalism, or precarity, for instance; and a master narrative of human motivation – why people do the things they do, and what they desire – things like permanent jobs and high incomes, for instance. They read a direction, and a directionality, into human lives; even if – or, perhaps, especially when – they narrate instances of ‘interruption’, ‘failure’, or inconsistency.

This kind of reading is what Eve Kosofsky Sedgwick dubs paranoid reading. Associated with what Paul Ricoeur termed ‘hermeneutics of suspicion’ in Nietzsche, Marx, and Freud, and building on the affect theories of Melanie Klein and Silvan Tomkins, paranoid reading is a tendency that has arguably become synonymous with critique, or critical theory in general: to assume that there is always a ‘behind’, an explanatory/motivational hinterland that, if only unmasked, can not only provide a compelling explanation for the past, but also an efficient strategy for orienting towards the future. Paranoid reading, for instance, characterizes a lot of the critique in and of anthropology, not least of the Writing Culture school, including in the ways the discipline deals with the legacy of its colonial past.

To me, it seems that anthropology in Serbia today is primarily oriented towards a paranoid reading, both in relation to its present (and future) and in relation to its past. This atmosphere is one it shares with much of the social sciences and humanities internationally: a sense of increasing instability/hostility, of being ‘under attack’ not only by governments’ neoliberal policies but also by increasingly conservative and reactionary social forces that see any discipline with an openly progressive, egalitarian and inclusive political agenda as leftie woke Satanism, or something. This paranoia, however, is not limited to those agents or social forces clearly inimical or oppositional to its own project; it extends, sometimes, to proximate and cognate disciplines and forms of life, including sociology, and to different fractions or theoretical schools within anthropology, even those that should be programmatically opposed to paranoid styles of inquiry, such as the phenomenological or ontological turn – as witnessed, for instance, by the relatively recent debate between the late David Graeber and Eduardo Viveiros de Castro on ontological alterity.

Of course, in the twenty-five years since the first edition of Sedgwick’s essay, many species of theory that explicitly diverge from the paranoid style of critique have evolved, not least the ‘postcritical’ turn. But, curiously, when it comes to understanding the conditions of our own existence – that is, the conditions of our own knowledge production – we revert to paranoid readings not only of the social, cultural, and political context, but also of people’s motivations and trajectories. As I have argued elsewhere, this analytical gesture reinscribes its own authority by theoretically disavowing it. To paraphrase the title of Sedgwick’s essay, we are so anti-theoretical that we are failing to theorize our own inability to stop aspiring to the position of power we believe our discipline, or our predecessors, once occupied – the same power we believe is responsible for our present travails. In other words, we are failing to theorize ambiguity.

My point here is not to chastise anthropology in particular, or critical theory in general, for failing to live up to the political implications of its own ontological commitments (or the other way round?); I have explained at length elsewhere – notably in “Knowing neoliberalism” – why I think this is an impossibility (to summarize, it has to do with the inability to undo the conditions of our own knowledge – to, barely metaphorically, cut off the epistemological branch we sit on). Rather, my question is what we could learn if we tried to think about the history, and thus the future, of anthropology, and our position in it, from a reparative, rather than paranoid, position.

This, in itself, is a fraught process; not least because anthropology (including in Serbia) has not been exempt from revelations concerning sexual harassment, and it would not be surprising if many more were yet to come. In the context of re-encountering past trauma and violence, not least the violence of sexual harassment, it is nothing if not natural to re-examine every bit of the past, but also to endlessly, tirelessly scrutinize the present: was I there? Did I do something? Could I have done something? What if what I did made things worse? From this perspective, it is fully justified to ask what it could, possibly, mean to turn towards a reparative reading – can it even, ever, be justified?

Sedgwick – perhaps not surprisingly – has relatively little to say about what reparative reading entails. From my point of view, reparative reading is the kind of reading oriented towards reconstructing the past in a way that does not seek to avoid, erase or deny past traumas, but engages with the narrative so as to afford a care of the self and connection – or reconnection – with past selves, including those that made mistakes or have a lot to answer for. It is, in essence, a profoundly different orientation towards the past as well as the future: one that refuses to reproduce cultures – even cultures of critique – and to claim that the future will, in some ways, be exactly like the past.

Sedgwick aligns this reorientation with queer temporalities, characterized by a relationship to time that refuses to see it in (usually heteronormatively-coded) generationally reproductive terms: my father’s father did this, who in turn passed it to my father, who passed it to me, just like I will pass it to my children. Or, to frame this in more precisely academic terms: my supervisor(s) did this, so I will do it [in order to become successful/recognized like my academic predecessors], and I will teach my students/successors to do it. Understanding that it can be otherwise, and that we can practise other, including non-generational (non-generative?) and non-reproductive politics of knowledge/academic filiation/intellectual friendship is, I think, one important step in making the discussion about the future, including of scientific discipline, anything other than a vague gesturing towards its ever-receding glorious past.

Of course, as a straight and, in most contexts, cis-passing woman, I am a bit reluctant to claim the label of queerness, especially when speaking in Serbia, an intensely and increasingly institutionally homophobic and compulsorily heterosexual society. However, I hope my queer friends, partners, and colleagues will forgive me for borrowing queerness as a term to signify a refusal to embody or conform to diagnostic narratives (neoliberalism, precarity, [post]socialism); a refusal of, or disinvestment from, normatively and regulatively prescribed vocabularies of motivation and objects of desire – a permanent (tenured) academic position; a stable and growing income; a permanent relationship culminating in children and a house with a garden (I have a house, but I live alone and it does not have a garden). And, of course, the ultimate betrayal for anyone who has come from ‘here’ and ‘made it’ ‘over there’: the refusal to perform the role of an academic migrant in a way that would allow one to settle, once and for all, the question of whether everything is better ‘over there’ or ‘here’, and thus vindicate the omnipresent reflexive chauvinism (‘corrupt West’) or, alternatively, autochauvinism (‘corrupt Serbia’).

What I hope to have achieved instead, through this refusal, is to offer a postdisciplinary or at least undisciplined narrative, and an example of how to extract sustenance from cultures inimical to your life plans or intellectual projects. To quote from Sedgwick:

“The vocabulary for articulating any reader’s reparative motive toward a text or a culture has long been so sappy, aestheticizing, defensive, anti-intellectual, or reactionary that it’s no wonder few critics are willing to describe their acquaintance with such motives. The prohibitive problem, however, has been in the limitations of present theoretical vocabularies rather than in the reparative motive itself. No less acute than a paranoid position, no less realistic, no less attached to a project of survival, and neither less nor more delusional or fantasmatic, the reparative reading position undertakes a different range of affects, ambitions, and risks. What we can best learn from such practices are, perhaps, the many ways selves and communities succeed in extracting sustenance from the objects of a culture—even of a culture whose avowed desire has often been not to sustain them.”

All of the cultures I’ve inhabited have been this to some extent – Serbia for its patriarchy, male-dominated public sphere, and excessive gregarious socialisation, which sits very uncomfortably with my introversion; England for its horrid anti-immigrant attitudes, only marginally (and not always profitably) mediated by my ostensible ’Whiteness’; Denmark for its oppressive conformism; Hungary – where I was admittedly happiest, among the plethora of other English-speaking cosmopolitan academics – for being unable to provide the institutional home I required (eventually, as is well known, it could not provide one even to CEU). But, in a different way, they have also been incredibly sustaining; I love my friends, many of whom are academic friends (former colleagues) in Serbia; I love Danish egalitarianism and its absolute refusal of excess; and I love England for many things – in no particular order, the most exciting intellectual journey, some great friendships (many of those, I do feel the need to add, with other immigrants), and the most beautiful landscapes, especially in the North-East, where I live now (I also particularly loved New Zealand, but hope to expand on that on a different occasion).

To theorize from a reparative position is to understand that all of these things can be true at the same time. That there is, in other words, no pleasure without pain; that the things that sustain us will, in most cases, also harm us. It is to understand that there is no complete career trajectory, just as there is no position, epistemic or otherwise, from which we could safely and once and for all answer the question of what the future will be like. It is to refuse to pre-emptively know the future, not least so that we can be surprised.

Sally’s boys, Daddy’s girls

I’ve finished reading Sally Rooney’s most recent novel, Beautiful World, Where Are You. It turned out to be much better than I expected – I was an early adopter of Conversations with Friends (‘read it – and loved it – before it was cool’), but subsequently found Normal People quite flat – by which I mean I spent most of the first half struggling, but found the very last bits actually quite good. In an intervening visit to The Bound, I also picked up one of Rooney’s short stories, Mr Salary, and read it on the metro back from Whitley Bay.

I became intrigued by the ‘good boy’ characters of both – Simon in Beautiful World, Nathan in Mr Salary. For context (and hopefully without too many spoilers): Simon is the childhood friend-cum-paramour of Eileen, who is the best friend of Alice (BW’s narrator, and Rooney’s likely alter ego); Nathan, the titular character of Mr Salary, is clearly a character study for Simon, and in a similar – avuncular – relationship to the story’s narrator. Both Simon and Nathan are older than their (potential) girlfriends by enough to make the relationship illegal, or at least slightly predatory, when they first meet, but also to hold it out as a realistic and thus increasingly tantalizing promise once the women have grown up a bit. But neither man is a predatory creep; in fact, exactly the opposite. They are kind, understanding, unfailingly supportive, and forever willing to come back to their volatile, indecisive, self-doubting, and often plainly unreliable women.

Who are these fantastic men? Here is an almost perfect reversal of the traditional romance portrayal of gender roles – instead of unreliable, egotistic, unsure-about-their-own-feelings-and-how-to-demonstrate-them guys, we are getting more-or-less the same, but with girls, with the men providing a reliable safe haven from which they can weather their emotional, professional, and sexual storms. This, of course, is not to deny that women can be as indecisive and as fickle as the stereotypical ‘Bad Boys’ of toxic romance; it’s to wonder what this kind of role reversal – even in fantasy, or the para-fantasy para-ethnography that is contemporary literature – does.

On the one hand, men like Simon and Nathan may seem like a godsend to anyone who has ever gone through the cycle of emotional exhaustion connected to relationships with people who are, purely, assholes. (I’ve been exceptionally lucky in this regard, insofar as my encounters with the latter kind were blissfully few; but sufficient to confirm that this kind does, indeed, exist in the wild.) I mean, who would not want a man who is reliable, supportive of your professional ambitions, patient, organized, good in bed, and does laundry (yours included)? Someone who could withstand your emotional rollercoasters *and* buy you a ticket home when you needed it – and be there waiting for you? Almost like a personal assistant, just with the emotions involved.

And here, precisely, is the rub. For what these men provide is not a model of a partnership; it is a model of a parent. The way they relate to the women characters – and, obviously, the narrative device of age difference amplifies this – is less that of a partner and more that of a benevolent older brother or, in an (only slight) paraphrase of Winnicott, a good-enough father.

In Daddy Issues, Katherine Angel argues that feminism never engaged fully with the figure of the father – other than as the absent, distant or mildly (or not so mildly) violent and abusive figure. But somewhere outside the axis between Sylvia Plath’s Daddy and Valerie Solanas’ SCUM manifesto is the need to define exactly what the role of the father is once it is removed from its dual shell of object of hate/object of love. Is there, in fact, a role at all?

I have been thinking about this a lot, not only in relation to the intellectual (and political) problem of relationality in theory/knowledge production practices – what Sara Ahmed so poignantly summarized as ‘can one not be in relation to white men?’ – but also personally. Having grown up effectively without a father (who was also unknown to me in my early childhood), what, exactly, was the Freudian triangle going to be in my case? (No, this does not mean I believe the Electra complex applies literally; if you’re looking to mansplain psychoanalytic theory, I’d strongly urge you to reconsider, given that I read Freud at the age of 13 and have read post-Freudians since; I’d also urge you to read the following paragraph and consider how it relates to the legacies of the Anna Freud/Melanie Klein divide, something Adam Phillips writes about.)

In the domain of theory, claims of originality (or originarity, as in coining or discovering something) are nearly always attributed to men, while women’s contributions are almost unfailingly framed in terms of ‘application or elaboration of *his* ideas’ or ‘[minor] contribution to the study’ (I’ve written about this in the cases of Sartre/de Beauvoir and of Robert Merton, Harriet Zuckerman, and the ‘Matthew Effect’, but other examples abound). As Marilyn Frye points out in “The Politics of Reality”, the force of genealogy does not necessarily diminish even for those whose criticism of patriarchy extends to refusing anything to do with men altogether; Frye remarks on having observed many a lesbian separatist still asking to be recognized – intellectually and academically – by the white male ‘forefathers’ who sit on academic panels. The shadow of the father is a long one. For those of us who have chosen to be romantically involved with men, and have chosen to work in the patriarchal, misogynistic institutions that universities surely are, not relating to men at all is not exactly an option.

It is from this perspective that I think we would benefit from a discussion of how men can be reliable partners without turning into good-enough daddies, because – as welcome and as necessary as this role sometimes is, especially for women whose own fathers were not – it is ultimately not a relationship between two adults. I remember reading an early feminist critique of the Bridget Jones industry that really hit the nail on the head: it was not so much Jones’ dedication to all the things ‘60s and ‘70s feminism abhorred – the obsession with weight loss and the pursuit of ill-advised men (i.e. Daniel Cleaver); it was, even more, that when ‘Mr Right’ (Mark Darcy, the barely disguised equivalent of Austen’s Mr Darcy) arrives, he still falls for Bridget – despite the utter absence of anything to recommend her to a seemingly top-notch human rights attorney, from elementary competence at her job to the capacity to feed herself in any form that departs from binge eating. Which really raises the question: what does Mr Darcy see in Bridget?

Don’t get me wrong: I am sure that there are men who are attracted to the chaotic, manic-pixie-who-keeps-losing-her-credit-card kind of girl. Regardless of what manifestation or point on the irresponsibility spectrum they occupy, these women certainly play a role for such men – allowing them to feel useful, powerful, respected, even perhaps feeding a bit their saviour complex. But ultimately, playing this role leaves these men entirely outside of the relationship; if the only way they relate to their partners is by reacting (to their moods, their needs, their lives), this ultimately absolves them of equal responsibility for the relationship. Sadly, there is a way to avoid equal division of the ‘mental load’ even while doing the dishes.

And I am sure this does something for the women in question too; after all, there is nothing wrong in knowing that there *is* going to be someone to pick you up if you go out and there are no taxis to get you back home, who will always provide a listening ear and a shoulder to cry on, seemingly completely irrespective of their own needs (Simon is supposed to have a relatively high-profile political job, yet, interestingly, never feels tired when Eileen calls or offers to come over). But what at first seems like a fantasy come true – a reliable man who is not afraid to show his love and admiration – can quickly turn into a somewhat toxic set of interdependencies: why, for instance, learn to drive if someone is always there to pick you up and drop you off? (Honestly: even among the supposedly-super-egalitarian straight partnerships I know, the number of men drivers vastly outstrips that of women.) The point is not to always insist on being a jack-of-all-trades (nor on being the designated driver), as much as to realize that most kinds of freedom (for instance, the freedom to drink when out) embed a whole set of dependencies (for instance, dependence on urban networks of taxis/Ubers, or on kind, self-effacing man-saviours there to pick you up – in The Cars’ slightly creepy formulation, drive you home).

Of course, as Simone de Beauvoir recognized, there is no freedom without dependency. We cannot, simply, will ourselves free without willing the same for others; but, at the same time, we cannot will them to be free, as this turns them into objects. In The Ethics of Ambiguity – one of the finest books of existentialist philosophy – de Beauvoir turns this into the main conundrum (thus: source of ambiguity) for how to act ethically. Acknowledging our fundamental reliance on others does not mean we need to remain locked into the same set of interdependencies (e.g., we could build safe and reliable public transport, and then we would not have to rely on people to drive us home); but it also does not mean we can break out of them by denying or reversing their force – not least because that, ultimately, does not work.

The idea that gender equality, especially in heterosexual partnerships, benefits from a reversal of the trope of the uncommitted, eternally unreliable bachelor that tips the balance in the entirely opposite direction (other than for very short periods of time, of course) strikes me as one manifestation of the long tail of the post- or anti-feminist backlash – admittedly, a milder and certainly less harmful one than, for instance, the idea that feminism means ‘women are better than men’ or that feminists seek to eliminate men from politics, work, or anything else (both, worryingly, have filtered into public discourse). It also strikes me that the long-suffering Sacrificial Men who have politely taken shit from their objects of affection can all too easily be converted into Men’s Rights Activists or incels if and when their long suffering fails to yield results – for instance, when their Manic Pixie leaves with someone with a spine (not a Bad Boy, just a man with boundaries) – or when they realize that the person they have been playing Good Daddy to has finally grown up and left home.

They’ll come for you next

I saw ‘A Night of Knowing Nothing’ tonight, probably the best film I’ve seen this year (alongside The Wheel of Fortune and Fantasy, but they’re completely different genres – I could say ‘A Night of Knowing Nothing is the best political film I’ve seen this year’, but that would take us down the annoying path of ‘what is political’). There was only one other person in the cinema; this may be a depressing reflection of the local audiences’ autofocus (though this autofocus, at least in my experience, did tend to encompass corners of the former Empire), but given my standard response to the lovely people at Tyneside‘s ‘Where would you like to sit?’ – ‘Close to the aisle, as far away from other people’ – I couldn’t complain.

The film is part documentary, part fiction, told from the perspective of an anonymous woman student (who goes by ‘L.’) whose letters document the period of student strikes at the Film and Television Institute of India (FTII), but also, more broadly, the relationship between the ascendance of Modi’s regime and the student protests at Jawaharlal Nehru University (JNU) in New Delhi in 2016, as well as related events – including violent attacks by masked mobs on JNU and arrests at Aligarh Muslim University in 2020*.

Where the (scant) reviews are correct is that the film is also about religion, caste, and the (both ‘slow’ and rapid) violence unleashed by supporters of the nationalist (‘Hindutva’) project in the Bharatiya Janata Party (BJP) and its student wing, the Akhil Bharatiya Vidyarthi Parishad (ABVP).

What they don’t mention, however, is that it is also about student (and campus) politics, solidarity, and what to do when your right to protest is literally being crushed (one particularly harrowing scene – at least to anyone who has experienced police violence – consists of CCTV footage of what seem to be uniformed men breaking into the premises of one of the universities and then randomly beating students trying to escape through the small door; according to reports, policemen were on site but did nothing). Many of the people named in the film – both through documentary footage and L.’s letters – will end up in prison, some possibly tortured (one of L.’s interlocutors says he does not want to talk about it for fear of dissuading other students from protest); one will commit suicide. Yet throughout all this, what the footage shows are nights of dancing; impassioned speeches; banners and placards that call out the neo-nationalist government and its complicity not only with violence but also with perpetuating poverty, casteism, and Islamophobia. And solidarity, solidarity, solidarity.

This is the message that transpires most clearly throughout the film. The students have managed to connect two things: the role of education in perpetuating class/caste divisions – the dismissiveness and abuse towards Dalit students, the tuition increases meant to exclude those whose student bursaries support their families too – and the strengthening of nationalism, or neo-nationalism. That the right-wing rearguard rules through stoking envy and resentment towards the ‘undeserving’ poor (e.g. ‘welfare scroungers’) is not new; that it can use higher education, including initiatives aimed at widening participation, to do this, is. In this sense, the strategy of Modi’s supporters seems to be to co-opt the contempt for ‘lazy’ and ‘privileged’ students (particularly those with state bursaries) and turn it into an accusation of ‘anti-nationalism’, which is equated with being critical of any governmental policy that deepens existing social inequalities.

It wouldn’t be very anthropological to draw easy parallels with the UK government’s war on Critical Race Theory, which equally tends to locate racism in attempts to call it out, rather than in the institutions – and policies – that perpetuate it; but the analogy almost presents itself. Where it fails, more obviously, is that students – and academics – in the UK still (but only just) have a broader scope for protest than their Indian counterparts. Of course, the new Higher Education (Freedom of Speech) Bill proposes to eliminate some of that, too. But until it does, it makes sense to remember that rights that are not exercised tend to get lost.

Finally, what struck me about A Night of Knowing Nothing is the remarkable show of solidarity not only from workers, actors, and just (‘normal’) people, but also from students across campuses (it bears remembering that in India these are often universities in different states, thousands of miles away from each other). This was particularly salient when set against the increasingly localized nature of the fights over both pensions and the ‘Four Fights’ among union members in UK higher education. Of course, union laws make it mandatory to have both a local and a national mandate for strike action, and it is true that we express solidarity when cuts are threatened to colleagues in the sector (e.g. Goldsmiths, or Leicester a bit before that). But what I think we do not realize is that this is, eventually, going to happen everywhere – there is no university, no job, and no senior position safe enough. The night of knowing nothing has lasted for too long; it is, perhaps, time to stop pretending.

Btw, if you happen to live in Toon, the film is showing tomorrow (4 May) and on a few other days. Or catch it in your local – you won’t regret it.

*If you’re wondering why you haven’t heard of these events, my guess is they were obscured by the pandemic; I say this as someone who has friends from India and who followed Indian HE quite closely between 2013 and 2016 (though somewhat less since), and I still *barely* recall reading/hearing about any of these.

On doing it badly

I’m reading Christine Korsgaard’s ‘Self-Constitution: Agency, Identity, and Integrity‘ (2009) – I’ve found myself increasingly drawn recently to questions of normative political philosophy or ‘ideal theory’, which I’ve previously tended to eschew analytically, I presume as part pluralism, part anthropological reflex.

In chapter 2 (‘The Metaphysics of Normativity’), Korsgaard engages with Aristotle’s analysis of objects as outcomes of organizing principles. For instance, what makes a house a house, rather than just a ‘heap of stones and mortar and bricks’, is its function of keeping out the weather; and this is also how we should judge the house – a ‘good’ house is one that fulfils this function, a bad house is one that does not, or at least not so well.

This argument, of course, is a well-known one and endlessly discussed in social ontology (at least among the Cambridge Social Ontology crowd, which I still visit). But Korsgaard emphasizes something that has previously completely escaped my attention, which is the implicit argument about the relationship between normativity and knowledge:

Now, it is entirely true that ‘seeing what things do’ is a pretty neat description of my work as a theorist. But there is an equally important one, which is seeing what things can or could do. This means looking at (I’m parking the discussion about privileging the visual/observer approach to theory for the time being, as it’s both a well-known criticism in e.g. feminist & Indigenous philosophy *and* other people have written about it much better than I ever could) ‘things’ – in my case, usually concepts – and understanding what using them can do, that is, looking at them relationally. You are not the same person looking at one kind of social object and another, nor is it, importantly, the same social object ‘unproblematically’ (meaning that yes, it is possible to reach consensus about social objects – e.g. what is a university, or a man, or a woman, or fascism, but it is not possible to reach it without disagreement – the only difference being whether it is open or suppressed). I’m also parking the discussion about observer effects, indefinitely: if you’re interested in how that theoretical argument looks without butchering theoretical physics, I’ve written about it here.

This also makes the normative element of the argument more difficult, as it requires delving not only into the ‘satisficing’ or ‘fitness’ analysis (a good house is a house that does the job of being a house), but also into the analysis of performative effects (is a good house one that does its job in a way that eventually turns ‘houseness’ into something bad?). To note, this is distinct from other issues Korsgaard recognizes – e.g. that a house constructed in a place that obscures the neighbours’ view is bad, but not a bad house, as its ‘badness’ is derived not from its being a house, but from its position in space (the ‘where’, not the ‘what’). This analysis may – and I emphasize may – be sufficient for discrete (and Western) ontologies, where it is entirely conceivable for the same house to be positioned somewhere else, thus remaining a good house while no longer being ‘bad’ for the neighbourhood as a whole. But it clearly encounters problems with any kind of relational, environment-based, or contextual ontology (a house is not a house only by virtue of keeping out the elements for its inhabitants, but also – and, possibly, more importantly – by being positioned in a community; and a community that is ‘poisoned’ by a house blocking everyone’s view is not a good community for houses).

In this sense, it is worth asking: when does what an object does turn into badness for the object itself? That is, what would it mean for a ‘good’ house to be, at the same time, a bad house? Plot spoiler: I believe this is likely true of all social objects. (I’ve written about ambiguity here and also here.) The task of the (social) theorist – what, I think, makes my work social (both in the sense of applying to the domain of interaction between multiple human beings and in the sense of having relevance to someone beyond me) – is to figure out what kinds of contexts make one more likely than the other. Under what conditions do mostly good things (like, for instance, academic freedom) become mostly bad things (like, for instance, a form of exclusion)?

I’ve been thinking about this a lot in relation to what constitutes ‘bad’ scholarship (and, I guess, by extension, a bad scholar). Having had the dubious pleasure of encountering people who teach various combinations of neocolonial, right-wing, and anti-feminist ‘scholarship’ over the past couple of years (England, and especially the place where I work, is a trove of surprises in this sense), it strikes me that the key question is under what conditions this kind of work – which universities tend to ignore because it ‘passes’ as scholarship and gives them the veneer of presenting ‘both sides’ – turns the whole idea of scholarship into little more than a competition for followers on either of the ‘sides’. This brings me to the question that, I think, should be the source of normativity for academic speech, if anything: when is ‘two-sideism’ destructive to knowledge production as a whole?

This is what Korsgaard says:


Is bad scholarship just bad scholarship, or is it something else? When does the choice to not know about the effects of ‘platforming’ certain kinds of speakers turn from the principle of liberal neutrality to wilful ignorance? Most importantly, how would we know the difference?

Why you’re never working to contract

During the last #USSstrike, on non-picketing days, I practised working to contract. Working to contract is part of the broader strategy known as ASOS – action short of a strike – and it means fulfilling your contractual obligations, but no more than that. Together with many other UCU members, I will be moving to ASOS from Thursday. But how does one actually practise ASOS in neoliberal academia?

I am currently paid to work 2.5 days a week. Normally, I am in the office on Thursdays and Fridays, and sometimes half a Monday or Tuesday. The rest of the time, I write and plan my own research, supervise (that’s Cambridgish for ‘teaching’), or attend seminars and reading groups. Last year, I was mostly writing my dissertation; this year, I am mostly filling out research grant and job applications in a panic, for fear of being without a position when my contract ends in August.

Yet I am also, obviously, not ‘working’ only when I do these things. The books I read are, more often than not, related to what I am writing, teaching, or just thinking about. Often, I will read ‘theory’ books at all times of day (a former partner once raised the issue of the excess of Marx on the bedside table), but the same can apply to science fiction (or any fiction, for that matter). Films I watch will make it into courses. Even time spent on Twitter occasionally yields important insights, including links to articles, events, or just the general mood of a certain category of people.

I am hardly exceptional in this sense. Most academics work much more than their contracted hours. Estimates vary from 45 to as much as 100 hours/week; regardless of what counts as a ‘realistic’ assessment, the majority of academics report not being able to finish their expected workload within a 37.5–40-hour working week. Working on weekends is ‘industry standard’; there is even a dangerous ethic of overwork. Yet academics have increasingly begun to unite around the unsustainability of a system in which we feel overwhelmed and underpaid, with mental and other health issues on the rise. This is why rising workloads are one of the key elements of the current wave of UCU strikes. It has also led to the coining of a parallel hashtag: #ExhaustionRebellion. It seems the culture is slowly beginning to shift.

From Thursday onwards, I will be on ASOS. I look forward to it: being precarious sometimes makes not working almost as exhausting as working. Yet the problem with the ethic of overwork is not only that it is unsustainable, or that it is directly harmful to the health and well-being of individuals, institutions, and the environment. It is also that it is remarkably resilient: and it is resilient precisely because it relies on some of the things academics value the most.

Marx’s theory of value* tells us that the origins of exploitation in industrial capitalism lie in the fact that workers do not have ownership over the means of production; thus, they are forced to sell their labour. Those who own the means of production, on the other hand, are driven by the need to keep capital flowing, for which they need profit. Thus, they are naturally inclined to pay their workers as little as possible, as long as that is sufficient to actually keep them working. For most universities, a steady supply of newly minted graduate students, coupled with seemingly unpalatable working conditions in most other branches of employment, means they are well positioned to drive wages further down (in the UK, pay has fallen 17.5% in real terms since 2009).

This, however, is where the usefulness of classical Marxist theory stops. It is immediately obvious that many of the conditions of late 19th-century industrial capitalism no longer apply. To begin with, most academics own the most important means of production: their minds. Of course, many academics use and require relatively expensive equipment, or work in teams where skills are relatively distributed. Yet even in the most collective of research teams and the most collaborative of labs, the one ingredient that is absolutely necessary is precisely human thought. In the social sciences and humanities, this is even more the case: while a lot of the work we do happens in libraries, in seminars, or through conversations, ultimately what we know and do rests within us**.

Neither, for that matter, can academics simply be written off as unwitting victims of ‘false consciousness’. Even if the majority could conceivably have been unaware of the direction or speed of the transformation of the sector in the 1990s or early 2000s, after last year’s industrial action this is certainly no longer the case. Nor is this true only of those who are disproportionately affected by its dual face of exploitation and precarity: even academics on secure contracts and in senior positions increasingly view changes to the sector as harmful not only to their younger colleagues, but to themselves. If nothing else, what the USS strikes achieved was to help the critique of neoliberalism, marketization, and precarity migrate from the pages of left-leaning political periodicals and critical theory seminars into mainstream media discourse. Knowing that current conditions of knowledge production are exploitative, however, does not necessarily translate into knowing what to do about them.

This is why contemporary academic knowledge production is better characterized as extractive or rentier capitalism. Employers, in most cases, do not own – certainly not exclusively – the means of production of knowledge. What they do instead is provide the setting or platform through which knowledge can be valorized, certified, and exchanged; and charge a hefty rent in the process (this is one part of what tuition fees are about). This ‘platform’ can include anything from degrees to learning spaces; from labs and equipment to email servers and libraries. It can also be adjusted, improved, fitted to suit the interests of users (or consumers – in this case, students); this is what endless investment in buildings is about.

The cunning of extractive capitalism lies in the fact that it does not, in fact, require workers to do very much. You are a resource: in industrial capitalism, your body is a resource; in cognitive capitalism, your mind is a resource too. In extractive capitalism, it gets even better: there is almost nothing you do – not a single aspect of your thoughts, feelings, or actions – that the university cannot turn into profit. Reading Marxist theory on the side? It will make it into your courses. Interested in politics? Your awareness of social inequalities will be reflected in your teaching philosophy. Involved in community action? It will be listed in your online profile under ‘public engagement and impact’. It gets better still: even your critique of extractive, neoliberal conditions of knowledge production can be used to generate value for your employer – just make sure it is published in the appropriate journals, and before the REF deadline.

This is the secret to the remarkable resilience of extractive capitalism. It feeds on exactly what academics love most: on the desire to know more, to explore, to learn. This is, possibly, one of the most basic human needs past the point of food, shelter, and warmth. The fact that the system is designed to make access to all of the latter dependent on being exploited for the former speaks, I think, volumes (it also makes The Matrix look like less of a metaphor and more of an early blueprint, with technology just waiting to catch up). This makes ‘working to contract’ quite tricky: even if you pack up and leave your office at 16.38 on the dot, Monday to Friday, your employer will still be monetizing your labour. You are probably, even if unwittingly, helping them do so.

What, then, are we to do? It would obviously be easy to end with a vague call a las barricadas, conveniently positioned so as to boost one’s political cred. Not infrequently, my own work has been read in this way: as if it ‘reminds academics of the necessity of activism’ or (worse) ‘invites to concrete action’ (bleurgh). Nothing could be farther from the truth: I absolutely disagree with the idea that critical analysis somehow magically transmigrates into political action. (In fact, why we are prone to mistaking one for the other is one of the key topics of my work, but this is an ASOS post, so I will not be writing about it.) In other words, what you will do – tomorrow, on (or off?) the picket line, in a bit over a week, in the polling booth, in the next few months, when you are asked to join this or that committee or to review a junior colleague’s tenure/promotion folder – is your problem and yours alone. What this post is about, however, is what to do when you’re on ASOS.

Therefore, I want to propose a collective reclaiming of the life of the mind. Too much of our collective capacity – for thinking, for listening, for learning, for teaching – is currently absorbed by institutions that turn it, willy-nilly, into capital. We need to re-learn to draw boundaries. We need thinking, learning, and caring to become independent of the process that turns them into profit. There are many ways to do this – and many have been tried before: worker and cooperative universities; social science centres; summer schools; and, last but not least, our own teach-outs and picket line pedagogy. But even when these are not happening, we need to seriously rethink how we use the one resource that universities cannot replace: our own thoughts.

So from Thursday next week, I am going to be reclaiming my own. I will do the things I usually do – read; research; write; teach and supervise students; plan and attend meetings; analyse data; attend seminars; and so on – until 4.40. After that, however, my mind is mine – and mine alone.


*Rest assured that the students I teach get treated to a much more sophisticated version of the labour theory of value (Soc1), together with variations and critiques of Marxism (Soc2), as well as ontological assumptions of heterodox vs. ‘neoclassical’ economics (Econ8). If you are an academic bro, please resist the urge to try to ‘explain’ any of these as you will both waste my time and not like the result. Meanwhile, I strongly encourage you to read the *academic* work I have published on these questions over the past decade, which you can find under Publications.

**This is one of the reasons why some of the most interesting debates about knowledge production today concern ownership, copyright, or legal access. I do not have time to enter into these debates in this post; for a relatively recent take, see here.

Area Y: The Necropolitics of Post-Socialism

This summer, I spent almost a month in Serbia and Montenegro (yes, these are two different countries, despite the New York Times still refusing to acknowledge this). This is about seven times as long as my visits usually last. The two principal reasons are that my mother, who lives in Belgrade, is ill, and that I was planning to get a bit of time to quietly sit and write my thesis on the Adriatic coast of Montenegro. How the latter turned out in light of the knowledge of the former I leave to the imagination (tl;dr: not well). It did, however, give me ample time to reflect on the post-socialist condition, which I haven’t done in a while, and to get outside Belgrade, to which I normally confine my brief visits.

The way in which the perverse necro/bio-politics of post-socialism obtains in my mother’s illness, in the landscape, and in the socio-material fits almost too perfectly into what has for years been the dominant style of writing about places that used to be behind the Iron Curtain (or, in the case of Yugoslavia, on its borders). Social theory’s favourite ruins – the ruins of socialism – are repeatedly re-valorised through being dusted off and resurrected, as yet another alter-world to provide the mirror image to the here and now (the here and now, obviously, being capitalism). During the Cold War, the Left had its alter-image in the Soviet Union; now, the antidote to neoliberalism is provided not through the actual ruins of real socialism – that would be a tad too much to handle – but through the re-invention of the potential of socialism to provide, in the tellingly polysemic title of MoMA’s recently-opened exhibition on architecture in Yugoslavia, concrete utopias.

Don’t get me wrong: I would love to see the exhibition, and I am sure that it offers much to learn, especially for those who did not have the dubious privilege of having grown up on both sides of socialism. It’s not the absence of nuance that makes me nauseous in encounters with socialist nostalgia: a lot of it, as a form of cultural production, is made by well-meaning people and is, in some cases, incredibly well researched. It’s that resurrecting the hipsterified golems of post-socialism serves little purpose other than to underline their ontological status as a source of comparison for the West, cannon fodder for imaginaries of a world so bereft of hope that it would rather replay its past dreams than face the potential waking nightmare of its future.

It’s precisely this process that leaves them unable to die, much like the ghosts/apparitions/copies in Lem’s (and Tarkovsky’s) Solaris, and in VanderMeer’s Southern Reach trilogy. In VanderMeer’s books, members of the eleventh expedition (or, rather, their copies) who return to the ‘real world’ after exposure to the Area X develop cancer and die pretty quickly. Life in post-socialism is very much this: shadows or copies of former people confusedly going about their daily business, or revisiting the places that once made sense to them, which, sometimes, they have to purchase as repackaged ‘post-socialism’; in this sense, the parable of Roadside Picnic/Stalker as the perennial museum of post-communism is really prophetic.

The necropolitical profile of these parts of former Yugoslavia is, in fact, pretty unexceptional. For years, research has shown that rapid privatisation increases mortality, even when controlling for other factors. Obviously, the state still feigns perfunctory care for the elderly, but healthcare is cumbersome, inefficient and, in most cases, barely palliative. Smoking and heavy drinking are de rigueur: in winter, Belgrade cafés and pubs turn into proper smokehouses. Speaking of which, vegetarianism is still often, if benevolently, ridiculed. Fossil fuel extraction is ubiquitous. According to this report from 2014, Serbia had the second highest rate of premature deaths due to air pollution in Europe. And that’s not even getting close to the Thing That Can’t Be Talked About – the environmental effects of the NATO intervention in 1999.

An apt illustration comes as I travel to Western Serbia to give a talk at the anthropology seminar at Petnica Science Centre, where I used to work between 2000 and 2008. Petnica is a unique institution that developed in the 1980s and 1990s as part science camp, part extracurricular interdisciplinary research institute, where electronics researchers would share tables in the canteen with geologists, and physicists would talk (arguably, not always agreeing) to anthropologists. Founded in part by the Young Researchers of Serbia (then Yugoslavia), a forward-looking environmental exploration and protection group, the place used to tout its green credentials. Today, it is funded by the state – and fully branded by the Oil Industry of Serbia. The latter is Serbian in name only, having become a subsidiary of the Russian fossil fuel giant Gazpromneft. What could arguably be dubbed Serbia’s future research elite is thus raised to take the ubiquity of fossil fuels for granted, not only in providing energy but, literally, in running the facilities they need in order to work.

These researchers can still consider themselves lucky. The other part of the Serbian economy that is actually working is factories – or rather, the production facilities of multinational companies. In these companies, workers are given 12-hour shifts, banned from unionising, and, as a series of relatively recent reports revealed, issued with adult diapers so as to render toilet breaks unnecessary.

As Elizabeth Povinelli argued, following Achille Mbembe, geontopower – the production of life and nonlife, and of the distinction between them, including what is allowed to live and what is allowed to die – is the primary mode of exercise of power in late liberalism. A less frequently examined way of sustaining the late liberal order is the production of semi-dependent semi-peripheries. Precisely because they are not the world’s slums, and because they are not former colonies, they receive comparatively little attention. Instead, they are mined for resources (human and inhuman). That the interaction between the two regularly produces outcomes guaranteed to deplete the first is of little relevance. The reserves, unlike those of fossil fuels, are almost endless.

The Serbian government does its share in ensuring that the supply of cheap labour never runs out, launching endless campaigns to stimulate reproduction. It seems to be working: babies are increasingly the ‘it’ accessory in cafés and bars. Officially, stimulating the birth rate is meant to offset the ‘cost’ of pensions, which the IMF insists should not increase. Unofficially, of course, the easiest way to adjust for this is to make sure pensioners are left behind. Much like the current hype about its legacy, the necropolitics of post-socialism operates primarily by foregrounding its Instagrammable elements, and hiding the ugly, non-productive ones.

Much like in VanderMeer’s Area X, knowledge that the border is advancing could be a mixed blessing: as Danowski and Viveiros de Castro argued in a different context, the end of the world comes more easily to those for whom the world has already ended, more than once. Not unlike what Scranton argued in Learning to Die in the Anthropocene – this, perhaps, rather than sanitised dreams of a utopian future, is one thing worth resurrecting from post-socialism.

Life or business as usual? Lessons of the USS strike

[A shortened version of this blog post was published on the Times Higher Education blog on 14 March under the title ‘USS strike: picket line debates will reenergise scholarship’.]


Until recently, Professor Marenbon writes, university strikes in Cambridge were a hardly noticeable affair. Life, he says, went on as usual. The ongoing industrial action that UCU members are engaged in at the UK’s universities has changed all that. Dons, rarely concerned with the affairs of lesser mortals, seem to be up in arms. They are picketing, almost every day, in the wind and the snow; marching; shouting slogans. For Heaven’s sake, some are even dancing. Cambridge, as someone pointed out on Twitter, has not seen such upheaval since we considered awarding Derrida an honorary degree.

This is possibly the best thing that has happened to UK higher education, at least since the end of the 1990s. Not that there’s much competition: this period, after all, brought us the introduction, then removal, of tuition fee caps; the abolition of maintenance grants; the REF and the TEF; and, as a crowning (though short-lived) glory, the appointment of Toby Young to the Office for Students. Yet for most of this period, academics’ opposition to these reforms conformed to ‘civilised’ ways of protest: writing a book, giving a lecture, publishing a blog post or an article in Times Higher Education, or, at best, complaining on Twitter. While most would agree that British universities have been under threat for decades, concerted effort to counter these reforms – with a few notable exceptions – remained the province of the people Professor Marenbon calls ‘amiable but over-ideological eccentrics’.

This is how we have truly let down our students. Resistance was left to student protests and occupations. Longer-lasting, transgenerational solidarity was all but absent: at the end of the day, professors retreated to their ivory towers, while precarious academics engaged in activism on the side, amid ever-increasing competition and pressure to land a permanent job. Students picked up the tab: not only when it came to tuition fees, used to finance expensive accommodation blocks designed to attract more (tuition-paying) students, but also when it came to the quality of teaching and learning, increasingly delivered by an underpaid, overworked, and precarious labour force.

This is why the charge that teach-outs of dubious quality are replacing lectures comes across as particularly disingenuous. We are told that ‘although students are denied lectures on philosophy, history or mathematics, the union wants them to show up to “teach-outs” on vital topics such as “How UK policy fuels war and repression in the Middle East” and “Neoliberal Capitalism versus Collective Imaginaries”’. Although this is but one snippet of Cambridge UCU’s programme of teach-outs, the choice is illustrative.

The link between history and the UK’s foreign policy in the Middle East strikes me as obvious. Students in philosophy, politics or economics could do worse than a seminar on the development of neoliberal ideology (the event was initially scheduled as part of the Cambridge seminar in political thought). As for mathematics – anybody who, over the past weeks, has had to engage with the details of the actuarial calculations and projections tied to the USS pension scheme has had more than a crash refresher course: I dare say they learned more than they ever hoped they would.

Teach-outs, in this sense, are not a replacement for education “as usual”. They are a way to begin bridging the infamous divide between “town and gown”, both by being held in more open spaces, and by, for instance, discussing how the university’s lucrative development projects are impacting on the regional economy. They are not meant to make up for the shortcomings of higher education: if anything, they render them more visible.

What the strikes have made clear is that academics’ ‘life as usual’ is vice-chancellors’ business as usual. In other words, it is precisely the attitude of studied depoliticisation that has allowed the marketization of higher education to continue. Markets, after all, are presumably ‘apolitical’. Other scholars have expended considerable effort in showing how this assumption has been used to further policies whose results we are now seeing, among other places, in the reform of the pensions system. Rather than repeat their arguments, I would like to end with the words of another philosopher, Hannah Arendt, who understood well the ambiguous relationship between academia and politics:


‘Very unwelcome truths have emerged from the universities, and very unwelcome judgments have been handed down from the bench time and again; and these institutions, like other refuges of truth, have remained exposed to all the dangers arising from social and political power. Yet the chances for truth to prevail in public are, of course, greatly improved by the mere existence of such places and by the organization of independent, supposedly disinterested scholars associated with them.

This authentically political significance of the Academe is today easily overlooked because of the prominence of its professional schools and the evolution of its natural science divisions, where, unexpectedly, pure research has yielded so many decisive results that have proved vital to the country at large. No one can possibly gainsay the social and technical usefulness of the universities, but this importance is not political. The historical sciences and the humanities, which are supposed to find out, stand guard over, and interpret factual truth and human documents, are politically of greater relevance.’

In this sense, teach-outs, and industrial action in general, are a way for us to recognise our responsibility to protect the university from the undue incursion of political power, while acknowledging that such responsibility is in itself political. At this moment in history, I can think of no service to scholarship greater than that.

The paradox of resistance: critique, neoliberalism, and the limits of performativity

The critique of neoliberalism in academia is almost as old as its object. Paradoxically, it is the only element of the ‘old’ academia that seems to be thriving amid steadily worsening conditions: as I’ve argued in this book review, hardly a week goes by without a new book, volume, or collection of articles denouncing the neoliberal onslaught or ‘war’ on universities and, not less frequently, announcing their (untimely) death.

What makes the proliferation of critique of the transformation of universities particularly striking is the relative absence – at least until recently – of sustained modes of resistance to the changes it describes. While the UCU strike in reaction to the changes to the universities’ pension scheme offers some hope, forms of resistance have, by and large, much more often taken the form of a book or blog post than of a strike, demo, or occupation. Relatedly, given the level of agreement among academics about the general direction of these changes, engagement with developing long-term, sustainable alternatives to exploitative modes of knowledge production has been surprisingly scattered.

It was this relationship between the abundance of critique and paucity of political action that initially got me interested in arguments and forms of intellectual positioning in what is increasingly referred to as the ‘[culture] war on universities’. Of course, the question of the relationship between critique and resistance – or knowledge and political action – concerns much more than the future of English higher education, and reaches into the constitutive categories of Western political and social thought (I’ve addressed some of this in this talk). In this post, however, my intention is to focus on its implications for how we can conceive critique in and of neoliberal academia.

Varieties of neoliberalism, varieties of critique?

While critique of neoliberalism in academia tends to converge around the causes as well as the consequences of this transformation, this doesn’t mean that there is no theoretical variation. Marxist critique, for instance, tends to emphasise the changes in the working conditions of academic staff, increased exploitation, and the growing commodification of knowledge. It usually identifies precarity as the problem that prevents academics from exercising the form of political agency – labour organizing – that is seen as the primary source of potential resistance to these changes.

Poststructuralist critique, most of it drawing on Foucault, tends to focus on the changing status of knowledge, which is increasingly portrayed as a private rather than a public good. The reframing of knowledge in terms of economic growth is further tied to measurement – reduction to a single, unitary, comparable standard – and competition, which is meant to ensure maximum productivity. This also gives rise to mechanisms of constant assessment, such as the TEF and the REF, captured in the phrase ‘audit culture’. Academics, in this view, become undifferentiated objects of assessment, which is used not only to instill fear but also to keep them in constant competition against each other, in the hope of the eventual conferral of ‘tenure’ or permanent employment, through which they can be constituted as full subjects with political agency.

Last, but not least, the type of critique that can broadly be referred to as ‘new materialist’ shifts the source of political power directly to instruments for measurement and sorting, such as algorithms, metrics, and Big Data. In the neoliberal university, the argument goes, there is no need for anyone to even ‘push the button’; metrics run on their own, with the social world already so imbricated by them that it becomes difficult, if not entirely impossible, to resist. The source of political agency, in this sense, becomes the ‘humanity’ of academics, what Arendt called ‘mere’ and Agamben ‘bare’ life. A significant portion of new materialist critique, in this vein, focuses on emotions and affect in the neoliberal university, as if to underscore the contrast between lived and felt experiences of academics on the one hand, and the inhumanity of algorithms or their ‘human executioners’ on the other.

Despite possibly divergent theoretical genealogies, these forms of critique seem to move in the same direction. Namely, the object or target of critique becomes increasingly elusive, murky, and de-differentiated: but, strangely enough, so does the subject. As power grows opaque (or, in Foucault’s terms, ‘capillary’), the source of resistance shifts from a relatively defined position or identity (workers or members of the academic profession) into a relatively amorphous concept of humanity, or precarious humanity, as a whole.

Of course, there is nothing particularly original in the observation that neoliberalism has eroded traditional grounds for solidarity, such as union membership. Wendy Brown’s Undoing the Demos and Judith Butler’s Notes towards a performative theory of assembly, for instance, address the possibilities for political agency – including cross-sectional approaches such as that of the Occupy movement – in view of this broader transformation of the ‘public’. Here, however, I would like to engage with the implications of this shift in the specific context of academic resistance.

Nerdish subject? The absent centre of [academic] political ontology

The academic political subject – hence the pun on Žižek – is profoundly haunted by its Cartesian legacy: the distinction between thinking and being, and, by extension, between subject and object. This is hardly surprising: critique is predicated on thinking about the world, which proceeds through ‘apprehending’ the world as distinct from the self; but the self is also predicated on thinking about that world. Though they may have disagreed on many other things, Boltanski and Bourdieu – both of whom feature prominently in my work – converge on the importance of this element for understanding the academic predicament: Bourdieu calls it the scholastic fallacy, Boltanski complex exteriority.

Nowhere is the Cartesian legacy of critique more evident than in its approach to neoliberalism. From Foucault onwards, academic critique has approached neoliberalism as an intellectual project: the product of a ‘thought collective’, or a small group of intellectuals initially concentrated in the Mont Pelerin Society, from which they went on to ‘conquer’ not only economics departments but also, more importantly, centres of political power. Critique, in other words, projects back onto neoliberalism its own way of coming to terms with the world: knowledge. From here, the Weberian assumption that ideas precede political action is transposed to forms of resistance: the more we know about how neoliberalism operates, the better we will be able to resist it. This is why, as neoliberalism proliferates, the books, journal articles, etc. that somehow seek to ‘denounce’ it multiply as well.

Speech acts: the lost hyphen

The fundamental notion of critique, in this sense, is (J.L. Austin‘s and Searle’s) notion of speech acts: the assumption that words can have effects. What gets lost in dropping the hyphen in speech(-)acts is a very important bit of the theory of performativity: namely, the conditions under which speech does constitute effective action. This is why Butler, in Performative agency, draws attention to Austin’s emphasis on perlocution: speech-acts that are effective only under certain circumstances. In other words, it’s not enough to exclaim: “Universities are not for sale! Education is not a commodity! Students are not consumers!” for this to become the case. For this raises the question: “Who is going to bring this about? What are the conditions under which this can be realized?” In other words: who has the power to act in ways that can make this claim true?

What critique stumbles over, thus, is thinking its own agency within these conditions, rather than painting them as if they were somehow on the ‘outside’ of critique itself. Butler recognizes this:

“If this sort of world, what we might be compelled to call ‘the bad life’, fails to reflect back my value as a living being, then I must become critical of those categories and structures that produce that form of effacement and inequality. In other words, I cannot affirm my own life without critically evaluating those structures that differentially value life itself [my emphasis]. This practice of critique is one in which my own life is bound up with the objects that I think about” (2015: 199).

In simpler terms: my position as a political subject is predicated on the practice of critique, which entails reflecting on the conditions that make my life difficult (or unbearable). Yet those conditions are in part what constitutes my capacity to engage in critique in the first place, as the practice of thinking (critically) is, especially in the case of academic critique, inextricably bound up in the practices, institutions, and – not least importantly – economies of academic knowledge production. In formal terms, critique is a form of Russell’s paradox: a set that at the same time both is and is not a member of itself.
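For readers who like the formal version, this is the standard statement of the paradox (a textbook formulation, not specific to this post): define the set of all sets that are not members of themselves, then ask whether that set belongs to itself.

```latex
% Russell's paradox: let R be the set of all sets
% that are not members of themselves ...
R = \{\, x \mid x \notin x \,\}
% ... and ask whether R is a member of itself.
% Substituting R for x yields a contradiction either way:
R \in R \iff R \notin R
```

The analogy here is loose rather than strict, of course: critique is ‘a member of’ the economy of knowledge production it takes as its object, while also having to treat itself as standing outside it.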

Living with (Russell) paradoxes

This is why academic critique of neoliberalism has no problem thinking about governing rationalities, the exploitation of workers in Chinese factories, or VCs' salaries: practices that it perceives as outside of itself, or in which it can conceive of itself as an object. But it faces serious problems when it comes to thinking of itself as a subject and, even more, acting in this context, as this – at least according to its own standards – means reflecting on all the practices that make it 'complicit' in exactly what it aims to expunge, or criticize.

This means coming to terms with the fact that neoliberalism is the Research Excellence Framework, but neoliberalism is also when you discuss ideas for a super-cool collaborative project. Neoliberalism is the requirement to submit all your research outputs to the faculty website, but neoliberalism is also the pride you feel when your most recent article is tweeted about. Neoliberalism is the incessant corporate emails about 'wellbeing', but it is also the craft beer you have with your friends in the pub. This is why, in the seemingly interminable debates about the 'validity' of neoliberalism as an analytical term, both sides are right: yes, on the one hand, the term is vague and can seemingly be applied to any manifestation of power; but, on the other, it does cover everything, which means it cannot be avoided either.

This is exactly the sort of ambiguity – the fact that things can be two different things at the same time – that critique in neoliberalism needs to come to terms with. This could possibly help us move beyond the futile iconoclastic gesture of revealing the ‘true nature’ of things, expecting that action will naturally follow from this (Martijn Konings’ Capital and Time has a really good take on the limits of ‘ontological’ critique of neoliberalism). In this sense, if there is something critique can learn from neoliberalism, it is the art of speculation. If economic discourses are performative, then, by definition, critique can be performative too. This means that futures can be created – but the assumption that ‘voice’ is sufficient to create the conditions under which this can be the case needs to be dispensed with.

 

 

Between legitimation and imagination: epistemic attachment, ontological bias, and thinking about the future

Some swans are…grey (Cambridge, August 2017)

 

A serious line of division runs through my household. It does not concern politics, music, or even sports: it concerns the possibility of a large-scale collapse of social and political order, which I consider very likely. Specific scenarios aside for the time being, let's just say we are talking more a human-made, climate-change-induced breakdown involving possibly protracted and almost certainly lethal conflict over resources, than 'giant asteroid wipes out Earth' or 'rogue AI takes over and destroys humanity'.

Ontological security or epistemic positioning?

It may be tempting to attribute a tendency towards catastrophic predictions to psychological factors rooted in individual histories. My childhood and adolescence took place alongside the multi-stage collapse of the country once known as the Socialist Federal Republic of Yugoslavia. First came the economic crisis, when the failure of 'shock therapy' to boost stalling productivity (surprise!) resulted in massive inflation; then social and political disintegration, as the country descended into a series of violent conflicts whose consequences went far beyond the actual front lines; and then actual physical collapse, as Serbia's long involvement in the wars in the region was brought to a halt by the NATO intervention in 1999, which destroyed most of the country's infrastructure, including parts of Belgrade, where I was living at the time*. It makes sense to assume that this results in quite a different sense of ontological security than the one that, say, the predictability of a middle-class English childhood would afford.

But does predictability actually work against the capacity to make accurate predictions? This may seem not only contradictory but also counterintuitive – any calculation of risk has to take into account not just the likelihood, but also the nature of the source of threat involved, and thus necessarily draws on the assumption of (some degree of) empirical regularity. However, what about events outside of this scope? A recent article by Faulkner, Feduzi and Runde offers a good formalization of this problem (the Black Swans and ‘unknown unknowns’) in the context of the (limited) possibility to imagine different outcomes (see table below). Of course, as Beck noted a while ago, the perception of ‘risk’ (as well as, by extension, any other kind of future-oriented thinking) is profoundly social: it depends on ‘calculative devices‘ and procedures employed by networks and institutions of knowledge production (universities, research institutes, think tanks, and the like), as well as on how they are presented in, for instance, literature and the media.

From: Faulkner, Feduzi and Runde: Unknowns, Black Swans and the risk/uncertainty distinction, Cambridge Journal of Economics 41 (5), August 2017, 1279-1302

 

Unknown unknowns

In The Great Derangement (probably the best book I've read in 2017), Amitav Ghosh argues that this can explain, for instance, the surprising absence of literary engagement with the problem of climate change. The problem, he claims, is endemic to Western modernity: a linear vision of history cannot conceive of a problem that exceeds its own scale**. This isn't the case only with 'really big problems' such as economic crises, climate change, or wars: it also applies to specific events such as elections or referendums. Of course, social scientists – especially those qualitatively inclined – tend to emphasise that, at best, we can explain events retroactively. Methodological modesty is good (and advisable), but avoiding thinking about the ways in which academic knowledge production is intertwined with the possibility of prediction is useless, for at least two reasons.

One is that, as reflected in the (by now overwrought and overdetermined) crisis of expertise and 'post-truth', social researchers increasingly find themselves in situations where they are expected to give authoritative statements about the future direction of events (for instance, about the impact of Brexit). Even if they disavow this form of positioning, the very idea of social science rests on the (no matter how implicit) assumption that at least some mechanisms or classes of objects will exhibit the same characteristics across cases; consequently, the possibility of inference is implied, if not always practised. Secondly, given the scope of the challenges societies face at present, it seems ridiculous not to even attempt to engage with – and, if possible, refine – the capacity to think about how they will develop in the future. While there is quite a bit of research on individual predictive capacity and the ways collective reasoning can correct for cognitive bias, most of these models – given that they are usually based on experiments or simulations – cannot account for the way in which social structures, institutions, and cultures of knowledge production interact with the capacity to theorise, model, and think about the future.

The relationship between social, political, and economic factors, on the one hand, and knowledge (including knowledge about those factors), on the other, has been at the core of my work, including my current PhD. While it may seem minor compared to issues such as wars or revolutions, the future of universities offers a perfect case to study the relationship between epistemic positioning, positionality, and the capacity to make authoritative statements about reality: what Boltanski’s sociology of critique refers to as ‘complex externality’. One of the things it allowed me to realise is that while there is a good tradition of reflecting on positionality (or, in positivist terms, cognitive ‘bias’) in relation to categories such as gender, race, or class, we are still far from successfully theorising something we could call ‘ontological bias’: epistemic attachment to the object of research.

The postdoctoral project I am developing extends this question and aims to understand its implications in the context of generating and disseminating knowledge that can allow us to predict – make more accurate assessments of – the future of complex social phenomena such as global warming or the development of artificial intelligence. This question has, in fact, been informed by my own history, but in a slightly different manner than the one implied by the concept of ontological security.

Legitimation and prediction: the case of former Yugoslavia

The Socialist Federal Republic of Yugoslavia had relatively sophisticated and well-developed networks of social scientists, in which both of my parents were involved***. Yet, of all the philosophers, sociologists, political scientists, etc. writing about the future of the Yugoslav federation, only one – to the best of my knowledge – predicted, in eerie detail, the political crisis that would lead to its collapse: Bogdan Denitch, whose Legitimation of a revolution: the Yugoslav case (1976) is, in my opinion, one of the best books about former Yugoslavia ever written.

A Yugoslav-American, Denitch was a professor of sociology at the City University of New York. He was also a family friend, a fact I considered of little significance (having only met him once, when I was four, and my mother and I were spending a part of our summer holiday at his house in Croatia; my only memory of it is being terrified of tortoises roaming freely in the garden), until I began researching the material for my book on education policies and the Yugoslav crisis. In the years that followed (I managed to talk to him again in 2012; he passed away in 2016), I kept coming back to the question: what made Denitch more successful in ‘predicting’ the crisis that would ultimately lead to the dissolution of former Yugoslavia than virtually anyone writing on Yugoslavia at the time?

Denitch had a pretty interesting trajectory. Born in 1929 to Croatian Serb parents, he spent his childhood in a series of countries (including Greece and Egypt), following his diplomat father; in 1946, the family emigrated to the United States (the fact that his father had been a civil servant in the previous government would have made it impossible for them to continue living in Yugoslavia once the Communist regime, led by Josip Broz Tito, formally took over). There, Denitch (in evident defiance of his upper-middle-class legacy) trained as a factory worker, while studying for a degree in sociology at CUNY. He also joined the Democratic Socialist Alliance – one of the American socialist parties – of which he would remain a member (and later functionary) for the rest of his life.

In 1968, Denitch was awarded a major research grant to study Yugoslav elites. The project was not without risks: while Yugoslavia was more open to ‘the West’ than other countries in Eastern Europe, visits by international scholars were strictly monitored. My mother recalls receiving a house visit from an agent of the UDBA, the Yugoslav secret police – not quite the KGB but you get the drift – who tried to elicit the confession that Denitch was indeed a CIA agent, and, in the absence of that, the promise that she would occasionally report on him****.

Despite these minor setbacks, the research continued: Legitimation of a revolution is one of its outcomes. In 1973, Denitch was awarded a PhD by Columbia University and started teaching at CUNY, eventually retiring in 1994. His last book, Ethnic nationalism: the tragic death of Yugoslavia, came out the same year: a reflection on the conflict that was still going on at the time, and whose architecture he had foreseen with such clarity eighteen years earlier (the book is remarkably bereft of "told-you-so"-isms, so it comes warmly recommended for those wishing to learn more about Yugoslavia's dissolution).

Did personal history, in this sense, have a bearing on one's epistemic position and, by extension, on the capacity to predict events? One explanation (prevalent in certain versions of popular intellectual history) would be that Denitch's position as both a Yugoslav and an American allowed him to escape the ideological traps other scholars were more likely to fall into. Yugoslavs, presumably, would be at pains to prove socialism was functioning; Americans, on the other hand, perhaps egalitarian in theory but certainly suspicious of Communist revolutions in practice, would be looking to prove it wasn't, at least not as an economic model. Yet this assumption hardly withstands even the lightest empirical interrogation. At least up until the show trials of the Praxis philosophers, there was a lively critique of Yugoslav socialism within Yugoslavia itself; despite the mandatory coating of jargon, Yugoslav scholars were quite far from being uniformly bright-eyed and bushy-tailed about socialism. Similarly, quite a few American scholars were very much in favour of the Yugoslav model, eager, if anything, to show that market socialism was possible – that is, that it is possible to have a relatively progressive social policy and still be able to afford nice things. Herein, I believe, lies the beginning of the answer as to why neither of these groups was able to predict the type or the scale of the crisis that would eventually lead to the dissolution of former Yugoslavia.

Simply put, both groups of scholars depended on Yugoslavia as a source of legitimation for their work, though for different reasons. For Yugoslav scholars, the 'exceptionality' of the Yugoslav model was the source of epistemic legitimacy, particularly in the context of international scientific collaboration: their authority was, in part at least, constructed on their identity and positioning as possessors of 'local' knowledge (Bockman and Eyal's excellent analysis of the transnational roots of neoliberalism makes an analogous point about positioning in the context of the collaboration between 'Eastern' and 'Western' economists). In addition, many Yugoslav scholars were born and raised in socialism: while some of them did travel to the West, the opportunities were still scarce and many were subject to ideological pre-screening. In this sense, both their professional and their personal identity depended on the continued existence of Yugoslavia as an object; they could imagine different ways in which it could be transformed, but not really that it could be obliterated.

For scholars from the West, on the other hand, Yugoslavia served as a perfect experiment in mixing capitalism and socialism. Those more on the left saw it as a beacon of hope that socialism need not go hand-in-hand with Stalinist-style repression. Those who were more on the right saw it as proof that limited market exchange can function even in command economies, and deduced (correctly) that the promise of supporting failing economies in exchange for access to future consumer markets could be used as a lever to bring the Eastern Bloc in line with the rest of the capitalist world. If no one foresaw the war, it was because it played no role in either of these epistemic constructs.

This is where Denitch’s background would have afforded a distinct advantage. The fact his parents came from a Serb minority in Croatia meant he never lost sight of the salience of ethnicity as a form of political identification, despite the fact socialism glossed over local nationalisms. His Yugoslav upbringing provided him not only with fluency in the language(s), but a degree of shared cultural references that made it easier to participate in local communities, including those composed of intellectuals. On the other hand, his entire professional and political socialization took place in the States: this meant he was attached to Yugoslavia as a case, but not necessarily as an object. Not only was his childhood spent away from the country; the fact his parents had left Yugoslavia after the regime change at the end of World War II meant that, in a way, for him, Yugoslavia-as-object was already dead. Last, but not least, Denitch was a socialist, but one committed to building socialism ‘at home’. This means that his investment in the Yugoslav model of socialism was, if anything, practical rather than principled: in other words, he was interested in its actual functioning, not in demonstrating its successes as a marriage of markets and social justice. This epistemic position, in sum, would have provided the combination needed to imagine the scenario of Yugoslav dissolution: a sufficient degree of attachment to be able to look deeply into a problem and understand its possible transformations; and a sufficient degree of detachment to be able to see that the object of knowledge may not be there forever.

Onwards to the…future?

What can we learn from the story? Balancing between attachment and detachment is, I think, one of the key challenges in any practice of knowing the social world. It’s always been there; it cannot be, in any meaningful way, resolved. But I think it will become more and more important as the objects – or ‘problems’ – we engage with grow in complexity and become increasingly central to the definition of humanity as such. Which means we need to be getting better at it.

 

———————————-

(*) I rarely bring this up as I think it overdramatizes the point – Belgrade was relatively safe, especially compared to other parts of former Yugoslavia, and I had the fortune to never experience the trauma or hardship people in places like Bosnia, Kosovo, or Croatia did.

(**) As Jane Bennett noted in Vibrant Matter, this resonates with Adorno's notion of non-identity in Negative Dialectics: a concept always exceeds our capacity to know it. We can see object-oriented ontology (e.g. Timothy Morton's Hyperobjects) as the ontological version of the same argument: the sheer size of the problem deters us from even attempting to grasp it in its entirety.

(***) This bit lends itself easily to the Bourdieusian “aha!” argument – academics breed academics, etc. The picture, however, is a bit more complex – I didn’t grow up with my father and, until about 16, had a very vague idea of what my mother did for a living.

(****) Legend has it my mother showed the agent the door and told him never to call on her again, prompting my grandmother – her mother – to buy funeral attire, assuming her only daughter would soon be thrown into prison and possibly murdered. Luckily, Yugoslavia was not really the Soviet Union, so this did not come to pass.

The biopolitics of higher education, or: what’s the problem with two-year degrees?

[Note: a shorter version of this post was published in Times Higher Education’s online edition, 26 December 2017]

The Government’s most recent proposal to introduce the possibility of two-year (‘accelerated’) degrees has already attracted quite a lot of criticism. One aspect is student debt: given that universities will be allowed to charge up to £2,000 more for these ‘fast-track’ degrees, there are doubts in terms of how students will be able to afford them. Another concerns the lack of mobility: since the Bologna Process assumes comparability of degrees across European higher education systems, students in courses shorter than three or four years would find it very difficult to participate in Erasmus or other forms of student exchange. Last, but not least, many academics have said the idea of ‘accelerated’ learning is at odds with the nature of academic knowledge, and trivializes or debases the time and effort necessary for critical reflection.

However, perhaps the most curious element of the proposal is its similarity to the Diploma of Higher Education (DipHE), a two-year qualification proposed by Mrs Thatcher when she was Secretary of State for Education and Science. Of course, the DipHE had a more vocational character, meant to enable access equally to further education and the labour market. In this sense, it was both a foundation degree and a finishing qualification. But there is no reason to believe that those in new two-year programmes would not consider continuing their education through a 'top-up' year, especially if the labour market turns out not to be as receptive to their qualification as the proposal seems to hope. So the real question is: why introduce something that serves no obvious purpose – for students or, for that matter, for the economy – and, furthermore, base it on resurrecting a policy that proved unpopular in 1972 and was abandoned soon after its introduction?

One obvious answer is that the Conservative government is desperate for a higher education policy to match Labour’s proposal to abolish tuition fees (despite the fact that, no matter how commendable, abolishing tuition fees is little but a reversal of measures put in place by the last Labour government). But the case of higher education in Britain is more curious than that. If one sees policy as a set of measures designed to bring about a specific vision of society, Britain never had much of a higher education policy to begin with.

Historically, British universities evolved as highly autonomous units, which meant that the Government felt little need to regulate them until well into the 20th century. Until the 1960s, the University Grants Committee succeeded in maintaining a 'gentlemanly conversation' between the universities and the Government. The 1963 report of the Robbins Committee was thus to be the first serious step into higher education policy-making. Yet, despite the fact that the Robbins report was more complex than many who cite it approvingly give it credit for, its main contribution was to open the doors of universities to, in the memorable phrase, "all who qualify by ability and attainment". What it sought to regulate was thus primarily who should access higher education – not necessarily how this should be done, nor, for that matter, what its purpose was.

Even the combined pressures of the economic crisis and an uneven rate of expansion in the 1970s and 1980s did little to orient the government towards a more coherent strategy for higher education. This led Peter Scott to comment, in 1982, that "so far as we have in Britain any policy for higher education it is the binary policy…[it] is the nearest thing we have to an authoritative statement about the purposes of higher education". The 'watershed' moment of 1992, abolishing the division between universities and polytechnics, was, in that sense, less a policy and more an attempt to undo the previous forays into regulating the sector.

Two major reviews of higher education since Robbins, the Dearing report and the Browne review, represented little more than attempts to deal with the consequences of massification through, first, tying education more closely to the supposed needs of the economy, and, second, introducing tuition fees. The difference between Robbins and subsequent reports in terms of scope of consultation and collected evidence suggests there was little interest in asking serious questions about the strategic direction of higher education, the role of the government, and its relationship to universities. Political responsibility was thus outsourced to ‘the Market’, that rare point of convergence between New Labour and Conservatives – at best a highly abstract aggregate of unreliable data concerning student preferences, and, at worst, utter fiction.

Rather than as a policy in a strict sense of the term, this latest proposal should be seen as another attempt at governing populations, what Michel Foucault called biopolitics. Of course, there is nothing wrong with the fact that people learn at different speeds: anyone who has taught in a higher education institution is more than aware that students have varying learning styles. But the Neo-Darwinian tone of “highly motivated students hungry for a quicker pace of learning” combined with the pseudo-widening-participation pitch of “mature students who have missed out on the chance to go to university as a young person” neither acknowledges this, nor actually engages with the need to enable multiple pathways into higher education. Rather, funneling students through a two-year degree and into the labour market is meant to ensure they swiftly become productive (and consuming) subjects.

 

People's History Museum, Manchester

 

Of course, whether the labour market will actually have the need for these ‘accelerated’ subjects, and whether universities will have the capacity to teach them, remains an open question. But the biopolitics of higher education is never about the actual use of degrees or specific forms of learning. As I have shown in my earlier work on vocationalism and education for labour, this type of political technology is always about social control; in other words, it aims to prevent potentially unruly subjects from channeling their energy into forms of action that could be disruptive of the political order.

Education – in fact, any kind of education policy – is perfect in this sense because it is fundamentally oriented towards the future. It occupies the subject now, but transposes the horizon of expectation into the ever-receding future – future employment, future fulfillment, future happiness. The promise of quicker, that is, accelerated delivery into this future is a particularly insidious form of displacement of political agency: the language of certainty (“when most students are completing their third year of study, an accelerated degree student will be starting work and getting a salary”) is meant to convey that there is a job and salary awaiting, as it were, at the end of the proverbial rainbow.

The problem is not simply that such predictions (or promises) are based on empty rhetoric rather than any form of objective assessment of the 'needs' of the labour market. Rather, it is that the future needs of the labour market are notoriously difficult to assess, and even more so in periods of economic contraction. Two-year degrees, in this sense, are just a way to defer the compounding problems of inequality, unemployment, and social insecurity. Unfortunately, to this date, no higher education qualification has proven capable of doing that.

Why is it more difficult to imagine the end of universities than the end of capitalism, or: is the crisis of the university in fact a crisis of imagination?

Graffiti on the back of a chair in a lecture theatre at Goldsmiths, University of London, October 2017

 

Hardly anyone needs convincing that the university today is in deep crisis. Critics warn that the idea of the University (at least in the form in which it emerged from Western modernity) is endangered, under attack, under fire; that governments or corporations are waging a war against it. Some even pronounce the public university already dead, or at least lying in ruins. The narrative about the causes of the crisis is well known: a shift in public policy towards deregulation and the introduction of market principles – usually known as neoliberalism – meant the decline of public investment, especially in the social sciences and humanities, the introduction of performance-based funding dependent on quantifiable output, and, of course, tuition fees. This, in turn, led to rising precarity and insecurity among faculty and students, reflected, among other things, in a mental health crisis. Paradoxically, the only surviving element of the public university that seems to be doing relatively well in all this is critique. But what if the crisis of the university is, in fact, a crisis of imagination?

Don’t worry, this is not one of those posts that try to convince you that capitalism can be wished away by the power of positive thinking. Nor is it going to claim that neoliberalism offers unprecedented opportunities, if only we would be ‘creative’ enough to seize them. The crisis is real, it is felt viscerally by almost everyone in higher education, and – importantly – it is neither exceptional nor unique to universities. Exactly because it cannot be wished away, and exactly because it is deeply intertwined with the structures of the current crisis of capitalism, opposition to the current transformation of universities would need to involve serious thinking about long-term alternatives to current modes of knowledge production. Unfortunately, this is precisely the bit that tends to be missing from a lot of contemporary critique.

Present-day critique of neoliberalism in higher education often takes the form of nostalgic evocation of the glory days when universities were few, and funds for them plentiful. Other problems with this mythical Golden Age aside, what this sort of critique conveniently omits to mention is that the institutions that usually provide the background imagery for these fantastic constructs were both highly selective and highly exclusionary, and that they were built on the back of centuries of colonial exploitation. If it seemed like they conferred a life of relatively carefree privilege on those who studied and worked in them, that is exactly because this is what they were designed to do: cater to the "life of the mind" by excluding all forms of interference, particularly if these took the form of domestic (or any other material) labour, women, or minorities. This tendency is reproduced in Ivory Tower nostalgia as a defensive strategy: the dominant response to what critics tend to claim is the biggest challenge to universities since their founding (which, as they like to remind us, was a long, long time ago) is to stick their heads in the sand and collectively dream back to the time when, as Pink Floyd might put it, the grass was greener and the lights were brighter.

Ivory Tower nostalgia, however, is just one aspect of this crisis of imagination. A much broader symptom is that contemporary critique seems unable to imagine a world without the university. Since ideas of online disembedded learning were successfully monopolized by technolibertarian utopians, the best most academics seem to be able to come up with is to re-erect the walls of the institution, but make them slightly more porous. It’s as if the U of University and the U of Utopia were somehow magically merged. To extend the oft-cited and oft-misattributed saying, if it seems easier to imagine the end of the world than the end of capitalism, it is nonetheless easier to imagine the end of capitalism than the end of universities.

Why does an institution like the university have such a purchase on (utopian and dystopian) imagination? Thinking about universities is, in most cases, already imbued with the university, so one element pertains to the difficulty of perceiving the conditions of reproduction of one's own position (this mode of access from the outside, as object-oriented ontologists would put it, or complex externality, as Boltanski does, is something I'm particularly interested in). However, this isn't the case just with academic critique; fictional accounts of universities and other educational institutions are proliferating, and, in most cases (as I hope to show once I finally get around to writing the book on magical realism and universities), they reproduce the assumption of the value of the institution as such, as well as a lot of associated ideas, as this tweet conveys succinctly:

[Tweet screenshot]

This is, unfortunately, often the case even with projects whose explicit aim is to subvert existing inequalities in the context of knowledge production, including open, free, and workers' universities (the Social Science Centre in Lincoln maintains a useful map of these initiatives globally). While these are fantastic initiatives, most either have to 'piggyback' on university labour – that is, on the free or voluntary labour of people employed or otherwise paid by universities – or, at least, rely on existing universities for credentialisation. Again, this isn't to devalue those who invest time, effort, and emotion into such forms of education; rather, it is to flag that thinking about serious, long-term alternatives is necessary, and quickly at that. This is a theme I spend a lot of time thinking about, and one I hope to make central to my work in the future.

 

So what are we to do?

There’s an obvious bit of irony in suggesting a panel for a conference in order to discuss how the system is broken, but, in the absence of other forms, I am thinking of putting together a proposal for a workshop for the Sociological Review’s 2018 “Undisciplining: Conversations from the edges” conference. The good news is that the format is supposed to go outside of the ‘orthodox’ confines of panels and presentations, which means we could do something potentially exciting. The tentative title is Thinking about (sustainable?) alternatives to academic knowledge production.

I’m particularly interested in questions such as:

  • Qualifications and credentials: can we imagine a society where universities do not hold a monopoly on credentials? What would this look like?
  • Knowledge work: can we conceive of knowledge production (teaching and research) not only ‘outside of’, but without the university? What would this look like?
  • Financing: what other modes of funding for knowledge production are conceivable? Is there a form of public funding that does not involve universities (e.g., through an academic workers’ cooperative – Mondragon University in Spain is one example – or a guild)? What would be the implications of this, and how would it be regulated?
  • Built environment/space: can we think of knowledge not confined to specific buildings or an institution? What would this look like – how would it be organised? What would be the consequences for learning, teaching and research?

The format would need to be interactive – possibly a blend of on/off-line conversations – and can address the above, or any of the other questions related to thinking about alternatives to current modes of knowledge production.

If you’d like to participate/contribute/discuss ideas, get in touch by the end of October (the conference deadline is 27 November).

[UPDATE: Our panel got accepted! See you at Undisciplining conference, 18-21 June, Newcastle, UK. Watch this space for more news].

Theory as practice: for a politics of social theory, or how to get out of the theory zoo

 

[These are my thoughts/notes for “The Practice of Social Theory”, which Mark Carrigan and I are running at the Department of Sociology of the University of Cambridge from 4 to 6 September 2017].

 

Revival of theory?

 

It seems we are witnessing something akin to a revival of theory, or at least of an interest in it. In 2016, the British Journal of Sociology published Swedberg’s “Before theory comes theorizing, or how to make social sciences more interesting”, a longer version of its 2015 Annual Public Lecture, followed by responses from – among others – Krause, Schneiderhan, Tavory, and Karleheden. A string of recent books – including Matt Dawson’s Social Theory for Alternative Societies, Alex Law’s Social Theory for Today, and Craig Browne’s Critical Social Theory, to name but a few – set out to consider the relevance or contribution of social theory to understanding contemporary social problems. This is in addition to the renewed interest in the biography and contemporary relevance of social-philosophical schools such as Existentialism and the Frankfurt School.

To a degree, this revival happens on the back of the challenges posed to the status of theory by the rise of data science, leading Lizardo and Hay to engage in defense of the value and contributions of theory to sociology and international relations, respectively. In broader terms, however, it addresses the question of the status of social sciences – and, by extension, academic knowledge – more generally; and, as such, it brings us back to the justification of expertise, a question of particular relevance in the current political context.

The meaning of theory

Sure enough, theory has many meanings (Abend, 2008), and consequently many forms in which it is practiced. However, one characteristic that seems to be shared across the board is that it is part of (under)graduate training, after which it gets bracketed off in the form of “the theory chapter” of dissertations/theses. In this sense, theory is framed as foundational for socialization into a particular discipline, but, at the same time, rarely revisited – at least not explicitly – after the initial demonstration of aptitude. In other words, rather than something one keeps doing, theory becomes something that is ‘done with’. The exceptions, of course, are those who decide to make theory the centre of their intellectual pursuits; however, “doing theory” in this sense all too often becomes limited to the exegesis of existing texts (what Krause refers to as ‘theory a’ and Abend as ‘theory 4’), which leads to competition among theorists for the best interpretation of “what theorist x really wanted to say”, or, alternatively, to the application of existing concepts to new observations or ‘problems’ (‘theory b and c’, in Krause’s terms). Either way, the field of social theory resembles less the groves of Plato’s Academy than a zoo in which different species (‘Marxists’, ‘critical realists’, ‘Bourdieusians’, ‘rational-choice theorists’) dwell in their respective enclosures or fight with members of the same species for dominance of a circumscribed domain.

 

[Image: competitive behaviour among social theorists]

 

This summer school started from the ambition to change that: to go beyond rivalries or allegiances to specific schools of thought, and think about what doing theory really means. I often told people that wanting to do social theory was a major reason why I decided to do a second PhD; but what was this about? I did not say ‘learn more’ about social theory (my previous education provided a good foundation), ‘teach’ social theory (though supervising students at Cambridge is really good practice for this), read, or even write social theory (though, obviously, this was going to be a major component). While all of these are essential elements of becoming a theorist, the practice of social theory certainly isn’t reducible to them. Here are some of the other aspects I think we need to bear in mind when we discuss the return, importance, or practice of theory.

Theory is performance

This may appear self-evident once the focus shifts to ‘doing’, but we rarely talk about what practicing theory is meant to convey – that is, about theorising as a performative act. Some elements of this are not difficult to establish: doing theory usually signals identification with a specific group, or form of professional or disciplinary association. Most professional societies have committees, groups, and specific conference sessions devoted to theory – but that does not mean theory is exclusively practiced within them. In addition to belonging, theory also signifies status. In many disciplines, theoretical work has for years been held in high esteem; the flipside, of course, is that ‘theoretical’ is often taken to mean too abstract or divorced from everyday life, something that became a more pressing problem with the decline of funding for social sciences and the concomitant expectation to make them socially relevant. While the status of theory is a longer (and separate) topic, one that has been discussed at length in the history of sociology and other social sciences, it bears repeating that asserting one’s work as theoretical is always a form of positioning: it serves to define the standing of both the speaker and, sometimes implicitly, other contributors. This brings to mind that…

Theory is power

Not everyone gets to be treated as a theorist: it is also a question of recognition, and thus, a question of political (and other) forms of power. ‘Theoretical’ discussions are usually held between men (mostly, though not exclusively, white men); interventions from women, people of colour, and persons outside centres of epistemic power are often interpreted as empirical illustrations, or, at best, contributions to ‘feminist’ or ‘race’ theory*. Raewyn Connell wrote about this in Southern Theory, and initiatives such as Why is my curriculum white? and Decolonizing curriculum in theory and practice have brought it to the forefront of university struggles, but it speaks to the larger point made by Spivak: that the majority of mainstream theory treats the ‘subaltern’ as only empirical or ethnographic illustration of the theories developed in the metropolis.

The problem here is not only (or primarily) that of representation, in the sense in which theory thus generated fails to accurately depict the full scope of social reality, or experiences and ideas of different people who participate in it. The problem is in a fundamentally extractive approach to people and their problems: they exist primarily, if not exclusively, in order to be explained. This leads me to the next point, which is that…

Theory is predictive

A good illustration for this is offered by pundits and political commentators’ surprise at events in the last year: the outcome of the Brexit referendum (Leave!), US elections (Donald Trump!), and last but not least, the UK General Election (surge in votes for Corbyn!). Despite differences in how these events are interpreted, they in most cases convey that, as one pundit recently confessed, nobody has a clue about what is going on. Does this mean the rule of experts really is over, and, with it, the need for general theories that explain human action? Two things are worth taking into account.

To begin with, social-scientific theories enter the public sphere in a form that’s not only simplified, but also distilled into ‘soundbites’ or clickbait adapted to the presumed needs and preferences of the audience, usually omitting all the methodological or technical caveats they normally come with. For instance, the results of opinion polls or surveys are taken to present clear predictions, rather than reflections of general statistical tendencies; reliability is rarely discussed. Nor are social scientists always innocent victims of this media spin: some actively work on increasing their visibility or impact, and thus – perhaps unwittingly – contribute to the sensationalisation of social-scientific discourse. Second, and this can’t be put delicately, some of these theories are just not very good. ‘Nudgery’ and ‘wonkery’ often rest on not particularly sophisticated models of human behaviour; which is not to say that they do not work – they can – but rather that the theoretical assumptions underlying these models are rarely open to scrutiny.
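To make the point about statistical tendencies concrete, consider the margin of error that headline poll figures usually omit. The numbers below are purely illustrative (a hypothetical poll of 1,000 respondents with 52% support for one side), not drawn from any survey discussed here; a minimal sketch:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a simple random sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 52% support among 1,000 respondents
p, n = 0.52, 1000
moe = margin_of_error(p, n)
print(f"52% +/- {moe * 100:.1f} points")  # interval spans roughly 48.9% to 55.1%
```

A ‘52% lead’ in such a poll is thus statistically compatible with the race going either way – precisely the caveat that tends to be stripped out in media coverage.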

Of course, it doesn’t take a lot of imagination to figure out why this is the case: it is easier to believe that selling vegetables in attractive packaging can solve the problem of obesity than to invest in long-term policy planning and research on decision-making that has consequences for public health. It is also easier to believe that removing caps on tuition fees will result in universities charging fees normally distributed from lowest to highest than to bother reading theories of organizational behaviour in different economic and political environments and try to understand how these map onto the social structure and demographics of a rapidly changing society. In other words: theories are used to inform or predict human behaviour, but often in ways that reinforce existing divisions of power. So, just in case you didn’t see this coming…

Theory is political

All social theories are about constraints, including those that are self-imposed. From Marx to Freud and from Durkheim to Weber (and many non-white, non-male theorists who never made it into ‘the canon’), theories are about what humans can and cannot do; they are about how relatively durable relations (structures) limit and enable how they act (agency). Politics is, fundamentally, about the same thing: things we can and things we cannot change. We may denounce Bismarck’s definition of politics as the art of the possible as insufficiently progressive, but – at the risk of sounding obvious – understanding how (and why) things stay the same is fundamental to understanding how to go about changing them. The history of social theory, among other things, can be read as a story about shifting the boundaries of what was considered fixed and immutable, on the one hand, and constructed – and thus subject to change – on the other.

In this sense, all social theory is fundamentally political. This isn’t to license bickering over different historical materialisms, or to stimulate fantasies – so dear to intellectuals – of ‘speaking truth to power’. Nor should theories be understood as weapons in the ‘war of time’, despite Debord’s poetic formulation: this is but the flipside of intellectuals’ dream of domination, in which their thoughts (i.e. themselves) inspire the masses to revolt, usually culminating in their own ascendance to a position of power (thus conveniently cutting out the middleman in ‘speaking truth to power’, as they become the prime bearers of both).

Theory is political in a much simpler sense, in which it is about society and the elements that constitute it. As such, it has to be about understanding what it is that those we think of as society think, want, and do, even – and possibly, especially – when we do not agree with them. Rather than aiming to ‘explain away’ people, or fit their behaviour into pre-defined social models, social theory needs to learn to listen to – to borrow a term from politics – its constituents. This isn’t to argue for a (not particularly innovative) return to grounded theory, or to ethnography (despite the fact that both are relevant and useful). At the risk of sounding pathetic, perhaps the next step in the development of social theory is to really make it a form of social practice – that is, to make it be with the people, rather than about the people. I am not sure what this would entail, or what it would look like; but I am pretty certain it would be a welcome element of building a progressive politics. In this sense, doing social theory could become less the practice of endlessly revising a blueprint for a social theory zoo, and more a project of getting out from behind its bars.

 

 

*The tendency to interpret women’s interventions as if they are inevitably about ‘feminist theory’ (or, more frequently, as if they always refer to empirical examples) is a trend I have been increasingly noticing since moving into sociology, and definitely want to spend more time studying. This is obviously not to say there aren’t women in the field of social theory, but rather that gender (and race, ethnicity, and age) influence the level of generality at which one’s claims are read, thus reflecting the broader tendency to see universality and Truth as coextensive with the figure of the male and white academic.

 

 

Solving the democratic problem: intellectuals and reconciling epistemic and liberal democracy

[Image: …but where? Bristol, October 2014]

 

[This review of “Democratic problem-solving” (Cruickshank and Sassower eds., 2017) was first published in Social Epistemology Review and Reply Collective, 26 May 2017].

It is a testament to the lasting influence of Karl Popper and Richard Rorty that their work continues to provide inspiration for debates concerning the role and purpose of knowledge, democracy, and intellectuals in society. Alternatively, it is a testament to the recurrence of the problem that continues to lurk under the glossy analytical surface or occasional normative consensus of these debates: the impossibility of reconciling the concepts of liberal and epistemic democracy. Essays collected under the title Democratic Problem-Solving (Cruickshank and Sassower 2017) offer grounds for both assumptions, so this is what my review will focus on.

Boundaries of Rational Discussion

Democratic Problem-Solving is a thorough and comprehensive (if at times seemingly meandering) meditation on the implications of Popper’s and Rorty’s ideas for the social nature of knowledge and truth in the contemporary Anglo-American context. This context is characterised by the combined forces of neoliberalism and populism, growing social inequalities, and what has for a while now been dubbed, perhaps euphemistically, the crisis of democracy. Cruickshank’s (in other contexts almost certainly heretical) opening that questions the tenability of distinctions between Popper and Rorty, then, serves to remind us that both were devoted to the purpose of defining the criteria for and setting the boundaries of rational discussion, seen as the road to problem-solving. Jürgen Habermas, whose name also resonates throughout this volume, elevated communicative rationality to the foundational principle of Western democracies, as the unifying/normalizing ground from which to ensure the participation of the greatest number of members in the public sphere.

Intellectuals were, in this view, positioned as guardians—epistemic police, of sorts—of this discursive space. Popper’s take on epistemic ‘policing’ (see DPS, 42) was to use the standards of scientific inquiry as exemplars for maintaining a high level, and, more importantly, neutrality of public debates. Rorty saw it as the minimal instrument that ensured civility without questioning, or at least without implicitly dismissing, others’ cultural premises, or even ontological assumptions. The assumption they and authors in this volume have in common is that rational dialogue is, indeed, both possible and necessary: possible because standards of rationality were shared across humanity, and necessary because it was the best way to ensure consensus around the basic functioning principles of democracy. This also ensured the pairing of knowledge and politics: by rendering visible the normative (or political) commitments of knowledge claims, sociology of knowledge (as Reed shows) contributed to affirming the link between the epistemic and the political. As Agassi’s syllogism succinctly demonstrates, this link quickly morphed from signifying correlation (knowledge and power are related) to causation (the more knowledge, the more power), suggesting that epistemic democracy was if not a precursor, then certainly a correlate of liberal democracy.

This is why Democratic Problem-Solving cannot avoid running up against the issue of public intellectuals (qua epistemic police), and, obviously, their relationship to ‘Other minds’ (communities being policed). In the current political context, however, to the well-exercised questions Sassower raises such as—

should public intellectuals retain their Socratic gadfly motto and remain on the sidelines, or must they become more organically engaged (Gramsci 2011) in the political affairs of their local communities? Can some academics translate their intellectual capital into a socio-political one? Must they be outrageous or only witty when they do so? Do they see themselves as leaders or rather as critics of the leaders they find around them (149)?

—we might need to add the following: “And what if none of this matters?”

After all, differences in vocabularies of debate matter only if access to it depends on their convergence to a minimal common denominator. The problem for the guardians of the public sphere today is not whom to include in these debates and how, but rather what to do when those ‘others’ refuse, metaphorically speaking, to share the same table. Populist right-wing politicians have at their disposal a wealth of ‘alternative’ outlets (Breitbart, Fox News, and increasingly, it seems, even the BBC), not to mention ‘fake news’ or the ubiquitous social media. The public sphere, in this sense, resembles less a (however cacophonous) town hall meeting than a series of disparate village tribunals. Of course, as Fraser (1990) noted, fragmentation of the public sphere has been inherent since its inception within the Western bourgeois liberal order.

The problem, however, is less what happens when other modes of arguing emerge and demand to be recognized, and more what happens when they aspire for redistribution of political power that threatens to overturn the very principles that gave rise to them in the first place. We are used to these terms denoting progressive politics, but there is little that prevents them from being appropriated for more problematic ideologies: after all, a substantial portion of the current conservative critique of the ‘culture of political correctness’, especially on campuses in the US, rests on the argument that ‘alternative’ political ideologies have been ‘repressed’, sometimes justifying this through appeals to the freedom of speech.

Dialogic Knowledge

In assuming a relatively benevolent reception of scientific knowledge, then, appeals such as Chis and Cruickshank’s to engage with different publics—whether as academics, intellectuals, workers, or activists—remain faithful to Popper’s normative ideal concerning the relationship between reasoning and decision-making: ‘the people’ would see the truth, if only we were allowed to explain it a bit better. Obviously, in arguing for dialogical, co-produced modes of knowledge, we are disavowing the assumption of a privileged position from which to do so; but, all too often, we let in through the back door the implicit assumption of the normative force of our arguments. It rarely, if ever, occurs to us that those we wish to persuade may have nothing to say to us, may be immune or impervious to our logic, or, worse, that we might not want to argue with them.

For if social studies of science taught us anything, it is that scientific knowledge is, among other things, a culture. An epistemic democracy of the Rortian type would mean that it’s a culture like any other, and thus not automatically entitled to a privileged status among other epistemic cultures, particularly not if its political correlates are weakened—or missing (cf. Hart 2016). Populist politics certainly has no use for critical slow dialogue, but it is increasingly questionable whether it has use for dialogue at all (at the time of writing of this piece, in the period leading up to the 2017 UK General Election, the Prime Minister is refusing to debate the Leader of the Opposition). Sassower’s suggestion that neoliberalism exhibits a penchant for justification may hold a promise, but, as Cruickshank and Chis (among others) show on the example of UK higher education, ‘evidence’ can be adjusted to suit a number of policies, and political actors are all too happy to do that.

Does this mean that we should, as Steve Fuller suggested in another SERRC article, see in ‘post-truth’ the STS symmetry principle? I am skeptical. After all, judgments of validity are the privilege of those who can still exert a degree of control over access to the debate. In this context, I believe that questions of epistemic democracy, such as who has the right to make authoritative knowledge claims, in what context, and how, need to, at least temporarily, come second to questions of liberal democracy. This is not to be teary-eyed about liberal democracy: if anything, my political positions lie closer to Cruickshank and Chis’ anarchism. But it is the only system that can—hopefully—be preserved without a massive cost in human lives, and perhaps repurposed so as to make them more bearable.

In this sense, I wish the essays in the volume confronted head-on questions such as whether we should defend epistemic democracy (and which versions of it) if its principles are mutually exclusive with liberal democracy, or, conversely, whether we would uphold liberal democracy if it threatened to suppress epistemic democracy. For the question of standards of public discourse is going to keep coming up, but it may decreasingly have the character of an academic debate, and increasingly concern the possibility of having one at all. This may turn out to be, so to speak, a problem that precedes all other problems. Essays in this volume have opened up important avenues for thinking about it, and I look forward to seeing them discussed in the future.

References

Cruickshank, Justin and Raphael Sassower. Democratic Problem Solving: Dialogues in Social Epistemology. London: Rowman & Littlefield, 2017.

Fraser, Nancy. “Rethinking the Public Sphere: A Contribution to the Critique of Actually Existing Democracy.” Social Text 25/26 (1990): 56-80.

Fuller, Steve. “Embrace the Inner Fox: Post-Truth as the STS Symmetry Principle Universalized.” Social Epistemology Review and Reply Collective, December 25, 2016. http://wp.me/p1Bfg0-3nx

Hart, Randle J. “Is a Rortian Sociology Desirable? Will It Help Us Use Words Like ‘Cruelty’?” Humanity and Society, 40, no. 3 (2016): 229-241.

Boundaries and barbarians: ontological (in)security and the [cyber?] war on universities

Prologue

One Saturday in late January, I go to the PhD office at the Department of Sociology at the University of Cambridge’s New Museums site (yes, PhD students shouldn’t work on Saturdays, and yes, we do). I swipe my card at the main gate of the building. Nothing happens.

I try again, and again, and still nothing. The sensor stays red. An interaction with a security guard who seems to appear from nowhere conveys that there is nothing wrong with my card; apparently, there has been a power outage and the whole system has been reset. A rather distraught-looking man from the Department of History and Philosophy of Science appears around the corner, insisting on being let back inside the building, where he has left a computer on with, he claims, sensitive data. The very amicable security guard apologises. There’s nothing he can do to let us in. His card doesn’t work, either, and the system has to be manually reset from the computers inside each departmental building.

You mean the building no one can currently access, I ask.

I walk away (after being assured the issue would be resolved on Monday) plotting sci-fi campus novels in which Skynet is not part of a Ministry of Defense, but of a university; rogue algorithms claim GCSE test results; and classes are rescheduled in a way that sends engineering undergrads to colloquia in feminist theory, and vice versa (the distances one’s mind will go to avoid thinking about impending deadlines)*. Regretfully pushing prospective pitches to fiction publishers aside (temporarily)**, I find the incident particularly interesting for the perspective it offers on how we think about the university as an institution: its spatiality, its materiality, its boundaries, and the way its existence relates to these categories – in other words, its social ontology.

War on universities?

Critiques of the current transformation of higher education and research in the UK often frame it as an attack, or ‘war’, on universities (this is where the first part of the title of my thesis comes from). Exaggeration for rhetorical purposes notwithstanding, being ‘under attack’ suggests that it is possible to distinguish the University (and the intellectual world more broadly) from its environment, in this case at least in part populated by forces that threaten its very existence. Notably, this distinction remains almost untouched even in policy narratives (including those that seek to promote public engagement and/or impact) that stress the need for universities to engage with the (‘surrounding’) society, which tend to frame this imperative as ‘going beyond the walls of the Ivory Tower’.

The distinction between universities and society has a long history in the UK: the university’s built environment (buildings, campuses, gates) and rituals (dress, residence requirements/‘keeping term’, conventions of language) were developed to reflect the separateness of education from ordinary experience, enshrined in the dichotomies of intellectual vs. manual labour, active life vs. ‘life of the mind’ and, not least, Town vs. Gown. Of course, with the rise of ‘redbrick’ and, later, ‘plateglass’ universities, this distinction became somewhat less pronounced. Rather than in terms of blurring, however, I would like to suggest we need to think of this as a shift in scale: the relationship between ‘Town’ and ‘Gown’, after all, is embedded in the broader framework of distinctions between urban and suburban, urban and rural, regional and national, national and global, and the myriad possible forms of hybridisation between these (recent work by Addie, Keil and Olds, as well as Robertson et al., offers very good insights into issues related to theorising scale in the context of higher education).

Policing the boundaries: relational ontology and ontological (in)security

What I find most interesting, in this setting, is the way in which boundaries between these categories are maintained and negotiated. In sociology, the negotiation of boundaries in academia has been studied in detail by, among others, Michèle Lamont (in How Professors Think, as well as in an overview by Lamont and Molnár), Thomas Gieryn (in Cultural Boundaries of Science and a few other texts), and Andrew Abbott in Chaos of Disciplines (and, of course, before that, in sociologically-inclined philosophy of science, including Feyerabend’s Against Method, Lakatos’ work on research programmes, and Kuhn’s on scientific revolutions). Social anthropology has an even longer-standing obsession with boundaries, symbolic as well as material – Mary Douglas’ work, in particular, as well as Augé’s Non-Places, offers a good entry point, converging with sociology on the ground of a neo-Durkheimian reading of the distinction between the sacred and the profane.

My interest in the cultural framing of boundaries goes back to my first PhD, which explored the construal of the category of (romantic) relationship through the delineation of its difference from other types of interpersonal relations. The concept resurfaced in research on public engagement in UK higher education: here, the negotiation of boundaries between ‘inside’ (academics) and ‘outside’ (different audiences), as well as between different groups within the university (e.g. administrators vs. academics), becomes evident through practices of engaging in the dissemination and, sometimes, coproduction of knowledge (some of this is in my contribution to this volume). The thread that runs through these cases is the importance of positioning in relation to a (relatively) specified Other; in other words, a relational ontology.

It is not difficult to see the role of negotiating boundaries between ‘inside’ and ‘outside’ in the concept of ontological security (e.g. Giddens, 1991). Recent work in IR (e.g. Ejdus, 2017) has shifted the focus from Giddens’ emphasis on social relations to the importance of the stability of material forms, including buildings. I think we can extend this to universities: in this case, however, it is not (only) the building itself that is ‘at risk’ (this can be observed in the intensified securitisation of campuses, both through material structures such as gates and card-only entrances, and through modes of surveillance such as Prevent – see e.g. Gearon, 2017), but also the materiality of the institution itself. While the MOOC hype may have (thankfully) subsided (though not disappeared), there is the ubiquitous social media, which, as quite a few people have argued, tests the salience of the distinction between ‘inside’ and ‘outside’ (I’ve written a bit about digital technologies as mediating the boundary between universities and the ‘outside world’ here, as well as in an upcoming article in a Globalisation, Societies and Education special issue that deals with reassembling knowledge production with/out the university).

Barbarians at the gates

In this context, it should not be surprising that many academics fear digital technologies: anything that tests the material/symbolic boundaries of our own existence is bound to be seen as troubling/dirty/dangerous. This brings to mind Cavafy’s poem (and J.M. Coetzee’s novel) Waiting for the Barbarians, in which an outpost of the Empire prepares for an attack by ‘the barbarians’ that, in fact, never arrives. The trope of the university as a bulwark against and/or in danger of descending into barbarism has been explored by a number of writers, including Thorstein Veblen and, more recently, Roy Coleman. Regardless of the accuracy or historical stretchability of the trope, what I am most interested in is its use as a simultaneously diagnostic and normative narrative that frames and situates the current transformation of higher education and research.

As the last line of Kavafy’s poem suggests, barbarians represent ‘a kind of solution’: a solution for the otherwise unanswered question of the role and purpose of universities in the 21st century, which began to be asked ever more urgently with the post-war expansion of higher education, only to be shut down by the integration/normalization of the soixante-huitards in what Boltanski and Chiapello have recognised as contemporary capitalism’s almost infinite capacity to appropriate critique. Disentangling this dynamic is key to understanding contemporary clashes and conflicts over the nature of knowledge production. Rather than locating dangers to the university firmly beyond the gates, then, perhaps we could use the current crisis to think about how we perceive, negotiate, and preserve the boundaries between ‘in’ and ‘out’. Until we have a space to do that, I believe we will continue building walls only to realise we have been left on the wrong side.

(*) I have a strong interest in campus novels, both for PhD-related and unrelated reasons, as well as a long-standing interest in Sci-Fi, but with the exception of DeLillo’s White Noise can think of very few works that straddle both genres; would very much appreciate suggestions in this domain!

(**) I have been thinking for a while about a book that would be a spin-off from my current PhD that would combine social theory, literature, and critical cultural political economy, drawing on similarities and differences between critical and magical realism to look at universities. This can be taken as a sketch for one of the chapters, so all thoughts and comments are welcome.

@Grand_Hotel_Abyss: digital university and the future of critique

[This post was originally published on 03/01 2017 in Discover Society Special Issue on Digital Futures. I am also working on a longer (article) version of it, which will be uploaded soon].

It is by now commonplace to claim that digital technologies have fundamentally transformed knowledge production. This applies not only to how we create, disseminate, and consume knowledge, but also to who, in this process, counts as ‘we’. Science and technology studies (STS) scholars argue that knowledge is an outcome of coproduction between (human) scientists and the objects of their inquiry; object-oriented ontology and speculative realism go further, rejecting the ontological primacy of humans in the process. For many, it would not be overstretching to say that machines not only process knowledge, but are actively involved in its creation.

What remains somewhat underexplored in this context is the production of critique. Scholars in the social sciences and humanities fear that the changing funding and political landscape of knowledge production will diminish the capacity of their disciplines to engage critically with society, leading to what some have dubbed the ‘crisis’ of the university. Digital technologies are often framed as contributing to this process: speeding up the rate of production, simultaneously multiplying and obfuscating the labour of academics, perhaps even, as Lyotard predicted, displacing it entirely. Tensions between more traditional views of the academic role and new digital technologies are reflected in often heated debates over academics’ use of social media (see, for instance, #seriousacademic on Twitter). Yet, despite polarized opinions, there is little systematic research into the links between the transformation of the conditions of knowledge production and critique.

My work is concerned with the possibility – that is, the epistemological and ontological foundations – of critique, and, more precisely, how academics negotiate it in contemporary (‘neoliberal’) universities. Rather than trying to figure out whether digital technologies are ‘good’ or ‘bad’, I think we need to consider what it is about the way they are framed and used that makes them either. From this perspective, which could be termed the social ontology of critique, we can ask: what is it about ‘the social’ that makes critique possible, and how does it relate to ‘the digital’? How is this relationship constituted, historically and institutionally? Lastly, what does this mean for the future of knowledge production?

Between pre-digital and post-critical 

There are a number of ways one can go about studying the relationship between digital technologies and critique in the contemporary context of knowledge production. David Berry and Christian Fuchs, for instance, both use critical theory to think about the digital. Scholars in political science, STS, and sociology of intellectuals have written on the multiplication of platforms from which scholars can engage with the public, such as Twitter and blogs. In “Uberfication of the University”, Gary Hall discusses how digital platforms transform the structure of academic labour. This joins the longer thread of discussions about precarity, new publishing landscapes, and what this means for the concept of ‘public intellectual’.

One of the challenges of theorising this relationship is that it has to be developed out of the very conditions it sets out to criticise. This points to the limitations of viewing ‘critique’ as a defined and bounded practice, or the ‘public intellectual’ as a fixed and separate figure, and trying to observe how either has changed with the introduction of the digital. While the use of social media may be a more recent phenomenon, it is worth recalling that the bourgeois public sphere that gave rise to the practice of critique in its contemporary form was already profoundly mediatised. Whether one thinks of petitions and pamphlets in the Dreyfus affair, or discussions on Twitter and Facebook – there is no critique without an audience, and digital technologies are essential to how we imagine them. In this sense, grounding an analysis of the contemporary relationship between the conditions of knowledge production and critique in the ‘pre-digital’ is similar to grounding it in the post-critical: both are techniques of ‘ejecting’ oneself from the confines of the present situation.

The dismissiveness Adorno and other members of the Frankfurt school could exercise towards mass media, however, is more difficult to parallel in a world in which it is virtually impossible to remain isolated from digital technologies. Today’s critics may, for instance, avoid having a professional profile on Twitter or Facebook, but they are probably still using at least some type of social media in their private lives, not to mention responding to emails, reading articles, and searching and gathering information through online platforms. To this end, one could say that academics publicly criticising social media engage, in fact, in a performative contradiction: their critical stance is predicated on the existence of digital technologies both as objects of critique and main vehicles for its dissemination.

This, I believe, is an important source of the perceived tensions between the concept of critique and digital technologies. Traditionally, critique implies a form of distancing from one’s social environment. This distancing is seen as both spatial and temporal: spatial, in the sense of providing a vantage point from which the critic can observe and (choose to) engage with society; temporal, in the sense of affording shelter from the ‘hustle and bustle’ of everyday life, necessary to stimulate critical reflection. Universities, at least for a good part of the 20th century, were tasked with providing both. Lukács, in his account of the Frankfurt school, satirized this as “taking residence in the ‘Grand Hotel Abyss’”: engaging in critique from a position of relative comfort, from which one can stare ‘into nothingness’. Yet, what if the Grand Hotel Abyss has a wifi connection?

Changing temporal frames: beyond the Twitter intellectual?

Some potential perils of the ‘always-on’ culture and of contracting temporal frames for critique are reflected in the widely publicized case of Steven Salaita, an internationally recognized scholar in the field of Native American studies and American literature. In 2013, Salaita was offered a tenured position at the University of Illinois. However, in 2014 the Board of Trustees withdrew the offer, citing Salaita’s “incendiary” posts on Twitter as the reason. Salaita is a vocal critic of Israel, and his Tweets at the time concerned the Israeli military offensive in the Gaza Strip; some of the University’s donors found this problematic and pressured the Board to withdraw the offer. Salaita has since appealed the decision and received a settlement from the University of Illinois, but the case – though by no means unique – drew attention to the (im)possibility of separating the personal, the political, and the professional on social media.

At the same time, social media can provide venues for practicing critique in ways not confined by the conventions or temporal cycles of academia. The example of Eric Jarosinski, “the rock star philosopher of Twitter”, shows this clearly. Jarosinski is a Germanist whose Tweets contain clever puns on the Frankfurt school, as well as, among others, Hegel and Nietzsche. In 2013, he took himself out of consideration for tenure at the University of Pennsylvania, but continued to compose philosophically inspired Tweets, eventually earning a huge following, as well as columns in the largest newspapers in Germany and the Netherlands. Jarosinski’s moniker, #failedintellectual, is a self-ironic reminder that it is possible to succeed whilst deviating from the established routes of intellectual critique.

The different ways in which critique can be performed on Twitter should not, however, detract from the fact that it operates in fundamentally politicized and stratified spaces; digital technologies can render these spaces more accessible, but that does not mean they are more democratic or offer a better view of ‘the public’. This is particularly worth remembering in light of recent political events in the UK and the US. Once the initial shock following the US election and the British EU referendum had subsided, many academics (and intellectuals more broadly) took to social media to comment, evaluate, or explain what had happened. Yet, for the most part, these interventions end exactly where they began – on social media. This amounts to live-Tweeting from the balcony of the Grand Hotel Abyss: the view is good, but the abyss no less gaping for it.

By sticking to critique on social media, intellectuals are, essentially, doing what they have always been good at – engaging with audiences and in ways they feel comfortable with. To this end, criticizing the ‘alt-right’ on Twitter is not altogether different from criticising it in lecture halls. Of course, no intellectual critique can aspire to address all possible publics, let alone equally. However, it makes sense to think about how the ways in which we imagine our publics influence our capacity to understand the society we live in; and, perhaps more importantly, how they influence our ability to predict – or imagine – its future. In its present form, critique seems far better suited to an idealized Habermasian public sphere than to the political landscape that will carry over into the 21st century. Digital technologies can offer an approximation, perhaps even a good simulation, of the former; but that, in and of itself, does not mean that they can solve the problems of the latter.

Jana Bacevic is a PhD researcher at the Department of Sociology at the University of Cambridge. She works on social theory and the politics of knowledge production; her thesis deals with the social, epistemological and ontological foundations of the critique of neoliberalism in higher education and research in the UK. Previously, she was a Marie Curie fellow at the University of Aarhus in Denmark, within the Universities in the Knowledge Economy (UNIKE) project. She tweets at @jana_bacevic.

One more time with [structures of] feeling: anxiety, labour, and social critique in/of the neoliberal academia

Florence, April 2013

Last month, I attended the symposium on Anxiety and Work in the Accelerated Academy, the second in the Accelerated Academy series that explores the changing scapes of time, work, and productivity in academia. Given that my research is fundamentally concerned with the changing relationships between universities and publics, and the concomitant reframing of the subjectivity, agency, and reflexivity of academics, I naturally found the question of the intersection of academic labour and time relevant. One particular bit resonated for a long time: in her presentation, Maggie O’Neill from the University of York suggested that anxiety has become the primary structure of feeling in the neoliberal academia. Having found myself, in the period leading up to the workshop, increasingly reflecting on structures of feeling, I was intrigued by the salience of the concept. Is there a place for theoretical concepts such as this in research on the transformations of knowledge production in contemporary capitalism, and, if so, where?

All the feels

“Structure of feeling” may well be one of those ideas whose half-life has far exceeded its initial purview. Raymond Williams gave it its fullest formulation in a brief chapter of Marxism and Literature, contributing to carving out what would become known as the distinctly British take on the relationship between “base” and “superstructure”: cultural studies. In it, he says:

Specific qualitative changes are not assumed to be epiphenomena of changed institutions, formations, and beliefs, or merely secondary evidence of changed social and economic relations between and within classes. At the same time they are from the beginning taken as social experience, rather than as ‘personal’ experience or as the merely superficial or incidental ‘small change’ of society. They are social in two ways that distinguish them from reduced senses of the social as the institutional and the formal: first, in that they are changes of presence (while they are being lived this is obvious; when they have been lived it is still their substantial characteristic); second, in that although they are emergent or pre-emergent, they do not have to await definition, classification, or rationalization before they exert palpable pressures and set effective limits on experience and on action. Such changes can be defined as changes in structures of feeling. (Williams, 1977:130).

Williams thus introduces structures of feeling as a form of social diagnostic; he posits it against the more durable but also more formal concepts of ‘world-view’ or ‘ideology’. Indeed, the whole chapter is devoted to the critique of the reificatory tendencies of Marxist social analysis: the idea of things (or ideas) being always ‘finished’, always ‘in the past’, in order for them to be subjected to analytical scrutiny. The concept of “structure of feeling” is thus invoked in order to keep tabs on social change and capture the perhaps less palpable elements of transformation as they are happening.

Emotions and the scholastic disposition

Over the past few years, the discourse of feelings has certainly become more prominent in academia. Just last week, Cambridge’s Festival of Ideas featured a discussion on the topic, framing it within issues of free speech and trigger warnings on campus. While the debate itself has a longer history in the US, it has begun to attract more attention in the UK – most recently in relation to challenging colonial legacies at both Oxford and Cambridge.

Despite the multiple nuances of political context and the complex interrelation between imperialism and higher education, the debate in the media predominantly plays out in dichotomies of ‘thinking’ and ‘feeling’. Opponents tend to pit trigger warnings, or the “culture of offence”, against the concept of academic freedom, arguing that today’s students are too sensitive and “coddled”, which, in their view, runs against the very purpose of university education. From this perspective, education is about ‘cultivating’ feelings: exercising control, submerging them under the strict institutional structures of the intellect.

Feminist scholars, in particular, have extensively criticised this view for its reductionist properties and, not least, its propensity to translate into institutional and disciplinary policies that seek to exclude everything framed as ‘emotional’, bodily, or material (and, by association, ‘feminine’) from academic knowledge production. But the cleavage runs deeper. Research in social sciences is often framed in the dynamic of ‘closeness’ and ‘distancing’, ‘immersion’ and ‘purification’: one first collects data by aiming to be as close as possible to the social context of the object of research, but then withdraws from it in order to carry out analysis. While approaches such as grounded theory or participatory methods (cl)aim to transcend this boundary, its echoes persist in the structure of presentation of academic knowledge (for instance, the division between data and results), as well as the temporal organisation of graduate education (for instance, the idea that the road to PhD includes a period of training in methods and theories, followed by data collection/fieldwork, followed by analysis and the ‘writing up’ of results).

The idea of ‘distanced reflection’ is deeply embedded in the history of academic knowledge production. In Pascalian Meditations, Bourdieu relates it to the concept of skholē – the scholarly disposition – predicated on the distinction between intellectual and manual labour. In other words, in order for reflection to exist, it needed to be separated from the vagaries of everyday existence. One of its radical manifestations is the idea of the university as a monastic community. Oxford and Cambridge, for instance, were explicitly constructed on this model, giving rise to animosities between ‘town’ and ‘gown’: the concerns of the ‘lay’ folk were thought to be diametrically opposed to those of the educated. While arguably less prominent in (most) contemporary institutions of knowledge production, the dichotomy is still unproblematically transposed into concepts such as the “university’s contribution to society”, which assume that universities are distinct from society, or at least that their interests are radically different from those of “the society” – raising obvious questions about who, in fact, this society is.

Emotions, reason, and critique

Paradoxically, perhaps, one of the strongest reverberations of the idea is to be found in the domain of social critique. On the one hand, this sounds counter-intuitive – after all, critical social science should be about abandoning the ‘veneer’ of neutrality and engaging with the world in all of its manifestations. However, establishing the link between social science and critique rests on something that Boltanski, in his critique of Bourdieu’s sociology of domination, calls the metacritical position:

For this reason we shall say that critical theories of domination are metacritical in order. The project of taking society as an object and describing the components of social life or, if you like, its framework, appeals to a thought experiment that consists in positioning oneself outside this framework in order to consider it as a whole. In fact, a framework cannot be grasped from within. From an internal perspective, the framework coincides with reality in its imperious necessity. (Boltanski, 2011:6-7)

Academic critique, in Boltanski’s view, requires assuming a position of exteriority. A ‘simple’ form of exteriority rests on description: it requires ‘translation’ of lived experience (or practices) into categories of text. However, passing the kind of moral judgements critical theory rests on calls for, he argues, a different form of distancing: complex exteriority.

In the case of sociology, which at this level of generality can be regarded as a history of the present, with the result that the observer is part of what she intends to describe, adopting a position of exteriority is far from self-evident… This imaginary exit from the viscosity of the real initially assumes stripping reality of its character of implicit necessity and proceeding as if it were arbitrary (as if it could be other than it is or even not be);

This “exit from the viscosity of the real” (a lovely phrase!) proceeds in two steps. The first takes the form of “control of desire”, that is, procedural distancing from the object of research. The second is the act of judgement by which a social order is ‘ejected’, seen in its totality, and as such evaluated from the outside:

In sociology the possibility of this externalization rests on the existence of a laboratory – that is to say, the employment of protocols and instructions respect for which must constrain the sociologist to control her desires (conscious or unconscious). In the case of theories of domination, the exteriority on which critique is based can be called complex, in the sense that it is established at two different levels. It must first of all be based on an exteriority of the first kind to equip itself with the requisite data to create the picture of the social order that will be submitted to critique. A metacritical theory is in fact necessarily reliant on a descriptive sociology or anthropology. But to be critical, such a theory also needs to furnish itself, in ways that can be explicit to very different degrees, with the means of passing a judgement on the value of the social order being described. (ibid.)

Critique: inside, outside, in-between?

To what degree can this categorisation be applied to the current critique of the conditions of knowledge production in academia? After all, most of those who criticize the neoliberal transformation of higher education and research are academics. In this sense, it makes sense to question the degree to which they can lay claim to a position of exteriority. More problematically (or interestingly), however, it is also questionable whether a position of exteriority is achievable at all.

Boltanski draws attention to this problem by emphasising the distinction between the cognition – awareness – of ‘ordinary’ actors and that of sociologists (or other social scientists), the latter being, presumably, able to perceive structures of domination that the subjects of their research do not:

Metacritical theories of domination tackle these asymmetries from a particular angle – that of the miscognition by the actors themselves of the exploitation to which they are subject and, above all, of the social conditions that make this exploitation possible and also, as a result, of the means by which they could stop it. That is why they present themselves indivisibly as theories of power, theories of exploitation and theories of knowledge. By this token, they encounter in an especially vexed fashion the issue of the relationship between the knowledge of social reality which is that of ordinary actors, reflexively engaged in practice, and the knowledge of social reality conceived from a reflexivity reliant on forms and instruments of totalization – an issue which is itself at the heart of the tensions out of which the possibility of a social science must be created (Boltanski, 2011:7)

Hotel Academia: you can check out any time you like, but you can never leave?

How does one go about thinking about the transformation of the conditions of knowledge production when one is at the same time reflexively engaged in practice and relying on the reflexivity provided by sociological instruments? Is it at all possible? The feelings of anxiety, to this end, could be provoked exactly by this lack of opportunity to step aside – to disembed oneself from the academic life and reflect on it at the leisurely pace of skholē. On the one hand, this certainly has to do with the changing structure and tempo of academic life – acceleration and demands for increased output: in this sense, anxiety is a reaction to the changes perceived and felt, the feeling that the ground is no longer stable, like a sense of vertigo. On the other hand, however, this feeling of decentredness could be exactly what contemporary critique calls for.

The challenge, of course, is how to turn this “structure of feeling” into something that has analytical as well as affective power – and can transform the practice itself. Stravinsky’s Rite of Spring, I think, is a wonderful example of this. As a piece of music, it is fundamentally disquieting: its impact derives primarily from the fact that it disrupted what were, at the time, the expectations of the (musical) genre, and, in the process, rewrote them.

In other words, anxiety could be both creative and destructive. This, however, is not some broad call to “embrace anxiety”. There is a clear and pertinent need to understand the way in which the transformations of working conditions – everywhere, and also in the context of knowledge production – are influencing the sense of self and what is commonly referred to as mental health or well-being.

However, in this process, there is no need to externalise anxiety (or other feelings): that is, to frame it as if it were caused by forces outside of, or completely independent from, human influence, including within academia itself (for instance, government policies, or political changes on the supranational level). Equally, there is no need to completely internalise it, in the sense of ascribing it to the embodied experience of individuals only. If feelings occupy the unstable ‘middle ground’ between institutions and individuals, this is the position from which they will have to be thought. If anxiety is an interpretation of the changes in the structures of knowledge production, its critique cannot but stem from the same position. This position is not ‘outside’, but rather ‘in-between’; insecure and thought-provoking, but no less potent for that.

Which, come to think of it, may be what Williams was trying to say all along.

All the feels

This poster drew my attention while I was working in the library of Cambridge University a couple of weeks ago:

[image: library poster]

For a while now, I have been fascinated with the way in which the language of emotions, or affect, has penetrated public discourse. People ‘love’ all sorts of things: the way a film uses interior light, the icing on a cake, their friend’s new hairstyle. They ‘hate’ Donald Trump, the weather, next door neighbours’ music. More often than not, conversations involving emotions would not be complete without mentioning online expressions of affect, such as ‘likes’ or ‘loves’ on Facebook or on Twitter.

Of course, the presence of emotions in human communication is nothing new. Even ‘ordinary’ statements – such as, for instance, “it’s going to rain tomorrow” – frequently entail an affective dimension (most people would tend to get at least slightly disappointed at the announcement). Yet, what I find peculiar is that the language of affect is becoming increasingly present not only in communication mediated by non-human entities, but also in communication directed at them. Can you really ‘love’ a library? Or be ‘friends’ with your local coffee place?

This isn’t in any way to concede ground to techno-pessimists who blame social media for ‘declining’ standards in human communication, nor even to express concern over the ways in which affective ‘reaction’ buttons allow the tracking of online behaviour (privacy is always a problem, and ‘unmediated’ communication largely a fiction). Even if face-to-face interaction is qualitatively different from online interaction, there is nothing to support the claim that it is inherently more valuable, or, indeed, more ‘real’ (see: the “IRL fetish” [i]). It is the social and cultural framing of these emotions, and, especially, the way the social sciences think about them – the social theory of affect, if you wish – that concerns me here.

Fetishism and feeling

So what is different about ‘loving’ your library as opposed to, say, ‘loving’ another human being? One possible way of going about this is to interpret expressions of emotion directed at or through non-human entities as ‘shorthand’ for those aimed at other human beings. The kernel of this idea is contained in Marx’s concept of commodity fetishism: emotion, or affect, directed at an object obscures the all-too-human (in his case, capital) relationship behind it. In this sense, ‘liking’ your local coffee place would be an expression of appreciation for the people who work there, for the way they make a double macchiato, or just for the times you spent there with friends or other significant others. In human-to-human communication, things would be even more straightforward: generally speaking, ‘liking’ someone’s status updates, photos, or Tweets would signify appreciation of/for the person, agreement with, or general interest in, what they’re saying.

But what if it is actually the inverse? What if, in ‘liking’ something on Facebook or on Twitter, the human-to-human relationship is, in fact, epiphenomenal to the act? The prime currency of online communication is thus the expenditure of (emotional) energy, not the relationship that it may (or may not) establish or signify. In this sense, it is entirely irrelevant whether one is liking an inanimate object (or concept), or a person. Likes or other forms of affective engagement do not constitute any sort of human relationship; the only thing they ‘feed’ is the network itself. The network, at the same time, is not an expression, reflection, or (even) simulation of human relationships: it is the primary structure of feeling.

All hail…

Yuval Noah Harari’s latest book, Homo Deus, puts the issue of emotions at the centre of the discussion of the relationship between human and AI. In a review in The Guardian, David Runciman writes:

“Human nature will be transformed in the 21st century because intelligence is uncoupling from consciousness. We are not going to build machines any time soon that have feelings like we have feelings: that’s consciousness. Robots won’t be falling in love with each other (which doesn’t mean we are incapable of falling in love with robots). But we have already built machines – vast data-processing networks – that can know our feelings better than we know them ourselves: that’s intelligence. Google – the search engine, not the company – doesn’t have beliefs and desires of its own. It doesn’t care what we search for and it won’t feel hurt by our behaviour. But it can process our behaviour to know what we want before we know it ourselves. That fact has the potential to change what it means to be human.”

On the surface, this makes sense. Algorithms can measure our ‘likes’ and other emotional reactions and combine them into ‘meaningful’ patterns – e.g., correlate them with specific background data (age, gender, location), time of day, etc. – and, on this basis, predict how you will act (click, shop) in specific situations. However, does this amount to ‘knowledge’? In other words, if machines cannot have feelings – and Harari seems adamant that they cannot – how can they actually ‘know’ them?
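To make the counting involved concrete, here is a deliberately crude sketch in Python – all data, field names, and function names are invented for illustration – of the kind of pattern-matching such algorithms perform: estimating the probability of a click from a profile built out of past ‘likes’ and background data.

```python
from collections import defaultdict

# Toy interaction log (entirely invented): a user profile of
# (age band, time of day, topic 'liked') and whether an ad was clicked.
log = [
    (("18-25", "evening", "music"), True),
    (("18-25", "evening", "music"), True),
    (("18-25", "morning", "news"), False),
    (("26-40", "evening", "music"), True),
    (("26-40", "morning", "news"), False),
    (("26-40", "morning", "news"), True),
]

def click_rates(log):
    """Estimate P(click | profile) by simple counting."""
    counts = defaultdict(lambda: [0, 0])  # profile -> [clicks, total]
    for profile, clicked in log:
        counts[profile][1] += 1
        counts[profile][0] += int(clicked)
    return {p: clicks / total for p, (clicks, total) in counts.items()}

def predict_click(rates, profile, threshold=0.5):
    """Predict a click whenever the observed rate for this profile meets the threshold."""
    return rates.get(profile, 0.0) >= threshold

rates = click_rates(log)
print(predict_click(rates, ("18-25", "evening", "music")))  # True
print(predict_click(rates, ("18-25", "morning", "news")))   # False
```

The point the sketch makes is that nothing in the procedure represents a feeling: the ‘model’ is a table of frequencies, and its ‘knowledge’ of what users want is exhausted by them.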

Frege on Facebook

This comes close to a philosophical problem I’ve been trying to get a grip on recently: the Frege-Geach (alternatively, the embedding, or Frege-Geach-Searle) problem. It comprises two steps. The first is to claim that there is a qualitative difference between moral and descriptive statements – for instance, between saying “It is wrong to kill” and “It is raining”. Most humans, I believe, would agree with this. The second is to observe that there is no basis for claiming this sort of difference on the grounds of sentence structure alone, which then leads to the problem of explaining its source – how do we know there is one? In other words, how can it be that moral and descriptive terms have exactly the same sort of semantic properties in complex sentences, even though they have different kinds of meaning? Where does this difference stem from?

The argument can be extended to feelings: how do we know that there is a qualitative difference between statements such as “I love you” and “I eat apples”? Or between loving someone and ‘liking’ an online status? From a formal (syntactic) perspective, there isn’t. More interestingly, however, there is no reason why machines should not be capable of such a form of expression. In this sense, there is no way to reliably establish that a like coming from a ‘real’ person and one coming from, say, a Twitter bot are qualitatively different. As humans, of course, we would claim to know the difference, or at least be able to spot it. But machines cannot. There is nothing inherent in the expression of online affect that would allow algorithms to distinguish between, say, the act of ‘loving’ the library and the act of loving a person. Knowledge of emotions, in other words, is not reducible to counting, even if counting takes increasingly sophisticated forms.

How do you know what you do not know?

The problem, however, is that humans do not have superior knowledge of emotions, their own or other people’s. I am not referring to situations in which people are unsure or ‘confused’ about how they feel [ii], but rather to the limited language – forms of expression – available to us. The documentary “One More Time With Feeling”, which I saw last week, engages with this issue in a way I found incredibly resonant. Reflecting on the loss of his son, Nick Cave relates how the words that he or the people around him could use to describe the emotions seemed equally misplaced, maladjusted and superfluous (until the film comes back into circulation, Amanda Palmer’s review, which addresses a similar question, is here) – not because they couldn’t reflect the emotion accurately, but because there was no necessary link between the words and the structure of feeling at all.

Clearly, the idea that language does not reflect, but rather constructs – and thus also constrains – human reality is hardly new: Wittgenstein, Lacan, and Rorty (to name but a few) have offered different interpretations of how and why this is the case. What I found particularly poignant about the way Cave frames it in the film is that it questions the whole ontology of emotional expression. It’s not just that language acts as a ‘barrier’ to the expression of grief; it is the idea of the continuity of the ‘self’ supposed to ‘have’ those feelings that’s shattered as well.

Love’s labour’s lost (?): between practice and theory

This brings back some of my fieldwork experiences from 2007 and 2008, when I was doing a PhD in anthropology, writing on the concept of romantic relationships. Whereas most of my ‘informants’ – research participants – could engage in lengthy elaborations of the criteria they used in choosing (‘romantic’) partners (as well as, frequently, the reasons why they wouldn’t designate someone as a partner), when it came to emotions their narratives could frequently be reduced to one word: love (it wasn’t for lack of expressive skills: most were highly educated). It was framed as a binary phenomenon: either there or not there. At the time, I was more interested in the way their (elaborated) narratives reflected or coded markers of social inequality – for instance, class or status. Recently, however, I have found myself returning to their inability (or unwillingness) to elaborate on the emotion that supposedly underpins, or at least buttresses, those choices.

Theoretical language is not immune to these limitations. For instance, whereas the social sciences have made significant steps in deconstructing notions such as ‘man’, ‘woman’, ‘happiness’, and ‘family’, we are still miles away from seriously examining concepts such as ‘love’, ‘hate’, or ‘fear’. Moira Weigel and Eva Illouz are welcome exceptions to the rule: Weigel uses the feminist concept of emotional labour to show how the responsibility for maintaining relationships tends to be unequally distributed between men and women, and Illouz demonstrates how modern notions of dating come to define the subjectivity and agency of persons in ways conducive to the reproduction of capitalism. Yet, while both do a great job of highlighting the social aspects of love, they avoid engaging with its ontological basis. This leaves the back door open for an old-school dualism that either assumes there is an (a- or pre-social?) ‘basis’ to human emotions, which can be exploited or ‘harvested’ through relationships of power; or, conversely, that all emotional expression is defined by language, and thus its social construction is the only thing worth studying. It’s almost as if ‘love’ is the last construct left standing, and we’re all too afraid to disenchant it.

For a relational ontology

A relational ontology of human emotions could, in principle, aspire to dethrone this nominalist (or, possibly worse, truth-proceduralist) notion of love in favour of one that sees it as a by-product of relationality. This isn’t to claim that ‘love’ is epiphenomenal: to the degree to which it is framed as a motivating force, it becomes part and parcel of the relationship itself. However, not seeing it as central to this inquiry would hopefully allow us to work on diversifying the language of emotions. Instead of using a single marker (even one as polysemic as ‘love’) for the relationship with one’s library and with one’s significant other, we could start thinking about the ways in which they are (or are not) the same thing. This isn’t, of course, to sanctify ‘live’ human-to-human emotion: I am certain that people can feel ‘love’ for pets, places, or the deceased. Yet calling it all ‘love’ and leaving it at that is a pretty shoddy way of going about feelings.

Furthermore, a relational ontology of human emotions would mean treating all relationships as unique. This isn’t, to be clear, a pseudo-anarchist attempt to deny standards of, or responsibility for, (inter)personal decency; even less is it a default glorification of long-lasting relationships. Most relationships change over time (as do the people inside them), and this frequently means they can no longer exist; some relationships cannot coexist with other relationships; some relationships are detrimental to those involved in them, which hopefully means they cease to exist. Equally, some relationships are superficial, trivial, or barely worth a mention. However, this does not make them, analytically speaking, any less special.

This also means that relationships cannot be reduced to the same standard, nor measured against each other. This, of course, runs against one of capitalism’s dearly held assumptions: that all humans are comparable and, thus, mutually replaceable. This assumption is vital not only for the reproduction of labour power, but also, for instance, for the practice of dating [iii], whether online or offline. Moving towards a relational concept of emotions would allow us to challenge this notion. In this sense, ‘loving’ a library is problematic not because the library is not a human being, but because ‘love’, just like other human concepts, is a relatively bad proxy. Contrary to what pop songs would have us believe, it is never the answer and, quite possibly, not even the question.


————————————————————————–

[i] Thanks go to Mark Carrigan, who sent this to me.

[ii] While I am very interested in the question of self-knowledge (or self-ignorance), for some reason, I never found this particular aspect of the question analytically or personally intriguing.

[iii] Over the past couple of years, I’ve had numerous discussions on the topic of dating with friends, colleagues, but also acquaintances and (almost) strangers (the combination of having a theoretical interest in the topic and not being in a relationship seems to be particularly conducive to becoming involved in such conversations, whether one wants it or not). I feel compelled to say that my critique of dating (and my concomitant refusal to engage in it, at least as far as its dominant social forms go) does not, in any way, imply a criticism of people who do. There is quite a long list of people whom I should thank for helping me clarify this, but instead I promise to write another, longer post on the topic, as well as, finally, develop that app :).

Do we need academic celebrities?

 

[This post originally appeared on the Sociological Review blog on 3 August, 2016].

Why do we need academic celebrities? In this post, I would like to extend the discussion of academic celebrities from the focus on these intellectuals’ strategies, or ‘acts of positioning’, to what makes them possible in the first place, in the sense of Kant’s ‘conditions of possibility’. In other words, I want to frame the conversation within the broader framework of a critical cultural political economy. This is based on the belief that, if we want to develop an understanding of knowledge production that is truly relational, we need to analyse not only what public intellectuals or ‘academic celebrities’ do, but also what makes, maintains, and sometimes breaks their wider appeal, including – not least importantly – our own fascination with them.

To begin with, an obvious point is that academic stardom necessitates a transnational audience, and a global market for intellectual products. As Peter Walsh argues, academic publishers play an important role in creating and maintaining such a market; Mark Carrigan and Eliran Bar-El remind us that celebrities like Giddens or Žižek are very good at cultivating relationships with that side of the industry. However, in order for publishers to operate at an even minimal profit, someone needs to buy the product. Simply put, public intellectuals necessitate a public.

While intellectual elites have always been to some degree transnational, two trends associated with late modernity are, in this sense, of paramount importance. One is the expansion and internationalization of higher education; the other is the supremacy of English as the language of global academic communication, coupled with the growing digitalization of the process and products of intellectual labour. Despite the fact that access to knowledge still remains largely inequitable, they have contributed to the creation of an expanded potential ‘customer base’. And yet – just like in the case of MOOCs – the availability or accessibility of a product is not sufficient to explain (or guarantee) interest in it. Regardless of whether someone can read Giddens’ books in English, or is able to watch Žižek’s RSA talk online, their arguments, presumably, still need to resonate: in other words, there must be something that people derive from them. What could this be?

In ‘The Existentialist Moment’, Patrick Baert suggests that the global popularity of existentialism can be explained by the success of Sartre (and of other philosophers who came to be identified with it, such as de Beauvoir and Camus) in connecting core concepts of existentialist philosophy, such as choice and responsibility, to the concerns of post-WWII France. To some degree, this analysis could be applied to contemporary academic celebrities – Giddens and Bauman wrote about the problems of late or liquid modernity, and Žižek frequently comments on the contradictions and failures of liberal democracy. It is not difficult to see how these would strike a chord with the concerns of a liberal, educated, Western audience. Yet, just as in Sartre’s case, this doesn’t mean their arguments are always presented in the most palatable manner: Žižek’s writing is complex to the point of obscurantism, and Bauman is no stranger to ‘thick description’. Of the three, Giddens’ work is probably the most accessible, although this might have more to do with good editing and academic English’s predilection for short sentences than with the simplicity of the ideas themselves. Either way, it could be argued that reading their work requires a relatively advanced understanding of the core concepts of social theory and philosophy, and the patience to plough through at times arcane language – all at seemingly little or no direct benefit to the audience.

I want to argue that the appeal of star academics has very little to do with their ideas or the ways in which they are framed, and more to do with the combination of the charismatic authority they exude and the feeling of belonging, or shared understanding, that the consumption of their ideas provides. Similarly to Weber’s priests and magicians, star academics offer a public performance of the transfiguration of abstract ideas into concrete diagnoses of social evils. They offer an interpretation of the travails of late moderns – instability, job insecurity, surveillance, etc. – and, at the same time, the promise that there is something in the very act of intellectual reflection, or the work of social critique, that allows one to achieve a degree of distance from their immediate impact. What academic celebrities thus provide is an – even if temporary – (re)‘enchantment’ of a world in which the production of knowledge, so long reserved for the small elite of the ‘initiated’, has become increasingly ‘profaned’, both through the massification of higher education and through the requirement to make the stages of its production, as well as its outcomes, measurable and accountable to the public.

For the ‘common’ (read: Western, left-leaning, highly educated) person, the consumption of these celebrities’ ideas offers something akin to a combination of a music festival and a mindfulness retreat: an opportunity to commune with the ‘like-minded’ and take home a piece of hope, if not for salvation, then at least for temporary exemption from the grind of neoliberal capitalism. Reflection is, after all, as Marx taught us, the privilege of the leisurely; engaging in collective acts of reflection thus equals belonging to (or at least affinity with) ‘the priesthood of the intellect’. As Bourdieu noted in his reading of Weber’s sociology of religion, the laity expect of religion “not only justifications of their existence that can offer them deliverance from the existential anguish of contingency or abandonment, [but] justification of their existence as occupants of a particular position in the social structure”. Thus, Giddens’ or Žižek’s books become the structural or cultural equivalent of the Bible (or the Qur’an, or any religious text): not many people know what is actually in them, even fewer can get the oblique references, but everyone will want one on the bookshelf – not necessarily for what they say, but because of what having them signifies.

This helps explain why people flock to hear Žižek or, for instance, Yanis Varoufakis, another leftist star intellectual. In public performances, their ideas are distilled to the point of simplicity, and conveniently latched onto something the public can relate to. At the Subversive Festival in Zagreb, Croatia in 2013, for instance, Žižek propounded the concept of ‘love’ as a political act. Nothing new, one might say – but who in the audience would not want to believe their crush has the potential to turn into an act of political subversion? These intellectuals’ utterances therefore represent ‘speech acts’ in a quite literal sense of the term: not because they are truly (or consequentially) performative, but because they offer the public the illusion that listening (to them) and speaking (about their work) represents, in itself, a political act.

From this perspective, the mixture of admiration, envy and resentment with which these celebrities are treated in the academic establishment reflects their evangelical status. Those who admire them quarrel about the ‘correct’ interpretation of their works and vie for the status of the nominal successor – a succession that would, of course, also feature ritualistic patricide, which may be the reason why, although surrounded by followers, so few academic celebrities actually anoint one. Those who envy them monitor their rise to fame in the hope of emulating it one day. Those who resent them, finally, tend to criticize their work for intellectual ‘baseness’, an argument that is itself predicated on the distinction between academic (and thus ‘sacred’) and popular, ‘common’ knowledge.

Many are, of course, shocked when their idols turn out not to be ‘original’ thinkers channeling divine wisdom, but plagiarists or serial repeaters. Yet there is very little to be surprised by; academic celebrities, after all, are creatures of flesh and blood. Discovering their humanity and thus their ultimate fallibility – in other words, the fact that they cheat, copy, rely on unverified information, etc. – reminds us that, in the final instance, knowledge production is work like any other. In other words, it reminds us of our own mortality. And yet, acknowledging it may be the necessary step in dismantling the structures of rigid, masculine, God-like authority that still permeate academia. In this regard, it makes sense to kill your idols.