‘Ethics of Ambiguity’ Reading Group

This is a reading group for all those who wish to come together to discuss Simone de Beauvoir’s “Ethics of Ambiguity” (1947).

The group runs in (Northern hemisphere) winter 2022-3, mostly coinciding with the winter break, and is designed to give space for open reflection and discussion of ideas concerning ethics, responsibility, and ambiguity in relation to contemporary circumstances.

The group is open to all. Philosophical training or detailed background knowledge is not required. For specifics, see FAQ (1) below.

The group runs in weekly sessions on Zoom, Fridays 1-2PM (GMT, London time), from 16 December until 27 January, with a break over Xmas/New Year. This time is chosen both for accessibility purposes and, in some cases, to accommodate the academic term. If the timing does not suit you, please see FAQ (2) below.

For instructions on how and when to join, as well as how to participate, see FAQs (3) and (4). For schedule, see bottom of page.

FAQs (or, please read this before joining):

(1) Who can participate?

The group is open to all. You do not need to have a philosophical background, detailed knowledge of existentialist (or any) philosophy, or an interest in Simone de Beauvoir to participate. The group welcomes all people regardless of gender, ethnicity, ability, or any other aspect of identity; that said, the conversation is designed to be respectful and equal, so bullying, racism and transphobia will not be tolerated.

There is no formal leadership and no assumption of authority in the group. The emphasis in the discussion is on personal impressions, thoughts, and questions that the text raises for you. That said, be mindful of the background of participants when contributing; do not use references (as in, ‘in her other work, de Beauvoir…’) or name-drops (as in, ‘as Foucault said…’) without explaining what you mean in language accessible to everyone (or, better still, skip the name-dropping altogether).

(2) What if the timing does not suit me?

The group is run on an entirely informal and voluntary basis. You are free to join any of the sessions at any time between 1 and 2 PM, without expectation of continuation or repeat participation. If the timing does not suit you, you are welcome to start another reading or discussion group at a time that suits you better.

(3) How can I join?

Below is the schedule, Zoom link, and details for each session.

16 December, 1-2PM (GMT)

Chapter 1: Ambiguity and Freedom (pages 5-35 in 2015 English edition by Open Road Integrated Media)

Join


23 December, 1-2PM (GMT)

Chapter 2: Personal Freedom and Others (pp. 37-78, as above)

Join

[Winter break]

6 January, 1-2PM (GMT)

Chapter 3: The Positive Aspect of Ambiguity, Sections I (The Aesthetic Attitude) and II (Freedom and Liberation), pp. 79-103

Join

13 January, 1-2PM (GMT)

Chapter 3: The Positive Aspect of Ambiguity, Sections III and IV (The Antinomies of Action & The Present and the Future), pp. 103-139

Join

20 January, 1-2PM (GMT)

Chapter 3, Section IV (The Present and the Future), cont’d, and beginning of Chapter V: Ambiguity (pp. 139-168).

Join

27 January, 1-2PM (GMT)

Conclusions (pp. 169-174) and wrap-up/further plans

Join

(4) How do I participate?

Be mindful of other participants. Try not to take more than 2-3 minutes when speaking, and give priority to those who have not already spoken in the meeting. While there will be no chairing or official moderation (unless absolutely necessary), using the ‘Raise hand’ function (Zoom lower bar in window –> Reactions –> ‘Raise hand’) will signal to other speakers that you want to speak and indicate your turn in the conversation.

Your microphone will be muted by default when joining. Please make sure you keep your mic on mute except when speaking, especially if in a noisy environment. Participants are normally expected to turn cameras on as this contributes to participation and communication, but we understand there are safety- and ability-related reasons not to do so.

De Beauvoir’s book can be found on Marxists.org (link above), in libraries, or bookshops.

Happy reading!

On reparative reading and critique in/of anthropology: postdisciplinary perspectives on discipline-hopping

*This is a more-or-less unedited text of the plenary (keynote) address to the international conference ‘Anthropology of the future/The Future of Anthropology’, hosted by the Institute of Ethnography of the Serbian Academy of Sciences and Arts, in Viminacium, 8-9 September 2022. If citing, please refer to it as: Bacevic, J. [Title]. Keynote address, [Conference].

Hi all. It’s odd to be addressing you at a conference entitled ‘Anthropology of the Future/The Future of Anthropology’, as I feel like an outsider for several reasons. Most notably, I am not an anthropologist. This is despite the fact that I have a PhD in anthropology, from the University of Belgrade, awarded in 2008. What I mean is that I do not identify as an anthropologist, I do not work in a department or institute of anthropology, nor do I publish in anthropology journals. In fact, I went so far in the opposite direction that I got another PhD, in sociology, from the University of Cambridge. I work at a department of sociology, at Durham University, a university in the north-east of England that looks remarkably like Oxford and Cambridge. So I am an outsider in two senses: I am not an anthropologist, and I no longer live, reside, or work in Serbia. However, between 2004 and 2007 I taught at the Department of Ethnology and Anthropology of the University of Belgrade, and also briefly worked at the Institute that is organizing this very conference, as part of a research stipend awarded by the Serbian Ministry of Science to young, promising scientific talent. Between 2005 and 2007, and then again briefly in 2008-9, I was the Programme Leader for Anthropology at Petnica Science Centre. I don’t think it would be too much of an exaggeration to say that I was, once, anthropology’s future; and anthropology was mine. So what has happened since?

By undertaking a retelling of a disciplinary transition – what would in common parlance be dubbed ‘career change’ or ‘reorientation’ – my intention is not to engage in autoethnography, but to offer a reparative reading. I borrow the concept of reparative reading from the late theorist Eve Kosofsky Sedgwick’s essay entitled “On paranoid reading and reparative reading, or: You’re so paranoid, you probably think this essay is about you”, first published in 1997 and then, with edits, in 2003; I will say more about its content and key concepts shortly.

For the time being, however, I would like to note that my disinclination towards autoethnography was one of the major reasons why I left anthropology; it was matched by the desire to do theory, by which I mean the possibility of deriving mid-range generalizations about human behaviour that could aspire not to be merely local, that is, not to apply only to the cases studied. This, as we know, is not particularly popular in anthropology. This particular brand of ethnographic realism was explicitly targeted for critique during anthropology’s postmodern turn. On the other hand, Theory in anthropology itself had relatively little to commend it, all too easily and too often developing into a totalizing master-narrative of early evolutionism or, for that matter, its late 20th- and early 21st-century correlates, including what is usually referred to as cognitive psychology, a ‘refresh’ of evolutionary theory I had the opportunity to encounter during my fellowship at the University of Oxford (2007-8). So there were, certainly, a few reasons to be suspicious of theory in anthropology.

For someone theoretically inclined, then, one option was to flee into another discipline. Doing a PhD in philosophy in the UK is a path only open to people who have undergraduate degrees in philosophy (and I, despite a significant proportion of my undergraduate coursework being in philosophy, did not), which is why a lot of the most interesting work in philosophy in the UK happens – or at least used to happen – in other departments, including literature and language studies, the Classics, gender studies, or social sciences like sociology and geography. I chose to work with those theorists who had found their institutional homes in sociology; I found a mentor at the University of Cambridge, and the rest is history (by which I mean I went on to a postdoctoral research fellowship at Cambridge and then on to a permanent position at Durham).

Or that, at any rate, is one story. Another story would tell you that I got my PhD in 2008, the year when the economic crisis hit, and job markets collapsed alongside several other markets. On a slightly precarious footing, freshly back from Oxford, I decided to start doing policy research and advising in an area I had been researching before: education policies, in particular as part of processes of negotiation of multiple political identities and reconciliation in post-conflict societies. Something that had hitherto been a passion, politics, soon became a bona fide object of scholarly interest, so I spent the subsequent few years developing a dual career, eventually a rather high-profile one, as, on the one hand, policy advisor in the area of postconflict higher education, and, on the other, visiting (adjunct) lecturer at the Central European University in Budapest, after doing a brief research fellowship in its institute of advanced study. But because I was not educated as a political scientist – I did not, in other words, have a degree in political science; anthropology was closer to ‘humanities’ and my research was too ‘qualitative’ (this is despite the fact I taught myself basic statistics as well as relatively advanced data analysis) – I could not aspire to a permanent job there. So I started looking for routes out, eventually securing a postdoc position (a rather prestigious Marie Curie, and a tenure-track one) in Denmark.

I did not like Denmark very much, and my boss in this job – otherwise one of the foremost critics of the rise of audit culture in higher education – turned out to be a bully, so I spent most of my time in my two fieldwork destinations, the University of Bristol, UK, and the University of Auckland, New Zealand. I left after two years, taking up an offer of a funded PhD at Cambridge I had previously turned down. Another story would tell you that I was disappointed with the level of corruption and nepotism in Serbian academia and so decided to leave. Another, attached with disturbing frequency to women scholars, would tell you that, being involved in an international relationship, I naturally sought to move somewhere I could settle down with my partner, even if that meant abandoning the tenured position I had at Singidunum University in Serbia (this reading is, by the way, so prominent and so unquestioned that after I announced I had got the Marie Curie postdoc and would be moving to Denmark several people commented “Oh, that makes sense, isn’t your partner from somewhere out there?” – despite the fact my partner was Dutch).

Yet another story, of course, would join the precarity narrative with the migration/exile and decoloniality narrative, stipulating that, as someone aspiring to do theory, I (naturally) had to move to the (former) colonial centre, given that theory is, as we know, produced in the ‘centre’ whereas countries of the (semi)periphery are only ever tasked with providing ‘examples’, ‘case-’, or, at best, regional or area studies. And so on and so on, as one of the few people who have managed to trade their regional academic capital for a global (read: Global North-driven and -defined) one, Slavoj Žižek, would say.

The point here is not to engage in a demonstration of multifocality by showing that all these stories could be, and in a certain register are, true. It is also not to point out that any personal life-story or institutional trajectory can be viewed from multiple (possibly mutually irreconcilable) registers, and that we pick a narrative depending on occasion, location, and interlocutor. Sociologists have produced a thorough analysis of how CVs, ‘career paths’ or trajectories in academia are narratively constructed so as to establish a relatively seamless sequence that adheres to, but also, obviously, by virtue of doing that, reproduces ideas and concepts of ‘success’ (and failure; see also ‘CV of failures‘). Rather, it is to observe something interesting: all these stories, no matter how multifocal or multivocal, also posit master narratives of social forces – forces like neoliberalism, or precarity, for instance; and a master narrative of human motivation – why people do the things they do, and what they desire – things like permanent jobs and high incomes, for instance. They read a direction, and a directionality, into human lives; even if – or, perhaps, especially when – they narrate instances of ‘interruption’, ‘failure’, or inconsistency.

This kind of reading is what Eve Kosofsky Sedgwick dubs paranoid reading. Associated with what Paul Ricoeur termed ‘hermeneutics of suspicion’ in Nietzsche, Marx, and Freud, and building on the affect theories of Melanie Klein and Silvan Tomkins, paranoid reading is a tendency that has arguably become synonymous with critique, or critical theory in general: to assume that there is always a ‘behind’, an explanatory/motivational hinterland that, if only unmasked, can not only provide a compelling explanation for the past, but also an efficient strategy for orienting towards the future. Paranoid reading, for instance, characterizes a lot of the critique in and of anthropology, not least of the Writing Culture school, including in the ways the discipline deals with the legacy of its colonial past.

To me, it seems like anthropology in Serbia today is primarily oriented towards a paranoid reading, both in relation to its present (and future) and in relation to its past. This reading of the atmosphere is something it shares with a lot of the social sciences and humanities internationally: one of increasing instability and hostility, of the feeling of being ‘under attack’ not only by governments’ neoliberal policies but also by increasingly conservative and reactionary social forces that see any discipline with an openly progressive, egalitarian and inclusive political agenda as leftie woke Satanism, or something. This paranoia, however, is not limited only to those agents or social forces clearly inimical or oppositional to its own project; it extends, sometimes, to proximate and cognate disciplines and forms of life, including sociology, and to different fractions or theoretical schools within anthropology, even those that should be programmatically opposed to paranoid styles of inquiry, such as the phenomenological or ontological turn – as witnessed, for instance, by the relatively recent debate between the late David Graeber and Eduardo Viveiros de Castro on ontological alterity.

Of course, in the twenty-five years that have passed since the first edition of Sedgwick’s essay, many species of theory that explicitly diverge from the paranoid style of critique have evolved, not least the ‘postcritical’ turn. But, curiously, when it comes to understanding the conditions of our own existence – that is, the conditions of our own knowledge production – we revert to paranoid readings not only of the social, cultural, and political context, but also of people’s motivations and trajectories. As I have argued elsewhere, this analytical gesture reinscribes its own authority by theoretically disavowing it. To paraphrase the title of Sedgwick’s essay, we’re so anti-theoretical that we’re failing to theorize our own inability to stop aspiring to the position of power we believe our discipline, or our predecessors, once occupied, the same power we believe is responsible for our present travails. In other words, we are failing to theorize ambiguity.

My point here is not to chastise anthropology in particular or critical theory in more general terms for failing to live up to the political implications of its own ontological commitments (or the other way round?); I have explained at length elsewhere – notably in “Knowing neoliberalism” – why I think this is an impossibility (to summarize, it has to do with the inability to undo the conditions of our own knowledge – to, barely metaphorically, cut our own epistemological branch). Rather, my question is what we could learn if we tried to think of the history and thus future of anthropology, and our position in it, from a reparative, rather than paranoid, position.

This, in itself, is a fraught process; not least because anthropology (including in Serbia) has not been exempt from revelations concerning sexual harassment, and it would not be surprising if many more are yet to come. In the context of re-encounter with past trauma and violence, not least the violence of sexual harassment, it is nothing if not natural to re-examine every bit of the past, but also to endlessly, tirelessly scrutinize the present: was I there? Did I do something? Could I have done something? What if what I did made things worse? From this perspective, it is fully justified to ask what it could, possibly, mean to turn towards a reparative reading – can it even, ever, be justified?

Sedgwick – perhaps not surprisingly – has relatively little to say about what reparative reading entails. From my point of view, reparative reading is the kind of reading that is oriented towards reconstructing the past in a way that does not seek to avoid, erase or deny past traumas, but engages with the narrative so as to afford a care of the self and connection – or reconnection – with past selves, including those that made mistakes or have a lot to answer for. It is, in essence, a profoundly different orientation towards the past as well as the future, one that refuses to reproduce cultures – even if cultures of critique – and to claim that the future will, in some ways, be exactly like the past.

Sedgwick aligns this reorientation with queer temporalities, characterized by a relationship to time that refuses to see it in (usually heteronormatively-coded) generationally reproductive terms: my father’s father did this, and passed it on to my father, who passed it to me, just as I will pass it to my children. Or, to frame this in more precisely academic terms: my supervisor(s) did this, so I will do it [in order to become successful/recognized like my academic predecessors], and I will teach my students/successors to do it. Understanding that it can be otherwise, and that we can practise other politics of knowledge/academic filiation/intellectual friendship, including non-generational (non-generative?) and non-reproductive ones, is, I think, one important step in making the discussion about the future, including that of a scientific discipline, anything other than a vague gesturing towards its ever-receding glorious past.

Of course, as a straight and, in most contexts, cis-passing woman, I am a bit reluctant to claim the label of queerness, especially when speaking in Serbia, an intensely and increasingly institutionally homophobic and compulsorily heterosexual society. However, I hope my queer friends, partners, and colleagues will forgive me for borrowing queerness as a term to signify refusal to embody or conform to diagnostic narratives (neoliberalism, precarity, [post]socialism); refusal of, or disinvestment from, normatively and regulatively prescribed vocabularies of motivation and objects of desire – a permanent (tenured) academic position; a stable and growing income; a permanent relationship culminating in children and a house with a garden (I have a house, but I live alone and it does not have a garden). And, of course, the ultimate betrayal for anyone who has come from “here” and ‘made it’ “over there”: the refusal to perform the role of an academic migrant in a way that would allow one to settle, once and for all, the question of whether everything is better ‘over there’ or ‘here’, and thus vindicate the omnipresent reflexive chauvinism (‘corrupt West’) or, alternatively, autochauvinism (‘corrupt Serbia’).

What I hope to have achieved instead, through this refusal, is to offer a postdisciplinary or at least undisciplined narrative and an example of how to extract sustenance from cultures inimical to your life plans or intellectual projects. To quote from Sedgwick:

“The vocabulary for articulating any reader’s reparative motive toward a text or a culture has long been so sappy, aestheticizing, defensive, anti-intellectual, or reactionary that it’s no wonder few critics are willing to describe their acquaintance with such motives. The prohibitive problem, however, has been in the limitations of present theoretical vocabularies rather than in the reparative motive itself. No less acute than a paranoid position, no less realistic, no less attached to a project of survival, and neither less nor more delusional or fantasmatic, the reparative reading position undertakes a different range of affects, ambitions, and risks. What we can best learn from such practices are, perhaps, the many ways selves and communities succeed in extracting sustenance from the objects of a culture—even of a culture whose avowed desire has often been not to sustain them.”

All of the cultures I’ve inhabited have been this to some extent – Serbia for its patriarchy, male-dominated public sphere, or excessive gregarious socialisation, something that sits very uncomfortably with my introversion; England for its horrid anti-immigrant attitude only marginally (and not always profitably) mediated by my ostensible ‘Whiteness’; Denmark for its oppressive conformism; Hungary, where I was admittedly happiest among the plethora of other English-speaking cosmopolitan academics, which could not provide the institutional home I required (eventually, as is well-known, not even to CEU). But, in a different way, they have also been incredibly sustaining; I love my friends, many of whom are academic friends (former colleagues) in Serbia; I love the Danish egalitarianism and absolute refusal of excess; and I love England for many things: in no particular order, the most exciting intellectual journey, some great friendships (many of those, I do feel the need to add, with other immigrants), and the most beautiful landscapes, especially in the North-East, where I live now (I also particularly loved New Zealand, but hope to expand on that on a different occasion).

To theorize from a reparative position is to understand that all of these things could be true at the same time. That there is, in other words, no pleasure without pain, that the things that sustain us will, in most cases, also harm us. It is to understand that there is no complete career trajectory, just as there is no position, epistemic or otherwise, from which we could safely and once and for all answer the question of what the future will be like. It is to refuse to pre-emptively know the future, not least so that we could be surprised.

When it ends

In the summer of 2018, I came back to Cambridge from one of my travels to find a yellowed, dusty patch of land. The grass – the only thing that grew in the too-shady back garden of the house my partner and I were renting – had not only wilted; it had literally burnt to the ground.

I burst into tears. As I sat in the garden crying, to (I think) the dismay of my increasingly bewildered partner, I pondered what a scene of death so close to home was doing – what it was doing in my back yard, and what it was doing to me. For it was neither the surprise nor the scale that shook me – I had witnessed both human and non-human destruction much vaster than a patch of grass in Cambridge; I had spent most of the preceding year and some reading on the politics, economics, and – as the famed expression goes – ‘the science’ of climate change (starting with the excellent Anthropocene reading group I attended while living in London), so I was well-versed, by then, in precisely what was likely to happen, how and when. It wasn’t, either, the proximity, otherwise assumed to be a strong motivator: I certainly did not need climate change to happen in my literal ‘back yard’ in order to become concerned about it. If nothing else, I had come back to Cambridge from a prolonged stay in Serbia, where I had been observing the very same things, detailed here (including preparations for mineral extraction that would become the main point of contention in the protests against Rio Tinto in 2022). To anyone who has lived outside of the protected enclaves of the Global North, climate change has felt very real for quite some time.

What made me break down at the sight of that scorched patch of grass was its ordinariness – the fact that, in front of, beside, and around what for me was quite bluntly an extinction event, life seemed to go on as usual. No-one warned me my back garden was a cemetery. Several months before that, at the very start of the first round of UCU strikes in 2018, I raised the question of pension funds invested in fossil fuels, only to be casually told that one of USS’s biggest shareholdings was in Royal Dutch Shell (USS, and the University of Cambridge, have reluctantly committed to divestment since, but this is yet to yield any results in the case of USS). While universities make pompous statements about sustainability, a substantial chunk of their funding and operating revenue goes to activities that are at best one step removed from directly contributing to the climate crisis, from international (air) travel to building and construction. At Cambridge, I ran a reading group called Ontopolitics of the future, whose explicit question was: What survives in the Anthropocene? In my experience, raising climate change these days tends to provoke uncomfortable silences, as if everyone had already accepted the inevitability of 1.5+ degree warming and the suffering it would inevitably come with.

This acceptance of death is a key feature of the concept of ‘slow death’ that Lauren Berlant introduced in Cruel Optimism:

“Slow death prospers not in traumatic events, as discrete time-framed phenomena like military encounters and genocides can appear to do, but in temporally labile environments whose qualities and whose contours in time and space are often identified with the presentness of ordinariness itself” (Berlant, 2011: 100).

Berlant’s emphasis on the ordinariness of death is a welcome addition to theoretical frameworks (like Foucault’s bio-, Mbembe’s necro- or Povinelli’s onto-politics) that see the administration of life and death as effects of sovereign power:

“Since catastrophe means change, crisis rhetoric belies the constitutive point that slow death—or the structurally induced attrition of persons keyed to their membership in certain populations—is neither a state of exception nor the opposite, mere banality, but a domain where an upsetting scene of living is revealed to be interwoven with ordinary life after all” (Berlant, 2011: 102).

Over the past year and some, I’ve spent a lot of time thinking about the concept of ‘slow death’ in relation to the Covid-19 pandemic (my contribution to the edited special issue on Encountering Berlant should be coming out in Geography Journal sometime this year). However, what brought back the memory of the scorched grass in Cambridge as I sat at home during the UK’s hottest day on record in 2022 was not the (inevitable) human, non-human, or infrastructural cost of climate change; it was, rather, the observation that for most academics life seemed to go on as usual, if a little hotter. From research concerns to driving to moaning over (the absence of) AC, there seemed to be little reflection on how our own modes of knowledge production – not to mention lifestyles – were directly contributing to heating the planet.

Of course, the paradox of knowledge and (in)action – or knowing and (not) doing – has long been at the crux of my own work, from performativity and critique of neoliberalism to the use of scientific evidence in the management of the Covid-19 pandemic. But with climate change, surely it has to be obvious to everyone that there is no way to just continue business as usual, that – while effects are surely differentially distributed according to privilege and other kinds of entitlement – no-one is really exempt from it?

Or so I thought, as I took an evening walk and passed a dead magpie on the pavement, which made me think of birds dying from heat exhaustion in India earlier in May (luckily, no other signs of mass bird extinction were in sight, so I returned home, already a bit light-headed from the heat). But as I absent-mindedly scrolled through Twitter (as well as attended a part of a research meeting), what seemed obvious was that there was a clear disconnection between modes of knowing and modes of being in the world. On the one hand, everyone was too hot, commenting on the unsustainability of housing, or the inability of transport networks to sustain temperatures over 40 degrees Celsius. On the other, academic knowledge production seemed to go on, as if things such as ‘universities’, ‘promotions’, or ‘reviews’ had the span of geological time, rather than being – for the most part – a very recent blip in precisely the thing that led to this degree of warming: capitalism, and the drive to (over)produce, (over)compete, and expand.

It is true that these kinds of challenges – like existential crises – can really make people double down on whatever positions and identities they already have. This is quite obvious in the case of some political divisions – with, for instance, the death spirals of Covid-denialism, misogyny, and transphobia – but it happens in less explicitly polarizing ways too. In the context of knowledge production, this is something I have referred to as the combination of epistemic attachment and ontological bias. Epistemic attachment refers to being attached to our objects of knowledge; these can be as abstract as ‘class’ or ‘social structure’ or as concrete as specific people, problems, or situations. The relationship between us (as knowers) and what we know (our objects of knowledge) is the relationship between epistemic subjects and epistemic objects. Ontological bias, on the other hand, refers to the fact that our ways of knowing the world become so constitutive of who we are that we can fail to register when the conditions that rendered this mode of knowledge possible (or reliable) no longer obtain. (This, it is important to note, is different from having a ‘wrong’ or somehow ‘distorted’ image of epistemic objects; it is entirely conceivable to have an accurate representation on the wrong ontology, and vice versa.)

This is what happens when we carry on with academic research (or, as I’ve recently noted, the circuit of academic rituals) in a climate crisis. It is not that our analyses and publications stop being more or less accurate, more or less cited, more or less inspiring. On the other hand, the racism, classism, ableism, and misogyny of academia do not stop either. It’s just that, technically speaking, the world in which all of these things happen is no longer the same world. The world that is 1.5C warmer (let alone 2 or 2.5C, now more or less certain) is no longer the same world that gave rise to the interpretative networks and theoretical frameworks we overwhelmingly use.

In this sense, to me, continuing with academia as business as usual (only with AC) isn’t even akin to the proverbial polishing of brass on the Titanic, not least because the iceberg has likely already melted or at least calved several times over. What it brings to mind, instead, is Jeff VanderMeer’s Area X trilogy, and the way in which professional identities play out in it.

I’ve already written about Area X, in part because the analogy with climate change presents itself, and in part because I think that – in addition to Margaret Atwood’s MaddAddam and Octavia Butler’s Parables – it is the best literary (sometimes almost literal) depiction of the present moment. Area X (or Southern Reach, if you’re in the US) is about an ‘event’ – that is at the same time a space – advancing on the edge of the known, ‘civilized’ world. The event/space – ‘Area’ – is, in a clear parallel to the Strugatskys’ Zone, something akin to a parallel dimension: a world like our own, within our own, and accessible from our own, but not exactly hospitable to us. In VanderMeer’s trilogy, Area X is a lush green, indeed overgrown, space; as in the Zone, ‘nature is healing’ has a more ominous ring to it: in Area X, people, objects, and things disappear. Or reappear. Like bunnies. And husbands.

The three books of Area X are called Annihilation, Authority, and Acceptance. In the first book, the protagonist – whom we know only as the Biologist – goes on a mission to Area X, the area that has already swallowed (or maybe not) her husband. Other members of the expedition, whom we also know only by profession – the Anthropologist, the Psychologist – are also women. The second book, Authority, follows the chief administrator of Area X – whom we know as Control – as the area keeps expanding. Control eventually follows the Biologist into Area X. The third book – well, I’ll stop with the plot spoilers here, but let’s just say that the Biologist is no longer called the Biologist.

This, if anything, is the source of the slight reservation I have towards the use of professional identities, authority, and expertise in contexts like the climate crisis. Scientists for XR and related initiatives are both incredibly brave (especially those risking arrest, something I, as an immigrant, cannot do) and – needless to say – morally right; but the underlying emphasis on ‘the science’ too often relies on the assumption that right knowledge will lead to right action, which tends not to hold even for many ‘professional’ academics. In other words, it is not exactly that people do not act on climate change because they do not know or do not believe the science (some do, at least). It is that systems and institutions – and, in many cases, this includes systems and institutions of knowledge production, such as universities – are organized in ways that make any kind of action that would refuse to reproduce (let alone actually disrupt) the logic of extractive capitalism increasingly difficult.

What to do? It is clear that we are now living on the boundary of Area X, and it is fast expanding. Area X is what was in my back garden in Cambridge. Area X is what is outside when you open the windows in the north of England, and what drifts inside has the temperature of the jet engine exhaust of a plane that has just landed. The magpie that was left to die in the middle of the road in Jesmond crossed Area X.

For my part, I know it is no longer sufficient to approach Area X as the Sociologist (or Theorist, or Anthropologist, or whatever other professional identity I have – reluctantly, as all identities – perused); I tried doing that for Covid-19, and it did not get very far. Instead, I’d urge my academic colleagues to seriously start thinking about what we are and what we do when these labels – Sociologist, Biologist, Anthropologist, Scientist – no longer have a meaning. For this moment may come earlier than many of us can imagine; by then, we had better have worked out the relationship between annihilation, authority, and acceptance.

On doing it badly

I’m reading Christine Korsgaard’s ‘Self-Constitution: Agency, Identity, and Integrity’ (2009) – I’ve found myself increasingly drawn recently to questions of normative political philosophy or ‘ideal theory’, which I’ve previously tended to analytically eschew, I presume as part-pluralism, part-anthropological reflex.

In chapter 2 (‘The Metaphysics of Normativity’), Korsgaard engages with Aristotle’s analysis of objects as an outcome of organizing principles. For instance, what makes a house a house rather than just a ‘heap of stones and mortar and bricks’ is its function of keeping out the weather, and this is also how we should judge the house – a ‘good’ house is one that fulfils this function, a bad house is one that does not, or at least not so well.

This argument, of course, is a well-known one and endlessly discussed in social ontology (at least among the Cambridge Social Ontology crowd, which I still visit). But Korsgaard emphasizes something that has previously completely escaped my attention, which is the implicit argument about the relationship between normativity and knowledge:

Now, it is entirely true that ‘seeing what things do’ is a pretty neat description of my work as a theorist. But there is an equally important one, which is seeing what things can or could do. This means looking at (I’m parking the discussion about privileging the visual/observer approach to theory for the time being, as it is both a well-known criticism in e.g. feminist & Indigenous philosophy *and* something other people have written about much better than I ever could) ‘things’ – in my case, usually concepts – and understanding what using them can do, that is, looking at them relationally. You are not the same person looking at one kind of social object and another, nor, importantly, is it the same social object ‘unproblematically’ (meaning that yes, it is possible to reach consensus about social objects – e.g. what is a university, or a man, or a woman, or fascism – but it is not possible to reach it without disagreement, the only difference being whether that disagreement is open or suppressed). I’m also parking the discussion about observer effects, indefinitely: if you’re interested in how that theoretical argument looks without butchering theoretical physics, I’ve written about it here.

This also makes the normative element of the argument more difficult, as it requires delving not only into the ‘satisficing’ or ‘fitness’ analysis (a good house is a house that does the job of being a house), but also into the performative effects analysis (is a good house a house that does its job in a way that eventually turns ‘houseness’ into something bad?). To note, this is distinct from other issues Korsgaard recognizes – e.g. that a house constructed in a place that obscures the neighbours’ view is bad, but not a bad house, as its ‘badness’ is not derived from its being a house, but from its position in space (the ‘where’, not the ‘what’). This analysis may – and I emphasize may – be sufficient for discrete (and Western) ontologies, where it is entirely conceivable for the same house to be positioned somewhere else and thus remain a good house, while no longer being ‘bad’ for the neighbourhood as a whole. But it clearly encounters problems with any kind of relational, environment-based, or contextual ontology (a house is not a house only by virtue of being sufficient to keep out the elements for its inhabitants, but also – and, possibly, more importantly – by being positioned in a community, and a community that is ‘poisoned’ by a house that blocks everyone’s view is not a good community for houses).

In this sense, it makes sense to ask when what an object does turns into badness for the object itself. That is, what would it mean for a ‘good’ house to be, at the same time, a bad house? Plot spoiler: I believe this is likely true for all social objects. (I’ve written about ambiguity here and also here). The task of the (social) theorist – what, I think, makes my work social (both in the sense of applying to the domain of interaction between multiple human beings and in the sense of having relevance to someone beyond me) – is to figure out what kinds of contexts make one more likely than the other. Under what conditions do mostly good things (like, for instance, academic freedom) become mostly bad things (like, for instance, a form of exclusion)?

I’ve been thinking about this a lot in relation to what constitutes ‘bad’ scholarship (and, I guess, by extension, a bad scholar). Having had the dubious pleasure of encountering people who teach different combinations of neocolonial, right-wing, and anti-feminist ‘scholarship’ over the past couple of years (England, and especially the place where I work, is a trove of surprises in this sense), it strikes me that the key question is under what conditions this kind of work – which universities tend to ignore because it ‘passes’ as scholarship and gives them the veneer of presenting ‘both sides’ – turns the whole idea of scholarship into little more than competition for followers on either of the ‘sides’. This brings me to the question which, I think, should be the source of normativity for academic speech, if anything: when is ‘two-sideism’ destructive to knowledge production as a whole?

This is what Korsgaard says:


Is bad scholarship just bad scholarship, or is it something else? When does the choice to not know about the effects of ‘platforming’ certain kinds of speakers turn from the principle of liberal neutrality to wilful ignorance? Most importantly, how would we know the difference?

Does academic freedom extend to social media?

There is a longer discussion about this that has been going on in the US, continental Europe, and many other parts of the academic/policy/legal/media complexes and their intersection. Useful points of reference are the Magna Charta Universitatum (1988), in part developed to stimulate the ‘transition’ of Central/Eastern European universities away from communism, and the European University Association’s Autonomy Scorecard, which represents an interesting case study for thinking through tensions between publicly (state) funded higher education and principles of freedom and autonomy (Terhi Nokkala and I have analyzed it here). Discussions in the UK, however, predictably (though hardly always justifiably) transpose most of the elements, political/ideological categories, and dynamics from the US; in this sense, I thought an article I wrote a few years back – mostly about theorising complex objects and their transformation, but with extensive analysis of 2 (and a half) case studies of ‘controversies’ involving academics’ use of social media – could offer a good reference point. The article is available (Open Access!) here; the subheadings that engage with social media in particular are pasted below. If citing, please refer to the following:

Bacevic, J. (2018). With or without U? Assemblage theory and (de)territorialising the university, Globalisation, Societies and Education, 17:1, 78-91, DOI: 10.1080/14767724.2018.1498323

——————————————————————————————————–

Boundary disputes: intellectuals and social media

In an analogy for a Cartesian philosophy of mind, Gilbert Ryle famously described a hypothetical visitor to Oxford (Ryle 1949). This astonished visitor, Ryle argued, would go around asking whether the University was in the Bodleian Library? The Sheldonian Theatre? The colleges? and so forth, all the while failing to understand that the University was not in any of these buildings per se. Rather, it was all of these combined, but also the visible and invisible threads between them: people, relations, books, ideas, feelings, grass; colleges and Formal Halls; sub fusc and port. It also makes sense to acknowledge that these components can also be parts of other assemblages: someone can, for instance, equally be an Oxford student and a member of the Communist Party. ‘The University’ assembles these and agentifies them in specific contexts, but they exist beyond those contexts: port is produced and shipped before it becomes College port served at a Formal Hall. And while it is possible to conceive of boundary disputes revolving around port, more often they involve people.

The cases analysed below involve ‘boundary disputes’ that applied to intellectuals using social media. In both cases, the intellectuals were employed at universities; and, in both, their employment ceased because of their activity online. While in the press these disputes were usually framed around issues of academic freedom, they can rather be seen as instances of reterritorialization: redrawing of the boundaries of the university, and reassertion of its agency, in relation to digital technologies. This challenges the assumption that digital technologies serve uniquely to deterritorialise, or ‘unbundle’, the university as traditionally conceived.

The public engagement of those who authoritatively produce knowledge – in sociological theory traditionally referred to as ‘intellectuals’ – has an interesting history (e.g. Small 2002). It was only in the second half of the twentieth century that intellectuals became employed en masse by universities: with the massification of higher education and the rise of the ‘campus university’, in particular in the US, came what some saw as the ‘decline’ of the traditional, bohemian ‘public intellectual’ reflected in Mannheim’s (1936) concept of ‘free-floating’ intelligentsia. Russell Jacoby’s The Last Intellectuals (1987) argues that this process of ‘universitisation’ has led to the disappearance of the intellectual ferment that once characterised the American public sphere. With tenure, he claimed, came the loss of critical edge; intellectuals became tame and complacent, too used to the comfort of a regular salary and an office job. Today, however, the source of the decline is no longer the employment of intellectuals at universities, but its absence: precarity, that is, the insecurity and impermanence of employment, is seen as the major threat not only to public intellectualism, but to universities – or at least the notion of knowledge as public good – as a whole.

This suggests that there has been a shift in the coding of the relationship between intellectuals, critique and universities. In the first part of the twentieth century, the function of social critique was predominantly framed as independent of universities; in this sense, ‘public intellectuals’ were as likely, if not more likely, to be writers, journalists, and other men (since they were predominantly men) of ‘independent means’ as to be academic workers. This changed in the second half of the twentieth century, with both the massification of higher education and diversification of the social strata intellectuals were likely to come from. The desirability of university employment increased with the decreasing availability of permanent positions. In part because of this, precarity was framed as one of the main elements of the neoliberal transformation of higher education and research: insecurity of employment, in this sense, became the ‘new normal’ for people entering the academic profession in the twenty-first century.

Some elements of precarity can be directly correlated with processes of ‘unbundling’ (see Gehrke and Kezar 2015; Macfarlane 2011). In the UK, for instance, certain universities rely on platforms such as Teach Higher to provide the service of employing teaching staff, who deliver an increasing portion of courses. In this case, teaching associates and lecturers are no longer employees of the university; they are employed by the platform. Yet even when this is not the case, we can talk about processes of deterritorialization, in the sense in which the practice is part of the broader weakening of the link between teaching staff and the university (cf. Hall 2016). It is not only the security of employment that is changed in the process; universities, in this case, also own the products of teaching as practice, for instance, course materials, so that when staff depart, the university can continue to use this material for teaching, with someone else in charge of ‘delivery’.

A similar process is observable when it comes to ownership of the products of research. In the context of periodic research assessment and competitive funding, some universities have resorted to ‘buying’ staff, that is, offering highly competitive packages to those with a high volume of publications, in order to boost their REF scores. The UK research councils and particularly the Stern Review (2016) include measures explicitly aimed at countering this practice, but these, in turn, harm early career researchers who fear that institutional ‘ownership’ of their research output would create a problem for their employability in other institutions. What we can observe, then, is a disassembling of knowledge production, where the relationship between universities, academics, and the products of their labour – whether teaching or research – is increasingly weakened, challenged, and reconstructed.

Possibly the most tenuous link, however, applies to neither teaching nor research, but to what is referred to as universities’ ‘Third mission’: public engagement (e.g. Bacevic 2017). While academics have to some degree always been engaged with the public – most visibly those who have earned the label of ‘public intellectual’ – the beginning of the twenty-first century has, among other things, seen a rise in the demand for the formalisation of universities’ contribution to society. In the UK, this contribution is measured as ‘impact’, which includes any application of academic knowledge outside of academia. While appearances in the media constitute only one of the possible ‘pathways to impact’, they have remained a relatively frequent form of engaging with the public. They offer the opportunity for universities to promote and strengthen their ‘brand’, but they also help academics gain reputation and recognition. In this sense, they can be seen as a form of extension; they position universities in the public arena, and forge links with communities outside of their ‘traditional’ boundaries. Yet, this form of engagement can also provoke rather bitter boundary disputes when things go wrong.

In recent years, the case of Steven Salaita, a professor of Native American studies and American literature, became one of the most widely publicised disputes between academics and universities. In 2013, Salaita was offered a tenured position at the University of Illinois. However, in 2014 the Board of Trustees withdrew the offer, citing Salaita’s ‘incendiary’ posts on Twitter (Dorf 2014; Flaherty 2015). At the time, Israel was conducting one of its campaigns of daily shelling in the Gaza Strip. Salaita tweeted: ‘Zionists, take responsibility: if your dream of an ethnocratic Israel is worth the murder of children, just fucking own it already. #Gaza’ (Steven Salaita on Twitter, 19 July 2014). Salaita’s appointment had been made public and was awaiting formal approval by the Board of Trustees of the University of Illinois, usually a matter of pure technicality once it had been recommended by academic committees. Yet, in August Salaita was informed by the Chancellor that the University was withdrawing the offer.

Scandal erupted in the media shortly afterwards. It turned out that several of the university’s wealthy donors, as well as a few students, had contacted members of the Board demanding that Salaita’s offer be revoked. The Chancellor justified her decision by saying that the objection to Salaita’s tweets concerned standards of ‘civility’, not the political opinion they expressed, but the discussions inevitably revolved around questions of identity, campus politics, and the degree to which they can be kept separate. This was exacerbated by a split within the American Association of University Professors, which is the closest the professoriate in the US has to a union: while the AAUP issued a statement of support for Salaita as soon as the news broke, Cary Nelson, the association’s former president and a prolific writer on issues of university autonomy and academic freedom, defended the Board’s decision. The reason? The protections awarded by the principle of academic freedom, Nelson claimed, extend only to tenured professors.

Very few people agreed with Nelson’s definition: eventually, the courts upheld Salaita’s case that the University of Illinois Board’s decision constituted breach of contract. He was awarded a hefty settlement (ten times the annual salary he would have been earning at Illinois), but was not reinstated. This points to serious limitations of using ‘academic freedom’ as an analytical concept. While university autonomy and academic freedom are principles invoked by academics in order to protect their activity, their application in academic and legal practice is, at best, open to interpretation. A detailed report by Karran and Mallinson (2017), for instance, shows that both the understanding and the legal level of protection of academic freedom vary widely across European countries. In the US, the principle is often framed as part of freedom of speech and thus protected under the First Amendment (Karran 2009); but, as we could see, this does not in any way insulate it against widely differing interpretations of how it should be applied in practice.

While the Salaita case can be considered foundational in terms of making these questions central to a prolonged public controversy as well as a legal dispute, navigating the terrain in which these controversies arise has progressively become more complicated. Carrigan (2016) and Lupton (2014) note that almost everyone, to some degree, is already a ‘digital scholar’. While most human resources departments as well as graduate programmes increasingly offer workshops or courses on ‘using social media’ or ‘managing your identity online’, the issue is clearly not just one of the right tool or skill. Inevitably, it comes down to the question of boundaries, that is, what ‘counts as’ public engagement in the ‘digital university’, and why? How is academic work seen, evaluated, and recognised? Last, but not least, who decides?

More than questions of accountability or definitions of academic freedom, these controversies cannot be seen separately from questions of ontology, that is, questions about what entities are composed of, as well as how they act. This brings us back to assemblages: what counts as being a part of the university – and to what degree – and what does not? Does an academic’s activity on social media count as part of their ‘public’ engagement? Does it count as academic work, and should it be valued – or, alternatively, judged – as such? Do the rights (and protections) of academic freedom extend beyond the walls of the university, and in what cases? Last, but not least, which elements of the university exercise these rights, and which parts can refuse to extend them?

The case of George Ciccariello-Maher, until recently a Professor of English at Drexel University, offers an illustration of how these questions impact practice. On Christmas Day 2016, Ciccariello-Maher tweeted ‘All I want for Christmas is white genocide’, an ironic take on certain forms of right-wing critique of racial equality. Drexel University, which had been closed over Christmas vacation, belatedly caught up with the ire that the tweet had provoked among conservative users of Twitter, and issued a statement saying that ‘While the university recognises the right of its faculty to freely express their thoughts and opinions in public debate, Professor Ciccariello-Maher’s comments are utterly reprehensible, deeply disturbing and do not in any way reflect the values of the university’. After the ironic nature of the concept of ‘white genocide’ was repeatedly pointed out both by Ciccariello-Maher himself and some of his colleagues, the university apologised, but did not withdraw its statement.

In October 2017, the University placed Ciccariello-Maher on administrative leave, after his tweets about white supremacy as the cause of the Las Vegas shooting provoked a similar outcry among right-wing users of Twitter.1 Drexel cited safety concerns as the main reason for the decision – Ciccariello-Maher had been receiving racist abuse, including death threats – but it was obvious that his public profile was becoming too much to handle. Ciccariello-Maher resigned on 31st December 2017. His statement read: ‘After nearly a year of harassment by right-wing, white supremacist media and internet trolls, after threats of violence against me and my family, my situation has become unsustainable’.2 However, it indirectly contained a criticism of the university’s failure to protect him: in an earlier opinion piece published right after the Las Vegas controversy, Ciccariello-Maher wrote that ‘[b]y bowing to pressure from racist internet trolls, Drexel has sent the wrong signal: That you can control a university’s curriculum with anonymous threats of violence. Such cowardice notwithstanding, I am prepared to take all necessary legal action to protect my academic freedom, tenure rights and most importantly, the rights of my students to learn in a safe environment where threats don’t hold sway over intellectual debate’.3 The fact that, three months later, he no longer deemed it safe to continue doing that from within the university suggests that something had changed in the positioning of the university – in this case, Drexel – as a ‘bulwark’ against attacks on academic freedom.

Forms of capital and lines of flight

What do these cases suggest? In a deterritorialised university, the link between academics, their actions, and the institution becomes weaker. In the US, tenure is supposed to codify a stronger version of this link: hence Nelson’s attempt to justify Salaita’s dismissal as a consequence of the fact that he did not have tenure at the University of Illinois, and thus the institutional protection of academic freedom did not extend to his actions. Yet there is a clear sense of the ‘stretching’ of universities’ responsibilities or jurisdiction. Before the widespread use of social media, it was easier to distinguish between utterances made in the context of teaching or research, and others made, often quite literally, off campus. This doesn’t mean that there were no controversies: however, the concept of academic freedom could be applied as a ‘rule of thumb’ to discriminate between forms of engagement that counted as ‘academic work’ and those that did not. In a fragmented and pluralised public sphere, and with the growing insecurity of academic employment, this concept is clearly no longer sufficient, if it ever was.

Of course, one might claim that in this particular case it would suffice to define the boundaries of academic freedom by conclusively limiting it to tenured academics. But that would not answer questions about the form or method of those encounters. Do academics tweet in a personal, or in a professional, capacity? Is it easy to distinguish between the two? While some academics have taken to disclaimers specifying the capacity in which they are engaging (e.g. ‘tweeting in a personal capacity’ or ‘personal views/do not express the views of the employer’), this only obscures the complex entanglement of individual, institution, and forms of engagement. This means that, in thinking about the relationship between individuals, institutions, and their activities, we have to take into account the direction in which capital travels. This brings us back to lines of flight.

The most obvious form of capital in motion here is symbolic. Intellectuals such as Salaita and Ciccariello-Maher gain large numbers of followers and visibility on social media in part because of their institutional position; in turn, universities encourage (and may even require) staff to list their public engagement activities and media appearances on their profile pages, as this increases the visibility of the institution. Salaita had been a respected and vocal critic of Israel’s policy and politics in the Middle East for almost a decade before being offered a job at the University of Illinois. Ciccariello-Maher’s Drexel profile page listed his involvement as

… a media commentator for such outlets as The New York Times, Al Jazeera, CNN Español, NPR, the Wall Street Journal, the Washington Post, the Los Angeles Times and the Christian Science Monitor, and his opinion pieces have run in the New York Times’ Room for Debate, The Nation, The Philadelphia Inquirer and Fox News Latino.4

One would be forgiven for thinking that, until the unfortunate tweet, the university supported and even actively promoted Ciccariello-Maher’s public profile.

The ambiguous nature of symbolic capital is illustrated by the case of another controversial public intellectual, Slavoj Žižek. The renowned ‘Elvis of philosophy’ is not readily associated with an institution; however, he in fact holds three institutional positions. Žižek is a fellow of the Institute of Philosophy and Social Theory of the University of Ljubljana, teaches at the European Graduate School, and, most recently, has been appointed International Director of the Birkbeck Institute for the Humanities. The Institute’s web page describes his appointment:

Although courted by many universities in the US, he resisted offers until the International Directorship of Birkbeck’s Centre came up. Believing that ‘Political issues are too serious to be left only to politicians’, Žižek aims to promote the role of the public intellectual, to be intellectually active and to address the larger public.5

Yet Žižek quite openly flaunts what comes across as a principled anti-institutional stance. Not long ago, a YouTube video in which he dismisses having to read students’ essays as ‘stupid’ attracted quite a degree of opprobrium.6 On the one hand, of course, what Žižek says in the video can be seen as yet another form of attention-seeking, or a testimony to the capacity of new social media to make anything and everything go ‘viral’. Yet what makes it exceptional is exactly its unexceptionality: Žižek is known for voicing opinions that are bound to prove controversial or at least tread on the boundary of political correctness, and it is not a big secret that most academics do not find the work of essay-reading and marking particularly rewarding. But, unlike Žižek, they are not in a position to say it. Trumpeting disregard for one’s job on social media would, probably, seriously endanger it for most academics. As we saw in the examples of Salaita and Ciccariello-Maher, universities were quick to sanction opinions that were far less directly linked to teaching. The fact that Birkbeck was not bothered by this – in fact, it could be argued that this attitude contributed to the appeal of having Žižek, who previously resisted ‘courting’ by universities in the US – serves as a reminder that symbolic capital has to be seen in relation to other possible ‘lines of flight’.

These processes cannot be seen as simply arising from tensions between individual freedom on the one hand, and institutional regulation on the other. The tenuous boundaries of the university become more visible in relation to lines of flight that combine persons and different forms of capital: economic, political, and symbolic. The Salaita controversy, for instance, is a good illustration of the ‘entanglement’ of the three. Within the political context – that is, the longer Israeli-Palestinian conflict, and especially the role of the US within it – and within the specific set of economic relationships – that is, the fact that US universities are to a great degree reliant on funds from their donors – Salaita’s statement becomes coded as a symbolic liability, rather than an asset. This runs counter to the way his previous statements were coded: so, instead of channelling symbolic capital towards the university, it resulted in the threat of economic capital ‘fleeing’ in the opposite direction, in the sense of donors withholding it from the university. When it comes to Ciccariello-Maher, from the standpoint of the university, the individual literally acts as a nodal point of intersection between different ‘lines of flight’: on the one hand, the channelling of symbolic capital generated through his involvement as an influential political commentator towards the institution; on the other, the possible ‘breach’ of the integrity (and physical safety) of staff and students as its constituent parts via threats of physical violence against Ciccariello-Maher.

All of this suggests that deterritorialisation can be seen as positive, and even actively supported – until, of course, the boundaries of the institution become too porous, in which case the university swiftly reterritorialises. In the case of the University of Illinois, the threat of withdrawn support from donors was sufficient to trigger the reterritorialisation process by redrawing the boundaries of the university, symbolically leaving Salaita outside them. In the case of Ciccariello-Maher, it would be possible to claim that agency was distributed, in the sense that it was his decision to leave; yet a second look suggests that it was also a case of reterritorialisation, inasmuch as the university refused to guarantee his safety, or that of his students, in the face of threats of white supremacist violence or disruption.

This also serves to illustrate why ‘unbundling’ as a concept is not sufficient to theorise the processes of assembling and disassembling that take place in (or on the same plane as) the contemporary university. Public engagement sits on a boundary: it is neither fully inside the university, nor is it ‘outside’ by virtue of taking place in the environment of traditional or social media. The impossibility of conclusively situating it ‘within’ or ‘without’ is precisely what hints at the arbitrary nature of boundaries. The contours of an assemblage thus become visible in such ‘boundary disputes’ as the controversies surrounding Salaita and Ciccariello-Maher or, alternatively, in their relative absence in the case of Žižek. While unbundling starts from the assumption that these boundaries are relatively fixed, and it is only the components that change (more specifically, are included or excluded), assemblage theory allows us to reframe entities as instantiated through processes of territorialisation and deterritorialisation, thus challenging the degree to which specific elements are framed (or coded) as elements of an assemblage.

Conclusion: towards a new political economy of assemblages

Reframing universities (and, by extension, other organisations) as assemblages thus allows us to shift attention to the relational nature of the processes of knowledge production. Contrary to the narratives of the university’s ‘decline’, we can rather talk about a more variegated ecology of knowledge and expertise, in which the identity of particular agents (or actors) is not exhausted in their position with(in) or without the university, but rather performed through a process of generating, framing, and converting capitals. This calls for a longer and more elaborate study of the contemporary political economy (and ecology) of knowledge production, which would need to take into account multiple other actors and networks – from the more obvious, such as Twitter, to the less ‘tangible’ ones that these afford, such as differently imagined audiences for intellectual products.

This also brings attention back to the question of economies of scale. Certainly, not all assemblages exist on the same plane. The university is a product of multiple forces, political and economic, global and local, but they do not necessarily operate on the same scale. For instance, we can talk about the relative importance of geopolitics in a changing financial landscape, but not about the impact of, say, digital technologies on ‘The University’ in absolute terms. Similarly, talking about effects of ‘neoliberalism’ makes sense only insofar as we recognise that ‘neoliberalism’ itself stands for a confluence of different and frequently contradictory forces. Some of these ‘lines of flight’ may operate in ways that run counter to the prior states of the object in question – for instance, by channelling funds, prestige, or ideas away from the institution. The question of (re)territorialisation, thus, inevitably becomes the question of the imaginable as well as actualised boundaries of the object; in other words, when is an object no longer an object? How can we make boundary-work integral to the study of the social world, and of the ways we go about knowing it?

This line of inquiry connects with a broader sociological tradition of the study of boundaries, as the social process of delineation between fields, disciplines, and their objects (e.g. Abbott 2001; Lamont 2009; Lamont and Molnár 2002). But it also brings in another philosophical, or, more precisely, ontological, question: how do we know when a thing is no longer the same thing? This applies not only to universities, but also to other social entities – states, regimes, companies, relationships, political parties, and social movements. The social definition of entities is always community-specific and thus in a sense arbitrary; similarly, how the boundaries of entities are conceived and negotiated has to draw on a socially-defined vocabulary that conceptualises certain forms of (dis-)assembling as potentially destructive to the entity as a whole. From this perspective, understanding how entities come to be drawn together (assembled), how their components gain significance (coding), and how their relations are strengthened or weakened (territorialisation) is a useful tool in thinking about beginnings, endings, and resilience – all of which become increasingly important in the current political and historical moment.

The transformation of processes of knowledge production intensifies all of these dynamics, and the ways in which they play out in universities. While these processes certainly contribute to the unbundling of the university’s different functions, the analysis presented in this article shows that the university remains a potent agent in the social world – though what the university is composed of can certainly differ. In this sense, while the pronouncement of the ‘death’ of universities should be seen as premature, it serves as a powerful reminder that understanding change depends, to a great degree, not only on how we conceptualise the mechanisms that drive it, but also on how we view the elements that make up the social world. The tendency to posit fixed and durable boundaries of objects – which I have elsewhere referred to as ‘ontological bias’7 – has, therefore, important implications for both scholarship and practice. This article hopes to have made a contribution towards questioning the boundaries of the university as one among these objects.

——————–

If you’re interested in reading more about these tensions, I also recommend Mark Carrigan’s ‘Social Media for Academics’ (Sage).

How to think about theory (interview with Mark Carrigan, 28 June 2018)

In June 2018, Mark Carrigan interviewed me and a few other people on what social theory is. The original interview was published on the Social Theory Applied website. I’m sharing it here only lightly edited, as it still reflects to a good degree what I think about the labour of theorizing as well as what I call ‘the social life of concepts’, which is my approach to doing and teaching theory.

MC: What is theory?

JB: The million dollar question, isn’t it? I think theory can mean quite a few things – Abend (2008) has listed a few – but rather than reiterate that, I’d focus on two interpretations of the concept that are crucial to my work. One is that it is a language, or a vocabulary, for making sense of social reality; like all languages, it allows for improvisation, but also has rules and procedures that regulate how and under what conditions specific statements make sense. It is also a practice: that is, the practice of growing, developing, and engaging with concepts in that language. This is why Arendt’s discussion of the concept as theorein is not opposed to practice as a whole, but rather comprises action, though one that entails a different idea of engagement.

MC: Why is it important?

JB: This is also why I believe theory is important – I think that a meta-language, and a language about that language, is necessary in order to ensure we can have a meaningful conversation about social matters – and when I say “we”, I do not mean only scientists. Social life by definition involves some level of reduction of complexity: how we go about reducing that complexity has direct implications for how we go about dealing with other people and our environment. This is also why I think it is fruitless to separate social and political theory. Take the concept of class, for instance: it can – it does – mean different things to different people. We need to have both a language in which to make these concepts meaningfully talk to each other, and a routinized social practice for doing so.

MC: What role does it play in your work?

JB: One of the corollaries of my training in both sociology and anthropology is that I find it difficult to sustain discussions about theory that do not engage with how actual people go about using these concepts – the exegetic tone of “did Marx really mean to say this…” or “why Bourdieu’s concept of social capital is that….” is, in my view, both too canonical and insufficiently exciting.

Theory need not be scholastic. One of the elements I got interested in when I was doing my first PhD, for instance, was the concept of ‘romantic relationship’ – there were different attempts to theorise it (inversion of historical abstraction of property/inheritance rights, subjugation of women, emancipation from gender roles, cultural expression of ‘hard-wired’ preferences, and so on), but fewer attempts to see how these interpretations ‘sit’ with people’s ideas and practices. Reality does not ‘naturally’ fit into a specific theoretical framework. Rather than trying to make it do so, I decided to put these different theoretical lenses into conversation, to see how a particular empirical case could illuminate their commonalities, differences, and possible overlaps.

My second PhD – which is on the role of critique in and of higher education – pretty much repeated this movement, but took it one step further. It asks what difference people’s knowledge makes in how they go about approaching things (including their own situation). Some of this knowledge is theoretical, both in the sense in which it is imbued with concepts derived from theory (as in Giddens’ double hermeneutic), and in the sense in which the question of the link between knowledge and action is in itself theoretically informed. There’s a whole bunch of nested epistemic double binds in there, and that’s what I find so attractive! In philosophical terms, I am aiming to bridge the gap between [critical] realist and pragmatist accounts of the production of knowledge and its role in social reality. I’ve found speculative realism to be a potentially useful tool in doing so, but it is a signpost rather than a church.

I always strive to work simultaneously *on* and *with* theory; someone recently described this as “theoretically hybrid”, which I think was a nice way of saying that I was inclined to bastardize each and every concept I ever came across. But I think this is what the job of the theorist is about. I understand some people prefer to work within the confines of a single theoretical tradition, sometimes dogmatically so; but this has never been my choice. I have very little reverence for principled fidelity to specific theoretical frameworks. Theories are worldviews; this means they need to be challenged.

MC: What would this routinised social practice look like? Is this something social theorists are uniquely qualified to do? How do we ensure this challenge happens? There are lots of obvious mechanisms within the academy which militate against this.

JB: Well, I think routinised social practice is what happens in the teaching of sociology and other social science disciplines; it also happens at conferences, reading groups, etc. – such as the Theory stream at the British Sociological Association’s annual conference. The problem is that these practices are often sequestered from other bits of theorising. For instance, feminist theory is rarely treated as part of ‘mainstream’ social theory; the same goes for postcolonial theory and theories of race, though it seems this is finally beginning to change.

This reproduces, as your question suggests, one of the worst tendencies in academia (and beyond): theories about and by educated Western white men are treated as ‘theory’, while almost any theory that departs from even one of these categories is automatically a ‘special case’ – as if feminist theory applied only to women, and theories of race only to people of colour. As someone whose induction into theory happened initially through the combination of social anthropology (where questions of identity and difference are pretty much front and centre) and philosophy of science (which acknowledged quite a while ago that all claims to knowledge – including theoretical knowledge – are socially grounded), I find this almost incomprehensible – or, rather, I find that explanations for this go back to the elements of academia we do not particularly like: racism, sexism, Euro- or (not always mutually exclusively) Anglo-centrism, etc.

Social theorists, on the whole, have not been very good at talking about this. This means that this challenge tends to happen in isolated contexts – and a lot of mainstream social theory carries on with ‘business as usual’. Making it more central requires, I think, a lot of concerted effort. Some of this is personal – for instance, I make a point of always calling out these practices when I spot them, and very adamantly resist the ‘pigeonholing’ in which, e.g., women’s theoretical claims are routinely repackaged or treated as empirical. For example: a man writing on the privatisation of enterprises is seen as contributing to Marxist theory, but a woman writing on the gendered division of labour is either writing ‘about women’ (sic!) or about household labour.

When I was writing my first PhD, about relationships, people often said ‘oh’, as if it were a ‘light’ topic, or as if it pertained only to practices of social mobility in a post-socialist context, where my fieldwork was. Giddens’ ‘pure relationship’, on the other hand – which, incidentally, is a concept I did my best to write against – was not taken as representative only of the lived experience of transnational, bourgeois, mobile academics. This will sound a bit Gramscian, but a lot of theoretical claims made by ‘academic celebrities’ that are routinely taken seriously are often little more than the extrapolation of their privilege. Yet, clearly, that is not the problem in and of itself – everyone writes themselves into the theories they develop. The problem is treating some of these as reflections of universal, God-given truth, and others as ‘about women’ or ‘about race’. It’s the culture of condescension towards women and minorities that really needs to change.

Obviously, calling it out is not enough: I think we need strong organisational and institutional support for this. One of academia’s performative contradictions – one I am particularly dedicated to exploring – is that collective practices often work precisely against this. So, we can have a workshop or panel on sexism, racism, or colonialism in social theory, but actually challenging these practices – including in their ‘everyday’ guises – takes a lot of courage, but also a lot of solidarity. It cannot happen outside of challenging the whole culture of fear that currently pervades academia but which, I hope, the UCU strikes have started chipping away at.

The other thing we can do is provide spaces where these conversations can take place. For instance, the Social Theory summer school we ran at the University of Cambridge in 2016 was developed exactly to surmount this tendency towards ‘cloistered’ (well, of all words!) theorising. To step outside of the retreat of academic positions, seminars, self-rewarding research grant panels, etc., and ask: what is it that doing theory actually entails? Is it anything other than an attempt to justify our own (academic and non-academic) privilege by casually namedropping Foucault or Durkheim? I think this is the question we really need to answer.

Knowing neoliberalism

(This is a companion/‘explainer’ piece to my article, ‘Knowing Neoliberalism’, published in July 2019 in Social Epistemology. While it does include a few excerpts from the article, if using it, please cite and refer to the original publication. The very end of this post explains why).

What does it mean to ‘know’ neoliberalism?

What does it mean to know something from within that something? This question formed the starting point of my (recently defended) PhD thesis. ‘Knowing neoliberalism’ summarizes some of its key points. In this sense, the main argument of the article is epistemological — that is, it is concerned with the conditions (and possibilities, and limitations) of (human) knowledge — in particular when produced and mediated through (social) institutions and networks (which, as some of us would argue, is always). More specifically, it is interested in a special case of that knowledge — that is, what happens when we produce knowledge about the conditions of the production of our own knowledge (in this sense, it’s not ‘about universities’ any more than, say, Bourdieu’s work was ‘about universities’ and it’s not ‘on education’ any more than Latour’s was on geology or mining. Sorry to disappoint).

The question itself, of course, is not new – it appears, in various guises, throughout the history of Western philosophy, particularly in the second half of the 20th century with the rise (and institutionalisation) of different forms of theory that earned the epithet ‘critical’ (including the eponymous work of philosophers associated with the Frankfurt School, but also other branches of Marxism, feminism, postcolonial studies, and so on). My own theoretical ‘entry points’ came from a longer engagement with Bourdieu’s work on sociological reflexivity and Boltanski’s work on critique, mediated through Arendt’s analysis of the dichotomy between thinking and acting and De Beauvoir’s ethics of ambiguity; a bit more about that here. However, the critique of neoliberalism that originated in universities in the UK and the US in the last two decades – including intellectual interventions I analysed in the thesis – lends itself as a particularly interesting case to explore this question.

Why study the critique of neoliberalism?

  • Critique of neoliberalism in academia is an enormously productive genre. The number of books, journal articles, and special issues, not to mention ‘grey’ academic literature such as reviews or blogs (in the ‘Anglosphere’ alone), has grown exponentially since the mid-2000s. Originating in anthropological studies of ‘audit culture’, the genre now includes at least one dedicated book series (Palgrave’s ‘Critical University Studies’, which I’ve mentioned in this book review), as well as people committed to establishing ‘critical university studies’ as a field of its own (for the avoidance of doubt, I do not associate my work with this strand, and while I find the delineation of academic ‘fields’ interesting as a sociological phenomenon, I have serious doubts about the value and validity of field proliferation — which I’ve shared in many amicable discussions with colleagues in the network). At the start of my research, I referred to this as the paradox of the proliferation of critique and the relative absence of resistance; the article, in part, tries to explain this paradox by examining what happens if and when we frame neoliberalism as an object of knowledge — or, in formal terms, an epistemic object.
  • This genre of critique is, and has been, highly influential: the tropes of the ‘death’ of the university or the ‘assault’ on academia are regularly reproduced in and through intellectual interventions (both within and outside of the university ‘proper’), including far beyond academic neoliberalism’s ‘native’ contexts (Australia, UK, US, New Zealand). Authors who present this kind of critique, while most frequently coming from (or being employed at) Anglophone universities in the ‘Global North’, are often invited to speak to audiences in the ‘Global South’. Some of this, obviously, has to do with the lasting influence of colonial networks and hierarchies of ‘global’ knowledge production, and, in particular, with the durability of ‘White’ theory. But it illustrates the broader point that the production of critique needs to be studied from the same perspective as the production of any other sort of knowledge – rather than as somehow exempt from it. My work takes Boltanski’s critique of ‘critical sociology’ as a starting point, but extends it towards a different epistemic position:

Boltanski primarily took issue with what he believed was the unjustified reduction of critical properties of ‘lay actors’ in Bourdieu’s critical sociology. However, I start from the assumption that professional producers of knowledge are not immune to the epistemic biases to which they suspect their research subjects to be susceptible…what happens when we take forms and techniques of sociological knowledge – including those we label ‘critical’ and ‘reflexive’ – to be part and parcel of, rather than opposed to or in any way separate from, the same social factors that we assume are shaping epistemic dispositions of our research subjects? In this sense, recognising that forms of knowledge produced in and through academic structures, even if and when they address issues of exploitation and social (in)justice, are not necessarily devoid of power relations and epistemic biases, seems a necessary step in situating epistemology in present-day debates about neoliberalism. (KN, p. 4)

  • This, at the same time, is what most of the sources I analysed in my thesis have in common: by and large, they locate sources of power – including neoliberal power – always outside of their own scope of influence. As I’ve pointed out in my earlier work, this means ‘universities’ – which, in practice, often means ‘us’, academics – are almost always portrayed as being on the receiving end of these changes. Not only is this profoundly unsociological – literally every take on human agency in the past 50-odd years, from Foucault through to Latour and from Giddens through to Archer, recognizes that ‘we’ (including as epistemic agents) have some degree of influence over what happens – it is also profoundly unpolitical, as it outsources agency to variously conceived ‘others’ (as I’ve argued here) while avoiding the tricky elements of our own participation in the process. This is not to repeat the tired dichotomy of complicity vs. resistance, which is another not particularly innovative reading of the problem. What the article asks, instead, is: what kind of ‘purpose’ does the systematic avoidance of questions of ambiguity and ambivalence serve?

What does it aim to achieve?

The objective of the article is not, by the way, to say that the existing forms of critique (including other contributions to the special issue) are ‘bad’ or that they can somehow be ‘improved’. Least of all is it to say that if we just ‘corrected’ our theoretical (epistemological, conceptual) lens we would finally be able to ‘defeat neoliberalism’. The article, in fact, argues the very opposite: that as long as we assume that ‘knowing’ neoliberalism will somehow translate into ‘doing away’ with neoliberalism we remain committed to the (epistemologically and sociologically very limited) assumption that knowledge automatically translates into action.

(…) [the] politically soothing, yet epistemically limited assumption that knowledge automatically translates into action…not only omit(s) to engage with precisely the political, economic, and social elements of the production of knowledge elaborated above, [but] eschews questions of ambiguity and ambivalence generated by these contradictions…examples such as doctors who smoke, environmentalists who fly around the world, and critics of academic capitalism who nonetheless participate in the ‘academic rat race’ (Berliner 2016) remind us that knowledge of the negative effects of specific forms of behaviour is not sufficient to make them go away (KN, p. 10)

(If it did, there would be no critics of neoliberalism who exploit their junior colleagues, critics of sexism who nonetheless reproduce gendered stereotypes and dichotomies, or critics of academic hierarchy who evaluate other people on the basis of their future ‘networking’ potential. And yet, here we are).

What is it about?

The article approaches ‘neoliberalism’ from several angles:

Ontological: What is neoliberalism? It is quite common to see neoliberalism as an epistemic project. Yet does the fact that neoliberalism changes the nature of the production of knowledge and even what counts as knowledge – and, eventually, becomes itself a subject of knowledge – give us grounds to infer that the way to ‘deal’ with neoliberalism is to frame it as an object (of knowledge)? Is the way to ‘destroy’ neoliberalism to ‘know it’ better? Does treating neoliberalism as an ideology – that is, as something that the masses can be ‘enlightened’ about – translate into the possibility of wielding political power against it?

(Plot spoiler: my answer to the above questions is no).

Epistemological: What does this mean for the ways we can go about knowing neoliberalism (or, for that matter, any element of ‘the social’)? My work, which is predominantly in social theory and the sociology of knowledge (no, I don’t work ‘on education’ and my research is not ‘about universities’), overlaps substantially with social epistemology – the study of the way social factors (regardless of how we conceive of them) shape the capacity to make knowledge claims. In this context, I am particularly interested in how they influence reflexivity, as the capacity to make knowledge claims about our own knowledge – including knowledge of ‘the social’. Enter neoliberalism.

What kind of epistemic position are we occupying when we produce an account of the neoliberal conditions of knowledge production in academia? Is one acting more like the ‘epistemic exemplar’ (Cruickshank 2010) of a ‘sociologist’, or a ‘lay subject’ engaged in practice? What does this tell us about the way in which we are able to conceive of the conditions of the production of our own knowledge about those conditions? (KN, p. 4)

(Yes, I know this is a bit ‘meta’, but that’s how I like it).

Sociological: How do specific conditions of our own production of knowledge about neoliberalism influence this? As a sociologist of knowledge, I am particularly interested in relations of power and privilege reproduced through institutions of knowledge production. As my work on the ‘moral economy’ of Open Access with Chris Muellerleile argued, the production of any type of knowledge cannot be analysed as external to its conditions, including when the knowledge aims to be about those conditions.

‘Knowing neoliberalism’ extends this line of argument by claiming that we need to engage seriously with the political economy of critique. It points to some of the places we could look for such clues: for instance, the political economy of publishing. The same goes for networks of power and privilege: whose knowledge is seen as ‘translatable’ and ‘citable’, and whose can be treated as an empirical illustration:

Neoliberalism offers an overarching diagnostic that can be applied to a variety of geographical and political contexts, on different scales. Whose knowledge is seen as central and ‘translatable’ in these networks is not independent from inequalities rooted in colonial exploitation, maintaining a ‘knowledge hierarchy’ between the Global North and the Global South…these forms of interaction reproduce what Connell (2007, 2014) has dubbed ‘metropolitan science’: sites and knowledge producers in the ‘periphery’ are framed as sources of ‘empirical’, ‘embodied’, and ‘lived’ resistance, while the production of theory, by and large, remains the work of intellectuals (still predominantly White and male) situated in prestigious universities in the UK and the US. (KN, p. 9)

This, incidentally, is the only part of the article that deals with ‘higher education’. It is very short.

Political: What does this mean for the different sorts of political agency (and actorhood) that can (and do) take place in neoliberalism? What happens when we assume that (more) knowledge leads to (more) action? (Apart from a slew of often well-intended but misconceived policies, some of which I’ve analysed in my book, ‘From Class to Identity’.) The article argues that effecting a cognitive slippage between the two parts of Marx’s Eleventh Thesis – that is, assuming that interpreting the world will itself lead to changing it – is what contributes to the ‘paradox’ of the overproduction of critique. In other words, we become more and more invested in ‘knowing’ neoliberalism – e.g. producing books and articles – and less invested in doing something about it. This, obviously, is neither a zero-sum game (and it shouldn’t be) nor an old-fashioned call on academics to drop their laptops and start mounting barricades; rather, it is a reminder that acting as if there were an automatic link between knowledge of neoliberalism and resistance to neoliberalism tends to leave the latter in its place.

(Actually, maybe it is a call to start mounting barricades, just in case).

Moral: Is there an ethically correct or more just way of ‘knowing’ neoliberalism? Does answering these questions enable us to generate better knowledge? My work – especially the part that engages with the pragmatic sociology of critique – is particularly interested in the moral framing and justification of specific types of knowledge claims. Rather than aiming to provide the ‘true’ way forward, the article asks what kind of ideas of ‘good’ and ‘just’ are invoked/assumed through critique? What kind of moral stance does ‘gnossification’ entail? To steal the title of this conference, when does explaining become ‘explaining away’ – and, in particular, what is the relationship between ‘knowing’ something and framing our own moral responsibility in relation to something?

The full answer to the last question, unfortunately, will take more than one publication. The partial answer the article hints at is that, while having a ‘correct’ way of ‘knowing’ neoliberalism will not ‘do away’ with neoliberalism, we can and should invest in more just and ethical ways of ‘knowing’ altogether. It should not need repeating that the evidence of widespread sexual harassment in academia, not to mention deeply entrenched casual sexism, racism, ableism, ethnocentrism, and xenophobia, suggests that ‘we’ (as academics) are not as morally impeccable as we like to think we are. Thing is, no-one is. The article hopes to have made a small contribution towards giving us the tools to understand why, and how, this is the case.

I hope you enjoy the article!

——————————————————-

P.S. One of the rather straightforward implications of the article is that we need to come to terms with the multiple reasons for why we do the work we do. Correspondingly, I thought I’d share a few of the reasons that inspired me to write this ‘companion’ post. When I first started writing/blogging/Tweeting about the ‘paradox’ of neoliberalism and critique in 2015, this line of inquiry wasn’t very popular: most accounts smoothly reproduced the ‘evil neoliberalism vs. poor us little academics’ narrative. This was also the case with most people I met at workshops, conferences, and other events I participated in (I went to quite a few as part of my fieldwork).

In the past few years, however, more analyses seem to converge with mine on quite a few analytical and theoretical points. My initial surprise at the fact that they seemed not to engage directly with any of these arguments — in fact, their authors were occasionally very happy to recite them back at me, without acknowledgement, attribution or citation — was somewhat clarified through reading the work on gendered citation practices. At the same time, it provided a very handy illustration of exactly the type of paradox described here: namely, while most academics are quick to decry the precarity and ‘awful’ culture of exploitation in academia, almost as many are equally quick to ‘cite up’ or act strategically in ways that reproduce precisely these inequalities.

The other ‘handy’ way of appropriating the work of other people is to reduce the scope of their arguments, ideally representing them as an empirical illustration that has limited purchase in a specific domain (‘higher education’, ‘gender’, ‘religion’), while hijacking the broader theoretical point for yourself (I have heard a number of other people — most often, obviously, women and people of colour — describe a very similar thing happening to them).

This post is thus a way of clarifying exactly what the argument of the article is, in, I hope, language that is simple enough even if you’re not keen on social ontology, social epistemology, social theory, or, actually, anything social (couldn’t blame you).

P.P.S. In the meantime, I’ve also started writing an article on how precisely these forms of ‘epistemic positioning’ are used to limit and constrain the knowledge claims of ‘others’ (women, minorities, etc.) in academia: if you have any examples you would like to share, I’m keen to hear them!

Existing while female

Space

The most threatening spectacle to the patriarchy is a woman staring into space.

I do not mean in the metaphorical sense, as in a woman doing astronomy or astrophysics (or maths or philosophy), though all of these help, too. Just plainly sitting, looking into some vague mid-point of the horizon, for stretches of time.

I perform this little ‘experiment’ at least once per week (more often, if possible; I like staring into space). I wholly recommend it. There are a few simple rules:

  • You can look at the passers-by (a.k.a. ‘people-watching’), but try to avoid eye contact longer than a few seconds: people should not feel that they are particular objects of attention.
  • If you are sitting in a café, or a restaurant, you can have a drink, ideally a tea or coffee. That’s not to say you shouldn’t enjoy your Martini cocktails or glasses of Chardonnay, but images of women cradling tall glasses of the alcoholic drink of choice have been very successfully appropriated by both capitalism and patriarchy, for distinct though compatible purposes.
  • Don’t look at your phone. If you must check the time or messages it’s fine, but don’t start staring at it, texting, or browsing.
  • Don’t read (a book, a magazine, a newspaper). If you have a particularly interesting or important thought feel free to scribble it down, but don’t bury your gaze behind a notebook, book, or a laptop.

Try doing this for an hour.

What this ‘experiment’ achieves is that it renders visible the simple fact of existing. As a woman. Even worse, it renders visible the process of thinking. Simultaneously inhabiting an inner space (thinking) and public space (sitting), while doing little else to justify your existence.

NOT thinking-while-minding-children, as in ‘oh isn’t it admirrrrable that she manages being both an academic and a mom’.

NOT any other form of ‘thinking on our feet’ that, as Isabelle Stengers and Vinciane Despret (and Virginia Woolf) noted, was the constitutive condition for most thinking done by women throughout history.

The important thing is to claim space to think, unapologetically and in public.

Depending on place and context, this usually produces at least one of the following reactions:

  • Waiting staff, especially if male, will become increasingly attentive, repeatedly inquiring whether (a) I am alright (b) everything was alright (c) I would like anything else (yes, even if they are not trying to get you to leave, and yes, I have sat in the same place with friends, and this didn’t happen)
  • Men will try to catch my eye
  • Random strangers will start repeatedly glancing and sometimes staring in my direction.

I don’t think my experience in this regard is particularly exceptional. Yes, there are many places where women couldn’t even dream of sitting alone in public without risking things much worse than uncomfortable stares (I don’t advise attempting this experiment in such places). Yes, there are places where staring into a book/laptop/phone, ideally with headphones on, is the only way to avoid being approached, chatted up, or harassed by men. Yet, even in wealthy, white, urban, middle-class, ‘liberal’ contexts, women who display signs of being afflicted by ‘the life of the mind’ are still somehow suspect. For what this signals is that it is, actually, possible for women to have an inner life not defined by relation to men – if not to particular men, then at least to men in the abstract.

Relations

‘Is it possible to not be in relation to white men?’, asks Sara Ahmed, in a brilliant essay on intellectual genealogies and institutional racism. The short answer is yes, of course, but not as long as men are in charge of drawing the family tree. Philosophy is a clear example. Two of my favourite philosophers, De Beauvoir and Arendt, are routinely positioned in relation to, respectively, Sartre and Heidegger (and, in Arendt’s case, to a lesser degree, Jaspers). While, in the case of De Beauvoir, this could be, to a degree, justified – after all, they were intellectual and writing partners for most of Sartre’s life – the narrative is hardly balanced: it is always Simone who is seen in relation to Jean-Paul, not the other way round*.

In a bit of an ironic twist, De Beauvoir’s argument in The Second Sex that a woman exists only in relation to a man seems to have been adopted as a stylistic prescription for narrating intellectual history (I recently downloaded an episode of In Our Time on De Beauvoir only to discover, in frustration, that it repeats exactly this pattern). Another example is the philosopher G.E.M. Anscombe, whose work is almost exclusively described in terms of her interpretation of Wittgenstein (she was also married to the philosopher Peter Geach, which doesn’t help). A great deal of Anscombe’s writing does not deal with Wittgenstein, but that is, somehow, passed over, at least in non-specialist circles. What also gets passed over is that, in any intellectual partnership or friendship, ideas flow in both directions. In this case, the honesty and generosity of women’s acknowledgments (and occasional overstatements) of intellectual debt tend to be taken as evidence of the incompleteness of female thinking; as if there couldn’t, possibly, be a thought in their ‘pretty heads’ that had not been placed there by a man.

Anscombe, incidentally, had a predilection for staring at things in public. Here’s an excerpt from the Introduction to Vol. 2 of her collected philosophical papers, Metaphysics and the Philosophy of Mind:

“The other central philosophical topic which I got hooked on without realising it was philosophy, was perception (…) For years I would spend time in cafés, for instance, staring at objects saying to myself: ‘I see a packet. But what do I really see? How can I say that I see here anything more than a yellow expanse?’” (1981: viii).

But Wittgenstein, sure.

Nature

Nature abhors a vacuum, if by ‘nature’ we mean the rationalisation of patriarchy, and if by ‘vacuum’ we mean the horrifying prospect of women occupied by their own interiority, irrespective of how mundane or elevated its contents. In Jane Austen’s novels, young women are regularly reminded that they should seem usefully occupied – embroidering, reading (but not too much, and ideally out loud, for everyone’s enjoyment), playing an instrument, singing – whenever young gentlemen come for a visit. The underlying message is that, of course, young gentlemen are not going to want to marry ‘idle’ women. The only justification for women’s existence, of course, is their value as (future) wives, and thus their reproductive capital: everything else – including forms of internal life that do not serve this purpose – is worthless.

Clearly, one should expect things to improve once women are no longer reduced to men’s property, or the function of wives and mothers. Clearly, they haven’t. In Motherhood, Sheila Heti offers a brilliant diagnosis of how the very question of having children bears down differently on women:

It suddenly seemed like a huge conspiracy to keep women in their thirties—when you finally have some brains and some skills and experience—from doing anything useful with them at all. It is hard to when such a large portion of your mind, at any given time, is preoccupied with the possibility—a question that didn’t seem to preoccupy the drunken men at all (2018: 98).

Rebecca Solnit points out the same problem in The Mother of All Questions: no matter what a woman does, she is still evaluated in relation to her performance as a reproductive engine. One of the messages of the insidious ‘lean-in’ kind of feminism is that it’s OK not to be a wife and a mother, as long as you are remarkably successful, as a businesswoman, a political leader, or an author. Obviously, ‘ideally’, both. This keeps women stressed, overworked, and so predictably willing to tolerate absolutely horrendous working conditions (hello, academia) and partnerships. Men can be mediocre and still successful (again, hello, academia); women, in order to succeed, have to be outstanding. Worse, they have to keep proving their outstandingness; ‘pure’ existence is never enough.

To refuse this – to refuse to justify one’s existence through a retrospective or prospective contribution to either particular men (wife of, mother of, daughter of), their institutions (corporation, family, country), or the vaguely defined ‘humankind’ (which, more often than not, is an extrapolation of these categories) – is thus to challenge the washed-out but seemingly undying assumption that a woman is a somehow less worthy version of a man. It is to subvert the myth that shaped and constrained so many, from Austen’s characters to Woolf’s Shakespeare’s sister: that to exist, a woman has to be useful; that inhabiting an interiority is to be performed in secret (which meant away from the eyes of the patriarchy); that, ultimately, women’s existence needs to be justified. If not by providing sex, childbearing, and domestic labour, then at least indirectly, by consuming stuff and services that rely on the underpaid (including domestic) labour of other women, from fashion to iPhones and from babysitting to nail salons. Sometimes, if necessary, also by writing Big Books: but only so they could be used by men who see in them the reflection of their own (imagined) glory.

Death

Heti recounts another story, about her maternal grandmother, Magda, imprisoned in a concentration camp during WWII. One day, Nazi soldiers came to the women’s barracks and asked for volunteers to help with cooking, cleaning and scrubbing in the officers’ kitchen. Magda stepped forward; as Heti writes, ‘they all did’. Magda was not selected; she was lucky, as it soon transpired that those women were not taken to the kitchen, but rather raped by the officers and then killed.

I lingered over the sentence ‘they all did’ for a long time. What would it mean for more women to not volunteer? To not accept endlessly proving one’s own usefulness, in cover letters, job interviews, student feedback forms? To simply exist, in space?

I think I’ll just sit and think about it for a while.


(The photo is by the British photographer Hannah Starkey, who has a particular penchant for capturing women inhabiting their own interiority. Thank you to my partner who first introduced me to her work, the slight irony being that he interrupted me in precisely one such moment of contemplation to tell me this).

*I used to make a point of asking the students taking Social Theory to change ‘Sartre’s partner Simone de Beauvoir’ in their essays to ‘de Beauvoir’s partner Jean-Paul Sartre’ and see if it begins to read differently.

Between legitimation and imagination: epistemic attachment, ontological bias, and thinking about the future

Some swans are…grey (Cambridge, August 2017)

 

A serious line of division runs through my household. It does not concern politics, music, or even sports: it concerns the possibility of a large-scale collapse of social and political order, which I consider very likely. Specific scenarios aside for the time being, let’s just say we are talking more about human-made, climate-change-induced breakdown involving possibly protracted and almost certainly lethal conflict over resources, than about ‘giant asteroid wipes out Earth’ or ‘rogue AI takes over and destroys humanity’.

Ontological security or epistemic positioning?

It may be tempting to attribute the tendency towards catastrophic predictions to psychological factors rooted in individual histories. My childhood and adolescence took place alongside the multi-stage collapse of the country once known as the Socialist Federal Republic of Yugoslavia. First came the economic crisis, when the failure of ‘shock therapy’ to boost stalling productivity (surprise!) resulted in massive inflation; then social and political disintegration, as the country descended into a series of violent conflicts whose consequences went far beyond the actual front lines; and then actual physical collapse, as Serbia’s long involvement in wars in the region was brought to a halt by the NATO intervention in 1999, which destroyed most of the country’s infrastructure, including parts of Belgrade, where I was living at the time*. It makes sense to assume this results in quite a different sense of ontological security than the one that, say, the predictability of a middle-class English childhood would afford.

But does predictability actually work against the capacity to make accurate predictions? This may seem not only contradictory but also counterintuitive – any calculation of risk has to take into account not just the likelihood, but also the nature of the source of threat involved, and thus necessarily draws on the assumption of (some degree of) empirical regularity. However, what about events outside of this scope? A recent article by Faulkner, Feduzi and Runde offers a good formalization of this problem (Black Swans and ‘unknown unknowns’) in the context of the (limited) possibility of imagining different outcomes (see the table below). Of course, as Beck noted a while ago, the perception of ‘risk’ (as well as, by extension, any other kind of future-oriented thinking) is profoundly social: it depends on ‘calculative devices’ and procedures employed by networks and institutions of knowledge production (universities, research institutes, think tanks, and the like), as well as on how they are presented in, for instance, literature and the media.

From: Faulkner, Feduzi and Runde: Unknowns, Black Swans and the risk/uncertainty distinction, Cambridge Journal of Economics 41 (5), August 2017, 1279-1302

 

Unknown unknowns

In The Great Derangement (probably the best book I’ve read in 2017), Amitav Ghosh argues that this can explain, for instance, the surprising absence of literary engagement with the problem of climate change. The problem, he claims, is endemic to Western modernity: a linear vision of history cannot conceive of a problem that exceeds its own scale**. This isn’t the case only with ‘really big problems’ such as economic crises, climate change, or wars: it also applies to specific cases such as elections or referendums. Of course, social scientists – especially those qualitatively inclined – tend to emphasise that, at best, we aim to explain events retroactively. Methodological modesty is good (and advisable), but avoiding thinking about the ways in which academic knowledge production is intertwined with the possibility of prediction is useless, for at least two reasons.

One is that, as reflected in the (by now overwrought and overdetermined) crisis of expertise and ‘post-truth’, social researchers increasingly find themselves in situations where they are expected to give authoritative statements about the future direction of events (for instance, about the impact of Brexit). Even if they disavow this form of positioning, the very idea of social science rests on the (no matter how implicit) assumption that at least some mechanisms or classes of objects will exhibit the same characteristics across cases; consequently, the possibility of inference is implied, if not always practised. Secondly, given the scope of the challenges societies face at present, it seems ridiculous not to even attempt to engage with – and, if possible, refine – the capacity to think about how they will develop in the future. While there is quite a bit of research on individual predictive capacity and the way collective reasoning can correct for cognitive bias, most of these models – given that they are usually based on experiments or simulations – cannot account for the way in which social structures, institutions, and cultures of knowledge production interact with the capacity to theorise, model, and think about the future.

The relationship between social, political, and economic factors, on the one hand, and knowledge (including knowledge about those factors), on the other, has been at the core of my work, including my current PhD. While it may seem minor compared to issues such as wars or revolutions, the future of universities offers a perfect case to study the relationship between epistemic positioning, positionality, and the capacity to make authoritative statements about reality: what Boltanski’s sociology of critique refers to as ‘complex externality’. One of the things it allowed me to realise is that while there is a good tradition of reflecting on positionality (or, in positivist terms, cognitive ‘bias’) in relation to categories such as gender, race, or class, we are still far from successfully theorising something we could call ‘ontological bias’: epistemic attachment to the object of research.

The postdoctoral project I am developing extends this question and aims to understand its implications in the context of generating and disseminating knowledge that can allow us to predict – make more accurate assessments of – the future of complex social phenomena such as global warming or the development of artificial intelligence. This question has, in fact, been informed by my own history, but in a slightly different manner than the one implied by the concept of ontological security.

Legitimation and prediction: the case of former Yugoslavia

The Socialist Federal Republic of Yugoslavia had relatively sophisticated and well-developed networks of social scientists, in which both of my parents were involved***. Yet, of all the philosophers, sociologists, political scientists etc. writing about the future of the Yugoslav federation, only one – to the best of my knowledge – predicted, in eerie detail, the political crisis that would lead to its collapse: Bogdan Denitch, whose Legitimation of a revolution: the Yugoslav case (1976) is, in my opinion, one of the best books about former Yugoslavia ever written.

A Yugoslav-American, Denitch was a professor of sociology at the City University of New York. He was also a family friend, a fact I considered of little significance (having only met him once, when I was four, and my mother and I were spending a part of our summer holiday at his house in Croatia; my only memory of it is being terrified of tortoises roaming freely in the garden), until I began researching the material for my book on education policies and the Yugoslav crisis. In the years that followed (I managed to talk to him again in 2012; he passed away in 2016), I kept coming back to the question: what made Denitch more successful in ‘predicting’ the crisis that would ultimately lead to the dissolution of former Yugoslavia than virtually anyone writing on Yugoslavia at the time?

Denitch had a pretty interesting trajectory. Born in 1929 to Croatian Serb parents, he spent his childhood in a series of countries (including Greece and Egypt), following his diplomat father; in 1946, the family emigrated to the United States (the fact that his father was a civil servant in the previous government would have made it impossible for them to continue living in Yugoslavia after the Communist regime, led by Josip Broz Tito, formally took over). There, Denitch (in evident defiance of his upper-middle-class legacy) trained as a factory worker while studying for a degree in sociology at CUNY. He also joined the Democratic Socialist Alliance – one of the American socialist parties – of which he would remain a member (and later functionary) for the rest of his life.

In 1968, Denitch was awarded a major research grant to study Yugoslav elites. The project was not without risks: while Yugoslavia was more open to ‘the West’ than other countries in Eastern Europe, visits by international scholars were strictly monitored. My mother recalls receiving a house visit from an agent of the UDBA, the Yugoslav secret police – not quite the KGB but you get the drift – who tried to elicit the confession that Denitch was indeed a CIA agent, and, in the absence of that, the promise that she would occasionally report on him****.

Despite these minor setbacks, the research continued: Legitimation of a revolution is one of its outcomes. In 1973, Denitch was awarded a PhD by Columbia University and started teaching at CUNY, eventually retiring in 1994. His last book, Ethnic nationalism: the tragic death of Yugoslavia, came out in the same year – a reflection on the conflict that was still going on at the time, and whose architecture he had foreseen with such clarity eighteen years earlier (the book is remarkably bereft of “told-you-so”-isms, and so warmly recommended for those wishing to learn more about Yugoslavia’s dissolution).

Did personal history, in this sense, have a bearing on one’s epistemic position, and, by extension, on the capacity to predict events? One explanation (prevalent in certain versions of popular intellectual history) would be that Denitch’s position as both a Yugoslav and an American would have allowed him to escape the ideological traps other scholars were more likely to fall into. Yugoslavs, presumably, would be at pains to prove socialism was functioning; Americans, on the other hand, perhaps egalitarian in theory but certainly suspicious of Communist revolutions in practice, would be looking to prove it wasn’t, at least not as an economic model. Yet this assumption hardly stands even the lightest empirical interrogation. At least up until the show trials of Praxis philosophers, there was a lively critique of Yugoslav socialism within Yugoslavia itself; despite the mandatory coating of jargon, Yugoslav scholars were quite far from being uniformly bright-eyed and bushy-tailed about socialism. Similarly, quite a few American scholars were very much in favour of the Yugoslav model, eager, if anything, to show that market socialism was possible – that is, that it is possible to have a relatively progressive social policy and still be able to afford nice things. Herein, I believe, lies the beginning of the answer as to why neither of these groups was able to predict the type or the scale of the crisis that would eventually lead to the dissolution of former Yugoslavia.

Simply put, both groups of scholars depended on Yugoslavia as a source of legitimation of their work, though for different reasons. For Yugoslav scholars, the ‘exceptionality’ of the Yugoslav model was the source of epistemic legitimacy, particularly in the context of international scientific collaboration: their authority was, in part at least, constructed on their identity and positioning as possessors of ‘local’ knowledge (Bockman and Eyal’s excellent analysis of the transnational roots of neoliberalism makes an analogous point about positioning in the context of the collaboration between ‘Eastern’ and ‘Western’ economists). In addition, many Yugoslav scholars were born and raised in socialism: while some of them did travel to the West, the opportunities were still scarce and many were subject to ideological pre-screening. In this sense, both their professional and their personal identity depended on the continued existence of Yugoslavia as an object; they could imagine different ways in which it could be transformed, but not really that it could be obliterated.

For scholars from the West, on the other hand, Yugoslavia served as a perfect experiment in mixing capitalism and socialism. Those more on the left saw it as a beacon of hope that socialism need not go hand-in-hand with Stalinist-style repression. Those who were more on the right saw it as proof that limited market exchange can function even in command economies, and deduced (correctly) that the promise of supporting failing economies in exchange for access to future consumer markets could be used as a lever to bring the Eastern Bloc in line with the rest of the capitalist world. If no one foresaw the war, it was because it played no role in either of these epistemic constructs.

This is where Denitch’s background would have afforded a distinct advantage. The fact that his parents came from the Serb minority in Croatia meant he never lost sight of the salience of ethnicity as a form of political identification, despite the fact that socialism glossed over local nationalisms. His Yugoslav upbringing provided him not only with fluency in the language(s), but also with a degree of shared cultural references that made it easier to participate in local communities, including those composed of intellectuals. On the other hand, his entire professional and political socialization took place in the States: this meant he was attached to Yugoslavia as a case, but not necessarily as an object. Not only was his childhood spent away from the country; the fact that his parents had left Yugoslavia after the regime change at the end of World War II meant that, in a way, for him, Yugoslavia-as-object was already dead. Last, but not least, Denitch was a socialist, but one committed to building socialism ‘at home’. This means that his investment in the Yugoslav model of socialism was, if anything, practical rather than principled: in other words, he was interested in its actual functioning, not in demonstrating its successes as a marriage of markets and social justice. This epistemic position, in sum, would have provided the combination needed to imagine the scenario of Yugoslav dissolution: a sufficient degree of attachment to be able to look deeply into a problem and understand its possible transformations; and a sufficient degree of detachment to be able to see that the object of knowledge may not be there forever.

Onwards to the…future?

What can we learn from the story? Balancing between attachment and detachment is, I think, one of the key challenges in any practice of knowing the social world. It’s always been there; it cannot be, in any meaningful way, resolved. But I think it will become more and more important as the objects – or ‘problems’ – we engage with grow in complexity and become increasingly central to the definition of humanity as such. Which means we need to be getting better at it.

 

———————————-

(*) I rarely bring this up as I think it overdramatizes the point – Belgrade was relatively safe, especially compared to other parts of former Yugoslavia, and I had the fortune to never experience the trauma or hardship people in places like Bosnia, Kosovo, or Croatia did.

(**) As Jane Bennett noted in Vibrant Matter, this resonates with Adorno’s notion of non-identity in Negative Dialectics: a concept always exceeds our capacity to know it. We can see object-oriented ontology (e.g. Timothy Morton’s Hyperobjects) as the ontological version of the same argument: the sheer size of the problem acts as a deterrent to grasping it in its entirety.

(***) This bit lends itself easily to the Bourdieusian “aha!” argument – academics breed academics, etc. The picture, however, is a bit more complex – I didn’t grow up with my father and, until about 16, had a very vague idea of what my mother did for a living.

(****) Legend has it my mother showed the agent the door and told him never to call on her again, prompting my grandmother – her mother – to buy funeral attire, assuming her only daughter would soon be thrown into prison and possibly murdered. Luckily, Yugoslavia was not really the Soviet Union, so this did not come to pass.

What is the relationship between universities and democracy? From the purposes to the uses of university (and back)


[Lightly edited text of a keynote lecture delivered to the Department of Sociology and Social Anthropology’s Graduate conference at the Central European University in Budapest, 18 September 2017. The conference was initially postponed because of the problematic situation concerning the status of CEU in Hungary, following the introduction of the special law known as ‘Lex CEU‘].

Thank you. It’s a pleasure to be here – or rather, I should say it’s a pleasure to be back.

The best way to evaluate knowledge claims is to look at how they change over time. About three and a half years ago, during the launch event for From Class to Identity, I stood in this exact same spot. If you had asked me back then what the relationship between universities and democracy was, I would have very likely told you at least one of the following things.

Conceptual, contingent, nonexistent?

Obviously, the relationship between universities and democracy depends on how you define both. What democracy actually means is both contested and notoriously difficult to measure. University, on the other hand, is a concept somewhat more easily recognisable through different periods. However, that does not mean it is not changing; in particular, it is increasingly becoming synonymous with the concept of ‘higher education’, a matter whose significance, I hope, will become clearer during the course of this talk.

Secondly, I would have most likely told you that the link between universities and democracy is contingent: it depends on the constellation of social, political, economic and historical factors, implying correlation more than causation.

Last, and not least importantly, I would have told you that, in some cases, the link is not even there; universities can and do exist alongside regimes that cannot be described as democratic even if we extended the term in the most charitable way possible.

In fact, when I first came to CEU as a research fellow in 2010, it was in order to look more deeply into this framing of the relationship between universities and democracy. At the time, in much of public policy and in particular in international development discourse, education was seen as an instrument for promoting democracy, peace, and sustainable prosperity – especially in the context of post-conflict reconciliation. The more of it, thus, the better. This was the consensus I wanted to challenge. Now, while most universities subscribe to values of peace and democracy at least on paper, only a few were ever founded with the explicit aim to promote them. In that sense, I came to the very belly of the beast – though in the best possible way. CEU proved immensely valuable, both in terms of the research I did here and at the Open Society Archives, and in terms of discussions with colleagues and students: all of this fed into From Class to Identity, which was published in 2014.

For better or worse, the case I settled on – former Yugoslavia – lent itself rather fortuitously to questioning the relationship between education and the values we usually associate with democracy. In the Socialist Federal Republic of Yugoslavia (which was, it bears remembering, a one-party state), higher education attainment kept rising steadily – in fact, for a certain period, in exact opposition to governmental policies, which aimed to reduce university enrolment – up until its dissolution and the subsequent violent conflict.

The political landscape of its successor states today may be more variegated (Slovenia and Croatia are EU members, the semblance of a peaceful order in Bosnia, Kosovo and Macedonia is maintained through heavy investment and involvement of the international community, and Serbia and to a perhaps lesser extent Montenegro are effectively authoritarian fiefdoms), but what they share across the board is both growing levels of educational attainment and an expanding higher education sector. In other words, both the number of people who have, or are in the process of obtaining, higher education, and the number of higher education institutions in total, are growing. This, I thought, goes some way towards proving that the link between universities and democracy is contingent and dependent on a number of political factors, rather than necessary.

Under attack?

Would I say the same thing today? Today, universities and those within them increasingly find it necessary to justify their existence, not only in response to challenges to autonomy, academic freedom, and, after all, the basic human rights of academics, such as those happening in Turkey (as we will hear in much more detail during this conference) or here in Hungary, but also in relation to the broader challenges related to the declining public funding of higher education and research. Last, but not least, the election of President Trump in the United States and the Brexit vote in the UK have been taken by many as portents of the decline of the epistemic foundations of the liberal democratic order, reflected in the denouncement of the ‘rule of experts’ and phenomena such as ‘fake news’ or the ‘post-truth’ landscape. In this context, it becomes all the more attractive to resort to justifications of universities’ existence by appeal to their contribution to democracy, civil society, and sustainable prosperity.

Universities and democracy: drop the mic

I will argue that this urge needs to be resisted. I will argue that focusing on the purposes of university framed in this way legitimises the very processes of valorisation – that is, the creation of value – that thrive on competition, and whose logical end is inflated claims of the sort, to paraphrase you-know-who, “we have all the best educations”.

In doing this, we forgo exactly the fine-grained detail that disciplines including but not limited to sociology and social anthropology should pay attention to. Put bluntly, we forget the relevance of the social context for making universities what they are. For this, we need to ask not what universities (ideally) aim to achieve, but rather what it is that universities do, what they can do, and also, importantly, what can be done with them.

Shifting the focus from purposes to uses is not a case, as Latour might have put it, of betraying matters of concern in order to boast about matters of fact. It is, however, to draw attention to the fact that the relationship between universities and democracy is, to borrow another expression from Latour, a factish: both real and fabricated – that is, a social construct with very real consequences – neither a fact nor a fetish, but an always not-fully-reconciled amalgam of the two. Keeping this in mind, I think, can allow us to think about the different roles of universities without losing sight of either their reality or their constructed nature.

Correlation or causation?

Let me give you just two examples. In the period leading up to, as well as in the immediate aftermath of, the 2016 US elections, much was made of the difference in education levels between voters for the respective candidates, leading some pundits to pronounce that the ‘university educated are voting for Clinton’, or that the ‘single most pronounced difference in voter preference is college education’. That is, until someone bothered to break down the data a bit differently, which showed that 44% of those with a college degree voted for Trump. Within this group, the most pronounced distinction is being white or not. In other words: it’s race, stupid – possibly just about the most salient political distinction in the US today.

[Chart: voters with college degrees, by race]

The other example is from a very recent study that looked at the relationship between longitudinal data on outgoing student mobility from former Soviet countries and levels of attained democracy. It concluded that “…Cross-sectional data on student mobility and attained democracy shows that former Soviet countries with higher proportions of students studying in Europe have achieved higher levels of democratic development. In contrast, countries with higher proportions of students studying in the most popular, authoritarian destination – the Russian Federation – have reached significantly lower levels of democratic development. This suggests that internationalisation of European HE can offer the potential of facilitating democratic socialisation, especially in environments where large proportions of students from less-democratic countries study in a democratic context for an extended period of time”.

Now, this is the sort of research that makes for catchy one-liners, such as “studying in the EU helps democracy”; it makes you feel good about what you do – well, it certainly makes me feel good about what I do, and, perhaps, if you are from one of the countries mentioned in the study and you are studying in the EU (as you most likely are) it makes you feel good about that. It’s also the sort of research that funders love to hear about. The problem is, it doesn’t tell us anything we actually need to know.

It’s a bit too early to look at the data, but how about the following: both the “level of attained democracy” and “proportion of students studying in the EU” are a function of a different factor, one that has to do with the history of international relations, centre-periphery relationships, and, in particular, international political economy. Thus, for instance, countries that are traditionally more dependent on EU aid are quicker to “democratize” – that is, fall outside of the Russian sphere of influence – which is aided by cultural diplomacy (whose effects are reflected in language fluency, aptitude, and, at the end of the day, framing of studying in the EU as a desirable life- and career choice), visa regimes, and the availability of country- or region-specific scholarships. All of which is a rather long way of saying what this graph achieves much more succinctly, which is that correlation does not imply causation.

[Graph: spurious correlations featuring Leonardo DiCaprio]

Sociology and anthropology are particularly good at unraveling knots of multiple and overlapping processes, but history, political science and (critical) public policy analysis are necessary too. It’s not about shunning quantitative data (something our disciplines are sometimes prone to doing), but about being able to look behind it, at the myriad interactions that take place in the fabric of everyday life: sometimes visibly in, and sometimes away from, the political arena. However, this sort of research does not easy clickbait make.

What universities can do: making communities

In the rest of my talk, I want to focus on the one thing that universities can and do do, the one thing they are really good at doing. That is, creating communities. Fostering a sense of belonging. Forging relationships. Making lasting networks.

If you think that this is an unequivocally good thing, may I remind you that (a) this is a university-fostered community, but (b) this is also a university-fostered community. (For those of you unfamiliar with the British political landscape, the latter is the Bullingdon Club, an Oxford University-based exclusive society whose former members include David Cameron, George Osborne and Boris Johnson). In other words, community-building can be both a good and a bad thing: it always means inclusion as well as exclusion. Universities provide a sense of “us”, a sense of who belongs, including to the elite who run the country. They help order and classify people – in theory, according to their aptitude and ambition, but in practice, as we know, all too often according to a host of other factors, including class, gender and race.

The origin of their name, universitas, reflects this ambition to be all-encompassing, to signify a totality, despite the fact that the way totality is signified has over time shifted from indexicality to representation: that is, from the idea that universities project what a collectivity is supposed to be about – for instance, define the literary language and canon and the structure of professions, and delineate the criteria of truth and scientific knowledge – to the idea that they reflect the composition of the collectivity, for example, the student body representing the diversity of the general population.

This is why universities experienced a veritable boom in the 19th century, in the period of forging of nation-states, and why they are of persistent interest to them: because they define the boundaries of the community. This is why universities, at best a collective name for a bunch of different institutional traditions, became part of ‘higher (or ‘tertiary’) education’, a rationally, hierarchically ordered system of qualifications integrated into a state-administered context. This is why being able to quantify and compare these qualifications – through rankings, league tables, productivity and performance measurement – is so important to nation-states. It becomes ever more important whenever they feel their grip is slipping, either due to influences of globalisation and internationalisation or for other, more local reasons – such as when a university does not sit easily with the notion of a community projected by the political elite of a nation-state, as in the case of CEU in Hungary.

On the other hand, this is why universities police their boundaries so diligently, and insist on having authority over who gets in and who stays out. In fact, the principles of academic freedom and university autonomy were explicitly devised in order to protect universities’ right to exercise final judgment over such decisions. Last, but not least, this is why societal divisions and conflicts, both nascent and actual, are always felt so viscerally at universities, often years in advance of other parts of society. Examples range from struggles over identity politics on campus to broader acts of political positioning related to, for instance, the Israeli-Palestinian conflict.

This brings me to my final point. The biggest challenge universities face today is how to go on with this function of community-building in the context of disagreement, especially when disagreement includes things as fundamental as the very notion of truth, for instance, as with those who question the reality of climate change. Who do universities reflect and represent in this case? How do we reconcile the need to be democratic – that is, reflect a broad range of positions and opinions – with democracy, that is, with the conditions necessary for such a conversation to endure in the first place? These are some of the questions we need to be asking before we resort to claims concerning the necessity of the relationship between universities and democracy, or universities and anything else, for that matter.

Incidentally, this is one of the things Central European University has always been particularly good at: teaching people how to go about disagreeing in ways that allow everyone to learn from each other. I don’t know if any of you remember the time when the university mailing list was open to everyone, but I think the conversations there provided a good example of how to discuss differing ideas and political stances in a way that furthers everyone’s engagement with their political community; teaching at CEU has always aspired to do the same.

That is a purpose worth defending. It is a purpose that carries forth the tradition not only of the man after whom this room was named, Karl Popper, but also, and perhaps more, of a philosopher who was particularly concerned with the relationship between modes of knowledge production and the creation of communities: Hannah Arendt. Thus, it is with a quote from Arendt’s Truth and Politics (1967) that I would like to end.

“Outstanding among the existential modes of truth-telling are the solitude of the philosopher, the isolation of the scientist and the artist, the impartiality of the historian and the judge (…) These modes of being alone differ in many respects, but they have in common that as long as any one of them lasts, no political commitment, no adherence to a cause, is possible. (…) From this perspective, we remain unaware of the actual content of political life – of the joy and the gratification that arise out of being in company with our peers, out of acting together and appearing in public, out of inserting ourselves into the world by word and deed, thus acquiring and sustaining our personal identity and beginning something entirely new. However, what I meant to show here is that this whole sphere, its greatness notwithstanding, is limited – it does not encompass the whole of man’s and the world’s existence. It is limited by those things which men cannot change at will.

And it is only by respecting its own borders that this realm, where we are free to act and to change, can remain intact, preserving its integrity and keeping its promises. Conceptually, we may call truth what we cannot change; metaphorically, it is the ground on which we stand and the sky that stretches above us.”

Thank you for your attention.

Critters, Critics, and Californian Theory – review of Haraway’s Staying with the Trouble

[Image: Coproduction]

 

[This review was originally published on the blog of the Political Economy Research Centre as part of its Anthropocene Reading Group, as well as on the blog of Centre for Understanding Sustainable Prosperity]

 

Donna Haraway, Staying with the Trouble: Making Kin in the Chthulucene (Duke University Press, 2016)

From the opening, Donna Haraway’s recent book reads like a nice hybrid of theoretical conversation and science fiction. Crescendoing in the closing Camille Stories, the outcome of a writing experiment of imagining five future generations, “Staying with the trouble” weaves together – like the cat’s cradle, one of the recurrent metaphors in the book – staple Harawayian themes of the fluidity of boundaries between human and variously defined ‘Others’, metamorphoses of gender, the role of technology in modifying biology, and the related transformation of the biosphere – ‘Gaia’ – in interaction with human species. Eschewing the term ‘Anthropocene’, which she (somewhat predictably) associates with Enlightenment-centric, tool-obsessed rationality, Haraway births ‘Chthulucene’ – which, to be specific, has nothing to do with the famous monster of H.P. Lovecraft’s imagination, instead being named after a species of spider, Pimoa Cthulhu, native to Haraway’s corner of Western California.

This attempt to avoid dealing with human(-made) Others – like Lovecraft’s “misogynist racial-nightmare monster” – is the key unresolved issue in the book. While the tone is rightfully respectful – even celebratory – of nonhuman critters, it remains curiously underdefined in relation to human ones. This is evident in the treatment of Eichmann and the problem of evil. Following Arendt, Haraway sees Eichmann’s refusal to think about the consequences of his actions as the epitome of the banality of evil – the same kind of unthinking that leads to the existing ecological crisis. That more thinking should appear a natural antidote to, and a solution for, the long-term destruction of the biosphere seems only logical (if slightly self-serving) from the standpoint of developing a critical theory whose aim is to save the world from its ultimate extinction. The question, however, is what to do if thoughts and stories are not enough?

The problem with a political philosophy founded on belief in the power of discourse is that it remains dogmatically committed to the idea that if one can change the story, one can change the world. The power of stories as “worlding” practices fundamentally rests on the assumption that joint stories can be developed with Others, or, alternatively, that the Earth is big enough to accommodate those with which no such thing is possible. This leads Haraway to present a vision of a post-apocalyptic future Earth, in which the population has been decimated to levels that allow human groups to exist at sufficient distance from each other. What this doesn’t take into account is that differently defined Others may have different stories, some of which may be fundamentally incompatible with ours – as recently reflected in debates over ‘alternative facts’ or ‘post-truth’, but present in different versions of science and culture wars, not to mention actual violent conflicts. In this sense, there is no suggestion of sympoiesis with the Eichmanns of this world; the question of how to go about dealing with human Others – especially if they are, in Kristeva’s terms, profoundly abject – is the kind of trouble “Staying with the trouble” is quite content to stay out of.

Sympoiesis seems reserved for non-humans, which seem to happily go along with the human attempts to ‘become-with’ them. But it seems easier when ‘Others’ do not, technically speaking, have a voice: whether we like it or not, few of the non-human critters have efficient means to communicate their preferences in terms of political organisation, speaking order at seminars, or participation in elections. The critical practice of com-menting, to which Haraway attributes much of the writing in the book, is only possible to the extent to which the Other has equal means and capacities to contribute to the discussion. As in the figure of the Speaker for the Dead, the Other is always spoken-for, the tragedy of its extinction obscuring the potential conflict or irreconcilability between species.

The idea of a com-pliant Other can, of course, be seen as an integral element of the mythopoetic scaffolding of West Coast academia, where the idea of fluidity of lifestyle choices probably has near-orthodox status. It’s difficult not to read parts of the book, such as the following passage, as not-too-fictional accounts of lived experiences of the Californian intellectual elite (including Haraway herself):

“In the infectious new settlements, every new child must have at least three parents, who may or may not practice new or old genders. Corporeal differences, along with their fraught histories, are cherished. Throughout life, the human person may adopt further bodily modifications for pleasure and aesthetics or for work, as long as the modifications tend to both symbionts’ well-being in the humus of sympoiesis” (p. 133-5)

The problem with this type of theorizing is not so much that it universalises a concept of humanity that resembles an extended Comic-Con with militant recycling; reducing ideas to their political-cultural-economic background is not a particularly innovative critical move. It is that it fails to account for the challenges and dangers posed by the friction of multiple human lives in constrained spaces, and the ways in which personal histories and trajectories interact with the configurations of place, class, and ownership, in ways that can lead to tragedies like the Grenfell tower fire in London.

In other words, what “Staying with the trouble” lacks is a more profound sense of political economy, and the ways in which social relations influence how different organisms interact with their environment – including compete for its scarce resources, often to the point of mutual extinction. Even if the absolution of human woes by merging one’s DNA with those of fellow creatures works well as an SF metaphor, as a tool for critical analysis it tends to avoid the (often literally) rough edges of their bodies. It is not uncommon even for human bodies to reject human organs; more importantly, the political history of humankind is, to a great degree, the story of various groups of humans excluding other humans from the category of humans (colonized ‘Others’, slaves), citizens (women, foreigners), or persons with full economic and political rights (immigrants, and again women). This theme is evident in the contemporary treatment of refugees, but it is also preserved in the apparently more stable boundaries between human groups in the Camille Stories. In this context, the transplantation of insect parts to acquire consciousness of what it means to inhabit the body of another species has more of a whiff of transhumanist enhancement than of an attempt to confront head-on (antennae-first?) multifold problems related to human coexistence on a rapidly warming planet.

At the end of the day, solutions to climate change may be less glamorous than the fantasy of escaping global warming by taking a dip in the primordial soup. In other words, they may require some good ol’ politics, which fundamentally means learning to deal with Others even if they are not as friendly as those in Haraway’s story; even if, as the Eichmanns and Trumps of this world seem to suggest, their stories may have nothing to do with ours. In this sense, it is the old question of living with human Others, including abject ones, that we may have to engage with in the AnthropoCapitaloCthulucene: the monsters that we created, and the monsters that are us.

Jana Bacevic is a PhD candidate at the Department of Sociology at the University of Cambridge, and has a PhD in social anthropology from the University of Belgrade. Her interests lie at the intersection of social theory, sociology of knowledge, and political sociology; her current work deals with the theory and practice of critique in the transformation of higher education and research in the UK.

 

Theory as practice: for a politics of social theory, or how to get out of the theory zoo

 

[These are my thoughts/notes for the “Practice of Social Theory”, which Mark Carrigan and I are running at the Department of Sociology of the University of Cambridge from 4 to 6 September, 2017].

 

Revival of theory?

 

It seems we are witnessing something akin to a revival of theory, or at least of an interest in it. In 2016, the British Journal of Sociology published Swedberg’s “Before theory comes theorizing, or how to make social sciences more interesting”, a longer version of its 2015 Annual public lecture, followed by responses from – among others – Krause, Schneiderhan, Tavory, and Karleheden. A string of recent books – including Matt Dawson’s Social Theory for Alternative Societies, Alex Law’s Social Theory for Today, and Craig Browne’s Critical Social Theory, to name but a few – set out to consider the relevance or contribution of social theory to understanding contemporary social problems. This is in addition to the renewal of interest in biography or contemporary relevance of social-philosophical schools such as Existentialism (1, 2) and the Frankfurt School [1, 2].

To a degree, this revival happens on the back of the challenges posed to the status of theory by the rise of data science, leading Lizardo and Hay to engage in defense of the value and contributions of theory to sociology and international relations, respectively. In broader terms, however, it addresses the question of the status of social sciences – and, by extension, academic knowledge – more generally; and, as such, it brings us back to the justification of expertise, a question of particular relevance in the current political context.

The meaning of theory

Surely enough, theory has many meanings (Abend, 2008), and consequently many forms in which it is practiced. However, one characteristic that seems to be shared across the board is that it is part of (under)graduate training, after which it gets bracketed off in the form of “the theory chapter” of dissertations/theses. In this sense, theory is framed as foundational in terms of socialization into a particular discipline but, at the same time, rarely revisited – at least not explicitly – after the initial demonstration of aptitude. In other words, rather than something one keeps doing, theory becomes something that is ‘done with’. The exceptions, of course, are those who decide to make theory the centre of their intellectual pursuits; however, “doing theory” in this sense all too often becomes limited to the exegesis of existing texts (what Krause refers to as ‘theory a’ and Abend as ‘theory 4’), which leads to competition among theorists for the best interpretation of “what theorist x really wanted to say”, or, alternatively, to the application of existing concepts to new observations or ‘problems’ (‘theory b and c’, in Krause’s terms). Either way, the field of social theory resembles less the groves of Plato’s Academy and more a zoo in which different species (‘Marxists’, ‘critical realists’, ‘Bourdieusians’, ‘rational-choice theorists’) dwell in their respective enclosures or fight with members of the same species for dominance of a circumscribed domain.

 

[Image: Competitive behaviour among social theorists]

 

This summer school started from the ambition to change that: to go beyond rivalries or allegiances to specific schools of thought, and think about what doing theory really means. I often told people that wanting to do social theory was a major reason why I decided to do a second PhD; but what was this about? I did not say ‘learn more’ about social theory (my previous education provided a good foundation), ‘teach’ social theory (though supervising students at Cambridge is really good practice for this), read, or even write social theory (though, obviously, this was going to be a major component). While all of these are essential elements of becoming a theorist, the practice of social theory certainly isn’t reducible to them. Here are some of the other aspects I think we need to bear in mind when we discuss the return, importance, or practice of theory.

Theory is performance

This may appear self-evident once the focus shifts to ‘doing’, but we rarely talk about what practicing theory is meant to convey – that is, about theorising as a performative act. Some elements of this are not difficult to establish: doing theory usually means identification with a specific group, or form of professional or disciplinary association. Most professional societies have committees, groups, and specific conference sessions devoted to theory – but that does not mean theory is exclusively practiced within them. In addition to belonging, theory also signifies status. In many disciplines, theoretical work has for years been held in high esteem; the flipside, of course, is that ‘theoretical’ is often taken to mean too abstract or divorced from everyday life, something that became a more pressing problem with the decline of funding for social sciences and the concomitant expectation to make them socially relevant. While the status of theory is a longer (and separate) topic, one that has been discussed at length in the history of sociology and other social sciences, it bears repeating that asserting one’s work as theoretical is always a form of positioning: it serves to define the standing of both the speaker and (sometimes implicitly) other contributors. This brings to mind that…

Theory is power

Not everyone gets to be treated as a theorist: it is also a question of recognition, and thus, a question of political (and other) forms of power. ‘Theoretical’ discussions are usually held between men (mostly, though not exclusively, white men); interventions from women, people of colour, and persons outside centres of epistemic power are often interpreted as empirical illustrations, or, at best, contributions to ‘feminist’ or ‘race’ theory*. Raewyn Connell wrote about this in Southern Theory, and initiatives such as Why is my curriculum white? and Decolonizing curriculum in theory and practice have brought it to the forefront of university struggles, but it speaks to the larger point made by Spivak: that the majority of mainstream theory treats the ‘subaltern’ as only empirical or ethnographic illustration of the theories developed in the metropolis.

The problem here is not only (or primarily) that of representation, in the sense in which theory thus generated fails to accurately depict the full scope of social reality, or experiences and ideas of different people who participate in it. The problem is in a fundamentally extractive approach to people and their problems: they exist primarily, if not exclusively, in order to be explained. This leads me to the next point, which is that…

Theory is predictive

A good illustration for this is offered by pundits and political commentators’ surprise at events in the last year: the outcome of the Brexit referendum (Leave!), US elections (Donald Trump!), and last but not least, the UK General Election (surge in votes for Corbyn!). Despite differences in how these events are interpreted, they in most cases convey that, as one pundit recently confessed, nobody has a clue about what is going on. Does this mean the rule of experts really is over, and, with it, the need for general theories that explain human action? Two things are worth taking into account.

To begin with, social-scientific theories enter the public sphere in a form that’s not only simplified, but also distilled into ‘soundbites’ or clickbait adapted to the presumed needs and preferences of the audience, usually omitting all the methodological or technical caveats they normally come with. For instance, the results of opinion polls or surveys are taken to present clear predictions, rather than reflections of general statistical tendencies; reliability is rarely discussed. Nor are social scientists always innocent victims of this media spin: some actively work on increasing their visibility or impact, and thus – perhaps unwittingly – contribute to the sensationalisation of social-scientific discourse. Second, and this can’t be put delicately, some of these theories are just not very good. ‘Nudgery’ and ‘wonkery’ often rest on not particularly sophisticated models of human behaviour; which is not to say that they do not work – they can – but rather that the theoretical assumptions underlying these models are rarely accessible to scrutiny.

Of course, it doesn’t take a lot of imagination to figure out why this is the case: it is easier to believe that selling vegetables in attractive packaging can solve the problem of obesity than to invest in long-term policy planning and research on decision-making that has consequences for public health. It is also easier to believe that removing caps to tuition fees will result in universities charging fees distributed normally from lowest to highest, than to bother reading theories of organizational behaviour in different economic and political environments and try to understand how this maps onto the social structure and demographics of a rapidly changing society. In other words: theories are used to inform or predict human behaviour, but often in ways that reinforce existing divisions of power. So, just in case you didn’t see this coming…

Theory is political

All social theories are about constraints, including those that are self-imposed. From Marx to Freud and from Durkheim to Weber (and many non-white, non-male theorists who never made it into ‘the canon’), theories are about what humans can and cannot do; they are about how relatively durable relations (structures) limit and enable how people act (agency). Politics is, fundamentally, about the same thing: things we can and things we cannot change. We may denounce Bismarck’s definition of politics as the art of the possible as insufficiently progressive, but – at the risk of sounding obvious – understanding how (and why) things stay the same is fundamental to understanding how to go about changing them. The history of social theory, among other things, can be read as a story about shifting the boundaries of what was considered fixed and immutable, on the one hand, and constructed – and thus subject to change – on the other.

In this sense, all social theory is fundamentally political. This isn’t to license bickering over different historical materialisms, or to stimulate fantasies – so dear to intellectuals – of ‘speaking truth to power’. Nor should theories be understood as weapons in the ‘war of time’, despite Debord’s poetic formulation: this is but the flipside of intellectuals’ dream of domination, in which their thoughts (i.e. themselves) inspire masses to revolt, usually culminating in their own ascendance to a position of power (thus conveniently cutting out the middleman in ‘speaking truth to power’, as they become the prime bearers of both).

Theory is political in a much simpler sense, in which it is about society and the elements that constitute it. As such, it has to be about understanding what it is that those we think of as society think, want, and do, even – and possibly, especially – when we do not agree with them. Rather than aiming to ‘explain away’ people, or fit their behaviour into pre-defined social models, social theory needs to learn to listen to – to borrow a term from politics – its constituents. This isn’t to argue for a (not particularly innovative) return to grounded theory, or ethnography (despite the fact both are relevant and useful). At the risk of sounding pathetic, perhaps the next step in the development of social theory is to really make it a form of social practice – that is, make it be with the people, rather than about the people. I am not sure what this would entail, or what it would look like; but I am pretty certain it would be a welcome element of building a progressive politics. In this sense, doing social theory could become less of the practice of endlessly revising a blueprint for a social theory zoo, and more of a project of getting out from behind its bars.

 

 

*The tendency to interpret women’s interventions as if they are inevitably about ‘feminist theory’ (or, more frequently, as if they always refer to empirical examples) is a trend I have been increasingly noticing since moving into sociology, and definitely want to spend more time studying. This is obviously not to say there aren’t women in the field of social theory, but rather that gender (and race, ethnicity, and age) influence the level of generality at which one’s claims are read, thus reflecting the broader tendency to see universality and Truth as coextensive with the figure of the male and white academic.

 

 

Solving the democratic problem: intellectuals and reconciling epistemic and liberal democracy

[Image: …but where? Bristol, October 2014]

 

[This review of “Democratic problem-solving” (Cruickshank and Sassower eds., 2017) was first published in Social Epistemology Review and Reply Collective, 26 May 2017].

It is a testament to the lasting influence of Karl Popper and Richard Rorty that their work continues to provide inspiration for debates concerning the role and purpose of knowledge, democracy, and intellectuals in society. Alternatively, it is a testament to the recurrence of the problem that continues to lurk under the glossy analytical surface or occasional normative consensus of these debates: the impossibility of reconciling the concepts of liberal and epistemic democracy. Essays collected under the title Democratic Problem-Solving (Cruickshank and Sassower 2017) offer grounds for both assumptions, so this is what my review will focus on.

Boundaries of Rational Discussion

Democratic Problem-Solving is a thorough and comprehensive (if at times seemingly meandering) meditation on the implications of Popper’s and Rorty’s ideas for the social nature of knowledge and truth in the contemporary Anglo-American context. This context is characterised by the combined forces of neoliberalism and populism, growing social inequalities, and what has for a while now been dubbed, perhaps euphemistically, the crisis of democracy. Cruickshank’s (in other contexts almost certainly heretical) opening, which questions the tenability of distinctions between Popper and Rorty, then serves to remind us that both were devoted to the purpose of defining the criteria for and setting the boundaries of rational discussion, seen as the road to problem-solving. Jürgen Habermas, whose name also resonates throughout this volume, elevated communicative rationality to the foundational principle of Western democracies, as the unifying/normalizing ground from which to ensure the participation of the greatest number of members in the public sphere.

Intellectuals were, in this view, positioned as guardians—epistemic police, of sorts—of this discursive space. Popper’s take on epistemic ‘policing’ (see DPS, 42) was to use the standards of scientific inquiry as exemplars for maintaining a high level, and, more importantly, the neutrality, of public debates. Rorty saw it as the minimal instrument that ensured civility without questioning, or at least without implicitly dismissing, others’ cultural premises, or even ontological assumptions. The assumption they and the authors in this volume have in common is that rational dialogue is, indeed, both possible and necessary: possible because standards of rationality were shared across humanity, and necessary because it was the best way to ensure consensus around the basic functioning principles of democracy. This also ensured the pairing of knowledge and politics: by rendering visible the normative (or political) commitments of knowledge claims, sociology of knowledge (as Reed shows) contributed to affirming the link between the epistemic and the political. As Agassi’s syllogism succinctly demonstrates, this link quickly morphed from signifying correlation (knowledge and power are related) to causation (the more knowledge, the more power), suggesting that epistemic democracy was, if not a precursor, then certainly a correlate of liberal democracy.

This is why Democratic Problem-Solving cannot avoid running up against the issue of public intellectuals (qua epistemic police), and, obviously, their relationship to ‘Other minds’ (communities being policed). In the current political context, however, to the well-exercised questions Sassower raises such as—

should public intellectuals retain their Socratic gadfly motto and remain on the sidelines, or must they become more organically engaged (Gramsci 2011) in the political affairs of their local communities? Can some academics translate their intellectual capital into a socio-political one? Must they be outrageous or only witty when they do so? Do they see themselves as leaders or rather as critics of the leaders they find around them (149)?

—we might need to add the following: “And what if none of this matters?”

After all, differences in vocabularies of debate matter only if access to the debate depends on their convergence to a minimal common denominator. The problem for the guardians of the public sphere today is not whom to include in these debates and how, but rather what to do when those ‘others’ refuse, metaphorically speaking, to share the same table. Populist right-wing politicians have at their disposal a wealth of ‘alternative’ outlets (Breitbart, Fox News, and increasingly, it seems, even the BBC), not to mention ‘fake news’ or the ubiquitous social media. The public sphere, in this sense, resembles less a (however cacophonous) town hall meeting than a series of disparate village tribunals. Of course, as Fraser (1990) noted, fragmentation of the public sphere has been inherent since its inception within the Western bourgeois liberal order.

The problem, however, is less what happens when other modes of arguing emerge and demand to be recognized, and more what happens when they aspire for redistribution of political power that threatens to overturn the very principles that gave rise to them in the first place. We are used to these terms denoting progressive politics, but there is little that prevents them from being appropriated for more problematic ideologies: after all, a substantial portion of the current conservative critique of the ‘culture of political correctness’, especially on campuses in the US, rests on the argument that ‘alternative’ political ideologies have been ‘repressed’, sometimes justifying this through appeals to the freedom of speech.

Dialogic Knowledge

In assuming a relatively benevolent reception of scientific knowledge, then, appeals such as Chis and Cruickshank’s to engage with different publics—whether as academics, intellectuals, workers, or activists—remain faithful to Popper’s normative ideal concerning the relationship between reasoning and decision-making: ‘the people’ would see the truth, if only we were allowed to explain it a bit better. Obviously, in arguing for dialogical, co-produced modes of knowledge, we are disavowing the assumption of a privileged position from which to do so; but, all too often, we let in through the back door the implicit assumption of the normative force of our arguments. It rarely, if ever, occurs to us that those we wish to persuade may have nothing to say to us, may be immune or impervious to our logic, or, worse, that we might not want to argue with them.

For if social studies of science taught us anything, it is that scientific knowledge is, among other things, a culture. An epistemic democracy of the Rortian type would mean that it’s a culture like any other, and thus not automatically entitled to a privileged status among other epistemic cultures, particularly not if its political correlates are weakened—or missing (cf. Hart 2016). Populist politics certainly has no use for critical slow dialogue, but it is increasingly questionable whether it has use for dialogue at all (at the time of writing of this piece, in the period leading up to the 2017 UK General Election, the Prime Minister is refusing to debate the Leader of the Opposition). Sassower’s suggestion that neoliberalism exhibits a penchant for justification may hold a promise, but, as Cruickshank and Chis (among others) show on the example of UK higher education, ‘evidence’ can be adjusted to suit a number of policies, and political actors are all too happy to do that.

Does this mean that we should, as Steve Fuller suggested in another SERRC article, see in ‘post-truth’ the STS symmetry principle? I am skeptical. After all, judgments of validity are the privilege of those who can still exert a degree of control over access to the debate. In this context, I believe that questions of epistemic democracy, such as who has the right to make authoritative knowledge claims, in what context, and how, need to, at least temporarily, come second to questions of liberal democracy. This is not to be teary-eyed about liberal democracy: if anything, my political positions lie closer to Cruickshank and Chis’ anarchism. But it is the only system that can—hopefully—be preserved without a massive cost in human lives, and perhaps repurposed so as to make them more bearable.

In this sense, I wish the essays in the volume had confronted head-on questions such as whether we should defend epistemic democracy (and which versions of it) if its principles are mutually exclusive with liberal democracy, or, conversely, whether we would uphold liberal democracy if it threatened to suppress epistemic democracy. For the question of standards of public discourse is going to keep coming up, but it may decreasingly have the character of an academic debate, and increasingly concern the possibility of having one at all. This may turn out to be, so to speak, a problem that precedes all other problems. The essays in this volume have opened up important avenues for thinking about it, and I look forward to seeing them discussed in the future.

References

Cruickshank, Justin and Raphael Sassower. Democratic Problem Solving: Dialogues in Social Epistemology. London: Rowman & Littlefield, 2017.

Fraser, Nancy. “Rethinking the Public Sphere: A Contribution to the Critique of Actually Existing Democracy.” Social Text 25/26 (1990): 56-80.

Fuller, Steve. “Embrace the Inner Fox: Post-Truth as the STS Symmetry Principle Universalized.” Social Epistemology Review and Reply Collective, December 25, 2016. http://wp.me/p1Bfg0-3nx

Hart, Randle J. “Is a Rortian Sociology Desirable? Will It Help Us Use Words Like ‘Cruelty’?” Humanity and Society 40, no. 3 (2016): 229-241.

Zygmunt Bauman and the sociologies of end times

[This post was originally published at the Sociological Review blog’s Special Issue on Zygmunt Bauman, 13 April 2017]

“Morality, as it were, is a functional prerequisite of a world with an in-built finality and irreversibility of choices. Postmodern culture does not know of such a world.”

Zygmunt Bauman, Sociology and postmodernity

Getting reacquainted with Bauman’s 1988 essay “Sociology and postmodernity”, I accidentally misread the first word of this quote as “mortality”. In the context of the writing of this piece, it would be easy to interpret this as a Freudian slip – yet, as slips often do, it betrays a deeper unease. If it is true that morality is a functional prerequisite of a finite world, it is even truer that such a world calls for mortality – the ultimate human experience of irreversibility. In the context of trans- and post-humanism, as well as the growing awareness of the fact that the world, as the place inhabited (and inhabitable) by human beings, can end, what can Bauman teach us about both?

In Sociology and postmodernity, Bauman assumes a position at the crossroads of two historical (social, cultural) periods: modernity and postmodernity. Turning away from the past to look towards the future, he offers thoughts on what a sociology adapted to the study of the postmodern condition would be like. Instead of a “postmodern sociology” as a mimetic representation of (even if a pragmatic response to) postmodernity, he argues for a sociology that attempts to give a comprehensive account of the “aggregate of aspects” that cohere into a new, consumer society: the sociology of postmodernity. This form of account eschews viewing the new as a deterioration, or aberration, of the old, and instead aims to come to terms with the system whose contours Bauman would go on to develop in his later work: a system characterised by a plurality of possible worlds, and not necessarily any way to reconcile them.

The point in time at which he writes lends itself fortuitously to the argument of the essay. Not only did Legislators and interpreters, in which he reframes intellectuals as translators between different cultural worlds, come out a year earlier; the publication of Sociology and postmodernity also just precedes 1989, the year that would indeed usher in a wholly new period in the history of Europe, including in Bauman’s native Poland.

On the one hand, he takes the long view back to post-war Europe, built, as it was, on the legacy of the Holocaust as a pathology of modernity, and on two approaches to preventing its repetition – market liberalism and political freedoms in the West, and planned economies and more restrictive political regimes in the Central and Eastern parts of the subcontinent. On the other, he engages with some of the dilemmas for the study of society that the approaching fall of the Berlin Wall and the eventual unification of those two hitherto separated worlds was going to open. In this sense, Bauman has the privilege of a version of Benjamin’s Angel of History that faces both ways. This probably helped him recognize the false dichotomy of consumer freedom and dictatorship over needs, which, as he stated, was quickly becoming the only imaginable alternative to the system – at least insofar as the imagination in question was that of the system itself.

The present moment is not all that dissimilar from the one in which Bauman was writing. We regularly encounter pronouncements of the end of a whole host of things, among them history, the classical division of labour, standards of objectivity in reporting, nation-states, even – or so we hope – capitalism itself. While some of Bauman’s fears concerning postmodernity may, from the present perspective, seem overstated or even straightforwardly ridiculous, we are inhabiting a world of many posts – post-liberal, post-truth, post-human. Many think that this calls for a rethinking of how sociology can adapt itself to these new conditions: for instance, in a recent issue of the International Sociological Association’s Global Dialogue, Leslie Sklair considers what a new radical sociology, developed in response to the collapse of global capitalism, would be like.

It is as if sociology and the zeitgeist were involved in some weird pas de deux: changes in any domain of life (technology, political regime, legislation) almost instantaneously trigger calls, if not for the invention of new paradigms and approaches to its study, then for a serious reconsideration of old ones.

I would like to suggest that one of the sources of the continued appeal of this – which Mike Savage brilliantly summarised as epochal theorising – is not so much the heralding of the new as the promise that there is an end to the present state of affairs. In order for a new ‘epoch’ to succeed, the old one needs to end. What Bauman warns about in the passage cited at the beginning is that in a world without finality – without death – there can be no morality. In T.S. Eliot’s lines from Burnt Norton: “If all time is eternally present / All time is unredeemable.” What we may read as Bauman’s fear, therefore, is not that worlds as we know them can (and will) end: it is that, whatever name we give to the present condition, it may go on reproducing itself forever. In other words, it is a vision of the future that looks just like the present, only there is more of it.

Which is worse? It is hard to tell. A rarely discussed side of epochal theorising is that it imagines a world in which the social sciences still have a role to play, if nothing else, in providing a theoretical framing or an empirically informed running commentary on its demise, and thus offers salvation from the existential anxiety of the present. The ‘ontological turn’ – from object-oriented ontology, to new materialisms, to post-humanism – reflects, in my view, the same tendency. If objects ‘exist’ in the same way as we do, if matter ‘matters’ in the same way (if not to the same degree) in which, for instance, black lives matter, this provides temporary respite from the confines of our choices. Expanding the concept of agency so as to include non-human actors may seem more complicated as a model of social change, but at least it absolves humans of the unique burden of historical responsibility – including responsibility for the fate of the world.

The human (re)discovery of the world thus conveys less a newfound awareness of the importance of the lived environment than a desire to escape the solitude of thinking about the human (as Dawson also notes, all-too-human) condition. The fear of relativism that the postmodern ‘plurality’ of worlds brought about appears to have been preferable to the possibility that there is, after all, just the one world. If the latter is the case, the only escape from it lies, to borrow from Hamlet, in the country from whose bourn no traveller returns: in other words, in death.

This impasse is perhaps felt most strongly in sociology and anthropology, because excursions into other worlds have been both the gist of their method and the foundation of their critical potential (including their self-critique, which focused on how these two elements combine in the construction of epistemic authority). The figure of the traveller to other worlds was more pronounced in the case of anthropology, at least at the time when it developed as the study of exotic societies on the fringes of colonial empires, but sociology is no stranger to visitation either: its others, and their worlds, are delineated by the sometimes less tangible boundaries of class, gender, race, or simply epistemic privilege. Bauman was among the theorists who recognized the vital importance of this figure in the construction of the foundations of European modernity, and was thus also sensitive to its transformations in the context of postmodernity – exemplified, as he argued, in the contemporary human’s ambiguous position between “a perfect tourist” and a “vagabond beyond remedy”.

In this sense, the awareness that every journey has an end can inform the practice of social theory in ways that go beyond the need to pronounce new beginnings. Rather than using eulogies in order to produce more of the same thing – more articles, more commentary, more symposia, more academic prestige – perhaps we can see them as an opportunity to reflect on the always-unfinished trajectory of human existence, including our existence as scholars, and the responsibility that it entails. The challenge, in this case, is to resist the attractive prospect of escaping the current condition by ‘exit’ into another period, or another world – postmodern, post-truth, post-human, whatever – and remember that, no matter how many diverse and wonderful entities they may be populated with, these worlds are also human, all too human. This can serve as a reminder that, as Bauman wrote in his famous essay on heroes and victims of postmodernity, “Our life struggles dissolve, on the contrary, in that unbearable lightness of being. We never know for sure when to laugh and when to cry. And there is hardly a moment in life to say without dark premonitions: ‘I have arrived’”.

@Grand_Hotel_Abyss: digital university and the future of critique

[This post was originally published on 03/01/2017 in the Discover Society Special Issue on Digital Futures. I am also working on a longer (article) version of it, which will be uploaded soon.]

It is by now commonplace to claim that digital technologies have fundamentally transformed knowledge production. This applies not only to how we create, disseminate, and consume knowledge, but also to who, in this case, counts as ‘we’. Science and technology studies (STS) scholars argue that knowledge is an outcome of coproduction between (human) scientists and the objects of their inquiry; object-oriented ontology and speculative realism go further, rejecting the ontological primacy of humans in the process. For many, it would not be a stretch to say that machines do not only process knowledge, but are actively involved in its creation.

What remains somewhat underexplored in this context is the production of critique. Scholars in the social sciences and humanities fear that the changing funding and political landscape of knowledge production will diminish the capacity of their disciplines to engage critically with society, leading to what some have dubbed the ‘crisis’ of the university. Digital technologies are often framed as contributing to this process: speeding up the rate of production, simultaneously multiplying and obfuscating the labour of academics, perhaps even, as Lyotard predicted, displacing it entirely. Tensions between more traditional views of the academic role and new digital technologies are reflected in often heated debates over academics’ use of social media (see, for instance, #seriousacademic on Twitter). Yet, despite polarized opinions, there is little systematic research into the links between the transformation of the conditions of knowledge production and critique.

My work is concerned with the possibility – that is, the epistemological and ontological foundations – of critique, and, more precisely, how academics negotiate it in contemporary (‘neoliberal’) universities. Rather than trying to figure out whether digital technologies are ‘good’ or ‘bad’, I think we need to consider what it is about the way they are framed and used that makes them either. From this perspective, which could be termed the social ontology of critique, we can ask: what is it about ‘the social’ that makes critique possible, and how does it relate to ‘the digital’? How is this relationship constituted, historically and institutionally? Lastly, what does this mean for the future of knowledge production?

Between pre-digital and post-critical 

There are a number of ways one can go about studying the relationship between digital technologies and critique in the contemporary context of knowledge production. David Berry and Christian Fuchs, for instance, both use critical theory to think about the digital. Scholars in political science, STS, and the sociology of intellectuals have written on the multiplication of platforms, such as Twitter and blogs, from which scholars can engage with the public. In “The Uberfication of the University”, Gary Hall discusses how digital platforms transform the structure of academic labour. This joins a longer thread of discussions about precarity, new publishing landscapes, and what this means for the concept of the ‘public intellectual’.

One of the challenges of theorising this relationship is that it has to be developed out of the very conditions it sets out to criticise. This points to the limitations of viewing ‘critique’ as a defined and bounded practice, or the ‘public intellectual’ as a fixed and separate figure, and of trying to observe how either has changed with the introduction of the digital. While the use of social media may be a more recent phenomenon, it is worth recalling that the bourgeois public sphere that gave rise to the practice of critique in its contemporary form was already profoundly mediatised. Whether one thinks of petitions and pamphlets in the Dreyfus affair, or of discussions on Twitter and Facebook: there is no critique without an audience, and digital technologies are essential to how we imagine those audiences. In this sense, grounding an analysis of the contemporary relationship between the conditions of knowledge production and critique in the ‘pre-digital’ is similar to grounding it in the post-critical: both are techniques for ‘ejecting’ oneself from the confines of the present situation.

The dismissiveness Adorno and other members of the Frankfurt school could exercise towards mass media, however, is more difficult to parallel in a world in which it is virtually impossible to remain isolated from digital technologies. Today’s critics may, for instance, avoid having a professional profile on Twitter or Facebook, but they are probably still using at least some type of social media in their private lives, not to mention responding to emails, reading articles, and searching for and gathering information through online platforms. To this end, one could say that academics who publicly criticise social media engage, in fact, in a performative contradiction: their critical stance is predicated on the existence of digital technologies both as objects of critique and as the main vehicles for its dissemination.

This, I believe, is an important source of the perceived tensions between the concept of critique and digital technologies. Traditionally, critique implies a form of distancing from one’s social environment. This distancing is seen as both spatial and temporal: spatial, in the sense of providing a vantage point from which the critic can observe and (choose to) engage with society; temporal, in the sense of affording shelter from the ‘hustle and bustle’ of everyday life, necessary to stimulate critical reflection. Universities, for at least a good part of the 20th century, were tasked with providing both. Lukács, in his account of the Frankfurt school, satirized this as “taking residence in the ‘Grand Hotel Abyss’”: engaging in critique from a position of relative comfort, from which one can stare ‘into nothingness’. Yet, what if the Grand Hotel Abyss has a wifi connection?

Changing temporal frames: beyond the Twitter intellectual?

Some of the potential perils of the ‘always-on’ culture and of contracting temporal frames for critique are reflected in the widely publicized case of Steven Salaita, an internationally recognized scholar in the field of Native American studies and American literature. In 2013, Salaita was offered a tenured position at the University of Illinois. However, in 2014 the Board of Trustees withdrew the offer, citing Salaita’s “incendiary” posts on Twitter as the reason. Salaita is a vocal critic of Israel, and his tweets at the time concerned the Israeli military offensive in the Gaza Strip; some of the University’s donors found this problematic and pressured the Board to withdraw the offer. Salaita has since appealed the decision and received a settlement from the University of Illinois, but the case – though by no means unique – drew attention to the (im)possibility of separating the personal, the political, and the professional on social media.

At the same time, social media can provide venues for practicing critique in ways not confined by the conventions or temporal cycles of academia. The example of Eric Jarosinski, “the rock star philosopher of Twitter”, shows this clearly. Jarosinski is a Germanist whose tweets contain clever puns on the Frankfurt school, as well as, among others, Hegel and Nietzsche. In 2013, he took himself out of consideration for tenure at the University of Pennsylvania, but continued to compose philosophically inspired tweets, eventually earning a huge following, as well as a column in two of the largest newspapers in Germany and the Netherlands. Jarosinski’s moniker, #failedintellectual, is a self-ironic reminder that it is possible to succeed whilst deviating from the established routes of intellectual critique.

The different ways in which critique can be performed on Twitter should not, however, obscure the fact that it operates in fundamentally politicized and stratified spaces; digital technologies can render those spaces more accessible, but that does not mean that they are more democratic or offer a better view of ‘the public’. This is particularly worth remembering in light of recent political events in the UK and the US. Once the initial shock following the US election and the British EU referendum had subsided, many academics (and intellectuals more broadly) took to social media to comment, evaluate, or explain what had happened. Yet, for the most part, these interventions end exactly where they began – on social media. This amounts to live-tweeting from the balcony of the Grand Hotel Abyss: the view is good, but the abyss no less gaping for it.

By sticking to critique on social media, intellectuals are, essentially, doing what they have always been good at – engaging with the audiences, and in the ways, they feel comfortable with. To this end, criticizing the ‘alt-right’ on Twitter is not altogether different from criticising it in lecture halls. Of course, no intellectual critique can aspire to address all possible publics, let alone equally. However, it makes sense to ask how the ways in which we imagine our publics influence our capacity to understand the society we live in; and, perhaps more importantly, how they influence our ability to predict – or imagine – its future. In its present form, critique seems far better suited to an idealized Habermasian public sphere than to the political landscape that will carry on into the 21st century. Digital technologies can offer an approximation, perhaps even a good simulation, of the former; but that, in and of itself, does not mean that they can solve the problems of the latter.

Jana Bacevic is a PhD researcher at the Department of Sociology at the University of Cambridge. She works on social theory and the politics of knowledge production; her thesis deals with the social, epistemological and ontological foundations of the critique of neoliberalism in higher education and research in the UK. Previously, she was a Marie Curie fellow at the University of Aarhus in Denmark, on the Universities in Knowledge Economies (UNIKE) project. She tweets at @jana_bacevic.

We are all postliberals now: teaching Popper in the era of post-truth politics

[Image: black swan. Adelaide, South Australia, December 2014]

Late in the morning after the US election, I am sitting down to read student essays for the course on social theory I’m supervising. This part of the course covers the work of Popper, Kuhn, Lakatos, and Feyerabend, and its application in the social sciences. The essay question is: do theories need to be falsifiable, and how do we choose between competing theories if they aren’t? The first part is a standard essay question; I added the second a bit more than a week ago, interested to see how students would think about criteria of verification in the absence of an overarching regime of truth.

This is one of my favourite topics in the philosophy of science. When I was a student at the University of Belgrade, feeling increasingly out of place in the post-truth and intensely ethnographic though anti-representationalist anthropology, the Popper-Kuhn debate in Criticism and the Growth of Knowledge held the promise that, beyond the classification of elements of the material culture of the Western Balkans, lurked bigger questions of the politics and sociology of knowledge (paradoxically, this may be why it took me so long to realize I actually wanted to do sociology).

I was Popper-primed well before that, though: the principle of falsification is integral to the practice of parliamentary-style academic debating, in which the task of the opposing team(s) is to ‘disprove’ the motion. In the UK, this practice is usually associated with debate societies such as the Oxford and Cambridge Unions, but it is widespread in the US as well as the rest of the world; during my undergraduate studies, I was an active member of the Yugoslav (now Serbian) Universities Debating Network, known as Open Communication. Furthermore, Popper’s political ideas – especially those in The Open Society and Its Enemies – formed the ideological core of the Open Society Foundation, founded by the billionaire George Soros to promote democracy and civil society in Central and Eastern Europe.

In addition to debate societies, the Open Society Foundation supported and funded a large part of civil society activism in Serbia. At the time, most of it was conceived as opposition to the regime of Slobodan Milošević, a one-time banker turned politician who ascended to power in the wake of the dissolution of the Socialist Federal Republic of Yugoslavia. Milošević played a major role in the conflicts in its former republics, simultaneously plunging Serbia deeper into an economic and political crisis exacerbated by international isolation and sanctions, culminating in the NATO intervention in 1999. Milošević’s rule ended in a coup following a disputed election in 2000.

I had been part of the opposition from the earliest moment conceivable, skipping classes in secondary school to go to anti-government demos in 1996 and 1997. The day of the coup – 5 October 2000 – should have been my first day at university, but, together with most students and staff, I was at what would turn out to be the final public protest, which ended in the storming of the Parliament. I swallowed quite a bit of tear gas, twice in situations I did not expect to get out of alive (or at the very least unharmed), but somehow made it to a friend’s house, where, together with her mom and grandma, we sat in the living room and watched one of Serbia’s hitherto banned TV and radio stations – the then-oppositional B92 – come back on air. That is when we knew it was over.

Sixteen years and a little more than a month later, I am reading students’ essays on truth and falsehood in science. This, by comparison, is a breeze, and it is always exciting to read different takes on the issue. Of course, over the course of my undergraduate studies, my own appreciation of Popper was replaced by excitement at the discovery of Kuhn – and the concomitant realization of the inertia of social structures, which, just like normal science, are incredibly slow to change – and succeeded by mild perplexity at Lakatos (research programmes seemed equal parts reassuring and inherently volatile – not unlike political coalitions). At the end, obviously, came infatuation with Feyerabend: like every self-respecting former liberal, I reckoned myself a methodological (and not only methodological) anarchist.

Unsurprisingly, most of the essays I read exhibit the same trajectory. Popper is, quite obviously, passé: his critique of Marxism (and other forms of historicism) not particularly useful, his falsificationism too strict a criterion of demarcation, and his association with the ideologues of neoliberalism probably did not help much either.

Except that… this is what Popper has to say:

It is undoubtedly true that we have a more direct knowledge of the ‘inside of the human atom’ than we have of physical atoms; but this knowledge is intuitive. In other words, we certainly use our knowledge of ourselves in order to frame hypotheses about some other people, or about all people. But these hypotheses must be tested, they must be submitted to the method of selection by elimination.

(The Poverty of Historicism, 127)

Our knowledge of ourselves: for instance, our knowledge that we could never, ever, elect a racist, misogynist reality-TV star as president of one of the world’s superpowers. That we would never vote to leave the European Union, despite the fact that, like all supranational entities, it has flaws – but look at how much it invests in our infrastructure. Surely – as Popper would argue – we are rational animals: and rational animals would not do anything that puts them in unnecessary danger.

Of course, we are correct. The problem, however, is that we have forgotten the second part of Popper’s claim: our knowledge of ourselves only frames hypotheses about other people, and those hypotheses must be tested. For instance: since we understand that a rich businessman is not likely to introduce economic policies that harm the elite, the poor would never vote for him. For instance: since we remember the victims of Nazism and fascism, everyone must understand how frail the liberal consensus in Europe is.

This is why academia came to be “shocked” by Trump’s victory, just as it was shocked by the outcome of the Brexit referendum. This is also the key to the question of why polls “failed” to predict either of these outcomes. Perhaps we were too focused on extrapolating our assumptions to other people, and not enough on checking whether they hold.

By failing to understand that the world is not composed of left-leaning liberals with a predilection for social justice, we commit, time and again, what Bourdieu termed the scholastic fallacy – the propensity to attribute the categories of our own thinking to those we study. Alternatively, and much worse, we deny them common standards of rationality: voters whose political choices differ from ours are then cast as uneducated, deluded, or suffering from false consciousness. And even if they’re not, they must be a small minority, right?

Well, as far as hypotheses are concerned, that one has definitely failed. Maybe it’s time we started considering alternatives.

All the feels

This poster drew my attention while I was working in the library of Cambridge University a couple of weeks ago:

[Image: library poster]

For a while now, I have been fascinated by the way in which the language of emotions, or affect, has penetrated public discourse. People ‘love’ all sorts of things: the way a film uses interior light, the icing on a cake, their friend’s new hairstyle. They ‘hate’ Donald Trump, the weather, the next-door neighbours’ music. More often than not, conversations involving emotions are not complete without mention of online expressions of affect, such as ‘likes’ or ‘loves’ on Facebook or Twitter.

Of course, the presence of emotions in human communication is nothing new. Even ‘ordinary’ statements – such as, for instance, “it’s going to rain tomorrow” – frequently entail an affective dimension (most people would tend to get at least slightly disappointed at the announcement). Yet, what I find peculiar is that the language of affect is becoming increasingly present not only in communication mediated by non-human entities, but also in relation to non-human entities themselves. Can you really ‘love’ a library? Or be ‘friends’ with your local coffee place?

This isn’t in any way to concede ground to techno-pessimists who blame social media for ‘declining’ standards in human communication, nor even to express concern over the ways in which affective ‘reaction’ buttons allow the tracking of online behaviour (privacy is always a problem, and ‘unmediated’ communication largely a fiction). Even if face-to-face interaction is qualitatively different from online interaction, there is nothing to support the claim that the former is inherently more valuable, or, indeed, more ‘real’ (see “IRL fetish” [i]). It is the social and cultural framing of these emotions, and, especially, the way the social sciences think about them – the social theory of affect, if you wish – that concerns me here.

Fetishism and feeling

So what is different about ‘loving’ your library as opposed to, say, ‘loving’ another human being? One possible way of going about this is to interpret expressions of emotion directed at or through non-human entities as ‘shorthand’ for those aimed at other human beings. The kernel of this idea is contained in Marx’s concept of commodity fetishism: emotion, or affect, directed at an object obscures the all-too-human relationship (in his case, the capital relation) behind it. In this sense, ‘liking’ your local coffee place would be an expression of appreciation for the people who work there, for the way they make a double macchiato, or just for the times you spent there with friends or other significant others. In human-to-human communication, things would be even more straightforward: generally speaking, ‘liking’ someone’s status updates, photos, or tweets would signify appreciation of/for the person, agreement with, or general interest in, what they are saying.

But what if it is actually the inverse? What if, in ‘liking’ something on Facebook or on Twitter, the human-to-human relationship is, in fact, epiphenomenal to the act? The prime currency of online communication is thus the expenditure of (emotional) energy, not the relationship that it may (or may not) establish or signify. In this sense, it is entirely irrelevant whether one is liking an inanimate object (or concept), or a person. Likes or other forms of affective engagement do not constitute any sort of human relationship; the only thing they ‘feed’ is the network itself. The network, at the same time, is not an expression, reflection, or (even) simulation of human relationships: it is the primary structure of feeling.

All hail…

Yuval Noah Harari’s latest book, Homo Deus, puts the issue of emotions at the centre of the discussion of the relationship between humans and AI. In a review in The Guardian, David Runciman writes:

“Human nature will be transformed in the 21st century because intelligence is uncoupling from consciousness. We are not going to build machines any time soon that have feelings like we have feelings: that’s consciousness. Robots won’t be falling in love with each other (which doesn’t mean we are incapable of falling in love with robots). But we have already built machines – vast data-processing networks – that can know our feelings better than we know them ourselves: that’s intelligence. Google – the search engine, not the company – doesn’t have beliefs and desires of its own. It doesn’t care what we search for and it won’t feel hurt by our behaviour. But it can process our behaviour to know what we want before we know it ourselves. That fact has the potential to change what it means to be human.”

On the surface, this makes sense. Algorithms can measure our ‘likes’ and other emotional reactions and combine them into ‘meaningful’ patterns – e.g., correlate them with specific background data (age, gender, location), time of day, etc. – and, on that basis, predict how we will act (click, shop) in specific situations. However, does this amount to ‘knowledge’? In other words, if machines cannot have feelings – and Harari seems adamant that they cannot – how can they actually ‘know’ them?
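
To make the kind of pattern-matching described above concrete, here is a minimal, purely illustrative sketch; the data, the features, and the prediction task are all hypothetical, and real recommendation systems are vastly more elaborate.

```python
# A toy illustration of correlating 'likes' and background data with behaviour.
# All numbers and features are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [number of 'likes' given today, age, hour of day]
X = np.array([
    [12, 23, 21],
    [ 1, 54,  9],
    [30, 19, 23],
    [ 3, 41, 14],
    [25, 27, 22],
    [ 0, 63, 10],
])
# Whether the user then clicked on a promoted post (1) or not (0)
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# 'Predicting' a new user's behaviour from the same surface signals;
# the two columns are the estimated probabilities of [no click, click].
print(model.predict_proba([[18, 22, 23]]))
```

The point, in line with the argument here, is that nothing in such a procedure requires the model to ‘have’ or understand feelings: it only counts and correlates.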

Frege on Facebook

This comes close to a philosophical problem I’ve been trying to get a grip on recently: the Frege-Geach (alternatively, the embedding, or Frege-Geach-Searle) problem. It consists of two steps. The first is to claim that there is a qualitative difference between moral and descriptive statements – for instance, between saying “It is wrong to kill” and “It is raining”. Most humans, I believe, would agree with this. The second is to observe that there is no basis for claiming this sort of difference on the grounds of sentence structure alone, which then leads to the problem of explaining its source – how do we know there is one? In other words, how can it be that moral and descriptive terms have exactly the same sort of semantic properties in complex sentences, even though they have different kinds of meaning? Where does this difference stem from?
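
A standard way of making the embedding worry vivid (my schematic gloss, not part of the original post) is that the same inference form remains valid whether the embedded clause is moral or descriptive:

```latex
% Schematic illustration of the embedding problem: modus ponens works identically
% for a descriptive premise ("It is raining") and a moral one ("It is wrong to tell lies"),
% even though the two are supposed to have different kinds of meaning.
\[
  \frac{P \qquad P \rightarrow Q}{Q}
  \qquad
  \begin{array}{ll}
    P: & \text{``It is wrong to tell lies.''}\\
    P \rightarrow Q: & \text{``If it is wrong to tell lies, it is wrong to get your brother to tell lies.''}\\
    Q: & \text{``It is wrong to get your brother to tell lies.''}
  \end{array}
\]
```

If the moral clause merely expressed an attitude when asserted on its own, it would be unclear what it contributes when it appears unasserted inside the conditional; yet the inference seems perfectly valid.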

The argument can be extended to feelings: how do we know that there is a qualitative difference between statements such as “I love you” and “I eat apples”? Or between loving someone and ‘liking’ an online status? From a formal (syntactic) perspective, there is none. More interestingly, however, there is no reason why machines should not be capable of such a form of expression. In this sense, there is no way to reliably establish that likes coming from a ‘real’ person and those from, say, a Twitter bot are qualitatively different. As humans, of course, we would claim to know the difference, or at least to be able to spot it. But machines cannot. There is nothing inherent in the expression of online affect that would allow algorithms to distinguish between, say, the act of ‘loving’ the library and the act of loving a person. Knowledge of emotions, in other words, is not reducible to counting, even if counting takes increasingly sophisticated forms.

How do you know what you do not know?

The problem, however, is that humans do not have superior knowledge of emotions, their own or other people’s. I am not referring to situations in which people are unsure or ‘confused’ about how they feel [ii], but rather to the limited language – the forms of expression – available to us. The documentary “One More Time With Feeling”, which I saw last week, engages with this issue in a way I found incredibly resonant. Reflecting on the loss of his son, Nick Cave relates how the words that he or the people around him could use to describe the emotions seemed equally misplaced, maladjusted, and superfluous (until the film comes back into circulation, Amanda Palmer’s review, which addresses a similar question, is here) – not because they could not reflect those emotions accurately, but because there was no necessary link between them and the structure of feeling at all.

Clearly, the idea that language does not reflect, but rather constructs – and thus also constrains – human reality is hardly new: Wittgenstein, Lacan, and Rorty (to name but a few) have offered different interpretations of how and why this is the case. What I found particularly poignant about the way Cave frames it in the film is that it questions the whole ontology of emotional expression. It is not just that language acts as a ‘barrier’ to the expression of grief; it is the idea of the continuity of the ‘self’ that is supposed to ‘have’ those feelings that is shattered as well.

Love’s labour’s lost (?): between practice and theory

This brings back some of my fieldwork experiences from 2007 and 2008, when I was doing a PhD in anthropology, writing on the concept of romantic relationships. Whereas most of my ‘informants’ – research participants – could engage in lengthy elaboration of the criteria they used in choosing (‘romantic’) partners (as well as, frequently, the reasons why they wouldn’t designate someone as a partner), when it came to emotions their narratives could frequently be reduced to one word: love (it wasn’t for lack of expressive skills: most were highly educated). It was framed as a binary phenomenon: either there or not there. At the time, I was more interested in the way their (elaborated) narratives reflected or coded markers of social inequality – for instance, class or status. Recently, however, I have been returning to their inability (or unwillingness) to elaborate on the emotion that supposedly underpins, or at least buttresses, those choices.

Theoretical language is not immune to these limitations. For instance, whereas the social sciences have made significant steps in deconstructing notions such as ‘man’, ‘woman’, ‘happiness’, and ‘family’, we are still miles away from seriously examining concepts such as ‘love’, ‘hate’, or ‘fear’. Moira Weigel’s and Eva Illouz’s work are welcome exceptions to the rule: Weigel uses the feminist concept of emotional labour to show how the responsibility for maintaining relationships tends to be unequally distributed between men and women, and Illouz demonstrates how modern notions of dating come to define the subjectivity and agency of persons in ways conducive to the reproduction of capitalism. Yet, while both do a great job of highlighting the social aspects of love, they avoid engaging with its ontological basis. This leaves the back door open for an old-school dualism that either assumes there is an (a- or pre-social?) ‘basis’ to human emotions, which can be exploited or ‘harvested’ through relationships of power; or, conversely, that all emotional expression is defined by language, and thus that its social construction is the only thing worth studying. It’s almost as if ‘love’ is the last construct left standing, and we’re all too afraid to disenchant it.

For a relational ontology

A relational ontology of human emotions could, in principle, aspire to dethrone this nominalist (or, possibly worse, truth-proceduralist) notion of love in favour of one that sees it as a by-product of relationality. This isn’t to claim that ‘love’ is epiphenomenal: to the degree that it is framed as a motivating force, it becomes part and parcel of the relationship itself. However, not seeing it as central to this inquiry would hopefully allow us to work on diversifying the language of emotions. Instead of using a single marker (even one as polysemic as ‘love’) for the relationship with one’s library and with one’s significant other, we could start thinking about the ways in which they are (or are not) the same thing. This isn’t, of course, to sanctify ‘live’ human-to-human emotion: I am certain that people can feel ‘love’ for pets, places, or deceased loved ones. Yet, calling it all ‘love’ and leaving it at that is a pretty shoddy way of going about feelings.

Furthermore, a relational ontology of human emotions would mean treating all relationships as unique. This isn’t, to be clear, a pseudo-anarchist attempt to deny standards of, or responsibility for, (inter)personal decency; still less a default glorification of long-lasting relationships. Most relationships change over time (as do the people inside them), and this frequently means they can no longer exist; some relationships cannot coexist with other relationships; some relationships are detrimental to those involved in them, which hopefully means they cease to exist. Equally, some relationships are superficial, trivial, or barely worth a mention. However, this does not make them, analytically speaking, any less special.

This also means they cannot be reduced to the same standard, nor measured against each other. This, of course, runs against one of capitalism’s dearly held assumptions: that all humans are comparable and, thus, mutually replaceable. This assumption is vital not only for the reproduction of labour power, but also, for instance, for the practice of dating [iii], whether online or offline. Moving towards a relational concept of emotions would allow us to challenge this notion. In this sense, ‘loving’ a library is problematic not because the library is not a human being, but because ‘love’, just like other human concepts, is a relatively bad proxy. Contrary to what pop songs would have us believe, it is never the answer, and, quite possibly, not even the question.

Some Twitter wisdom for the end….

————————————————————————–

[i] Thanks go to Mark Carrigan, who sent this to me.

[ii] While I am very interested in the question of self-knowledge (or self-ignorance), for some reason, I never found this particular aspect of the question analytically or personally intriguing.

[iii] Over the past couple of years, I’ve had numerous discussions on the topic of dating with friends and colleagues, but also with acquaintances and (almost) strangers (the combination of having a theoretical interest in the topic and not being in a relationship seems to be particularly conducive to becoming involved in such conversations, regardless of whether one wants it or not). I feel compelled to say that my critique of dating (and the concomitant refusal to engage in it, at least as far as its dominant social forms go) does not, in any way, imply a criticism of people who do. There is quite a long list of people whom I should thank for helping me clarify this, but instead I promise to write another, longer post on the topic, as well as, finally, develop that app :).