Climate change and the paradox of inaction

One of the things I most often hear when talking to people about climate change is “but what to do?” This, in and of itself, is good news. Perhaps owing to evidently extreme weather patterns [1], perhaps owing to the concentrated efforts of primary/secondary school teachers [2], perhaps owing to the unceasing (though increasingly brutally repressed, even in the UK & the rest of Europe) efforts of activists, it seems the question of whether climate change is ‘real’ has finally taken a back seat to “and what shall we do about it?”.

While climate denialism may have had its day, challenges now come from its cousins (or descendants) in the form of climate optimism, technosolutionism, or – as Linsey McGoey and I have recently argued – the specific kind of ignorance associated with liberal fatalism: using indeterminacy to delay action until certain actions are foreclosed. In the latter context in particular, the sometimes overwhelming question “what to do” can compound and justify, even if unintentionally, the absence of action. The problem is that whilst we are deliberating what to do, certain kinds of action become less possible or more costly, thus limiting the likelihood we will be able to implement them in the future. This is the paradox of inaction.

My interest in this question came from researching the complex relationship between knowledge (and ignorance) and (collective or individual) action. Most commonsense theories assume a relatively linear link between the two: knowing about something will lead you to act on it, especially in contexts of future risk or harm. This kind of approach has shaped information campaigns, as well as struggles to listen to ‘the science’, from early conversations around climate change to Covid-19. Another kind of approach overrides these information- or education-based incentives in favour of behavioural ‘nudges’: awareness of cognitive processing biases (well documented and plentiful) suggested that slightly altering the decisional infrastructure would be more effective than trying to, in effect, persuade people to do the right thing. While I can see sense in both approaches, I became interested instead in the ambiguous role of knowledge. In other words, under what conditions would knowing (about the future) prevent us from acting (on the future)?

There are plenty of examples to choose from: from the critique of neoliberalism to Covid-19 (see also the above) to, indeed, climate change (free version here). In the context of teaching, this question often comes up when students begin to realize the complexity of the global economy, and the inextricability of questions of personal agency from what we perceive as systemic change. In other words, they begin to realize that the state of the world cannot be reduced either to individual responsibility or to the supposedly impersonal forces of “economy”, “politics”, “power” etc. But this rightly leaves them at an impasse: if change is neither solely about individual agency nor solely about large-scale system change, how can we make anything happen?

It is true that awareness of complexity can often lead to bewilderment or, at worst, inaction. After all, in view of such an extraordinary entanglement of factors – individual, cultural, economic, social, political, geological, physical, biological – it can be difficult to even know how to tackle one without unpicking all the others. Higher education doesn’t help with this: most people (not all, but most) are, sadly, trained to see the world from the perspective of one discipline or field of study [3], which can rightly make processes that span those fields appear impossible to grasp. Global heating is one such process; it is, at the same time, geological, meteorological, ecological, social, political, medical, economic, etc. As Timothy Morton has argued, climate change is a ‘hyperobject’; it exceeds the regular boundaries of human conceptualization.

Luckily, social theory, and in particular social ontology, is particularly good at analysing objects. Gender – e.g. the notion of ‘woman’ – is an example of such an object. This does not mean, by the way, that ‘deconstructing’ objects, concepts, or notions needs to detract from the complexity of their interrelation; in some approaches to social ontology, a whole is always more than the sum (or any deducible interrelation) of its parts. In other words, to ‘deconstruct’ climate change is not in any way to deny its effects or the usefulness of the concept; it is to understand how different elements – which we conventionally, and historically, but not-at-all necessarily, associate with disciplines or ‘domains’ – interact and interrelate, and what that means. Differently put, the way disciplines construct climate change as an object (or assemblage) tells us something about the way we are likely to perceive solutions (or, more broadly, ways of addressing it). It does not determine what is going to happen, but it points to the avenues (and limitations) humans are likely to see in doing something about it.

Why does this matter? Our horizon of agency is limited by what we perceive as subjects, objects, and forms of agency. In less weighty parlance: what (and whom) we perceive as being able to do stuff, and the kind of stuff it (they) can do. This also includes what we perceive as limitations on doing stuff, real or not. Two limitations apply to all human beings: time and energy. In other words, doing stuff takes time. It also consumes energy. This has implications for what we perceive as the stuff we can do. So what can we do?

As with so many other things, there are two answers. One is obvious: do anything and everything you can, and do it urgently. Anything other than nothing. (Yes, even recycling, in the sense in which it’s better than not recycling, though obviously less useful than not buying packaging in the first place).

The second answer is also obvious, but perhaps less frequent. Simply, what you aim to do depends on what you aim to achieve. Aiming to feel a bit better? Recycle, put a poster up, maybe plant a tree (or just some bee-friendly plants). Make a bit of a difference to your carbon emissions? Leave the car at home (at least some of the time!), stop buying stuff in packaging, cut down on flying, eliminate food waste (yes, this is in fact very easy to do). Make a real change? Vote on climate policy; pressure your MP; insulate your home (if you have one); talk to others. Join a group, or participate in any kind of collective action. The list goes on; there are other forms of action that go beyond this. They should not be ranked, neither in terms of moral rectitude nor in terms of efficiency (if you’re thinking of the old ‘limitations of individual agency’ argument, do consider what would happen if everyone *did* stop driving – and no, that does not include ambulances).

The problem with agency is that our ideas of what we can do are often shaped by what we have been trained, raised, and expected to do. Social spaces, in this sense, also become training grounds for action. You can learn to do something by being in a space where you are expected to do (that) something; equally, you learn not to do things by being told, explicitly or implicitly, that it is not the done thing. Institutions of higher education are really bad at fostering certain kinds of action, while rewarding others. What is rewarded is (usually) individual performance. This performance is frequently framed, explicitly or implicitly, as competition: against your peers (in relation to whom you are graded) or colleagues (with whom you are compared when it comes to pay, or promotion); against other institutions (for REF scores, or numbers of international students); against everyone in your field (for grants, or permanent jobs). Even instances of team spirit or collaboration are more likely to be rewarded or recognized when they lead to such outcomes (getting a grant, or supporting someone in achieving individual success).

This poses significant limitations for how most people think about agency, whether in the context of professional identities or beyond (I’ve written before about limits to, and my own reluctance towards, affiliation with any kind of professional, let alone disciplinary, identity). Agency fostered in most contemporary capitalist contexts is either consumption- or competition-oriented (or both, of course, as in conspicuous consumption). Alternatively, it can also be expressive, in the sense in which it can stimulate feelings of identity or belonging, but it bears remembering that these do not in and of themselves translate into action. Absent from these is the kind of agency I, for want of a better term, call world-building: the ability to imagine, create, organize and sustain environments that do more than just support the well-being and survival of oneself and one’s immediate in-group, regardless of how narrowly or broadly we may define it, from nuclear family to humanity itself.

The lack of this capacity is starkly evident in classrooms. Not long ago, I asked one of the groups I teach for an example of a social or political issue they were interested in or would support despite the fact that it had no direct or personal bearing on their lives. None could think of one (yes, the war on Gaza was already happening). This is not to say that students do not care about issues beyond their immediate scope of interest, or that they are politically disenchanted: there are plenty of examples to the contrary. But it is to suggest that (1) we are really bad at connecting their concerns to broader social and political processes, especially when it comes to issues on which everyone in the global North is relatively privileged (and climate change is one such issue, compared to the effects it is likely to have on places with less resilient infrastructure); and (2) institutions are persistently and systematically (and, one might add, intentionally) failing at teaching how to turn this into action. In other words: many people are fully capable of imagining that another world is possible. They just don’t know how to build it.

As I was writing this, I found a quote in Leanne Betasamosake Simpson’s (excellent) As We Have Always Done: Indigenous Freedom Through Radical Resistance that I think captures this brilliantly:

Western education does not produce in us the kinds of effects we like to think it does when we say things like ‘education is the new buffalo’. We learn how to type and how to write. We learn how to think within the confines of Western thought. We learn how to pass tests and get jobs within the city of capitalism. If we’re lucky and we fall into the right programs, we might learn to think critically about colonialism. But postsecondary education provides few useful skill sets to those of us who want to fundamentally change the relationship between the Canadian state and Indigenous peoples, because that requires a sustained, collective, strategic long-term movement, a movement the Canadian state has a vested interest in preventing, destroying, and dividing.

(loc 273/795)

It should come as little surprise that generations that have observed us do little but destroy the world will exhibit little capacity (or will) to build one. Here, too, change starts ‘at home’, by which I mean in the classroom. Are we – deliberately or not – reinforcing the message that performance matters? That to ‘do well’ means to fit, even exceed, the demands of capitalist productivity? That this is how the world is, and the best we can do is ‘just get on with it’?

The main challenge for those of us (still) working in higher education, I think, is how to foster and stimulate world-building capacities in every element of our practice. This, make no mistake, is much more difficult than what usually passes for ‘decolonizing’ (though even that is apparently sometimes too much for white colonial institutions), or inserting sessions, talks, or workshops about the climate crisis. It requires resistance to reproducing the relationship to the world that created and sustains the climate crisis – competition-oriented, extractive, and expropriative. It calls for a refusal to conform to the idea that knowledge should, in the end, serve the needs of (a) labour market, ‘economy’, or the state. It requires us to imagine a world beyond such terms. And then teach students how to build it.

  1. Hi, philosophy of science/general philosophy/general bro! Are you looking to mansplain stochastic phenomena to me? Please bear in mind that this is a blog post, and thus oriented towards a general audience, and that I have engaged with this problem at a slightly different level of complexity elsewhere (and yes, I am well aware of the literature). Here, read up. ↩︎
  2. One of the recent classes I taught that engaged with the question of denialism/strategic ignorance (in addition to a session on the sociology of ignorance in Social Theory and Politics of Knowledge, an undergraduate module I taught at Durham in 2021-23, and sessions on public engagement, expertise and authority, and environmental sociology in Public Sociology: Theory and Practice, a core MSc module at Durham, I teach a number of guest lectures on the relationship between knowledge and ignorance, scientific advice, etc.) was a pleasant surprise insofar as most students were well aware of the scale, scope, and reality of climate change. This is a pronounced change from some of my experiences in the preceding decade, when the likelihood of encountering at least the occasional climate skeptic, if not outright denialist (even if only by virtue of qualifying as the addressee of fn 1 above), was high(er). When asked, most of the students told me they had learned about climate change in geography at school. Geography teachers, I salute you. ↩︎
  3. The separation of sociology and politics in most UK degree programmes, for instance, continues to baffle me. ↩︎

Do you dream of the weather?

The weather, as writers on climate change from Amitav Ghosh to Jenny Offill (and many others) have been noting, hardly ever figures at the centre of the plot. Even in stories where a large climatic disaster determines the world they build (The Road, or Margaret Atwood’s MaddAddam series, or Octavia Butler’s Parables), the event is usually, in a somewhat punny phrase, precipitating: it happens before, or as a cause; it does not change, it does not change with us, and it cannot be changed.

It is weird to think that a concept so clearly defined by the tendency to change – namely, climate change – is at the same time an acknowledgment of the absolutely planetary scope of human agency (after all, it is human-induced climate change that should most concern us) and of its limits (after all, it is clear that we are locked into at least 1.5C of warming now, with all the unpredictability that brings). To think about the weather, then, is to dwell on – and at – the very boundary of the human condition: both what we can achieve – destroy, mostly – and what we cannot (repair, mostly). It is also, as Brian Wynne brilliantly analyzed, to revisit the boundaries between observation (or phenomenology), measurement (or the attempt at quantification/standardization), and indeterminacy, and thus to pose the question that forms the crux of one of the strands of my work: what is the relationship between knowing about and doing something about the future? Or, to put it slightly differently, is the future something we know about or something we do?

To dream of the weather, then, adds another degree of radical indeterminacy: to the extent to which dreams are not volitional (and even for fans of lucid dreaming, that is still a large extent), the incursion of weather into dreams further refracts the horizon of agency. In dreams we think we can choose what we do (or don’t do), but we are both in charge and not in charge; we are (again, with exceptions) not aware of the dream as we are producing it, but we are producing it; there is no-one else there, right?

It struck me some time ago that, to the best of my knowledge, not many people dream about the weather. Or, in the vein of the backdrop that Ghosh writes about, even if they do, they dream of the weather as something that just happens. True to form, I had a dream that featured a blizzard that very night; but it also featured a snow plough, or road sweeper/gritter, I am not sure which.

Last night, however, I had a dream of a storm cloud passing all over North America, and then getting to the UK. In my dream, the southwest tip of the UK – Cornwall, a bit of Dorset, Somerset – was the only part that was spared. This was strange, as I was sure that what precipitated the dream was reading the forecast about storm Nelson, which predicted high impact in the southwest, but almost none in the northeast, where I live. Yet, when I woke up, rain was lashing against my windows; a thick, low cloud hung over most of the coast.

Strange weather?

When it ends

In the summer of 2018, I came back to Cambridge from one of my travels to a yellowed, dusty patch of land. The grass – the only thing that grew in the too-shady back garden of the house my partner and I were renting – had not only wilted; it had literally burnt to the ground.

I burst into tears. As I sat in the garden crying, to (I think) the dismay of my increasingly bewildered partner, I pondered what a scene of death so close to home was doing – what it was doing in my back yard, and what it was doing to me. For it was neither the surprise nor the scale that shook me – I had witnessed both human and non-human destruction much vaster than a patch of grass in Cambridge; I had spent most of the preceding year and some reading on the politics, economics, and – as the famed expression goes – ‘the science’ of climate change (starting with the excellent Anthropocene reading group I attended while living in London), so I was well-versed, by then, in precisely what was likely to happen, how and when. It wasn’t the proximity, either, otherwise assumed to be a strong motivator: I certainly did not need climate change to happen in my literal ‘back yard’ in order to become concerned about it. If nothing else, I had come back to Cambridge from a prolonged stay in Serbia, where I had been observing the very same things, detailed here (including preparations for the mineral extraction that would become the main point of contention in the protests against Rio Tinto in 2022). To anyone who has lived outside the protected enclaves of the Global North, climate change has felt very real for quite some time.

What made me break down at the sight of that scorched patch of grass was its ordinariness – the fact that, in front of, beside, and around what for me was quite bluntly an extinction event, life seemed to go on as usual. No-one had warned me my back garden was a cemetery. Several months before that, at the very start of the first round of UCU strikes in 2018, I had raised the question of pension funds invested in fossil fuels, only to be casually told that one of the biggest USS shares was in Royal Dutch Shell (USS, and the University of Cambridge, have reluctantly committed to divestment since, but this is yet to yield any results in the case of USS). While universities make pompous statements about sustainability, a substantial chunk of their funding and operating revenue goes to activities that are at best one step removed from directly contributing to the climate crisis, from international (air) travel to building and construction. At Cambridge, I ran a reading group called Ontopolitics of the future, whose explicit question was: What survives in the Anthropocene? In my current experience, raising climate change tends to provoke uncomfortable silences, as if everyone had already accepted the inevitability of 1.5+ degrees of warming and the suffering it would inevitably come with.

This acceptance of death is a key feature of the concept of ‘slow death’ that Lauren Berlant introduced in Cruel Optimism:

“Slow death prospers not in traumatic events, as discrete time-framed phenomena like military encounters and genocides can appear to do, but in temporally labile environments whose qualities and whose contours in time and space are often identified with the presentness of ordinariness itself” (Berlant, 2011: 100).

Berlant’s emphasis on the ordinariness of death is a welcome addition to theoretical frameworks (like Foucault’s bio-, Mbembe’s necro- or Povinelli’s onto-politics) that see the administration of life and death as effects of sovereign power:

“Since catastrophe means change, crisis rhetoric belies the constitutive point that slow death—or the structurally induced attrition of persons keyed to their membership in certain populations—is neither a state of exception nor the opposite, mere banality, but a domain where an upsetting scene of living is revealed to be interwoven with ordinary life after all” (Berlant, 2011: 102).

Over the past year and some, I’ve spent a lot of time thinking about the concept of ‘slow death’ in relation to the Covid-19 pandemic (my contribution to the edited special issue on Encountering Berlant should be coming out in Geography Journal sometime this year). However, what brought back the scorched grass in Cambridge as I sat at home during the UK’s hottest day on record in 2022 was not the (inevitable) human, non-human, or infrastructural cost of climate change; it was, rather, the observation that for most academics life seemed to go on as usual, if a little hotter. From research concerns to driving to moaning over (the absence of) AC, there seemed to be little reflection on how our own modes of knowledge production – not to mention lifestyles – were directly contributing to heating the planet.

Of course, the paradox of knowledge and (in)action – or knowing and (not) doing – has long been at the crux of my own work, from performativity and critique of neoliberalism to the use of scientific evidence in the management of the Covid-19 pandemic. But with climate change, surely it has to be obvious to everyone that there is no way to just continue business as usual, that – while effects are surely differentially distributed according to privilege and other kinds of entitlement – no-one is really exempt from it?

Or so I thought, as I took an evening walk and passed a dead magpie on the pavement, which made me think of birds dying from heat exhaustion in India earlier in May (luckily, no other signs of mass bird extinction were in sight, so I returned home, already a bit light-headed from the heat). But as I absent-mindedly scrolled through Twitter (as well as attending part of a research meeting), what seemed obvious was the clear disconnect between modes of knowing and modes of being in the world. On the one hand, everyone was too hot, commenting on the unsustainability of housing, or the inability of transport networks to withstand temperatures over 40 degrees Celsius. On the other, academic knowledge production seemed to go on as if things such as ‘universities’, ‘promotions’, or ‘reviews’ had the span of geological time, rather than being – for the most part – a very recent blip in precisely the thing that led to this degree of warming: capitalism, and the drive to (over)produce, (over)compete, and expand.

It is true that these kinds of challenges – like existential crises – can really make people double down on whatever positions and identities they already have. This is quite obvious in the case of some political divisions – with, for instance, the death spirals of Covid-denialism, misogyny, and transphobia – but it happens in less explicitly polarizing ways too. In the context of knowledge production, this is something I have referred to as the combination of epistemic attachment and ontological bias. Epistemic attachment refers to being attached to our objects of knowledge; these can be as abstract as ‘class’ or ‘social structure’ or as concrete as specific people, problems, or situations. The relationship between us (as knowers) and what we know (our objects of knowledge) is the relationship between epistemic subjects and epistemic objects. Ontological bias, on the other hand, refers to the fact that our ways of knowing the world become so constitutive of who we are that we can fail to register when the conditions that rendered this mode of knowledge possible (or reliable) no longer obtain. (This, it is important to note, is different from having a ‘wrong’ or somehow ‘distorted’ image of epistemic objects; it is entirely conceivable to have an accurate representation on the wrong ontology, and vice versa.)

This is what happens when we carry on with academic research (or, as I’ve recently noted, the circuit of academic rituals) in a climate crisis. It is not that our analyses and publications stop being more or less accurate, more or less cited, more or less inspiring. Nor do the racism, classism, ableism, and misogyny of academia stop. It’s just that, technically speaking, the world in which all of these things happen is no longer the same world. A world 1.5C warmer (let alone 2 or 2.5, more or less certain now) is no longer the same world that gave rise to the interpretative networks and theoretical frameworks we overwhelmingly use.

In this sense, to me, continuing with academia as business as usual (only with AC) isn’t even akin to the proverbial polishing of brass on the Titanic, not least because the iceberg has likely already melted or at least calved several times over. What it brought to mind, instead, was Jeff VanderMeer’s Area X trilogy, and the way in which professional identities play out in it.

I’ve already written about Area X, in part because the analogy with climate change presents itself, and in part because I think that – in addition to Margaret Atwood’s MaddAddam and Octavia Butler’s Parables – it is the best literary (sometimes almost literal) depiction of the present moment. Area X (or Southern Reach, if you’re in the US) is about an ‘event’ – that is at the same time a space – advancing on the edge of the known, ‘civilized’ world. The event/space – ‘Area’ – is, in a clear parallel to the Strugatskys’ Zone, something akin to a parallel dimension: a world like our own, within our own, and accessible from our own, but not exactly hospitable to us. In VanderMeer’s trilogy, Area X is a lush green, indeed overgrown, space; as in the Zone, ‘nature is healing’ has a more ominous ring to it, for in Area X people, objects, and things disappear. Or reappear. Like bunnies. And husbands.

The three books of Area X are called Annihilation, Authority, and Acceptance. In the first book, the protagonist – whom we know only as the Biologist – goes on a mission to Area X, the area that has already swallowed (or maybe not) her husband. The other members of the expedition, whom we also know only by profession – the Anthropologist, the Psychologist – are also women. The second book, Authority, follows the chief administrator of Area X – whom we know as Control – as the area keeps expanding. Control eventually follows the Biologist into Area X. The third book – well, I’ll stop with the plot spoilers here, but let’s just say that the Biologist is no longer called the Biologist.

This, if anything, is the source of the slight reservation I have towards the use of professional identities, authority, and expertise in contexts like the climate crisis. Scientists for XR and related initiatives are both incredibly brave (especially those risking arrest, something I, as an immigrant, cannot do) and – needless to say – morally right; but the underlying emphasis on ‘the science’ too often relies on the assumption that right knowledge will lead to right action, which tends not to hold even for many ‘professional’ academics. In other words, it is not exactly that people do not act on climate change because they do not know or do not believe the science (some do, at least). It is that systems and institutions – and, in many cases, this includes systems and institutions of knowledge production, such as universities – are organized in ways that make any kind of action that would refuse to reproduce (let alone actually disrupt) the logic of extractive capitalism increasingly difficult.

What to do? It is clear that we are now living on the boundary of Area X, and it is fast expanding. Area X is what was in my back garden in Cambridge. Area X is what is outside when you open the windows in the north of England, and what drifts inside has the temperature of the exhaust of a jet that has just landed. The magpie that was left to die in the middle of the road in Jesmond crossed Area X.

For my part, I know it is no longer sufficient to approach Area X as the Sociologist (or Theorist, or Anthropologist, or whatever other professional identity I have – reluctantly, as with all identities – pursued); I tried doing that for Covid-19, and it did not get very far. Instead, I’d urge my academic colleagues to seriously start thinking about what we are and what we do when these labels – Sociologist, Biologist, Anthropologist, Scientist – no longer have a meaning. For this moment may come earlier than many of us can imagine; by then, we had better have worked out the relationship between annihilation, authority, and acceptance.

Area Y: The Necropolitics of Post-Socialism

This summer, I spent almost a month in Serbia and Montenegro (yes, these are two different countries, despite the New York Times still refusing to acknowledge this). This is about seven times as long as I would normally stay. The two principal reasons are that my mother, who lives in Belgrade, is ill, and that I was planning to get a bit of time to quietly sit and write my thesis on the Adriatic coast of Montenegro. How the latter turned out in light of the knowledge of the former I leave to the imagination (tl;dr: not well). It did, however, give me ample time to reflect on the post-socialist condition, which I haven’t done in a while, and to get outside Belgrade, to which I normally confine my brief visits.

The way in which the perverse necro/bio-politics of post-socialism obtains in my mother’s illness, in the landscape, and in the socio-material fits almost too perfectly into what has for years been the dominant style of writing about places that used to be behind the Iron Curtain (or, in the case of Yugoslavia, on its borders). Social theory’s favourite ruins – the ruins of socialism – are repeatedly re-valorised through being dusted off and resurrected as yet another alter-world to provide the mirror image to the here and the now (the here and the now, obviously, being capitalism). During the Cold War, the Left had its alter-image in the Soviet Union; now, the antidote to neoliberalism is provided not through the actual ruins of real socialism – that would be a tad too much to handle – but through the re-invention of the potential of socialism to provide, in the tellingly polysemic title of MoMA’s recently opened exhibition on architecture in Yugoslavia, concrete utopias.

Don’t get me wrong: I would love to see the exhibition, and I am sure that it offers much to learn, especially for those who did not have the dubious privilege of having grown up on both sides of socialism. It’s not the absence of nuance that makes me nauseous in encounters with socialist nostalgia: a lot of it, as a form of cultural production, is made by well-meaning people and, in some cases, incredibly well-researched. It’s that resurrecting the hipsterified golems of post-socialism serves little purpose other than to underline their ontological status as a source of comparison for the West – cannon-fodder for imaginaries of a world so bereft of hope that it would rather replay its past dreams than face the potential waking nightmare of its future.

It’s precisely this process that leaves them unable to die, much like the ghosts/apparitions/copies in Lem’s (and Tarkovsky’s) Solaris, and in VanderMeer’s Southern Reach trilogy. In VanderMeer’s books, members of the eleventh expedition (or, rather, their copies) who return to the ‘real world’ after exposure to Area X develop cancer and die pretty quickly. Life in post-socialism is very much this: shadows or copies of former people confusedly going about their daily business, or revisiting the places that once made sense to them, which, sometimes, they have to purchase as repackaged ‘post-socialism’; in this sense, the parable of Roadside Picnic/Stalker as the perennial museum of post-communism is really prophetic.

The necropolitical profile of these parts of former Yugoslavia is, in fact, pretty unexceptional. For years, research has shown that rapid privatisation increases mortality, even when controlling for other factors. Obviously, the state still feigns perfunctory care for the elderly, but healthcare is cumbersome, inefficient and, in most cases, barely palliative. Smoking and heavy drinking are de rigueur: in winter, Belgrade cafés and pubs turn into proper smokehouses. Speaking of which, vegetarianism is still often, if benevolently, ridiculed. Fossil fuel extraction is ubiquitous. According to this report from 2014, Serbia had the second-highest rate of premature deaths due to air pollution in Europe. And that is without even getting close to the Thing That Can’t Be Talked About – the environmental effects of the NATO intervention in 1999.

An apt illustration comes as I travel to Western Serbia to give a talk at the anthropology seminar at Petnica Science Centre, where I used to work between 2000 and 2008. Petnica is a unique institution that developed in the 1980s and 1990s as part science camp, part extracurricular interdisciplinary research institute, where electronics researchers would share tables in the canteen with geologists, and physicists would talk (arguably, not always agreeing) to anthropologists. Founded in part by the Young Researchers of Serbia (then Yugoslavia), a forward-looking environmental exploration and protection group, the place used to flaunt its green credentials. Today, it is funded by the state – and fully branded by the Oil Industry of Serbia. The latter is Serbian only in name, having become a subsidiary of the Russian fossil fuel giant Gazpromneft. What could arguably be dubbed Serbia’s future research elite is thus raised to fully accept the ubiquity of fossil fuels, not only as a source of energy but, quite literally, as what runs the facilities they need in order to work.

These researchers can still consider themselves lucky. The other part of the Serbian economy that is actually working is the factories, or rather production facilities, of multinational companies. In these companies, workers are given 12-hour shifts, banned from unionising, and, as a series of relatively recent reports revealed, issued with adult diapers so as to render toilet breaks unnecessary.

As Elizabeth Povinelli argued, following Achille Mbembe, geontopower – the production of life and nonlife, and the creation of the distinction between them, including what is allowed to live and what is allowed to die – is the primary mode of the exercise of power in late liberalism. A less frequently examined way of sustaining the late liberal order is the production of semi-dependent semi-peripheries. Precisely because they are not the world’s slums, and because they are not former colonies, they receive comparatively little attention. Instead, they are mined for resources (human and inhuman). That the interaction between the two regularly produces outcomes guaranteed to deplete the first is of little relevance. The reserves, unlike those of fossil fuels, are almost endless.

The Serbian government does its share in ensuring that the supply of cheap labour never runs out, by launching endless campaigns to stimulate reproduction. It seems to be working: babies are increasingly the ‘it’ accessory in cafés and bars. Officially, stimulating the birth rate is meant to offset the ‘cost’ of pensions, which the IMF insists should not increase. Unofficially, of course, the easiest way to adjust for this is to make sure pensioners are left behind. Much like the current hype about its legacy, the necropolitics of post-socialism operates primarily through foregrounding its Instagrammable elements, and hiding the ugly, non-productive ones.

Much like in VanderMeer’s Area X, knowledge that the border is advancing could be a mixed blessing: as Danowski and Viveiros de Castro argued in a different context, the end of the world comes more easily to those for whom the world has already ended, more than once. Not unlike what Scranton argued in Learning to Die in the Anthropocene – this, perhaps, rather than sanitised dreams of a utopian future, is one thing worth resurrecting from post-socialism.

Writing our way out of neoliberalism? For an ecology of publishing

[This blog post is written in preparation for the panel Thinking knowledge production without the university that I am organising at the Sociological Review’s conference Undisciplining: conversations from the edges, Newcastle, Gateshead, 18-21 June 2018. Reflections from other participants are here. I am planning to expand on this part during and after the conference, so questions and comments welcome!]

What kind of writing and publishing practices might support knowledge that is not embedded in the neoliberal university? I’ve been interested in this question for a long while, in part because it is a really tough one. As academics – and certainly as academics in the social sciences and humanities – writing and publishing is, ultimately, what we do. Of course, our work frequently also involves teaching – or, as those with a love for neat terminologies like to call it, ‘knowledge transmission’ – as well as different forms of its communication or presentation, which we (sometimes performatively) refer to as ‘public engagement’. Even those, however, often rely on, or at least lead to, the production of written text of some sort: textbooks, academic blogs. This is no surprise: the modern Western academic tradition is highly reliant on the written word. Obviously, in this sense, questions and problems of writing/publishing and its relationship with knowledge practices are both older and much broader than the contemporary economy of knowledge production, which we tend to refer to as neoliberal. They may also outlast it, if, indeed, we can imagine the end of neoliberalism. However, precisely for this reason, it makes sense to think about how we might reconstruct writing and publishing practices in ways that weaken, rather than contribute to, the reproduction of neoliberal practices of knowledge production.

The difficulty with thinking outside of the current framework becomes apparent when we try to think of the form these practices could take. While there are many publications not directly contributing to the publishing industry – blogs, zines, open-access, collaborative, non-paywalled articles all come to mind – they all too easily become embedded in the same dynamic. As a result, they are either eschewed because ‘they do not count’, or else they are made to count (become countable) by being reinserted into the processes of valorisation via the proxy of ‘impact’. As I’ve argued in this article (written with my former colleague from the UNIKE (Universities in the knowledge economy) project, economic geographer Chris Muellerleile), even forms of knowledge production that explicitly seek to ‘disrupt’ such modes, such as Open Access or publish first/review later platforms, often rely on – even if implicit – assumptions that can feed into the logic of evaluation and competition. This is not to say that restricting access to scientific publications is in any way desirable. However, we need to accept that opening access (under certain circumstances, for certain parts of the population) does not in and of itself do much to ‘disrupt’ the broader political and economic system in which knowledge is embedded.

Publish or…publish 

Unsurprisingly, the hypocrisy of the current system disproportionately affects early career and precarious scholars. ‘Succeeding’ in academia – i.e. escaping precarity – hinges on publishing in recognised formats and outlets: this means, almost exclusively, peer-reviewed journals in one’s discipline, and books. The process is itself costly and risky. Turnover times can be ridiculously long: a chapter for an edited volume that I wrote in July 2015 was finally published last month, presumably because other – more senior, obviously – contributors took much longer. The chapter deals with a case from 2014, which makes the three-year lag between its accepted version and publication problematic for all sorts of reasons. On the other hand, even when good and relatively timely, the process of peer review can be soul-crushing for junior scholars (see: Reviewer No. 2). Obviously, if this always resulted in a better final version of the article, we could argue it would be worthwhile. However, while some peer reviewers offer constructive feedback that really improves the process of publication, this is not always the case. Increasingly, because peer review takes time and effort, it is kicked down the academic ladder, so it becomes a case of who can afford to review – or, equally (if not more) often, who cannot afford to say no to a review.

In other words, just like other aspects of academic knowledge production, the reviewing and publishing process is plagued by stark inequalities. ‘Big names’ or star professors can get away with only perfunctory – if any – peer review; a series of clear cases of plagiarism or self-plagiarism, not to mention a string of recent books with bombastic titles that read like barely-edited transcripts of undergraduate seminars (there are plenty around), are a testament to this. Just in case, many of these ‘Trump academics‘ keep their own journals or book series as a side hustle, where the degree of familiarity with the editorial board is often the easiest path to publication.

What does this all lead to? The net result is the proliferation of academic publications of all sorts, what some scholars have dubbed the shift from an economy of scarcity to one of abundance. However, more is not necessarily better: while it’s difficult (if not entirely useless) to speak of scholarly publications in universal terms, as the frequently (mis-)cited piece of research argued, most academic articles are read and cited by very few people. It’s quite common for academics to complain they can’t keep up with the scholarly production in their field, even when narrowed down to a very tight disciplinary specialism. Some of this, obviously, has to do with the changing structure of academic labour, in particular the increasing load of administration and the endless rounds of research evaluation and grant application writing, which siphon away time for reading. But some of it has to do with the simple fact that there is so much more published stuff around: scholars compete with each other over who is going to get more ‘out there’, and sooner. As a result, people rarely take the time to read others’ work carefully, especially if it is outside of their narrow specialism or discipline. Careful reading is substituted with sycophantic shout-outs via Twitter, or with book reviews that are often thinly veiled self-serving praise, revealing more about the reviewer’s career plans than about the publication being reviewed.

For an ecology of knowledge production 

So, how is it possible to work against all this? Given that the purpose of this panel was to start thinking about actual solutions, rather than repeat tired complaints about the nature of knowledge production in the neoliberal academia, I am going to put forward two concrete proposals: one on the level of conceptual – not to say ‘behavioural’ – change; the other on the level of institutions, or organisations. The first is a commitment to, simply, publish less. Much as in environmental pollution, where solutions such as recycling, ‘natural’ materials, and free and ethical trading are a far less effective way to minimise CO2 emissions than just reducing consumption (and production), in writing and publishing we could move towards a progressive divestment from the idea that the goal is to produce as much as possible, and put it ‘out there’ as quickly as possible. To be clear, this isn’t a thinly-veiled plea for ‘slow’ scholarship. Some disciplines or topics clearly call for a quicker turnover – one can think of analyses of current affairs, environmental or political science. On the other hand, some topics or disciplines require time, especially when there is value in observing how trends develop over a period of time. Recognising the divergent temporal cycles of knowledge production not only supports the dignity of the academic profession, but also acknowledges that knowledge production happens outside of academia, and should not – need not – necessarily be dependent on being recognised or rewarded within it. As long as the system rewards output, the rate of output will tend to increase: in this sense, competition can be seen not necessarily as an outcome as much as a byproduct of our desire to ‘populate’ the world with the fruits of our labour. Publishing less, in this sense, is not so much a performative act as the first step in divesting from the incessant drive of competitive logic that permeates both the academia and the world ‘outside’ of it.

One way is to, simply, publish less.

Publishers play a very important role in this ecology of knowledge production. Much has been made of the so-called ‘predatory’ journals and publishers, clearly seeking even a marginal profit: the less often mentioned flipside is that almost all publishing is to some degree ‘predatory’, in the sense in which editors seek out authors whose work they believe can sell – that is, sell for a profit that goes to the publisher, and sometimes the editors, while authors can, at best, hope for an occasional drip from royalties (unless, again, they are star/Trump academics, in which case they can aspire to hefty book advances). Given the way in which the imperative to publish is ingrained in the dynamics of academic career progression – and, one might argue, in the academic psyche – it is no surprise that multiple publishing platforms, often of dubious quality, thrive in this landscape.

Instead of this, we could aim for a combination of publishing cooperatives – perhaps embedded in professional societies – and a small number of established journals, which could serve as platforms or hubs for a variety of formats, from blogs to full-blown monographs. These journals would have an established, publicly known, and well-funded board of reviewers and editors. Combined, these principles could enable publishing to serve multiple purposes, communities and formats, without the need to reproduce a harmful hierarchy embedded in competitive market-oriented models. It seems to me that the Sociological Review, which is organising this conference, could be going towards this model. Another journal with multiple formats and an online forum is the Social Epistemology Review and Reply Collective. I am sure there are others that could serve as blueprints for this new ecology of knowledge production; perhaps, together, we can start thinking how to build it.

Between legitimation and imagination: epistemic attachment, ontological bias, and thinking about the future

[Image: Some swans are…grey (Cambridge, August 2017)]

A serious line of division runs through my household. It does not concern politics, music, or even sports: it concerns the possibility of a large-scale collapse of social and political order, which I consider very likely. Specific scenarios aside for the time being, let’s just say we are talking more a human-made, climate-change-induced breakdown involving possibly protracted and almost certainly lethal conflict over resources than ‘giant asteroid wipes out Earth’ or ‘rogue AI takes over and destroys humanity’.

Ontological security or epistemic positioning?

It may be tempting to attribute the tendency towards catastrophic predictions to psychological factors rooted in individual histories. My childhood and adolescence took place alongside the multi-stage collapse of the country once known as the Socialist Federal Republic of Yugoslavia. First came the economic crisis, when the failure of ‘shock therapy’ to boost stalling productivity (surprise!) resulted in massive inflation; then social and political disintegration, as the country descended into a series of violent conflicts whose consequences went far beyond the actual front lines; and then actual physical collapse, as Serbia’s long involvement in wars in the region was brought to a halt by the NATO intervention in 1999, which destroyed most of the country’s infrastructure, including parts of Belgrade, where I was living at the time*. It makes sense to assume this results in quite a different sense of ontological security than the one that, say, the predictability of a middle-class English childhood would afford.

But does predictability actually work against the capacity to make accurate predictions? This may seem not only contradictory but also counterintuitive – any calculation of risk has to take into account not just the likelihood, but also the nature of the source of threat involved, and thus necessarily draws on the assumption of (some degree of) empirical regularity. However, what about events outside of this scope? A recent article by Faulkner, Feduzi and Runde offers a good formalization of this problem (the Black Swans and ‘unknown unknowns’) in the context of the (limited) possibility to imagine different outcomes (see table below). Of course, as Beck noted a while ago, the perception of ‘risk’ (as well as, by extension, any other kind of future-oriented thinking) is profoundly social: it depends on ‘calculative devices‘ and procedures employed by networks and institutions of knowledge production (universities, research institutes, think tanks, and the like), as well as on how they are presented in, for instance, literature and the media.

[Table from Faulkner, Feduzi and Runde, ‘Unknowns, Black Swans and the risk/uncertainty distinction’, Cambridge Journal of Economics 41 (5), August 2017, 1279-1302]

Unknown unknowns

In The Great Derangement (probably the best book I’ve read in 2017), Amitav Ghosh argues that this can explain, for instance, the surprising absence of literary engagement with the problem of climate change. The problem, he claims, is endemic to Western modernity: a linear vision of history cannot conceive of a problem that exceeds its own scale**. This isn’t the case only with ‘really big problems’ such as economic crises, climate change, or wars: it also applies to specific cases such as elections or referendums. Of course, social scientists – especially those qualitatively inclined – tend to emphasise that, at best, we aim to explain events retroactively. Methodological modesty is good (and advisable), but avoiding thinking about the ways in which academic knowledge production is intertwined with the possibility of prediction is useless, for at least two reasons.

One is that, as reflected in the (by now overwrought and overdetermined) crisis of expertise and ‘post-truth’, social researchers increasingly find themselves in situations where they are expected to give authoritative statements about the future direction of events (for instance, about the impact of Brexit). Even if they disavow this form of positioning, the very idea of social science rests on the (no matter how implicit) assumption that at least some mechanisms or classes of objects will exhibit the same characteristics across cases; consequently, the possibility of inference is implied, if not always practised. Secondly, given the scope of challenges societies face at present, it seems ridiculous not to even attempt to engage with – and, if possible, refine – the capacity to think about how they will develop in the future. While there is quite a bit of research on individual predictive capacity and the way collective reasoning can correct for cognitive bias, most of these models – given that they are usually based on experiments, or simulations – cannot account for the way in which social structures, institutions, and cultures of knowledge production interact with the capacity to theorise, model, and think about the future.

The relationship between social, political, and economic factors, on the one hand, and knowledge (including knowledge about those factors), on the other, has been at the core of my work, including my current PhD. While it may seem minor compared to issues such as wars or revolutions, the future of universities offers a perfect case to study the relationship between epistemic positioning, positionality, and the capacity to make authoritative statements about reality: what Boltanski’s sociology of critique refers to as ‘complex externality’. One of the things it allowed me to realise is that while there is a good tradition of reflecting on positionality (or, in positivist terms, cognitive ‘bias’) in relation to categories such as gender, race, or class, we are still far from successfully theorising something we could call ‘ontological bias’: epistemic attachment to the object of research.

The postdoctoral project I am developing extends this question and aims to understand its implications in the context of generating and disseminating knowledge that can allow us to predict – make more accurate assessments of – the future of complex social phenomena such as global warming or the development of artificial intelligence. This question has, in fact, been informed by my own history, but in a slightly different manner than the one implied by the concept of ontological security.

Legitimation and prediction: the case of former Yugoslavia

The Socialist Federal Republic of Yugoslavia had relatively sophisticated and well-developed networks of social scientists, in which both of my parents were involved***. Yet, of all the philosophers, sociologists, political scientists etc. writing about the future of the Yugoslav federation, only one – to the best of my knowledge – predicted, in eerie detail, the political crisis that would lead to its collapse: Bogdan Denitch, whose Legitimation of a revolution: the Yugoslav case (1976) is, in my opinion, one of the best books about former Yugoslavia ever written.

A Yugoslav-American, Denitch was a professor of sociology at the City University of New York. He was also a family friend, a fact I considered of little significance (having only met him once, when I was four, and my mother and I were spending a part of our summer holiday at his house in Croatia; my only memory of it is being terrified of tortoises roaming freely in the garden), until I began researching the material for my book on education policies and the Yugoslav crisis. In the years that followed (I managed to talk to him again in 2012; he passed away in 2016), I kept coming back to the question: what made Denitch more successful in ‘predicting’ the crisis that would ultimately lead to the dissolution of former Yugoslavia than virtually anyone writing on Yugoslavia at the time?

Denitch had a pretty interesting trajectory. Born in 1929 to Croat Serb parents, he spent his childhood in a series of countries (including Greece and Egypt), following his diplomat father; in 1946, the family emigrated to the United States (the fact that his father was a civil servant in the previous government would have made it impossible for them to continue living in Yugoslavia after the Communist regime, led by Josip Broz Tito, formally took over). There, Denitch (in evident defiance of his upper-middle-class legacy) trained as a factory worker, while studying for a degree in sociology at CUNY. He also joined the Democratic Socialist Alliance – one of the American socialist parties – of which he would remain a member (and later functionary) for the rest of his life.

In 1968, Denitch was awarded a major research grant to study Yugoslav elites. The project was not without risks: while Yugoslavia was more open to ‘the West’ than other countries in Eastern Europe, visits by international scholars were strictly monitored. My mother recalls receiving a house visit from an agent of the UDBA, the Yugoslav secret police – not quite the KGB but you get the drift – who tried to elicit the confession that Denitch was indeed a CIA agent, and, in the absence of that, the promise that she would occasionally report on him****.

Despite these minor setbacks, the research continued: Legitimation of a revolution is one of its outcomes. In 1973, Denitch was awarded a PhD by Columbia University and started teaching at CUNY, eventually retiring in 1994. His last book, Ethnic nationalism: the tragic death of Yugoslavia, came out in the same year, a reflection on the conflict that was still going on at the time, and whose architecture he had foreseen with such clarity eighteen years earlier (the book is remarkably bereft of “told-you-so”-isms, so warmly recommended for those wishing to learn more about Yugoslavia’s dissolution).

Did personal history, in this sense, have a bearing on one’s epistemic position, and by extension, on the capacity to predict events? One explanation (prevalent in certain versions of popular intellectual history) would be that Denitch’s position as both a Yugoslav and an American would have allowed him to escape the ideological traps other scholars were more likely to fall into. Yugoslavs, presumably, would be at pains to prove socialism was functioning; Americans, on the other hand, perhaps egalitarian in theory but certainly suspicious of Communist revolutions in practice, would be looking to prove it wasn’t, at least not as an economic model. Yet this assumption hardly withstands even the lightest empirical scrutiny. At least up until the show trials of the Praxis philosophers, there was a lively critique of Yugoslav socialism within Yugoslavia itself; despite the mandatory coating of jargon, Yugoslav scholars were quite far from being uniformly bright-eyed and bushy-tailed about socialism. Similarly, quite a few American scholars were very much in favour of the Yugoslav model, eager, if anything, to show that market socialism was possible – that is, that it’s possible to have a relatively progressive social policy and still be able to afford nice things. Herein, I believe, lies the beginning of the answer as to why neither of these groups was able to predict the type or the scale of the crisis that would eventually lead to the dissolution of former Yugoslavia.

Simply put, both groups of scholars depended on Yugoslavia as a source of legitimation of their work, though for different reasons. For Yugoslav scholars, the ‘exceptionality’ of the Yugoslav model was the source of epistemic legitimacy, particularly in the context of international scientific collaboration: their authority was, in part at least, constructed on their identity and positioning as possessors of ‘local’ knowledge (Bockman and Eyal’s excellent analysis of the transnational roots of neoliberalism makes an analogous point in terms of positioning in the context of the collaboration between ‘Eastern’ and ‘Western’ economists). In addition to this, many of the Yugoslav scholars were born and raised under socialism: while some of them did travel to the West, the opportunities were still scarce and many were subject to ideological pre-screening. In this sense, both their professional and their personal identity depended on the continued existence of Yugoslavia as an object; they could imagine different ways in which it could be transformed, but not really that it could be obliterated.

For scholars from the West, on the other hand, Yugoslavia served as a perfect experiment in mixing capitalism and socialism. Those more on the left saw it as a beacon of hope that socialism need not go hand-in-hand with Stalinist-style repression. Those who were more on the right saw it as proof that limited market exchange can function even in command economies, and deduced (correctly) that the promise of supporting failing economies in exchange for access to future consumer markets could be used as a lever to bring the Eastern Bloc in line with the rest of the capitalist world. If no one foresaw the war, it was because it played no role in either of these epistemic constructs.

This is where Denitch’s background would have afforded a distinct advantage. The fact that his parents came from the Serb minority in Croatia meant he never lost sight of the salience of ethnicity as a form of political identification, despite the fact that socialism glossed over local nationalisms. His Yugoslav upbringing provided him not only with fluency in the language(s), but also with a degree of shared cultural references that made it easier to participate in local communities, including those composed of intellectuals. On the other hand, his entire professional and political socialization took place in the States: this meant he was attached to Yugoslavia as a case, but not necessarily as an object. Not only was his childhood spent away from the country; the fact that his parents had left Yugoslavia after the regime change at the end of World War II meant that, in a way, for him, Yugoslavia-as-object was already dead. Last, but not least, Denitch was a socialist, but one committed to building socialism ‘at home’. This meant that his investment in the Yugoslav model of socialism was, if anything, practical rather than principled: in other words, he was interested in its actual functioning, not in demonstrating its successes as a marriage of markets and social justice. This epistemic position, in sum, would have provided the combination needed to imagine the scenario of Yugoslav dissolution: a sufficient degree of attachment to be able to look deeply into a problem and understand its possible transformations; and a sufficient degree of detachment to be able to see that the object of knowledge may not be there forever.

Onwards to the…future?

What can we learn from the story? Balancing between attachment and detachment is, I think, one of the key challenges in any practice of knowing the social world. It’s always been there; it cannot be, in any meaningful way, resolved. But I think it will become more and more important as the objects – or ‘problems’ – we engage with grow in complexity and become increasingly central to the definition of humanity as such. Which means we need to be getting better at it.

 

———————————-

(*) I rarely bring this up as I think it overdramatizes the point – Belgrade was relatively safe, especially compared to other parts of former Yugoslavia, and I had the good fortune never to experience the trauma or hardship that people in places like Bosnia, Kosovo, or Croatia did.

(**) As Jane Bennett noted in Vibrant Matter, this resonates with Adorno’s notion of non-identity in Negative Dialectics: a concept always exceeds our capacity to know it. We can see object-oriented ontology (e.g. Timothy Morton’s Hyperobjects) as the ontological version of the same argument: the sheer size of the problem deters us from even attempting to grasp it in its entirety.

(***) This bit lends itself easily to the Bourdieusian “aha!” argument – academics breed academics, etc. The picture, however, is a bit more complex – I didn’t grow up with my father and, until about 16, had a very vague idea of what my mother did for a living.

(****) Legend has it my mother showed the agent the door and told him never to call on her again, prompting my grandmother – her mother – to buy funeral attire, assuming her only daughter would soon be thrown into prison and possibly murdered. Luckily, Yugoslavia was not really the Soviet Union, so this did not come to pass.

Critters, Critics, and Californian Theory – review of Haraway’s Staying with the Trouble

Coproduction

 

[This review was originally published on the blog of the Political Economy Research Centre as part of its Anthropocene Reading Group, as well as on the blog of the Centre for Understanding Sustainable Prosperity]

 

Donna Haraway, Staying with the Trouble: Making Kin in the Chthulucene (Duke University Press, 2016)

From the opening, Donna Haraway’s recent book reads like a nice hybrid of theoretical conversation and science fiction. Crescendoing in the closing Camille Stories, the outcome of a writing experiment of imagining five future generations, “Staying with the trouble” weaves together – like the cat’s cradle, one of the recurrent metaphors in the book – staple Harawayian themes of the fluidity of boundaries between human and variously defined ‘Others’, metamorphoses of gender, the role of technology in modifying biology, and the related transformation of the biosphere – ‘Gaia’ – in interaction with human species. Eschewing the term ‘Anthropocene’, which she (somewhat predictably) associates with Enlightenment-centric, tool-obsessed rationality, Haraway births ‘Chthulucene’ – which, to be specific, has nothing to do with the famous monster of H.P. Lovecraft’s imagination, instead being named after a species of spider, Pimoa cthulhu, native to Haraway’s corner of Western California.

This attempt to avoid dealing with human(-made) Others – like Lovecraft’s “misogynist racial-nightmare monster” – is the key unresolved issue in the book. While the tone is rightfully respectful – even celebratory – of nonhuman critters, it remains curiously underdefined in relation to human ones. This is evident in the treatment of Eichmann and the problem of evil. Following Arendt, Haraway sees Eichmann’s refusal to think about the consequences of his actions as the epitome of the banality of evil – the same kind of unthinking that leads to the existing ecological crisis. That more thinking should appear a natural antidote, and a solution, to the long-term destruction of the biosphere is only logical (if slightly self-serving) from the standpoint of developing a critical theory whose aim is to save the world from its ultimate extinction. The question, however, is what to do if thoughts and stories are not enough?

The problem with a political philosophy founded on belief in the power of discourse is that it remains dogmatically committed to the idea that if one can only change the story, one can change the world. The power of stories as “worlding” practices fundamentally rests on the assumption that joint stories can be developed with Others, or, alternatively, that the Earth is big enough to accommodate those with which no such thing is possible. This leads Haraway to present a vision of a post-apocalyptic future Earth, in which the population has been decimated to levels that allow human groups to exist at sufficient distance from each other. What this doesn’t take into account is that differently defined Others may have different stories, some of which may be fundamentally incompatible with ours – as recently reflected in debates over ‘alternative facts’ or ‘post-truth’, but present in different versions of science and culture wars, not to mention actual violent conflicts. In this sense, there is no suggestion of sympoiesis with the Eichmanns of this world; the question of how to go about dealing with human Others – especially if they are, in Kristeva’s terms, profoundly abject – is the kind of trouble “Staying with the trouble” is quite content to stay out of.

Sympoiesis seems reserved for non-humans, which appear to go along happily with human attempts to ‘become-with’ them. But this is easier when ‘Others’ do not, technically speaking, have a voice: whether we like it or not, few of the non-human critters have efficient means to communicate their preferences in terms of political organisation, speaking order at seminars, or participation in elections. The critical practice of com-menting, to which Haraway attributes much of the writing in the book, is only possible to the extent to which the Other has equal means and capacities to contribute to the discussion. As in the figure of the Speaker for the Dead, the Other is always spoken-for, the tragedy of its extinction obscuring the potential conflict or irreconcilability between species.

The idea of a com-pliant Other can, of course, be seen as an integral element of the mythopoetic scaffolding of West Coast academia, where the notion of the fluidity of lifestyle choices probably has near-orthodox status. It’s difficult not to read parts of the book, such as the following passage, as not-too-fictional accounts of lived experiences of the Californian intellectual elite (including Haraway herself):

“In the infectious new settlements, every new child must have at least three parents, who may or may not practice new or old genders. Corporeal differences, along with their fraught histories, are cherished. Throughout life, the human person may adopt further bodily modifications for pleasure and aesthetics or for work, as long as the modifications tend to both symbionts’ well-being in the humus of sympoiesis” (p. 133-5)

The problem with this type of theorizing is not so much that it universalises a concept of humanity that resembles an extended Comic-Con with militant recycling; reducing ideas to their political-cultural-economic background is not a particularly innovative critical move. It is that it fails to account for the challenges and dangers posed by the friction of multiple human lives in constrained spaces, and the ways in which personal histories and trajectories interact with the configurations of place, class, and ownership, in ways that can lead to tragedies like the Grenfell Tower fire in London.

In other words, what “Staying with the trouble” lacks is a more profound sense of political economy, and the ways in which social relations influence how different organisms interact with their environment – including how they compete for its scarce resources, often to the point of mutual extinction. Even if the absolution of human woes by merging one’s DNA with that of fellow creatures works well as an SF metaphor, as a tool for critical analysis it tends to avoid the (often literally) rough edges of their bodies. It is not uncommon even for human bodies to reject human organs; more importantly, the political history of humankind is, to a great degree, the story of various groups of humans excluding other humans from the category of humans (colonized ‘Others’, slaves), citizens (women, foreigners), or persons with full economic and political rights (immigrants, and again women). This theme is evident in the contemporary treatment of refugees, but it is also preserved in the apparently more stable boundaries between human groups in the Camille Stories. In this context, the transplantation of insect parts to acquire consciousness of what it means to inhabit the body of another species has more of a whiff of transhumanist enhancement than of an attempt to confront head-on (antennae-first?) the manifold problems related to human coexistence on a rapidly warming planet.

At the end of the day, solutions to climate change may be less glamorous than the fantasy of escaping global warming by taking a dip in the primordial soup. In other words, they may require some good ol’ politics, which fundamentally means learning to deal with Others even if they are not as friendly as those in Haraway’s story; even if, as the Eichmanns and Trumps of this world seem to suggest, their stories may have nothing to do with ours. In this sense, it is the old question of living with human Others, including abject ones, that we may have to engage with in the AnthropoCapitaloCthulucene: the monsters that we created, and the monsters that are us.

Jana Bacevic is a PhD candidate at the Department of Sociology at the University of Cambridge, and has a PhD in social anthropology from the University of Belgrade. Her interests lie at the intersection of social theory, sociology of knowledge, and political sociology; her current work deals with the theory and practice of critique in the transformation of higher education and research in the UK.

 

Zygmunt Bauman and the sociologies of end times

[This post was originally published at the Sociological Review blog’s Special Issue on Zygmunt Bauman, 13 April 2017]

“Morality, as it were, is a functional prerequisite of a world with an in-built finality and irreversibility of choices. Postmodern culture does not know of such a world.”

Zygmunt Bauman, Sociology and postmodernity

Getting reacquainted with Bauman’s 1988 essay “Sociology and postmodernity”, I accidentally misread the first word of this quote as “mortality”. In the context of the writing of this piece, it would be easy to interpret this as a Freudian slip – yet, as slips often do, it betrays a deeper unease. If it is true that morality is a functional prerequisite of a finite world, it is even truer that such a world calls for mortality – the ultimate human experience of irreversibility. In the context of trans- and post-humanism, as well as the growing awareness of the fact that the world, as the place inhabited (and inhabitable) by human beings, can end, what can Bauman teach us about both?

In Sociology and postmodernity, Bauman assumes the position at the crossroads of two historical (social, cultural) periods: modernity and postmodernity. Turning away from the past to look towards the future, he offers thoughts on what a sociology adapted to the study of postmodern condition would be like. Instead of a “postmodern sociology” as a mimetic representation of (even if a pragmatic response to) postmodernity, he argues for a sociology that attempts to give a comprehensive account of the “aggregate of aspects” that cohere into a new, consumer society: the sociology of postmodernity. This form of account eschews the observation of the new as a deterioration, or aberration, of the old, and instead aims to come to terms with the system whose contours Bauman will go on to develop in his later work: the system characterised by a plurality of possible worlds, and not necessarily a way to reconcile them.

The point in time in which he writes lends itself fortuitously to the argument of the essay. Not only did Legislators and interpreters, in which he reframes intellectuals as translators between different cultural worlds, come out a year earlier; the publication of Sociology and postmodernity briefly precedes 1989, the year that would indeed usher in a wholly new period in the history of Europe, including in Bauman’s native Poland.

On the one hand, he takes the long view back to post-war Europe, built, as it was, on the legacy of the Holocaust as a pathology of modernity, and two approaches to preventing its repetition – market liberalism and political freedoms in the West, and planned economies and more restrictive political regimes in the Central and Eastern parts of the subcontinent. On the other, he engages with some of the dilemmas for the study of society that the approaching fall of the Berlin Wall and the eventual unification of those two hitherto separated worlds was going to open. In this sense, Bauman really has the privilege of a two-facing version of Benjamin’s Angel of History. This probably helped him recognize the false dichotomy of consumer freedom and dictatorship over needs, which, as he stated, was quickly becoming the only imaginable alternative to the system – at least insofar as the imagination in question was that of the system itself.

The present moment is not all that dissimilar from the one in which Bauman was writing. We regularly encounter pronouncements of the end of a whole host of things, among them history, the classical division of labour, standards of objectivity in reporting, nation-states, even – or so we hope – capitalism itself. While some of Bauman’s fears concerning postmodernity may, from the present perspective, seem overstated or even straightforwardly ridiculous, we are inhabiting a world of many posts – post-liberal, post-truth, post-human. Many think that this calls for a rethinking of how sociology can adapt itself to these new conditions: for instance, in a recent issue of the International Sociological Association’s Global Dialogue, Leslie Sklair considers what a new radical sociology, developed in response to the collapse of global capitalism, would be like.

It is as if sociology and the zeitgeist were involved in some weird pas de deux: changes in any domain of life (technology, political regime, legislation) almost instantaneously trigger calls for, if not the invention of new paradigms and approaches to its study, then at least a serious reconsideration of old ones.

I would like to suggest that one of the sources of the continued appeal of this – which Mike Savage brilliantly summarised as epochal theorising – is not so much the heralding of the new, as the promise that there is an end to the present state of affairs. In order for a new ‘epoch’ to succeed, the old one needs to end. What Bauman warns about in the passage cited at the beginning is that in a world without finality – without death – there can be no morality. In T.S. Eliot’s lines from Burnt Norton: If all time is eternally present, all time is unredeemable. What we may read as Bauman’s fear, therefore, is not that worlds as we know them can (and will) end: it is that, whatever name we give to the present condition, it may go on reproducing itself forever. In other words, it is a vision of the future that looks just like the present, only there is more of it.

Which is worse? It is hard to tell. A rarely discussed side of epochal theorising is that it imagines a world in which the social sciences still have a role to play, if nothing else, in providing a theoretical framing or empirically-informed running commentary on its demise, and thus offers salvation from the existential anxiety of the present. The ‘ontological turn’ – from object-oriented ontology, to new materialisms, to post-humanism – reflects, in my view, the same tendency. If objects ‘exist’ in the same way as we do, if matter ‘matters’ in the same way (if not to the same degree) as, for instance, black lives matter, this provides temporary respite from the confines of our choices. Expanding the concept of agency so as to involve non-human actors may seem more complicated as a model of social change, but at least it absolves humans from the unique burden of historical responsibility – including that for the fate of the world.

The human (re)discovery of the world, thus, conveys less a newfound awareness of the importance of the lived environment than a desire to escape the solitude of thinking about the human (as Dawson also notes, all too human) condition. The fear of relativism that the postmodern ‘plurality’ of worlds brought about appears to have been preferable to the possibility that there is, after all, just the one world. If the latter is the case, the only escape from it lies, to borrow from Hamlet, in the country from whose bourn no traveller has ever returned: in other words, in death.

This impasse is perhaps felt most strongly in sociology and anthropology because excursions into other worlds have been both the gist of their method and the foundations of their critical potential (including their self-critique, which focused on how these two elements combine in the construction of epistemic authority). The figure of the traveller to other worlds was more pronounced in the case of anthropology, at least at the time when it developed as the study of exotic societies on the fringe of colonial empires, but sociology is no stranger to visitation either: its others, and their worlds, are delineated by sometimes less tangible boundaries of class, gender, race, or just epistemic privilege. Bauman was among the theorists who recognized the vital importance of this figure in the construction of the foundations of European modernity, and who were thus also sensitive to its transformations in the context of postmodernity – exemplified, as he argued, in the contemporary human’s ambiguous position: between “a perfect tourist” and a “vagabond beyond remedy”.

In this sense, the awareness that every journey has an end can inform the practice of social theory in ways that go beyond the need to pronounce new beginnings. Rather than using eulogies in order to produce more of the same thing – more articles, more commentary, more symposia, more academic prestige – perhaps we can see them as an opportunity to reflect on the always-unfinished trajectory of human existence, including our existence as scholars, and the responsibility that it entails. The challenge, in this case, is to resist the attractive prospect of escaping the current condition by ‘exit’ into another period, or another world – postmodern, post-truth, post-human, whatever – and remember that, no matter how many diverse and wonderful entities they may be populated with, these worlds are also human, all too human. This can serve as a reminder that, as Bauman wrote in his famous essay on heroes and victims of postmodernity, “Our life struggles dissolve, on the contrary, in that unbearable lightness of being. We never know for sure when to laugh and when to cry. And there is hardly a moment in life to say without dark premonitions: ‘I have arrived’”.

Against academic labour: foraging in the wildlands of digital capitalism

Central Park, NYC, November 2013

I am reading a book called “The Slow Professor: Challenging the Culture of Speed in the Academy”, by two Canadian professors, Maggie Berg and Barbara Seeber. Published earlier in 2016, to (mostly) wide critical acclaim, it critiques the changing conditions of knowledge production in academia, in particular those associated with the expectation to produce more, and at faster rates (also known as ‘acceleration’). As an antidote, as the Slow Professor Manifesto appended to the Preface suggests, faculty should resist the corporatisation of the university by adopting the principles of the Slow Movement (as in Slow Food etc.) in their professional practices.

While the book is interesting, the argument is not particularly exceptional in the context of the expanding genre of diagnoses of the ‘end’ or ‘crisis’ of the Western university. The origins of the genre could be traced to Bill Readings’ 1996 ‘The University in Ruins’ (though, of course, one could always stretch the lineage back to 1918 and Veblen’s ‘The Higher Learning in America’; predecessors in Britain include E.P. Thompson’s ‘Warwick University Ltd.’ (1972) and Halsey’s ‘The Decline of Donnish Dominion’ (1992)). Among contemporary representatives of the genre are Nussbaum’s ‘Not for Profit: Why Democracy Needs the Humanities’ (2010), Collini’s ‘What Are Universities For?’ (2012), and Giroux’s ‘Neoliberal Attack on Higher Education’ (2013), to name but a few; in other words, there is no shortage of works documenting how the transformation of the conditions of academic labour fundamentally threatens the role and function of universities in Western societies – and, by extension, the survival of these societies themselves.

I would like to say straight away that I do not, for a single moment, dispute or doubt the toll that the transformation of the conditions of academic labour is having on those who are employed at universities. Having spent the past twelve years researching the politics of academic knowledge, and most of those years working in higher education in a number of different countries, I have encountered hardly a single academic or student who was not pressured, threatened, or at the very least insecure about their future employment. What I want to argue, instead, is that the critique of the transformation of knowledge production that focuses on academic labour is no longer sufficient. Concomitantly, the critique of time – as in labour time – isn’t either.

In lieu of labour, I suggest we could think of what academics do as foraging. By this I do not in any way mean to trivialize union struggles that focus on working conditions for faculty or the position of students; these are and continue to be very important, and I have always been proud to support them. However, unfortunately, these struggles cannot capture the way knowledge has already changed. This is not only due to the growing academic ‘precariat’ (or ‘cognitariat’): while the absence of stable or full-time employment has been used to inform both analyses and specific forms of political action on both sides of the Atlantic, these still frame the problem as fundamentally dependent on academic labour. While this may, for the time being, represent a good strategy in the political sense, it creates a set of potential contradictions in the conceptual one.

For one, labour implies the concept of use: Marx’s labour theory of value postulates that this is what allows it to be exchanged for something (money, favours). Yet we as academics are often the first to point out that a lot of knowledge is not directly useful: for every paradigmatic scientist in a white lab coat who cures cancer, there is the equally paradigmatic bookworm reading 18th-century poetry (bear with me, it’s that time of the year when clichés abound). Trying to measure their value by the same or even a similar standard risks slipping into the pathologies of impact or, worse, vague statements about the necessity of the social sciences and humanities for democracy, freedom, and human rights (despite personal sympathy for the latter argument, it warrants mentioning that the link between democratic regimes and academic freedom is historically contingent, rather than causal).

Second, framing what academics do as labour makes it very difficult to avoid embracing some form of measurement of output. This isn’t always related to quantity: one can also measure the quality of publications (e.g., by rating them in relation to the impact factors of journals they were published in). Often, however, the ideas of productivity and excellence go hand in hand. This contributes to the proliferation of academic writing – not all of which is exceptional, to say the very least – and, in turn, creates incentives to produce both more and better (‘slow’ academia is underpinned by the argument that taking more time creates better writing).

This also points to why the critique of the conditions of knowledge production is so focused on the notion of time. As long as creating knowledge is primarily defined as a form of labour, it depends on socially and culturally defined cycles of production and consumption. Advocating ‘slowness’, thus, does not amount to the critique of the centrality of time to capitalist production: it just asks for more of it.

The concept of foraging, by contrast, is embedded in a different temporal cycle: seasonal, rather than annual or REF-able. This isn’t some sort of neo-primitivist glorification of the supposed forms of sustenance of humanity’s forebears before the (inevitable) fall from grace; it’s, rather, a more precise description of how knowledge works. To this end, we could say most academics forage anyway: they collect bits and scraps of ideas and information, and turn them into something that can be consumed (if only by other academics). Some academics will discover new ‘edible’ things, either by trial and error or by learning from (surveying) the population that lives in the area, and introduce these to other academics. Often, however, this does not amount to creating something entirely new or original as much as to the recombination of existing flavours. This is why it is not abundance as such, but diversity, that plays a role in how interesting an environment a university, city, or region will become.

However, unlike labour, foraging is not ‘naturally’ given to the creation of surplus: while foraged food can be stored, most of it is collected and prepared more or less in relation to the needs of those who eat it. Similarly, it is also by default somewhat undisciplined: foragers must keep an eye out for the plants and other foodstuffs that may be useful to them. This does not mean that it does not rely on tradition, or that it is not susceptible to prejudice – often, people will ignore or attribute negative properties to forms of food that they are unfamiliar with, much like academics ignore or fear disciplines or approaches that do not form part of their ‘tribe’ or school of thought.

As appealing as it may sound, foraging is not a romanticized, or, worse, sterile vision of what academics do. Some academics, indeed, labour. Some, perhaps, even invent. But increasing numbers are actually foraging: hunting for bits and pieces, some of which can be exchanged for other stuff – money, prestige – thus allowing them to survive another winter. This isn’t easy: in the vast digital landscape, knowing how to spot ideas and thoughts that will have traction – and especially those that can be exchanged – requires continued focus and perseverance, as well as a lot of previously accumulated knowledge. Making a mistake can be deadly, perhaps not in the literal sense, but certainly as far as reputation is concerned.

So, workers of all lands, happy New Year, and spare a thought for the foragers in the wildlands of digital capitalism.