Zygmunt Bauman and the sociologies of end times

[This post was originally published at the Sociological Review blog’s Special Issue on Zygmunt Bauman, 13 April 2017]

“Morality, as it were, is a functional prerequisite of a world with an in-built finality and irreversibility of choices. Postmodern culture does not know of such a world.”

Zygmunt Bauman, Sociology and postmodernity

Getting reacquainted with Bauman’s 1988 essay “Sociology and postmodernity”, I accidentally misread the first word of this quote as “mortality”. In the context of the writing of this piece, it would be easy to interpret this as a Freudian slip – yet, as slips often do, it betrays a deeper unease. If it is true that morality is a functional prerequisite of a finite world, it is even truer that such a world calls for mortality – the ultimate human experience of irreversibility. In the context of trans- and post-humanism, as well as the growing awareness of the fact that the world, as the place inhabited (and inhabitable) by human beings, can end, what can Bauman teach us about both?

In Sociology and postmodernity, Bauman assumes a position at the crossroads of two historical (social, cultural) periods: modernity and postmodernity. Turning away from the past to look towards the future, he offers thoughts on what a sociology adapted to the study of the postmodern condition would be like. Instead of a “postmodern sociology” as a mimetic representation of (even if a pragmatic response to) postmodernity, he argues for a sociology that attempts to give a comprehensive account of the “aggregate of aspects” that cohere into a new, consumer society: the sociology of postmodernity. This form of account eschews the observation of the new as a deterioration, or aberration, of the old, and instead aims to come to terms with the system whose contours Bauman will go on to develop in his later work: a system characterised by a plurality of possible worlds, and not necessarily a way to reconcile them.

The point in time in which he writes lends itself fortuitously to the argument of the essay. Not only did Legislators and interpreters, in which he reframes intellectuals as translators between different cultural worlds, come out a year earlier; the publication of Sociology and postmodernity briefly precedes 1989, the year that will indeed usher in a wholly new period in the history of Europe, including in Bauman’s native Poland.

On the one hand, he takes the long view back to post-war Europe, built, as it was, on the legacy of the Holocaust as a pathology of modernity, and on two approaches to preventing its repetition – market liberalism and political freedoms in the West, and planned economies and more restrictive political regimes in the Central and Eastern parts of the subcontinent. On the other, he engages with some of the dilemmas for the study of society that the approaching fall of the Berlin Wall and the eventual unification of those two hitherto separated worlds were going to open. In this sense, Bauman really has the privilege of a two-facing version of Benjamin’s Angel of History. This probably helped him recognize the false dichotomy of consumer freedom and dictatorship over needs, which, as he stated, was quickly becoming the only imaginable alternative to the system – at least insofar as the imagination in question was that of the system itself.

The present moment is not all that dissimilar from the one in which Bauman was writing. We regularly encounter pronouncements of the end of a whole host of things, among them history, the classical division of labour, standards of objectivity in reporting, nation-states, even – or so we hope – capitalism itself. While some of Bauman’s fears concerning postmodernity may, from the present perspective, seem overstated or even straightforwardly ridiculous, we are inhabiting a world of many posts – post-liberal, post-truth, post-human. Many think that this calls for a rethinking of how sociology can adapt itself to these new conditions: for instance, in a recent issue of the International Sociological Association’s Global Dialogue, Leslie Sklair considers what a new radical sociology, developed in response to the collapse of global capitalism, would be like.

It is as if sociology and the zeitgeist were involved in some weird pas-de-deux: changes in any domain of life (technology, political regime, legislation) almost instantaneously trigger calls for, if not the invention of new paradigms and approaches to its study, then at least a serious reconsideration of the old ones.

I would like to suggest that one of the sources of the continued appeal of this – which Mike Savage brilliantly summarised as epochal theorising – is not so much the heralding of the new as the promise that there is an end to the present state of affairs. In order for a new ‘epoch’ to succeed, the old one needs to end. What Bauman warns about in the passage cited at the beginning is that in a world without finality – without death – there can be no morality. In T.S. Eliot’s lines from Burnt Norton: “If all time is eternally present / All time is unredeemable.” What we may read as Bauman’s fear, therefore, is not that worlds as we know them can (and will) end: it is that, whatever name we give to the present condition, it may go on reproducing itself forever. In other words, it is a vision of the future that looks just like the present, only there is more of it.

Which is worse? It is hard to tell. A rarely discussed side of epochal theorising is that it imagines a world in which the social sciences still have a role to play, if nothing else in providing a theoretical framing or an empirically informed running commentary on its demise, and thus offers salvation from the existential anxiety of the present. The ‘ontological turn’ – from object-oriented ontology, to new materialisms, to post-humanism – reflects, in my view, the same tendency. If objects ‘exist’ in the same way as we do, if matter ‘matters’ in the same way (if not to the same degree) in which, for instance, black lives matter, this provides temporary respite from the confines of our choices. Expanding the concept of agency so as to include non-human actors may seem more complicated as a model of social change, but at least it absolves humans from the unique burden of historical responsibility – including that for the fate of the world.

The human (re)discovery of the world thus conveys less a newfound awareness of the importance of the lived environment than a desire to escape the solitude of thinking about the human (as Dawson also notes, all too human) condition. The fear of relativism that the postmodern ‘plurality’ of worlds brought about appears to have been preferable to the possibility that there is, after all, just the one world. If the latter is the case, the only escape from it lies, to borrow from Hamlet, in the country from whose bourn no traveller has ever returned: in other words, in death.

This impasse is perhaps felt most strongly in sociology and anthropology, because excursions into other worlds have been both the gist of their method and the foundations of their critical potential (including their self-critique, which focused on how these two elements combine in the construction of epistemic authority). The figure of the traveller to other worlds was more pronounced in the case of anthropology, at least at the time when it developed as the study of exotic societies on the fringe of colonial empires, but sociology is no stranger to visitation either: its others, and their worlds, are delineated by sometimes less tangible boundaries of class, gender, race, or just epistemic privilege. Bauman was among the theorists who recognized the vital importance of this figure in the construction of the foundations of European modernity, and he was thus also sensitive to its transformations in the context of postmodernity – exemplified, as he argued, in the contemporary human’s ambiguous position: between “a perfect tourist” and a “vagabond beyond remedy”.

In this sense, the awareness that every journey has an end can inform the practice of social theory in ways that go beyond the need to pronounce new beginnings. Rather than using eulogies in order to produce more of the same thing – more articles, more commentary, more symposia, more academic prestige – perhaps we can see them as an opportunity to reflect on the always-unfinished trajectory of human existence, including our existence as scholars, and the responsibility that it entails. The challenge, in this case, is to resist the attractive prospect of escaping the current condition by ‘exit’ into another period, or another world – postmodern, post-truth, post-human, whatever – and remember that, no matter how many diverse and wonderful entities they may be populated with, these worlds are also human, all too human. This can serve as a reminder that, as Bauman wrote in his famous essay on heroes and victims of postmodernity, “Our life struggles dissolve, on the contrary, in that unbearable lightness of being. We never know for sure when to laugh and when to cry. And there is hardly a moment in life to say without dark premonitions: ‘I have arrived’”.

Boundaries and barbarians: ontological (in)security and the [cyber?] war on universities

Prologue

One Saturday in late January, I go to the PhD office at the Department of Sociology at the University of Cambridge’s New Museums site (yes, PhD students shouldn’t work on Saturdays, and yes, we do). I swipe my card at the main gate of the building. Nothing happens.

I try again, and again, and still nothing. The sensor stays red. An interaction with a security guard who seems to appear from nowhere conveys that there is nothing wrong with my card; apparently, there has been a power outage and the whole system has been reset. A rather distraught-looking man from the Department of History and Philosophy of Science appears around the corner, insisting on being let back inside the building, where he had left a computer on with, he claims, sensitive data. The very amicable security guard apologises. There’s nothing he can do to let us in. His card doesn’t work, either, and the system has to be manually reset from within the computers inside each departmental building.

You mean the building no one can currently access, I ask.

I walk away (after being assured the issue would be resolved on Monday) plotting sci-fi campus novels in which Skynet is not part of a Ministry of Defence, but of a university; rogue algorithms claim GCSE test results; and classes are rescheduled in a way that sends engineering undergrads to colloquia in feminist theory, and vice versa (the distances one’s mind will go to avoid thinking about impending deadlines)*. Regretfully pushing prospective pitches to fiction publishers aside (temporarily)**, I find the incident particularly interesting for the perspective it offers on how we think about the university as an institution: its spatiality, its materiality, its boundaries, and the way its existence relates to these categories – in other words, its social ontology.

War on universities?

Critiques of the current transformation of higher education and research in the UK often frame it as an attack, or ‘war’, on universities (this is where the first part of the title of my thesis comes from). Exaggeration for rhetorical purposes notwithstanding, being ‘under attack’ suggests that it is possible to distinguish the University (and the intellectual world more broadly) from its environment, in this case at least in part populated by forces that threaten its very existence. Notably, this distinction remains almost untouched even in policy narratives (including those that seek to promote public engagement and/or impact) that stress the need for universities to engage with the (‘surrounding’) society, which tend to frame this imperative as ‘going beyond the walls of the Ivory Tower’.

The distinction between universities and society has a long history in the UK: the university’s built environment (buildings, campuses, gates) and rituals (dress, residence requirements/’keeping term’, conventions of language) were developed to reflect the separateness of education from ordinary experience, enshrined in the dichotomies of intellectual vs. manual labour, active life vs. ‘life of the mind’ and, not least, Town vs. Gown. Of course, with the rise of ‘redbrick’ and, later, ‘plateglass’ universities, this distinction became somewhat less pronounced. Rather than thinking of this in terms of blurring, however, I would like to suggest we think of it as a shift in scale: the relationship between ‘Town’ and ‘Gown’, after all, is embedded in the broader framework of distinctions between urban and suburban, urban and rural, regional and national, national and global, and the myriad possible forms of hybridisation between these (recent work by Addie, Keil and Olds, as well as Robertson et al., offers very good insights into issues related to theorising scale in the context of higher education).

Policing the boundaries: relational ontology and ontological (in)security

What I find most interesting, in this setting, is the way in which boundaries between these categories are maintained and negotiated. In sociology, the negotiation of boundaries in academia has been studied in detail by, among others, Michelle Lamont (in How Professors Think, as well as in an overview by Lamont and Molnár), Thomas Gieryn (both in Cultural Boundaries of Science and a few other texts), and Andrew Abbott in The Chaos of Disciplines (and, of course, in sociologically-inclined philosophy of science before that, including Feyerabend’s Against Method, Lakatos’ work on research programmes, and Kuhn’s on scientific revolutions). Social anthropology has an even longer-standing obsession with boundaries, symbolic as well as material – Mary Douglas’ work in particular, as well as Augé’s Non-Places, offers a good entry point, converging with sociology on the ground of a neo-Durkheimian reading of the distinction between the sacred and the profane.

My interest in the cultural framing of boundaries goes back to my first PhD, which explored the construal of the category of (romantic) relationship through the delineation of its difference from other types of interpersonal relations. The concept resurfaced in research on public engagement in UK higher education: here, the negotiation of boundaries between ‘inside’ (academics) and ‘outside’ (different audiences), as well as between different groups within the university (e.g. administrators vs. academics), becomes evident through practices of engaging in the dissemination and, sometimes, coproduction of knowledge (some of this is in my contribution to this volume). The thread that runs through these cases is the importance of positioning in relation to a (relatively) specified Other; in other words, a relational ontology.

It is not difficult to see the role of negotiating boundaries between ‘inside’ and ‘outside’ in the concept of ontological security (e.g. Giddens, 1991). Recent work in IR (e.g. Ejdus, 2017) has shifted the focus from Giddens’ emphasis on social relations to the importance of the stability of material forms, including buildings. I think we can extend this to universities: in this case, however, it is not (only) the building itself that is ‘at risk’ (this can be observed in the intensified securitisation of campuses, both through material structures such as gates and card-only entrances, and through modes of surveillance such as Prevent – see e.g. Gearon, 2017), but also the materiality of the institution itself. While the MOOC hype may have (thankfully) subsided (though not disappeared), there are the ubiquitous social media, which, as quite a few people have argued, test the salience of the distinction between ‘inside’ and ‘outside’ (I’ve written a bit about digital technologies as mediating the boundary between universities and the ‘outside world’ here, as well as in an upcoming article in a Globalisation, Societies and Education special issue that deals with reassembling knowledge production with/out the university).

Barbarians at the gates

In this context, it should not be surprising that many academics fear digital technologies: anything that tests the material/symbolic boundaries of our own existence is bound to be seen as troubling/dirty/dangerous. This brings to mind Kavafy’s poem (and J.M. Coetzee’s novel) Waiting for the Barbarians, in which an outpost of the Empire prepares for an attack by ‘the barbarians’ – one that, in fact, never arrives. The trope of the university as a bulwark against and/or in danger of descending into barbarism has been explored by a number of writers, including Thorstein Veblen and, more recently, Roy Coleman. Regardless of the accuracy or historical stretchability of the trope, what I am most interested in is its use as a simultaneously diagnostic and normative narrative that frames and situates the current transformation of higher education and research.

As the last line of Kavafy’s poem suggests, barbarians represent ‘a kind of solution’: a solution for the otherwise unanswered question of the role and purpose of universities in the 21st century, which began to be asked ever more urgently with the post-war expansion of higher education, only to be shut down by the integration/normalization of the soixante-huitards in what Boltanski and Chiapello have recognised as contemporary capitalism’s almost infinite capacity to appropriate critique. Disentangling this dynamic is key to understanding contemporary clashes and conflicts over the nature of knowledge production. Rather than locating dangers to the university firmly beyond the gates, then, perhaps we could use the current crisis to think about how we perceive, negotiate, and preserve the boundaries between ‘in’ and ‘out’. Until we have a space to do that, I believe we will continue building walls only to realise we have been left on the wrong side.

(*) I have a strong interest in campus novels, both for PhD-related and unrelated reasons, as well as a long-standing interest in Sci-Fi, but with the exception of DeLillo’s White Noise I can think of very few works that straddle both genres; I would very much appreciate suggestions in this domain!

(**) I have been thinking for a while about a book – a spin-off from my current PhD – that would combine social theory, literature, and critical cultural political economy, drawing on the similarities and differences between critical and magical realism to look at universities. This can be taken as a sketch for one of the chapters, so all thoughts and comments are welcome.

On ‘Denial’: or, the uncanny similarity between Holocaust and mansplaining


Last week, I finally got around to seeing Denial. It has many qualities and a few disadvantages – its attempt at hyperrealism contributing to both – but I would like to focus on an aspect most reviews I’ve read so far seem to have missed. In other words: mansplaining.

Brief contextualization. Lest I be accused of equating the Holocaust and mansplaining (I am not – similarity does not denote equivalence), my work deals with issues of expertise, fact, and public intellectualism; I have always found the Irving case interesting, for a variety of reasons (incidentally, I was also at Oxford during the famous event at the Oxford Union). At the same time, like, I suppose, every woman in academia and beyond with more agency than a doormat, I have, over the past year, become embroiled in countless arguments about what mansplaining is, whether it is really so widespread, whether it is done only by men (and what to call it when it’s perpetrated by those who are not men?) and, of course, that pseudo-liberal what-passes-as-an-attempt at outmaneuvering the issue, which is whether using the term ‘mansplaining’ blames men as a group and is as such essentialising and oppressive, just like the discourses ‘we’ (feminists conveniently grouped under one umbrella) seek to condemn (otherwise known as a tu quoque argument).

Besides their logical flaws, what many of these attacks seem to have in common with the one David Irving launched on Deborah Lipstadt (and which Holocaust deniers routinely use) is the focus on evidence: how do we know that mansplaining occurs, and is not just some fabrication of a bunch of conceited females looking to get ahead despite their obvious lack of qualifications? Other uncanny similarities between the arguments of Holocaust deniers and those of people who question the existence of mansplaining temporarily aside, one of the indisputable qualities of Denial is that it provides multiple examples of what mansplaining looks like. It is, of course, a film, despite being based on a true story. Rather than presenting a downside, this allows for a concentrated portrayal of the practice – for those doubting its verisimilitude, I strongly recommend watching the film and deciding for yourself whether it resembles real-life situations. For those who do not, voilà, a handy cinematic case to present to those who prefer to plead ignorance as to what mansplaining ‘actually’ entails.

To begin with, the case portrayed in the film is an instance of mansplaining par excellence: after all, it is about a self-educated (male) historian who sues an academic historian (a woman) because she does not accept his ‘interpretation’ of World War II (namely, that the Holocaust did not happen) and, furthermore, dares to call him out on it. In the case (and the film), he sets out to explain to the (of course, male) judge and the public that Lipstadt (played by Rachel Weisz) is wrong and, furthermore, that her critique has seriously damaged his career (the underlying assumption being that he is entitled to lucrative publishing deals, while she, clearly, has to earn hers – exacerbated by his mockery of the fact that she sells books, whereas his, by contrast, are free). This ‘talking over’ and attempt to make it all about him (remember, he sues her) are brilliantly captured in the opening scene, when Irving (played by Timothy Spall) visits Lipstadt’s public talk and openly challenges her in the Q&A, ignoring her repeated refusal to engage with his arguments. Yet, it would be a mistake to locate the trope of mansplaining only in the relation between Irving and Lipstadt. On the contrary – just like the real thing – it is at its most insidious when it comes from those who are, as it were, ‘on our side’.

A good example is the first meeting of the defence team, where Lipstadt is introduced to people working with her legal counsel, the famous Anthony Julius (Andrew Scott). There is a single woman on Julius’ team: Laura (Caren Pistorius), who, we are told, is a paralegal. Despite it being her first case, it seems she has developed a viable strategy: or at least so we are told by her boss, who, after announcing Laura’s brilliant contribution to the case, continues to talk over her – that is, to explain her thoughts without giving her an opportunity to explain them herself. In this sense, what at first seems like an act of mentoring support – passing the baton and crediting a junior staff member – becomes a classic act in which a man takes it upon himself to interpret the professional intervention of a female colleague, appropriating it in the process.

Cases of professional mansplaining abound throughout the film: in multiple scenes lawyers explain the Holocaust as well as the concept of denial to Lipstadt despite her meek protests that she “has actually written a book about it”. Obvious irony aside, this serves as a potent reminder that women have to invoke professional credentials not in order to be recognized as experts, but simply in order to be recognized as equally valid participants in debate. By contrast, when it comes to the only difference in qualifications in the film that plays against Lipstadt – that of knowledge of the British legal system – Weisz’s character conveniently remains a mixture of ignorance and naïveté couched in Americanism. One would be forgiven for assuming that long-term involvement in a libel case, especially one that carries so much emotional and professional weight, would have prompted a university professor to become acquainted with at least the basic rules of the legal system in which the case was processed, but then, of course, that would have stripped the male characters of the opportunity to shine the light of their knowledge in contrast to her supposed ignorance.

Of course, emotional involvement is, in the film, presented as a clear disadvantage when it comes to the case. While Lipstadt first assumes she will, and then repeatedly asks to be allowed to testify, her legal team insists she would be too emotional a witness. The assumption that having an emotional reaction (even if one that is quite expected – it is, after all, the Holocaust we are talking about) and a cold, hard approach to ‘facts’ are mutually exclusive is played out succinctly in the scenes that take place at Auschwitz. While Lipstadt, clearly shaken (as anyone, Jewish or not, is bound to be when standing at the site of such a potent example of mass slaughter), asks the party to show respect for the victims, the head barrister Richard Rampton (Tom Wilkinson) is focused on calmly gathering evidence. The value of this, however, only becomes obvious in the courtroom, where he delivers his coup de grâce, revealing that his calm pacing around the perimeter of Auschwitz II-Birkenau (which makes him arrive late and upsets everyone, Lipstadt in particular) was actually a way of measuring the distance between the SS barracks and the gas chambers, allowing him to disprove Irving’s assertion that the gas chambers were built as air raid shelters, and thus tilt the whole case in favour of the defence.

The mansplaining triumph, however, happens even before this Sherlockian turn, in the scene in which Rampton visits Lipstadt in her hotel room (uninvited, unannounced) in order to, yet again, convince her that she should not testify or engage with Irving in any form. After he gently (patronisingly) persuades her that “What feels best isn’t necessarily what works best” (!), she, emotionally moved, agrees to “pass her conscience” to him – that is, to a man. By doing this, she abandons not only her own voice, but also the possibility of speaking for Holocaust survivors – the one survivor who appears as a character in the film being, poignantly, also female. In Lipstadt’s concession that silence is better because it “leads to victory”, it is not difficult to read the paradoxical (pseudo)pragmatic assertion that openly challenging male privilege works, in fact, against gender equality, because it provokes a counterreaction. Initially protesting her own silencing, Lipstadt comes to accept what her character in the script dubs “self-denial” as the only way to beat those who deny the Holocaust.

Self-denial: for instance, denying yourself food for fear of getting ‘fat’ (and thus unattractive for the male gaze); denying yourself fun for fear of being labeled easy or promiscuous (and thus undesirable as a long-term partner); denying yourself time alone for fear of being seen as selfish or uncaring (and thus, clearly, unfit for a relationship). Silence: for instance, letting men speak first for fear of being seen as pushy (and thus too challenging); for instance, not speaking up when other women are oppressed, for fear of being seen as too confrontational (and thus, of course, difficult); for instance, not reporting sexual harassment, for fear of retribution, shame, isolation (self-explanatory). In celebrating ‘self-denial’, the film, then, patently reinscribes the stereotype of the patient, silent female.

Obviously, there is value in refusing to engage with outrageous liars; equally, there are issues that should remain beyond discussion – whether the Holocaust happened being one of them. Yet, selective silencing masquerading as strategy – note that Lipstadt is not allowed to speak (not even to the media), while Rampton communicates his contempt for Irving by not looking at him (thus denying him the ‘honour’ of the male gaze) – too often serves to reproduce the structural inequalities that can persist even under a legal system that purports to be egalitarian.

Most interestingly, the fact that a film that is manifestly about mansplaining manages to reproduce quite a few mansplaining tropes (and, I would argue, not always in a self-referential or ironic manner) serves as a poignant reminder of how deeply the ‘splaining complex is embedded not only in politics or academia, but also in cultural representations. This is something we need to remain acutely aware of in the age of ‘post-truth’ or ‘post-facts’. If resistance to lying politicians and the media is going to take the form of the (re)assertion of one indisputable truth, and the concomitant legitimation of those who claim to know it – strangely enough, most often white, privileged men – then we’d better think of alternatives, and quickly.

@Grand_Hotel_Abyss: digital university and the future of critique

[This post was originally published on 03/01/2017 in the Discover Society Special Issue on Digital Futures. I am also working on a longer (article) version of it, which will be uploaded soon].

It is by now commonplace to claim that digital technologies have fundamentally transformed knowledge production. This applies not only to how we create, disseminate, and consume knowledge, but also to who, in this case, counts as ‘we’. Science and technology studies (STS) scholars argue that knowledge is an outcome of coproduction between (human) scientists and the objects of their inquiry; object-oriented ontology and speculative realism go further, rejecting the ontological primacy of humans in the process. For many, it would not be an overstatement to say that machines not only process knowledge, but are actively involved in its creation.

What remains somewhat underexplored in this context is the production of critique. Scholars in the social sciences and humanities fear that the changing funding and political landscape of knowledge production will diminish the capacity of their disciplines to engage critically with society, leading to what some have dubbed the ‘crisis’ of the university. Digital technologies are often framed as contributing to this process, speeding up the rate of production, simultaneously multiplying and obfuscating the labour of academics, perhaps even, as Lyotard predicted, displacing it entirely. Tensions between more traditional views of the academic role and new digital technologies are reflected in often heated debates over academics’ use of social media (see, for instance, #seriousacademic on Twitter). Yet, despite polarized opinions, there is little systematic research into the links between the transformation of the conditions of knowledge production and critique.

My work is concerned with the possibility – that is, the epistemological and ontological foundations – of critique, and, more precisely, how academics negotiate it in contemporary (‘neoliberal’) universities. Rather than trying to figure out whether digital technologies are ‘good’ or ‘bad’, I think we need to consider what it is about the way they are framed and used that makes them either. From this perspective, which could be termed the social ontology of critique, we can ask: what is it about ‘the social’ that makes critique possible, and how does it relate to ‘the digital’? How is this relationship constituted, historically and institutionally? Lastly, what does this mean for the future of knowledge production?

Between pre-digital and post-critical 

There are a number of ways one can go about studying the relationship between digital technologies and critique in the contemporary context of knowledge production. David Berry and Christian Fuchs, for instance, both use critical theory to think about the digital. Scholars in political science, STS, and the sociology of intellectuals have written on the multiplication of platforms from which scholars can engage with the public, such as Twitter and blogs. In “The Uberfication of the University”, Gary Hall discusses how digital platforms transform the structure of academic labour. This joins the longer thread of discussions about precarity, new publishing landscapes, and what this means for the concept of the ‘public intellectual’.

One of the challenges of theorising this relationship is that it has to be developed out of the very conditions it sets out to criticise. This points to the limitations of viewing ‘critique’ as a defined and bounded practice, or the ‘public intellectual’ as a fixed and separate figure, and of trying to observe how either has changed with the introduction of the digital. While the use of social media may be a more recent phenomenon, it is worth recalling that the bourgeois public sphere that gave rise to the practice of critique in its contemporary form was already profoundly mediatised. Whether one thinks of petitions and pamphlets in the Dreyfus affair, or of discussions on Twitter and Facebook – there is no critique without an audience, and digital technologies are essential to how we imagine those audiences. In this sense, grounding an analysis of the contemporary relationship between the conditions of knowledge production and critique in the ‘pre-digital’ is similar to grounding it in the post-critical: both are techniques of ‘ejecting’ oneself from the confines of the present situation.

The dismissiveness Adorno and other members of the Frankfurt school could exercise towards mass media, however, is more difficult to parallel in a world in which it is virtually impossible to remain isolated from digital technologies. Today’s critics may, for instance, avoid having a professional profile on Twitter or Facebook, but they are probably still using at least some type of social media in their private lives, not to mention responding to emails, reading articles, and searching and gathering information through online platforms. In this sense, one could say that academics publicly criticising social media engage, in fact, in a performative contradiction: their critical stance is predicated on the existence of digital technologies both as objects of critique and as the main vehicles for its dissemination.

This, I believe, is an important source of perceived tensions between the concept of critique and digital technologies. Traditionally, critique implies a form of distancing from one’s social environment. This distancing is seen as both spatial and temporal: spatial, in the sense of providing a vantage point from which the critic can observe and (choose to) engage with society; temporal, in the sense of affording shelter from the ‘hustle and bustle’ of everyday life, necessary to stimulate critical reflection. Universities, at least for a good part of the 20th century, were tasked with providing both. Lukács, in his account of the Frankfurt school, satirized this as taking up residence in the “Grand Hotel Abyss”: engaging in critique from a position of relative comfort, from which one can stare ‘into nothingness’. Yet, what if the Grand Hotel Abyss has a wifi connection?

Changing temporal frames: beyond the Twitter intellectual?

Some potential perils of the ‘always-on’ culture and of contracting temporal frames for critique are reflected in the widely publicized case of Steven Salaita, an internationally recognized scholar in the field of Native American studies and American literature. In 2013, Salaita was offered a tenured position at the University of Illinois. However, in 2014 the Board of Trustees withdrew the offer, citing Salaita’s “incendiary” posts on Twitter as the reason. Salaita is a vocal critic of Israel, and his Tweets at the time concerned the Israeli military offensive in the Gaza Strip; some of the University’s donors found this problematic and pressured the Board to withdraw the offer. Salaita has in the meantime appealed the decision and received a settlement from the University of Illinois, but the case – though by no means unique – drew attention to the issue of the (im)possibility of separating the personal, the political, and the professional on social media.

At the same time, social media can provide venues for practicing critique in ways not confined by the conventions or temporal cycles of academia. The example of Eric Jarosinski, “The rock star philosopher of Twitter”, shows this clearly. Jarosinski is a Germanist whose Tweets contain clever puns on the Frankfurt school, as well as, among others, Hegel and Nietzsche. In 2013, he took himself out of consideration for tenure at the University of Pennsylvania, but continued to compose philosophically-inspired Tweets, eventually earning a huge following, as well as a column in two of the largest newspapers in Germany and the Netherlands. Jarosinski’s moniker, #failedintellectual, is an auto-ironic reminder that it is possible to succeed whilst deviating from the established routes of intellectual critique.

The different ways in which critique can be performed on Twitter should not, however, obscure the fact that it operates in fundamentally politicized and stratified spaces; digital technologies can render these spaces more accessible, but that does not mean that they are more democratic or offer a better view of ‘the public’. This is particularly worth remembering in the light of recent political events in the UK and the US. Once the initial shock following the US election and the British EU referendum had subsided, many academics (and intellectuals more broadly) took to social media to comment on, evaluate, or explain what had happened. Yet, for the most part, these interventions end exactly where they began – on social media. This amounts to live Tweeting from the balcony of the Grand Hotel Abyss: the view is good, but the abyss no less gaping for it.

By sticking to critique on social media, intellectuals are, essentially, doing what they have always been good at – engaging with the audiences, and in the ways, they feel comfortable with. In this sense, criticizing the ‘alt-right’ on Twitter is not altogether different from criticising it in lecture halls. Of course, no intellectual critique can aspire to address all possible publics, let alone equally. However, it makes sense to think about how the ways in which we imagine our publics influence our capacity to understand the society we live in; and, perhaps more importantly, how they influence our ability to predict – or imagine – its future. In its present form, critique seems far better suited to an idealized Habermasian public sphere than to the political landscape that is taking shape in the 21st century. Digital technologies can offer an approximation, perhaps even a good simulation, of the former; but that, in and of itself, does not mean that they can solve the problems of the latter.

Jana Bacevic is a PhD researcher at the Department of Sociology at the University of Cambridge. She works on social theory and the politics of knowledge production; her thesis deals with the social, epistemological and ontological foundations of the critique of neoliberalism in higher education and research in the UK. Previously, she was a Marie Curie fellow at the University of Aarhus in Denmark, on the Universities in Knowledge Economies (UNIKE) project. She tweets at @jana_bacevic.

Against academic labour: foraging in the wildlands of digital capitalism

Central Park, NYC, November 2013

I am reading a book called “The Slow Professor: Challenging the Culture of Speed in the Academy”, by two Canadian professors, Maggie Berg and Barbara Seeber. Published earlier in 2016 to (mostly) wide critical acclaim, it critiques the changing conditions of knowledge production in academia, in particular those associated with the expectation to produce more, and at faster rates (also known as ‘acceleration’). As an antidote, as the Slow Professor Manifesto appended to the Preface suggests, faculty should resist the corporatisation of the university by adopting the principles of the Slow Movement (as in Slow Food etc.) in their professional practices.

While the book is interesting, the argument is not particularly exceptional in the context of the expanding genre of diagnoses of the ‘end’ or ‘crisis’ of the Western university. The origins of the genre could be traced to Bill Readings’ 1996 ‘The University in Ruins’ (though, of course, one could always stretch the lineage back to 1918 and Veblen’s ‘The Higher Learning in America’; predecessors in Britain include E.P. Thompson’s ‘Warwick University Ltd.’ (1972) and Halsey’s ‘The Decline of Donnish Dominion’ (1992)). Among contemporary representatives of the genre are Nussbaum’s ‘Not for Profit: Why Democracy Needs the Humanities’ (2010), Collini’s ‘What Are Universities For?’ (2012), and Giroux’s ‘Neoliberal Attack on Higher Education’ (2013), to name but a few; in other words, there is no shortage of works documenting how the transformation of the conditions of academic labour fundamentally threatens the role and function of universities in Western societies – and, by extension, the survival of these societies themselves.

I would like to say straight away that I do not, for a single moment, dispute or doubt the toll that the transformation of the conditions of academic labour is taking on those who are employed at universities. Having spent the past twelve years researching the politics of academic knowledge, and most of those years working in higher education in a number of different countries, I have encountered hardly a single academic or student who did not feel pressured, threatened, or at the very least insecure about their future employment. What I want to argue, instead, is that the critique of the transformation of knowledge production that focuses on academic labour is no longer sufficient. Concomitantly, the critique of time – as in labour time – isn’t either.

In lieu of labour, I suggest we could think of what academics do as foraging. By this I do not in any way mean to trivialize union struggles that focus on working conditions for faculty or the position of students; these are and continue to be very important, and I have always been proud to support them. However, unfortunately, they cannot capture the way knowledge has already changed. This is not only due to the growing academic ‘precariat’ (or ‘cognitariat’): while the absence of stable or full-time employment has been used to inform both analyses and specific forms of political action on both sides of the Atlantic, these still frame the problem as fundamentally dependent on academic labour. While this may for the time being represent a good strategy in the political sense, it creates a set of potential contradictions in the conceptual one.

For one, labour implies the concept of use: Marx’s labour theory of value postulates that this is what allows it to be exchanged for something (money, favours). Yet, we as academics are often the first to point out that a lot of knowledge is not directly useful: for every paradigmatic scientist in a white lab coat who cures cancer, there is the equally paradigmatic bookworm reading 18th-century poetry (bear with me, it’s that time of the year when clichés abound). Trying to measure their value by the same or even a similar standard risks slipping into the pathologies of impact, or, worse, vague statements about the necessity of the social sciences and humanities for democracy, freedom, and human rights (despite personal sympathy for the latter argument, it warrants mentioning that the link between democratic regimes and academic freedom is historically contingent, rather than causal).

Second, framing what academics do as labour makes it very difficult to avoid embracing some form of measurement of output. This isn’t always related to quantity: one can also measure the quality of publications (e.g., by rating them in relation to the impact factors of journals they were published in). Often, however, the ideas of productivity and excellence go hand in hand. This contributes to the proliferation of academic writing – not all of which is exceptional, to say the very least – and, in turn, creates incentives to produce both more and better (‘slow’ academia is underpinned by the argument that taking more time creates better writing).

This also points to why the critique of the conditions of knowledge production is so focused on the notion of time. As long as creating knowledge is primarily defined as a form of labour, it depends on socially and culturally defined cycles of production and consumption. Advocating ‘slowness’, thus, does not amount to the critique of the centrality of time to capitalist production: it just asks for more of it.

The concept of foraging, by contrast, is embedded in a different temporal cycle: seasonal, rather than annual or REF-able. This isn’t some sort of neo-primitivist glorification of the supposed forms of sustenance of humanity’s forebears before the (inevitable) fall from grace; it’s, rather, a more precise description of how knowledge works. In this sense, we could say most academics forage anyway: they collect bits and scraps of ideas and information, and turn them into something that can be consumed (if only by other academics). Some academics will discover new ‘edible’ things, either by trial and error or by learning from (surveying) the population that lives in the area, and introduce these to other academics. Often, however, this does not amount to creating something entirely new or original as much as to the recombination of existing flavours. This is why it is not abundance as such, as much as diversity, that plays a role in how interesting an environment a university, city, or region will become.

However, unlike labour, foraging is not ‘naturally’ given to the creation of surplus: while foraged food can be stored, most of it is collected and prepared more or less in relation to the needs of those who eat it. Similarly, it is also by default somewhat undisciplined: foragers must keep an eye out for the plants and other foodstuffs that may be useful to them. This does not mean that it does not rely on tradition, or that it is not susceptible to prejudice – often, people will ignore or attribute negative properties to forms of food that they are unfamiliar with, much like academics ignore or fear disciplines or approaches that do not form part of their ‘tribe’ or school of thought.

As appealing as it may sound, foraging is not a romanticized, or, worse, sterile vision of what academics do. Some academics, indeed, labour. Some, perhaps, even invent. But increasing numbers are actually foraging: hunting for bits and pieces, some of which can be exchanged for other stuff – money, prestige – thus allowing them to survive another winter. This isn’t easy: in the vast digital landscape, knowing how to spot ideas and thoughts that will have traction – and especially those that can be exchanged – requires continued focus and perseverance, as well as a lot of previously accumulated knowledge. Making a mistake can be deadly, perhaps not in the literal sense, but certainly as far as reputation is concerned.

So, workers of all lands, happy New Year, and spare a thought for the foragers in the wildlands of digital capitalism.

We are all postliberals now: teaching Popper in the era of post-truth politics

Adelaide, South Australia, December 2014

Late in the morning after the US election, I am sitting down to read student essays for the course on social theory I’m supervising. This part of the course involves the work of Popper, Kuhn, Lakatos, and Feyerabend, and its application in the social sciences. The essay question is: do theories need to be falsifiable, and how do we choose between competing theories if they aren’t? The first part is a standard essay question; I added the second a bit more than a week ago, interested to see how students would think about criteria of verification in the absence of an overarching regime of truth.

This is one of my favourite topics in the philosophy of science. When I was a student at the University of Belgrade, feeling increasingly out of place in the post-truth and intensely ethnographic though anti-representationalist anthropology, the Popper-Kuhn debate in Criticism and the Growth of Knowledge held the promise that, beyond the classification of elements of the material culture of the Western Balkans, lurked bigger questions of the politics and sociology of knowledge (paradoxically, this may have been why it took me so long to realize I actually wanted to do sociology).

I was Popper-primed well before that, though: the principle of falsification is integral to the practice of parliamentary-style academic debating, in which the task of the opposing team(s) is to ‘disprove’ the motion. In the UK, this practice is usually associated with debate societies such as the Oxford and Cambridge Unions, but it is widespread in the US as well as the rest of the world; during my undergraduate studies, I was an active member of the Yugoslav (now Serbian) Universities Debating Network, known as Open Communication. Furthermore, Popper’s political ideas – especially those in The Open Society and Its Enemies – formed the ideological core of the Open Society Foundation, founded by the billionaire George Soros to help promote democracy and civil society in Central and Eastern Europe.

In addition to debate societies, the Open Society Foundation supported and funded the greater part of civil society activism in Serbia. At the time, most of it was conceived as opposition to the regime of Slobodan Milošević, a one-time banker turned politician who ascended to power in the wake of the dissolution of the Socialist Federal Republic of Yugoslavia. Milošević played a major role in the conflicts in its former republics, simultaneously plunging Serbia deeper into an economic and political crisis exacerbated by international isolation and sanctions, culminating in the NATO intervention in 1999. Milošević’s rule ended in a coup following a disputed election in 2000.

I had been part of the opposition from the earliest moment conceivable, skipping classes in secondary school to go to anti-government demos in 1996 and 1997. The day of the coup – 5 October 2000 – should have been my first day at university, but, together with most students and staff, I was at what would turn out to be the final public protest, which ended in the storming of the Parliament. I swallowed quite a bit of tear gas, twice in situations I expected not to get out of alive (or at the very least unharmed), but somehow made it to a friend’s house, where, together with her mom and grandma, we sat in the living room and watched one of Serbia’s hitherto banned TV and radio stations – the then-oppositional B92 – come back on air. This is when we knew it was over.

Sixteen years and a little more than a month later, I am reading students’ essays on truth and falsehood in science. This, by comparison, is a breeze, and it’s always exciting to read different takes on the issue. Of course, in the course of my undergraduate studies, my own appreciation of Popper was replaced by excitement at the discovery of Kuhn – and the concomitant realization of the inertia of social structures, which, just like normal science, are incredibly slow to change – and succeeded by light perplexity at Lakatos (research programmes seemed equal parts reassuring and inherently volatile – not unlike political coalitions). In the end, obviously, came infatuation with Feyerabend: like every self-respecting former liberal, I reckoned myself a methodological (and not only methodological) anarchist.

Unsurprisingly, most of the essays I read exhibit the same trajectory. Popper is, quite obviously, passé; his critique of Marxism (and other forms of historicism) not particularly useful, his idea of falsificationism too strict a criterion for demarcation, and his association with the ideologues of neoliberalism probably did not help much either.

Except that… this is what Popper has to say:

It is undoubtedly true that we have a more direct knowledge of the ‘inside of the human atom’ than we have of physical atoms; but this knowledge is intuitive. In other words, we certainly use our knowledge of ourselves in order to frame hypotheses about some other people, or about all people. But these hypotheses must be tested, they must be submitted to the method of selection by elimination.

(The Poverty of Historicism, 127)

Our knowledge of ourselves: for instance, our knowledge that we could never, ever, elect a racist, misogynist reality TV star as the president of one of the world’s superpowers. That we would never vote to leave the European Union, despite the fact that, like all supranational entities, it has flaws – but look at how much it invests in our infrastructure. Surely – as Popper would argue – we are rational animals: and rational animals would not do anything that puts them in unnecessary danger.

Of course, we are correct. The problem, however, is that we have forgotten about the second part of Popper’s claim: we use knowledge of ourselves to form hypotheses about other people. For instance: since we understand that a rich businessman is not likely to introduce economic policies that harm the elite, the poor would never vote for him. For instance: since we remember the victims of Nazism and fascism, everyone must understand how frail the liberal consensus in Europe is.

This is why academia came to be “shocked” by Trump’s victory, just as it was shocked by the outcome of the Brexit referendum. This is also the key to the question of why polls “failed” to predict either of these outcomes. Perhaps we were too focused on extrapolating our assumptions to other people, and not enough on checking whether they hold.

By failing to understand that the world is not composed of left-leaning liberals with a predilection for social justice, we commit, time and again, what Bourdieu termed the scholastic fallacy – the propensity to attribute the categories of our own thinking to those we study. Alternatively, and much worse, we deny them common standards of rationality: the voters whose political choices differ from ours are then cast as uneducated, deluded, suffering from false consciousness. And even if they’re not, they must be a small minority, right?

Well, as far as hypotheses are concerned, that one has definitely failed. Maybe it’s time we started considering alternatives.

One more time with [structures of] feeling: anxiety, labour, and social critique in/of the neoliberal academia

Florence, April 2013

Last month, I attended the symposium on Anxiety and Work in the Accelerated Academy, the second in the Accelerated Academy series that explores the changing scapes of time, work, and productivity in academia. Given that my research is fundamentally concerned with the changing relationships between universities and publics, and the concomitant reframing of the subjectivity, agency, and reflexivity of academics, I naturally found the question of the intersection of academic labour and time relevant. One particular bit resonated with me for a long time: in her presentation, Maggie O’Neill from the University of York suggested that anxiety has become the primary structure of feeling in the neoliberal academia. Having found myself, in the period leading up to the workshop, increasingly reflecting on structures of feeling, I was intrigued by the salience of the concept. Is there a place for theoretical concepts such as this in research on the transformations of knowledge production in contemporary capitalism, and if so, where is it?

All the feels

“Structure of feeling” may well be one of those ideas whose half-life has far exceeded their initial purview. Raymond Williams introduced it in a brief chapter included in Marxism and Literature, contributing to carving out what would become known as the distinctly British take on the relationship between “base” and “superstructure”: cultural studies. In it, he says:

Specific qualitative changes are not assumed to be epiphenomena of changed institutions, formations, and beliefs, or merely secondary evidence of changed social and economic relations between and within classes. At the same time they are from the beginning taken as social experience, rather than as ‘personal’ experience or as the merely superficial or incidental ‘small change’ of society. They are social in two ways that distinguish them from reduced senses of the social as the institutional and the formal: first, in that they are changes of presence (while they are being lived this is obvious; when they have been lived it is still their substantial characteristic); second, in that although they are emergent or pre-emergent, they do not have to await definition, classification, or rationalization before they exert palpable pressures and set effective limits on experience and on action. Such changes can be defined as changes in structures of feeling. (Williams, 1977:130).

Williams thus introduces structures of feeling as a form of social diagnostic; he posits it against the more durable but also more formal concepts of ‘world-view’ or ‘ideology’. Indeed, the whole chapter is devoted to the critique of the reificatory tendencies of Marxist social analysis: the idea of things (or ideas) being always ‘finished’, always ‘in the past’, in order for them to be subjected to analytical scrutiny. The concept of “structure of feeling” is thus invoked in order to keep tabs on social change and capture the perhaps less palpable elements of transformation as they are happening.

Emotions and the scholastic disposition

Over the past few years, the discourse of feelings has certainly become more prominent in academia. Just last week, Cambridge’s Festival of Ideas featured a discussion on the topic, framing it within issues of free speech and trigger warnings on campus. While the debate itself has a longer history in the US, it has begun to attract more attention in the UK – most recently in relation to challenging colonial legacies at both Oxford and Cambridge.

Despite the many nuances of political context and the complex interrelation between imperialism and higher education, the debate in the media predominantly plays out in dichotomies of ‘thinking’ and ‘feeling’. Opponents tend to pit trigger warnings, or the “culture of offence”, against the concept of academic freedom, arguing that today’s students are too sensitive and “coddled”, which, in their view, runs against the very purpose of university education. From this perspective, education is about ‘cultivating’ feelings: exercising control over them, submerging them under the strict institutional structures of the intellect.

Feminist scholars, in particular, have extensively criticised this view for its reductionist properties and, not least, its propensity to translate into institutional and disciplinary policies that seek to exclude everything framed as ‘emotional’, bodily, or material (and, by association, ‘feminine’) from academic knowledge production. But the cleavage runs deeper. Research in the social sciences is often framed in the dynamic of ‘closeness’ and ‘distancing’, ‘immersion’ and ‘purification’: one first collects data by aiming to be as close as possible to the social context of the object of research, but then withdraws from it in order to carry out analysis. While approaches such as grounded theory or participatory methods (cl)aim to transcend this boundary, its echoes persist in the structure of presentation of academic knowledge (for instance, the division between data and results), as well as in the temporal organisation of graduate education (for instance, the idea that the road to a PhD includes a period of training in methods and theories, followed by data collection/fieldwork, followed by analysis and the ‘writing up’ of results).

The idea of ‘distanced reflection’ is deeply embedded in the history of academic knowledge production. In Pascalian Meditations, Bourdieu relates it to the concept of skholē – the scholarly disposition – predicated on the distinction between intellectual and manual labour. In other words, for reflection to exist, it needed to be separated from the vagaries of everyday existence. One of its radical manifestations is the idea of the university as a monastic community. Oxford and Cambridge, for instance, were explicitly constructed on this model, giving rise to animosities between ‘town’ and ‘gown’: the concerns of ‘lay’ folk were thought to be diametrically opposed to those of the educated. While arguably less prominent in (most) contemporary institutions of knowledge production, the dichotomy is still unproblematically transposed into concepts such as “the university’s contribution to society”, which assume that universities are distinct from society, or at least that their interests are radically different from those of “the society” – raising the obvious question of who, in fact, this society is.

Emotions, reason, and critique

Paradoxically, perhaps, one of the strongest reverberations of the idea is to be found in the domain of social critique. At first, this sounds counter-intuitive – after all, critical social science should be about abandoning the ‘veneer’ of neutrality and engaging with the world in all of its manifestations. However, establishing the link between social science and critique rests on something that Boltanski, in his critique of Bourdieu’s sociology of domination, calls the metacritical position:

For this reason we shall say that critical theories of domination are metacritical in order. The project of taking society as an object and describing the components of social life or, if you like, its framework, appeals to a thought experiment that consists in positioning oneself outside this framework in order to consider it as a whole. In fact, a framework cannot be grasped from within. From an internal perspective, the framework coincides with reality in its imperious necessity. (Boltanski, 2011:6-7)

Academic critique, in Boltanski’s view, requires assuming a position of exteriority. A ‘simple’ form of exteriority rests on description: it requires the ‘translation’ of lived experience (or practices) into the categories of a text. However, passing the kind of moral judgement on which critical theory rests calls for, he argues, a different form of distancing: complex exteriority.

In the case of sociology, which at this level of generality can be regarded as a history of the present, with the result that the observer is part of what she intends to describe, adopting a position of exteriority is far from self-evident… This imaginary exit from the viscosity of the real initially assumes stripping reality of its character of implicit necessity and proceeding as if it were arbitrary (as if it could be other than it is or even not be);

This “exit from the viscosity of the real” (a lovely phrase!) proceeds in two steps. The first takes the form of “control of desire”, that is, procedural distancing from the object of research. The second is the act of judgement by which a social order is ‘ejected’, seen in its totality, and as such evaluated from the outside:

In sociology the possibility of this externalization rests on the existence of a laboratory – that is to say, the employment of protocols and instructions respect for which must constrain the sociologist to control her desires (conscious or unconscious). In the case of theories of domination, the exteriority on which critique is based can be called complex, in the sense that it is established at two different levels. It must first of all be based on an exteriority of the first kind to equip itself with the requisite data to create the picture of the social order that will be submitted to critique. A metacritical theory is in fact necessarily reliant on a descriptive sociology or anthropology. But to be critical, such a theory also needs to furnish itself, in ways that can be explicit to very different degrees, with the means of passing a judgement on the value of the social order being described. (ibid.)

Critique: inside, outside, in-between?

To what degree can this categorisation be applied to the current critique of the conditions of knowledge production in academia? After all, most of those who criticize the neoliberal transformation of higher education and research are academics. It is reasonable, then, to question the degree to which they can lay claim to a position of exteriority. More problematically (or interestingly), it is also questionable to what degree a position of exteriority is achievable at all.

Boltanski draws attention to this problem by emphasising the distinction between the cognition – awareness – of ‘ordinary’ actors and that of sociologists (or other social scientists), the latter presumably being able to perceive structures of domination that the subjects of their research do not:

Metacritical theories of domination tackle these asymmetries from a particular angle – that of the miscognition by the actors themselves of the exploitation to which they are subject and, above all, of the social conditions that make this exploitation possible and also, as a result, of the means by which they could stop it. That is why they present themselves indivisibly as theories of power, theories of exploitation and theories of knowledge. By this token, they encounter in an especially vexed fashion the issue of the relationship between the knowledge of social reality which is that of ordinary actors, reflexively engaged in practice, and the knowledge of social reality conceived from a reflexivity reliant on forms and instruments of totalization – an issue which is itself at the heart of the tensions out of which the possibility of a social science must be created (Boltanski, 2011:7)

Hotel Academia: you can check out any time you like, but you can never leave?

How does one go about thinking about the transformation of the conditions of knowledge production when one is at the same time reflexively engaged in practice and relying on the reflexivity provided by sociological instruments? Is it at all possible? The feeling of anxiety, in this sense, could be provoked exactly by this lack of opportunity to step aside – to disembed oneself from academic life and reflect on it at the leisurely pace of skholē. On the one hand, this certainly has to do with the changing structure and tempo of academic life – acceleration and demands for increased output: in this sense, anxiety is a reaction to changes perceived and felt, the feeling that the ground is no longer stable, a kind of vertigo. On the other hand, however, this feeling of decentredness could be exactly what contemporary critique calls for.

The challenge, of course, is how to turn this “structure of feeling” into something that has analytical as well as affective power – and can transform the practice itself. Stravinsky’s Rite of Spring, I think, is a wonderful example of this. As a piece of music, it is fundamentally disquieting: its impact derives primarily from the fact that it disrupted what were, at the time, the expectations of the (musical) genre and, in the process, rewrote them.

In other words, anxiety could be both creative and destructive. This, however, is not some broad call to “embrace anxiety”. There is a clear and pertinent need to understand the way in which the transformations of working conditions – everywhere, and also in the context of knowledge production – are influencing the sense of self and what is commonly referred to as mental health or well-being.

However, in this process, there is no need to externalise anxiety (or other feelings): that is, to frame it as if caused by forces outside of, or completely independent from, human influence, including influence within academia itself (for instance, government policies, or political changes at the supranational level). Nor is there a need to completely internalise it, in the sense of ascribing it to the embodied experience of individuals alone. If feelings occupy the unstable ‘middle ground’ between institutions and individuals, this is the position from which they will have to be thought. If anxiety is an interpretation of the changes in the structures of knowledge production, its critique cannot but stem from the same position. This position is not ‘outside’ but rather ‘in-between’; insecure and thought-provoking, but no less potent for that.

Which, come to think of it, may be what Williams was trying to say all along.

All the feels

This poster drew my attention while I was working in the library of Cambridge University a couple of weeks ago:


 

For a while now, I have been fascinated by the way in which the language of emotions, or affect, has penetrated public discourse. People ‘love’ all sorts of things: the way a film uses interior light, the icing on a cake, their friend’s new hairstyle. They ‘hate’ Donald Trump, the weather, the next-door neighbours’ music. More often than not, conversations involving emotions would not be complete without mentioning online expressions of affect, such as ‘likes’ or ‘loves’ on Facebook or Twitter.

Of course, the presence of emotions in human communication is nothing new. Even ‘ordinary’ statements – such as, for instance, “it’s going to rain tomorrow” – frequently entail an affective dimension (most people would tend to be at least slightly disappointed by the announcement). Yet what I find peculiar is that the language of affect is becoming increasingly present not only in communication mediated by non-human entities, but also in relation to non-human entities themselves. Can you really ‘love’ a library? Or be ‘friends’ with your local coffee place?

This isn’t in any way to concede ground to techno-pessimists who blame social media for ‘declining’ standards in human communication, nor even to express concern over the ways in which affective ‘reaction’ buttons allow the tracking of online behaviour (privacy is always a problem, and ‘unmediated’ communication largely a fiction). Even if face-to-face interaction is qualitatively different from online interaction, there is nothing to support the claim that this makes it inherently more valuable or, indeed, ‘real’ (see: “IRL fetish”[i]). It is the social and cultural framing of these emotions, and, especially, the way the social sciences think about them – the social theory of affect, if you wish – that concerns me here.

Fetishism and feeling

So what is different about ‘loving’ your library as opposed to, say, ‘loving’ another human being? One possible way of going about this is to interpret expressions of emotion directed at or through non-human entities as ‘shorthand’ for those aimed at other human beings. The kernel of this idea is contained in Marx’s concept of commodity fetishism: emotion, or affect, directed at an object obscures the all-too-human (in his case, capital) relationship behind it. In this sense, ‘liking’ your local coffee place would be an expression of appreciation for the people who work there, for the way they make double macchiato, or just for the times you spent there with friends or other significant others. In human-to-human communication, things would be even more straightforward: generally speaking, ‘liking’ someone’s status updates, photos, or Tweets would signify appreciation of/for the person, agreement with, or general interest in, what they’re saying.

But what if it is actually the inverse? What if, in ‘liking’ something on Facebook or on Twitter, the human-to-human relationship is, in fact, epiphenomenal to the act? The prime currency of online communication is thus the expenditure of (emotional) energy, not the relationship that it may (or may not) establish or signify. In this sense, it is entirely irrelevant whether one is liking an inanimate object (or concept), or a person. Likes or other forms of affective engagement do not constitute any sort of human relationship; the only thing they ‘feed’ is the network itself. The network, at the same time, is not an expression, reflection, or (even) simulation of human relationships: it is the primary structure of feeling.

All hail…

Yuval Noah Harari’s latest book, Homo Deus, puts the issue of emotions at the centre of the discussion of the relationship between humans and AI. In a review in The Guardian, David Runciman writes:

“Human nature will be transformed in the 21st century because intelligence is uncoupling from consciousness. We are not going to build machines any time soon that have feelings like we have feelings: that’s consciousness. Robots won’t be falling in love with each other (which doesn’t mean we are incapable of falling in love with robots). But we have already built machines – vast data-processing networks – that can know our feelings better than we know them ourselves: that’s intelligence. Google – the search engine, not the company – doesn’t have beliefs and desires of its own. It doesn’t care what we search for and it won’t feel hurt by our behaviour. But it can process our behaviour to know what we want before we know it ourselves. That fact has the potential to change what it means to be human.”

On the surface, this makes sense. Algorithms can measure our ‘likes’ and other emotional reactions and combine them into ‘meaningful’ patterns – e.g., correlate them with specific background data (age, gender, location), time of day, etc. – and, on that basis, predict how you will act (click, shop) in specific situations. However, does this amount to ‘knowledge’? In other words, if machines cannot have feelings – and Harari seems adamant that they cannot – how can they actually ‘know’ them?
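
To make concrete what this algorithmic ‘knowing’ amounts to, here is a minimal, purely illustrative sketch – all figures and feature names are invented, and no actual platform or dataset is being described. It simply correlates counts of affective signals and background data with subsequent behaviour:

```python
# A hypothetical illustration: 'knowing' feelings as counting and correlation.
# The features and figures below are invented for the sake of the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Per-user features: [likes per day, age, typical hour of activity]
X = np.array([
    [40, 23, 22],
    [ 2, 57,  9],
    [25, 31, 20],
    [ 5, 45, 13],
    [33, 19, 23],
    [ 1, 62,  8],
])
# Whether each user subsequently clicked on a promoted post (1) or not (0)
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# The 'prediction' for a new user is a probability derived purely from
# correlation -- with no access to what any of the 'likes' felt like.
new_user = np.array([[28, 26, 21]])
print(model.predict_proba(new_user)[0, 1])
```

Whatever its predictive accuracy, such a model only maps numbers onto numbers; whether this constitutes knowledge of feelings is precisely the question at stake.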

Frege on Facebook

This comes close to a philosophical problem I’ve been trying to get a grip on recently: the Frege-Geach (alternatively, the embedding, or Frege-Geach-Searle) problem. It comprises two steps. The first is to claim that there is a qualitative difference between moral and descriptive statements – for instance, between saying “It is wrong to kill” and “It is raining”. Most humans, I believe, would agree with this. The second is to observe that there is no basis for claiming this sort of difference on the grounds of sentence structure alone, which then leads to the problem of explaining its source – how do we know there is one? In other words, how can it be that moral and descriptive terms have exactly the same sort of semantic properties in complex sentences, even though they have different kinds of meaning? Where does this difference stem from?

The argument can be extended to feelings: how do we know that there is a qualitative difference between statements such as “I love you” and “I eat apples”? Or loving someone and ‘liking’ an online status? From a formal (syntactic) perspective, there isn’t. More interestingly, however, there is no reason why machines should not be capable of such a form of expression. In this sense, there is no way to reliably establish that likes coming from a ‘real’ person and, say, a Twitterbot, are qualitatively different. As humans, of course, we would claim to know the difference, or at least be able to spot it. But machines cannot. There is nothing inherent in the expression of online affect that would allow algorithms to distinguish between, say, the act of ‘loving’ the library and the act of loving a person. Knowledge of emotions, in other words, is not reducible to counting, even if counting takes increasingly sophisticated forms.
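
The same point can be made at the level of the data a platform actually records. The sketch below is hypothetical – the field names are invented and do not correspond to any real API – but it illustrates why, formally, nothing distinguishes a person’s ‘love’ from a bot’s:

```python
# Hypothetical 'like' events as a platform might store them.
# Field names are invented; the point is that the record produced by a human
# and the record produced by a bot have exactly the same structure.
human_like = {"actor_id": "u_1842", "target": "page:central_library",
              "verb": "love", "timestamp": 1488791200}
bot_like   = {"actor_id": "u_9317", "target": "user:j_doe",
              "verb": "love", "timestamp": 1488791201}

# Structurally identical: same fields, same 'grammar'.
print(human_like.keys() == bot_like.keys())  # prints: True
```

Any distinction between the two has to be imported from outside the record itself – which is exactly what counting, however sophisticated, cannot supply.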

How do you know what you do not know?

The problem, however, is that humans do not have superior knowledge of emotions, their own or other people’s. I am not referring to situations in which people are unsure or ‘confused’ about how they feel [ii], but rather to the limited language – the limited forms of expression – available to us. The documentary “One More Time With Feeling”, which I saw last week, engages with this issue in a way I found incredibly resonant. Reflecting on the loss of his son, Nick Cave relates how the words that he or the people around him could use to describe the emotions seemed equally misplaced, maladjusted and superfluous (until the film comes back into circulation, Amanda Palmer’s review, which addresses a similar question, is here) – not because they couldn’t reflect it accurately, but because there was no necessary link between them and the structure of feeling at all.

Clearly, the idea that language does not reflect, but rather constructs – and thus also constrains – human reality is hardly new: Wittgenstein, Lacan, and Rorty (to name but a few) have offered different interpretations of how and why this is the case. What I found particularly poignant about the way Cave frames it in the film is that it questions the whole ontology of emotional expression. It’s not just that language acts as a ‘barrier’ to the expression of grief; it is the idea of the continuity of the ‘self’ supposed to ‘have’ those feelings that’s shattered as well.

Love’s labour’s lost (?): between practice and theory

This brings back some of my fieldwork experiences from 2007 and 2008, when I was doing a PhD in anthropology, writing on the concept of romantic relationships. Whereas most of my ‘informants’ – research participants – could engage in lengthy elaboration of the criteria they used in choosing (‘romantic’) partners (as well as, frequently, the reasons why they wouldn’t designate someone as a partner), when it came to emotions their narratives could frequently be reduced to one word: love (and it wasn’t for lack of expressive skills: most were highly educated). It was framed as a binary phenomenon: either there or not there. At the time, I was more interested in the way their (elaborated) narratives reflected or coded markers of social inequality – for instance, class or status. Recently, however, I have been returning to their inability (or unwillingness) to elaborate on the emotion that supposedly underpins, or at least buttresses, those choices.

Theoretical language is not immune to these limitations. For instance, whereas the social sciences have made significant steps in deconstructing notions such as ‘man’, ‘woman’, ‘happiness’, or ‘family’, we are still miles away from seriously examining concepts such as ‘love’, ‘hate’, or ‘fear’. The work of Moira Weigel and Eva Illouz offers welcome exceptions to the rule: Weigel uses the feminist concept of emotional labour to show how the responsibility for maintaining relationships tends to be unequally distributed between men and women, and Illouz demonstrates how modern notions of dating come to define the subjectivity and agency of persons in ways conducive to the reproduction of capitalism. Yet, while both do a great job of highlighting the social aspects of love, they avoid engaging with its ontological basis. This leaves the back door open for an old-school dualism that either assumes there is an (a- or pre-social?) ‘basis’ to human emotions, which can be exploited or ‘harvested’ through relationships of power, or, conversely, that all emotional expression is defined by language, and thus that its social construction is the only thing worth studying. It’s almost as if ‘love’ is the last construct left standing, and we’re all too afraid to disenchant it.

For a relational ontology

A relational ontology of human emotions could, in principle, aspire to dethrone this nominalist (or, possibly worse, truth-proceduralist) notion of love in favour of one that sees it as a by-product of relationality. This is not to claim that ‘love’ is epiphenomenal: to the degree to which it is framed as a motivating force, it becomes part and parcel of the relationship itself. However, not seeing it as central to this inquiry would hopefully allow us to work on diversifying the language of emotions. Instead of using a single marker (even one as polysemic as ‘love’) for the relationship with one’s library and with one’s significant other, we could start thinking about the ways in which they are (or are not) the same thing. This isn’t, of course, to sanctify ‘live’ human-to-human emotion: I am certain that people can feel ‘love’ for pets, places, or the deceased. Yet calling it all ‘love’ and leaving it at that is a pretty shoddy way of going about feelings.

Furthermore, a relational ontology of human emotions would mean treating all relationships as unique. This isn’t, to be clear, a pseudo-anarchist attempt to deny standards of, or responsibility for, (inter)personal decency, and even less a default glorification of long-lasting relationships. Most relationships change over time (as do the people in them), and this frequently means they can no longer exist; some relationships cannot coexist with other relationships; some relationships are detrimental to those involved in them, which hopefully means they cease to exist. Equally, some relationships are superficial, trivial, or barely worth a mention. However, this does not make them, analytically speaking, any less special.

This also means they cannot be reduced to the same standard, nor measured against each other. This, of course, runs against one of capitalism’s dearly held assumptions: that all humans are comparable and, thus, mutually replaceable. This assumption is vital not only for the reproduction of labour power, but also, for instance, for the practice of dating [iii], whether online or offline. Moving towards a relational concept of emotions would allow us to challenge this notion. In this sense, ‘loving’ a library is problematic not because the library is not a human being, but because ‘love’, just like other human concepts, is a relatively bad proxy. Contrary to what pop songs would have us believe, it’s never an answer and, quite possibly, not the question either.

Some Twitter wisdom for the end….

————————————————————————–

[i] Thanks go to Mark Carrigan who sent this to me.

[ii] While I am very interested in the question of self-knowledge (or self-ignorance), for some reason, I never found this particular aspect of the question analytically or personally intriguing.

[iii] Over the past couple of years, I’ve had numerous discussions on the topic of dating with friends, colleagues, but also acquaintances and (almost) strangers (the combination of having a theoretical interest in the topic and not being in a relationship seems to be particularly conducive to becoming involved in such conversations, regardless of whether one wants to or not). I feel compelled to say that my critique of dating (and the concomitant refusal to engage in it, at least as far as its dominant social forms go) does not, in any way, imply a criticism of people who do. There is quite a long list of people whom I should thank for helping me clarify this, but instead I promise to write another, longer post on the topic, as well as, finally, develop that app :).

Out of place? On Pokémon, foxes, and critical cultural political economy

Isle of Wight, August 2016

Last week, I attended the Second International Conference in Cultural Political Economy, organised by the Centre for Globalization, Education and Social Futures at the University of Bristol. It was through working with Susan Robertson and other folk at the Graduate School of Education, where I had spent parts of 2014 and 2015 as a research fellow, that I was first introduced to cultural political economy.

The inaugural conference last year took place in Lancaster, so it was a great opportunity to both meet other people working within this paradigm and do a bit of hiking in the Lake District. This year, I was particularly glad to be in Bristol – the city that, to a great degree, comes closest to ‘home’, and where – having spent the majority of those two years not really living anywhere – I felt I kind of belonged. The conference’s theme – “Putting culture in its place” – held, for me, in this sense, a double meaning: it was both about critically assessing the concept of culture in cultural political economy, and about being in a particular place from which to engage in doing just that.

Cultural political economy (CPE) unifies (or hybridises) approaches from cultural studies and (Marxist) political economy in order to address the challenges of the growing complexity (and possible incommensurability, or what Jessop refers to as in/compossibility) of the elements of global capitalism. Of course, as Andrew Sayer pointed out, the ‘cultural’ streak in political economy can be traced all the way to Marx, if not downright to Aristotle. Its development as a distinct approach, then, needs to be understood both genealogically – as a way to reconcile two strong traditions in British sociology – and politically, inasmuch as it aspires to make up for what some authors have described as cultural studies’ earlier disregard of the economic, without, at the same time, reverting to the old dichotomies of base/superstructure.

It would be equal parts wrong, pretentious, and not particularly useful to speak of “the” way of doing cultural political economy – indeed, one of its strongest points, in my view, is that it has so far successfully eschewed the theoretical and institutional ossification that seems to be an inevitable corollary of having (or building) ‘disciples’ (in both senses: as students, and as followers of a particular disciplinary approach). What it emphasises, however, is the interrelationship between the ‘cultural’ (as identities, materialities, civilisations, or, in Jessop and Sum’s – to date the most elaborate – view, processes of meaning-making), the political, and the economic, whilst avoiding reducing any one of them to another. Studying how these interact over time, then, can help us understand how specific configurations (or ‘imaginaries’) of capitalism – for instance, competitiveness and the knowledge-based economy – come into being.

My relationship to CPE is somewhat ambivalent. CPE is grounded in the ontology of critical realism, which, ceteris paribus, comes closest to my own view of reality [*]. Furthermore, having spent a good portion of the past ten years researching knowledge production in a variety of regional and historical contexts, the observation that factors we call ‘cultural’ play a role in each makes sense to me, both intuitively and analytically. On the other hand, being trained in anthropology means I am highly suspicious of the reifying and exclusionary potential of concepts such as ‘culture’ and, especially, ‘civilisation’ (in ways which, I would like to think, go beyond the (self-)righteousness immanent in many of their critiques on the Left). Last, but not least, despite a strong sense of solidarity with a number of identity-based causes, my experience of working in post-conflict environments has led me to believe that the politics of identity almost inevitably fails to be progressive.[†]

For these reasons, the presentation I gave at the conference was aimed at clarifying the different uses of the concept of ‘culture’ (and, to a lesser degree, ‘civilisation’) in cultural political economy, and at discussing their political implications. To begin with, it might make sense to put culture through the 5W1H of journalistic inquiry. What is culture (or, what is its ontology)? Who is it – in other words, when we say that ‘culture does things’, how do we define agency? Where is it – in other words, how does it extend in space, and how do we know where its boundaries are? When is it – or, what is its temporal dimension, and why does it seem easiest to define when it has either already passed or is at least ‘in decline’, a label that seems particularly readily applied to Western civilisation? How is it (applied as an analytical concept)? This last bit is particularly relevant, as ‘culture’ sometimes appears in social research as a cause, sometimes as a mediating force (in positivist terms, an ‘intervening variable’), and sometimes as an outcome, or consequence. Of course, the standard response is that it is, in fact, all of these, but instead of foreclosing the debate, this just opens up the question of WHY: if culture is indeed everything (or can be everything), what is its value as an analytical term?

A useful metaphor to think about different meanings of ‘culture’ could be the game of Pokémon Go. It figures equally as an entity (in the case of Pokémon, entities are largely fictional, but this is of lesser importance – many entities we identify as culturally significant, for instance deities, are); as a system of rules and relationships (for instance, those governing the game, as well as online and offline relationships between players); as a cause of behaviour (in positivist terms, an independent variable); and as an indicator (for instance, Pokémon Go is taken as a sign of globalization, alienation, revolution [in gaming], etc.). The photos in the presentation reflect some of these uses, and they are from Bristol: the first is a Pikachu caught in Castle Park (no, not mine :)); the other is from an event in July, when the Bristol Zoo was forced to close because too many people turned up for a Pokémon lure party. This brings in the political economy of the game; however, just like in CPE, the ‘lifeworld’ of Pokémon Go cannot be reduced to it, despite the fact it would not exist without it. So, when we go ‘hunting’ for culture, where should we look?

Clarifying the epistemic uses of the concept of culture serves not only to prevent treating culture as what Archer has referred to as ‘epiphenomenal’, or what Rojek & Urry have (in a brilliantly scathing review) characterised as ‘decorative’, but primarily to avoid what Woolgar & Pawluch dubbed ‘ontological gerrymandering’. Ontological gerrymandering refers to conceptual sliding in definitions of social problems, and consists of “making problematic the truth status of certain states of affairs selected for analysis and explanation, while backgrounding or minimizing the possibility that the same problems apply to assumptions upon which the analysis depends. (…) Some areas are portrayed as ripe for ontological doubt and others portrayed as (at least temporarily) immune to doubt”[‡].

In the worst of cases, ‘culture’ lends itself to exactly this sort of use – at one moment almost an ‘afterthought’ of the more foundational processes of politics and the economy; at the next, foundational itself, at the very root of the transformations we see in everyday life; and, at yet other moments, mediating, like a ‘lens’ that refracts reality. Of course, different concepts and uses of the term have been dissected and discussed at length in social theory; however, in research, just like in practice, ‘culture’ frequently resurfaces as a black box that can be conveniently proffered to explain whatever is not attributable (or reducible) to other factors.

This is important not only for theoretical but also, and possibly more, for political reasons. Culture is often seen as a space of freedom, of expression and experimentation. The line from which I borrow the title of my talk – “When I hear the word culture” – is an example of a right-wing reaction to exactly that sort of concept. Variously misattributed to Goering, Goebbels, or even Hitler, the line actually comes from Schlageter, a play by Hanns Johst, written in Germany in 1933, which celebrates Nazi ideology. At some point, one of the characters breaks into a longish rant on why he hates the concept of culture – he sees it as ‘lofty’, ‘idealistic’, and in many ways distant from what he perceives to be ‘real struggles’, guns and ammo – which is why it crescendos in the famous “When I hear the word culture, I release the safety on my Browning”. This idea of ‘culture’ as fundamentally opposed to the vagaries of material existence has informed many anti-intellectualist movements but, equally importantly, it has also penetrated the reaction to them, resulting in the often unreflexive glorification of ‘folk’ poetry, drama, or art as almost instantaneously effective expressions of resistance to anti-intellectualism.

Yet, in contemporary political discourse, the concept of culture has been appropriated equally by the left and the right: witness the ‘culture wars’ in the US, or the more recent use of the term to describe social divisions in the UK. Rather than disappearing, political struggles, I believe, will increasingly be framed in terms of culture. The ‘burkini ban’ in France is one case. Some societies deal with cultural diversity differently, at least on the face of it. New Zealand, where I did part of my research, is a bicultural society. Its universities are founded on the explicit recognition of the concept of mātauranga Māori, which implies the existence of fundamentally culturally different epistemologies. This, of course, raises a number of other interesting issues; but these are issues we should be prepared to face.

While we are becoming better at dealing with culture and with the economy, it remains a challenge to translate these insights into the political. An obvious case where we are failing at this is knowledge production itself – cultural political economy is very well suited to analysing the transformation of universities under neoliberalism, yet it is none the wiser – or more effective – when it comes to tackling these challenges in ways that provide a lasting political alternative.

——-

Later that evening, I go see two of my closest friends from Bristol. Walking back to the flat where I’m staying – right between Clifton and Stokes Croft – I run across a fox. Foxes are not particularly exceptional in Bristol, but I still remember my first encounter with one, as I was walking across Cotham side in 2014: I thought it was a large cat at first, and it was only the tail that gave it away. Having grown up in a highly urbanised environment, I cannot help but see encounters with wildlife as somewhat magical. They are, to me, visitors from another world, creatures temporarily inhabiting the same plane of existence, but subject to different motivations and rules of behaviour: in other words, completely alien. This particular night, this particular fox crosses the road and goes through the gates of Cotham School, which I find so patently symbolic that I am reluctant to share it for fear of being accused of peddling clichés.

And this, of course, marks the return of culture en pleine force. As a concept, it is constructed in opposition to ‘nature’; as a practice, its primary role is to draw boundaries – between the sacred and the profane, between the living and the dead, the civilised and the wild. I know – from my training in anthropology, if nothing else – that fascination with this particular encounter stems from the feeling of it being ‘out of place’: foxes in Bristol are magical because they transgress boundaries – in this case, between ‘cultured’, human worlds, and ‘nature’, the outer world.

I walk on, and right around St. Matthew’s church, there is another one. This one stops, actually, and looks at me. “Hey”, I say, “Hello, fox”. It waits for about six seconds, and then slowly turns around and disappears through the hedge.

I wish I could say that there was sense in that stare, or that I was able to attribute purpose to it. There was none, and this is what made it so poignant. The ultimate indecipherability of its gaze made me realise I was as much out of place as the fox was. From its point of view, I was as immaterial and as transgressive as it was from mine: a creature from another realm, temporarily inhabiting the same plane, but ultimately of no interest. And there it was, condensed in one moment: what it means to be human, what it means to be somewhere, what it means to belong – and the fragility, precariousness, and eternal incertitude that comes with it.

[*] In truth, I’m still planning to write a book that hybridises magical realism with critical realism, but this is not the place to elaborate on that particular project.

[†] I’ve written a bit on the particular intersection of class- and identity-based projects in From Class to Identity; the rich literature on liberalism, multiculturalism, and the politics of recognition is impossible to summarise here, but the Stanford Encyclopedia of Philosophy has a decent overview under the entry “Identity Politics”.

[‡] I am grateful to Federico Brandmayr who initially drew my attention to this article.

Do we need academic celebrities?

 

[This post originally appeared on the Sociological Review blog on 3 August, 2016].

Why do we need academic celebrities? In this post, I would like to extend the discussion of academic celebrities from the focus on these intellectuals’ strategies, or ‘acts of positioning’, to what makes them possible in the first place, in the sense of Kant’s ‘conditions of possibility’. In other words, I want to situate the conversation within the broader framework of a critical cultural political economy. This is based on the belief that, if we want to develop an understanding of knowledge production that is truly relational, we need to analyse not only what public intellectuals or ‘academic celebrities’ do, but also what makes, maintains, and, sometimes, breaks their wider appeal, including – not least importantly – our own fascination with them.

To begin with, an obvious point is that academic stardom requires a transnational audience and a global market for intellectual products. As Peter Walsh argues, academic publishers play an important role in creating and maintaining such a market; Mark Carrigan and Eliran Bar-El remind us that celebrities like Giddens or Žižek are very good at cultivating relationships with that side of the industry. However, in order for publishers to operate at even a minimal profit, someone needs to buy the product. Simply put, public intellectuals necessitate a public.

While intellectual elites have always been to some degree transnational, two trends associated with late modernity are, in this sense, of paramount importance. One is the expansion and internationalization of higher education; the other is the supremacy of English as the language of global academic communication, coupled with the growing digitalization of the processes and products of intellectual labour. Despite the fact that access to knowledge still remains largely inequitable, these two trends have contributed to the creation of an expanded potential ‘customer base’. And yet – just as in the case of MOOCs – the availability or accessibility of a product is not sufficient to explain (or guarantee) interest in it. Regardless of whether someone can read Giddens’ books in English, or is able to watch Žižek’s RSA talk online, their arguments, presumably, still need to resonate: in other words, there must be something that people derive from them. What could this be?

In ‘The Existentialist Moment’, Patrick Baert suggests that the global popularity of existentialism can be explained by the success with which Sartre (and other philosophers who came to be identified with it, such as de Beauvoir and Camus) connected core concepts of existentialist philosophy, such as choice and responsibility, to the concerns of post-WWII France. To some degree, this analysis could be applied to contemporary academic celebrities – Giddens and Bauman wrote about the problems of late or liquid modernity, and Žižek frequently comments on the contradictions and failures of liberal democracy. It is not difficult to see how these would strike a chord with the concerns of a liberal, educated, Western audience. Yet, just as in the case of Sartre, this doesn’t mean their arguments are always presented in the most palatable manner: Žižek’s writing is complex to the point of obscurantism, and Bauman is no stranger to ‘thick description’. Of the three, Giddens’ work is probably the most accessible, although this might have more to do with good editing and academic English’s predilection for short sentences than with the simplicity of the ideas themselves. Either way, it could be argued that reading their work requires a relatively advanced understanding of the core concepts of social theory and philosophy, and the patience to plough through at times arcane language – all at seemingly little or no direct benefit to the audience.

I want to argue that the appeal of star academics has very little to do with their ideas or the ways in which they are framed, and more to do with the combination of the charismatic authority they exude and the feeling of belonging, or shared understanding, that the consumption of their ideas provides. Like Weber’s priests and magicians, star academics offer a public performance of the transfiguration of abstract ideas into concrete diagnoses of social evils. They offer an interpretation of the travails of late moderns – instability, job insecurity, surveillance, etc. – and, at the same time, the promise that there is something in the very act of intellectual reflection, or the work of social critique, that allows one to achieve a degree of distance from their immediate impact. What academic celebrities thus provide is – even if temporarily – a (re)‘enchantment’ of a world in which the production of knowledge, so long reserved for the small elite of the ‘initiated’, has become increasingly ‘profaned’, both through the massification of higher education and through the requirement to make the stages of its production, as well as its outcomes, measurable and accountable to the public.

For the ‘common’ (read: Western, left-leaning, highly educated) person, the consumption of these celebrities’ ideas offers something akin to a combination of a music festival and a mindfulness retreat: an opportunity to commune with the ‘like-minded’ and take home a piece of hope, if not for salvation, then at least for temporary exemption from the grind of neoliberal capitalism. Reflection is, after all, as Marx taught us, the privilege of the leisurely; engaging in collective acts of reflection thus equals belonging to (or at least affinity with) ‘the priesthood of the intellect’. As Bourdieu noted in his reading of Weber’s sociology of religion, the laity expect of religion “not only justifications of their existence that can offer them deliverance from the existential anguish of contingency or abandonment, [but] justification of their existence as occupants of a particular position in the social structure”. Thus, Giddens’ or Žižek’s books become the structural or cultural equivalent of the Bible (or the Qur’an, or any religious text): not many people know what is actually in them, even fewer can get the oblique references, but everyone will want one on the bookshelf – not necessarily for what they say, but because of what having them signifies.

This helps explain why people flock to hear Žižek or, for instance, Yanis Varoufakis, another leftist star intellectual. In public performances, their ideas are distilled to the point of simplicity and conveniently latched onto something the public can relate to. At the Subversive Festival in Zagreb, Croatia, in 2013, for instance, Žižek propounded the concept of ‘love’ as a political act. Nothing new, one would say – but who in the audience would not want to believe their crush has the potential to turn into an act of political subversion? Therefore, these intellectuals’ utterances represent ‘speech acts’ in quite a literal sense of the term: not because they are truly (or consequentially) performative, but because they offer the public the illusion that listening (to them) and speaking (about their work) represents, in itself, a political act.

From this perspective, the mixture of admiration, envy and resentment with which these celebrities are treated in the academic establishment reflects their evangelical status. Those who admire them quarrel about the ‘correct’ interpretation of their works and vie for the status of nominal successor – a succession which would, of course, also involve ritualistic patricide, which may be why, although surrounded by followers, so few academic celebrities actually elect one. Those who envy them monitor their rise to fame in the hope of emulating it one day. Those who resent them, finally, tend to criticize their work for intellectual ‘baseness’, an argument that is in itself predicated on the distinction between academic (and thus ‘sacred’) and popular, ‘common’ knowledge.

Many are, of course, shocked when their idols turn out not to be ‘original’ thinkers channeling divine wisdom, but plagiarists or serial repeaters. Yet there is very little to be surprised by; academic celebrities, after all, are creatures of flesh and blood. Discovering their humanity, and thus their ultimate fallibility – in other words, the fact that they cheat, copy, rely on unverified information, etc. – reminds us that, in the final instance, knowledge production is work like any other. In this sense, it reminds us of our own mortality. And yet, acknowledging it may be the necessary step in dismantling the structures of rigid, masculine, God-like authority that still permeate academia. In this regard, it makes sense to kill your idols.

What after Brexit? We don’t know, and if we did, we wouldn’t dare say

[This post originally appeared on the Sociological Review blog, Sunday 3rd July, 2016]

In dark times
Will there also be singing?
Yes, there will be singing
About the dark times.

– Bertolt Brecht

Sociologists are notoriously bad at prediction. The collapse of the Soviet Union is a good example – not only did no one (or almost no one) predict it would happen, it also challenged social theory’s dearly-held assumptions about the world order and the ‘nature’ of both socialism and capitalism. When the next big ‘extraneous’ shocks to the Western world – 9/11 and the 2008 economic crisis – hit, we were almost as unprepared: save for a few isolated voices, no one foresaw either the events or the full scale of their consequences.

The victory of the Leave campaign and Britain’s likely exit from the European Union present a similar challenge. Of course, in this case everyone knew it might happen, but there are surprisingly few ideas of what the consequences will be – not at the short-term political level, where the scenarios seem pretty clear, but in terms of longer-term societal impact, at either the macro- or the micro-sociological level.

Of course, anyone but the direst of positivists will be quick to point out that sociology does not predict events – it can, at best, aim to explain them retroactively (for example). Public intellectuals have already offered explanations for the referendum result, ranging from the exacerbation of xenophobia due to austerity to the lack of awareness of what the EU does. However, as Will Davies’ more in-depth analysis suggests, how these come together is far from obvious. While it is important to work on understanding them, the fact that we are at a point of intensified morphogenesis – or multiple critical junctures – means we cannot stand on the sidelines and wait until they unfold.

Methodological debates temporarily aside, I want to argue that one of the things that prevent us from making (informed) predictions is that we’re afraid of what the future might hold. The progressive ethos that permeates the discipline can make it difficult to think of scenarios predicated on a different worldview. A similar bias kept social scientists from realizing that countries seen as examples of real socialism – like the Soviet Union, and particularly the former Yugoslavia – could ever fall apart, especially in a violent manner. The starry-eyed assumption that exit from the European Union could be a portent of a new era of progressive politics in the UK is a case in point. As much as I would like to see it happen, we need to seriously consider other possibilities – or, perhaps, the possibility that what the future has in store is beyond our darkest dreams. In the past years, there has been a resurgence of thinking about utopias as critical alternatives to neoliberalism. Alongside this, we need to actively start thinking about dystopias – not as a way of succumbing to despair, but as a way of using the sociological imagination to understand both the societal causes of the trends we’re observing – nationalism, racism, xenophobia, and so on – and our own fear of them.

Clearly, a strong argument against making long-term predictions is the reputational risk – to ourselves and to the discipline – that this involves. If the failure of Marx’s prediction of the inevitability of capitalism’s collapse is still occasionally brought up as a critique of Marxism, offering longer-term forecasts in a context where the social sciences are increasingly held accountable to the public (i.e. policymakers) rightfully seems tricky. But this is where the sociological community has a role to play. Instead of bemoaning the glory of bygone days, we can create spaces from which to consider possible scenarios – even if some of them are bleak. In the final instance, to borrow from Henshel: the future cannot be predicted, but futures can be invented.

Jana Bacevic is a PhD researcher in the Department of Sociology at the University of Cambridge. She tweets at @jana_bacevic.

Education – cure or symptom?

[This post originally appeared on the website of REKOM, the initiative for the establishment of a reconciliation commission for former Yugoslavia].

When speaking of the processes of facing the past and reconciliation within the context of violent conflict, education is often accorded a major role. Educational practices and discourses have the ability to reproduce or widen existing social inequalities, or even to create new divisions. The introduction of textbooks which have painted a “purified” picture of a nation’s participation in and responsibility for the war crimes perpetrated during the wars in the 1990s, or the abolition of educational programmes and classes taught in minority languages, are just some of the examples found in the former Yugoslavia. Such moves are usually linked with a repressive politics that existed before, during and sometimes after the conflict itself.

Because of that, reconciliation programmes are often aimed at achieving formal equality within institutions or an equal representation of differing views in public discourse. Such an approach is based on the idea that a change of the public paradigm is the necessary first step in coming to terms with the past. In this particular case, the process of reconciliation is being led by the political and social elites who influence the shaping of public opinion. Similar to the “trickle-down theory” in economics, the assumption is that a change in the official narrative through institutions, including those in the educational field, will, in time, bring about a change in public awareness – that is, lead the rest of the population to face its traumatic past.

Although the influence of formal discourses cannot be neglected, it is important to understand that the causes and consequences of conflict, and thus the prosecution of those responsible, usually depend on a whole array of social and economic factors. It is highly unlikely that critical narratives examining the past will find fertile ground in the educational institutions of divided and isolated societies. In this respect, textbooks are just the metaphorical tip of the iceberg. It bears repeating that all educational institutions in Bosnia and Herzegovina, from elementary schools to universities, are ethnically segregated. The situation is similar in Kosovo, where this institutional segregation is virtually complete – just like in the nineties, there are in practice two parallel systems in existence. The universities in Macedonia also reflect its constitutional make-up, based on the division of political power between its two largest ethnic groups. Even in more ethnically homogeneous communities, such as those found in parts of Serbia or Croatia, the presence of religious education in school curricula – a subject which, in its present format, segregates students according to their faith – stands as a lasting symbol of the impact of identity-based politics on the education system.

The institutionalization of divisions rooted in the legacy of the conflict fought in the former Yugoslavia does not end with education, but pervades other relationships and activities as well, such as employment, freedom of movement, family structure and the creation of informal social networks. It goes without saying that the political parties in all the successor-states are, by and large, made up of those who have profited in some way from the breakup of Yugoslavia. The transition from socialist self-governance to neoliberal capitalism has served to further degrade the stability and independence of social institutions. Such a context fosters political ideologies such as chauvinism and nationalism, and breeds fear of all that is different. What we must therefore ask ourselves is not just how to change the content and the paradigm of education in the former Yugoslavia, but also who profits from it staying the way it is.

These questions require critical analysis, not just of the responsibility for the crimes perpetrated during the conflict in the former Yugoslavia, but also of the economic and political legacy of its breakup. This is a huge challenge, which implies dialogue between the different parts of society in each successor-state. Educational institutions – universities and science institutes in particular – can play a potentially major role in establishing such a dialogue. This implies, first and foremost, an agreement on what its rules and goals are – which Habermas considered a crucial element in the development of the public sphere. For as long as there is no such agreement in place, deliberations on contemporary history will remain fragmented along the lines of ideological affiliation or political belief. Education based on such interpretations of the past thus continues to serve as an instrument for the proliferation of the same (or at least similar) divisions which shaped the dynamics of the conflict following the breakup of the former Yugoslavia, rather than as a motor of change.

This, of course, does not mean that every change in education requires the whole social structure to be changed beforehand, but it does mean that the two go hand in hand. Although such change is very likely to be gradual, it matters far more that it be lasting. In the end, the educational narratives we are dealing with might brush up against the past, but they concern the future.

Jana Bacevic works on social theory and the relationships between knowledge (and education) and political agency. She is presently writing her PhD in sociology at the University of Cambridge, Great Britain, and holds a PhD in anthropology from the University of Belgrade. She has worked as a Marie Curie Fellow at the University of Aarhus and taught at the Central European University in Budapest and Singidunum University in Belgrade. Her book “From Class to Identity: Politics of Education Reforms in Former Yugoslavia” was published in 2014 by Central European University Press.

Europe of Knowledge: Paradoxes and Challenges


[This article originally appeared in the Federation of Young European Greens’ ‘Youth Emancipation’ publication]

The Bologna process was a step towards creating a “Europe of Knowledge” where ideas and people could travel freely throughout Europe. Yet, this goal is threatened by changes to the structure of the higher education sector and perhaps by the nature of academia itself.

“The Europe of knowledge” is a phrase one can hardly avoid hearing today. It encompasses the goal of building the European Higher Education Area through the Bologna process; the aim of making mobility a reality for many young (and not only young) people through European Commission programmes such as Erasmus; and numerous scientific cooperation programmes aimed at boosting research and innovation. The European Commission has committed to ensuring that up to 20% of young people in the European Union will be academically mobile by 2020. The number of universities, research institutes, think tanks and other organizations whose mission is to generate, spread and apply knowledge seems to be growing by the minute. As information technologies continue to develop, knowledge becomes more readily available to a growing number of individuals across the world. In a certain sense, Europe is today arguably more “knowledgeable” than it ever was in the past.

And yet, this picture masks deeper tensions below the surface. Repeated student protests across Europe show that the transformation of European higher education and research entails, as Guy Neave [1] once diplomatically put it, an “inspiring number of contradictions”. This text outlines some of these contradictions or, as I prefer to call them, paradoxes, and then points to the main challenges they generate – challenges that will not only have to be answered if the “Europe of knowledge” is ever to become anything but a catchy slogan, but will also continue to resurface in the long process of transforming it into a political reality for all Europeans.

Paradoxes: Commercialisation, Borders and the Democratic Deficit

Although a “Europe of knowledge” hints at a shared space where everyone has the same (or similar) access and right to participate in the creation and transmission of knowledge, this is hardly the case. To begin with, Europe is not without borders: some of them face outwards, but many also run inside it. A number of education and research initiatives distinguish between people and institutions based on whether they are from the EU – despite the fact that 20 out of the 47 countries that make up the European Higher Education Area are not EU member states. European integration in higher education and research may have simplified, but has not removed, obstacles to the free circulation of knowledge: for many students, researchers and scholars who are not citizens of the EU, mobility entails lengthy visa procedures, stringent criteria for obtaining residence permits, and reporting requirements that not only resemble surveillance but can also directly interfere with their learning.

Another paradox of the Europe of knowledge is that the massification and globalization of higher education have, in many cases, led to the growing construction of knowledge as a commodity – something that can be bought or sold. The privatisation of education and research has not only changed the entire ethos of knowledge production; it has also brought very tangible consequences for the financing of higher education (with tuition fees becoming both higher and a more prominent way of paying for education), for access to knowledge (with scholarly publishers charging increasingly exorbitant prices both for access and for publishing), and for working conditions in academia (with short-term and precarious modes of employment becoming more common). On a more paradigmatic level, it has led to the instrumentalisation of knowledge – its valorisation only or primarily in terms of its contribution to economic growth, and the consequent devaluation of other, more “traditional” purposes, such as self-awareness, personal development and intellectual pursuit for its own sake, which some critics associate with the Humboldtian model of the university.

It is possible to see these paradoxes and contradictions as inevitable parts of global transformations, and thus to accept their consequences as unavoidable. This text, however, argues that it is still possible to use knowledge to fight for a better world, but that doing so entails a number of tough challenges. The following section outlines some of them.

Challenges: Equality and the Conservatism of Academia

Probably the biggest challenge is to ensure that knowledge contributes to equality of opportunity for everyone. This should not translate into political clichés, or remain limited to policies that try to raise the presence or visibility of underrepresented populations in education and research. Recognizing inequalities is a first step, but changing them is a far more complex endeavour than it may at first appear. Sociologists of education have shown that one of the main purposes of education – and especially higher education – is to distinguish between those who have it and those who do not, bestowing higher economic and social status on the former. In other words, education reproduces social inequalities not only because it is unfair at the point of entry, but also because it is supposed to create social stratification. Subverting social inequalities in education can therefore only work if it becomes part of a greater effort to eliminate or minimise inequalities based on class, status, income or power. Similarly, research aimed only at economic competitiveness – not to mention military supremacy – can hardly contribute to making the world more equal or peaceful. As long as knowledge remains a medium of power, it will continue to serve the purpose of maintaining the status quo.

This brings us to the key challenge in thinking about knowledge. In theory as well as in practice, knowledge always rests somewhere on the slippery ground between reproduction and innovation. On the one hand, one of the primary tasks of education, as the main form of knowledge transmission, is to integrate people into society – teaching them to read, write and count, as well as to “fit” within the broader social structure. In this sense, all education is essentially conservative: it is focused on preserving human societies rather than changing them. On the other hand, knowledge is also there to change the world: both in the conventional sense of developing science and technology, and in the more challenging sense of becoming aware of what it means to be human and what the implications and consequences of that are – including, but not limited to, the consequences of technological development. The latter task, traditionally entrusted to the social sciences and humanities, is always to doubt, challenge and “disrupt” dominant or accepted modes of thinking.

The balance between these two “faces” of knowledge is very delicate. In times of scarcity or crisis, the uses of knowledge too easily slip into the confines of reproduction – ensuring that human societies preserve themselves, usually with power relationships and inequalities intact, and not infrequently at the expense of others, including our own environment. On the other hand, a one-sided emphasis on the uses of knowledge for development can obscure the conditions of sustainability, as insights from environmental research and activism have shown numerous times. The challenge, thus, is to maintain both of these aspects without allowing either one to become dominant.

Conclusion

These paradoxes and challenges are just a fraction of the changes now facing higher education and research in Europe. Yet, without knowing what they and their consequences are, action will remain lost in the woods of technical jargon and petty “turf wars” between different movements, factions, disciplines and institutions. The higher education and research policies developed in Europe today largely try to smooth over these conflicts and tensions by coating them in a neutral language that promises equality, efficiency and prosperity. Checking and probing the meaning of these terms is a task for the future.


[1] Neave, G. (2002). Anything Goes: Or, How the Accommodation of Europe’s Universities to European Integration Integrates an Inspiring Number of Contradictions. Tertiary Education and Management, 8(3), 181–197. ISSN 1358-3883.

Higher education and politics in the Balkans

In this entry of the thematic week on crisis, Jana Bacevic from the Department of Public Policy, Central European University (Budapest) examines higher education in the context of ethnic and religious divisions in recent Balkan history.

In situations of crisis – whether economic, environmental, or humanitarian – higher education is hardly the first thing to come to mind. Aid and development packages tend to focus on primary education, essential for teaching reading, writing and arithmetic, as well as for successful socialization in peer groups, and, in some cases, on secondary – usually vocational – education, intended to enable people to work both during and in the immediate aftermath of the crisis. However, slowly but steadily, higher education is beginning to occupy a more prominent place in contexts of crisis. Why is this the case?

Critics would say that higher education is a luxury, and that a focus on it is little more than empty rhetoric aimed at rallying support for the agendas of politicians or trade unions. However, there are many reasons why higher education should not be ignored, even in times of crisis. Issues and policies related to higher education hardly ever stay confined to the university campus, or even to the boundaries of nation-states, whether new or old.

Access to higher education is directly linked to access to work, income and, to some extent, social and political participation. In this sense, who can access higher education, how, and under which conditions are questions with explicit political consequences for human and minority rights, social stratification and (in)equality, and the overall quality of life. Higher education institutions do not only reflect the dominant ethos of a society; they also create and reproduce it. Politicians and policymakers know this, and this is why higher education can become such a politically charged issue.

The recent history of higher education in the successor states of the former Yugoslavia provides many examples of the interplay between higher education and political dynamics. Early in the conflict, two universities in Bosnia and Herzegovina were divided between ethnic groups. The Serbian staff and students of the University of Sarajevo founded the separate University of East Sarajevo in 1992. The University of Mostar was split between the Croatian part (University of Mostar, or “Sveučilište u Mostaru”) and the Muslim part (the “Džemal Bijedić” University of Mostar). In Kosovo, the University of Prishtina was at the very center of political contestation between the two biggest ethnic groups, Albanians and Serbs. Following a series of Kosovo Albanian demonstrations at the end of the 1980s, the Serbian authorities forbade the university from accepting any more Albanian students. The result was a complete split of the academic sphere into two domains – the “official”, Serbian one, and the “parallel”, Albanian one, which existed outside institutional frameworks.

After the NATO intervention in 1999, the Serbian students and staff fled to the northern part of the province, predominantly controlled by the central Serbian government, re-establishing the university as the “University of Prishtina temporarily located in Kosovska Mitrovica”. Meanwhile, Albanian students and staff returned to the premises of the university in Prishtina, developing a new system under the close supervision of the international administration. Just as in Bosnia, the configuration of higher education today reflects the deep ethnic and social cleavages that are the legacy of the conflict.

Higher education can become a subject of political contestation even in the absence of large-scale armed conflict. For instance, one of the issues that precipitated the 2001 conflict between ethnic Albanians and Macedonian police in the Former Yugoslav Republic of Macedonia was the demand of ethnic Albanian parties for a separate university in their own language. Following the de facto consociational arrangement provided by the terms of the Ohrid Framework Agreement, the previously private Tetovo University was given public status in 2004. However, the same town was already home to the Southeast European University, founded in 2001 by the international community (primarily the OSCE) to support post-conflict development and foster the integration of ethnic Albanian and ethnic Macedonian youth. Today, the two universities coexist, teaching similar programmes and even sharing staff, although they differ in their approach to the use of languages as well as in the composition of their student bodies.

A similar story can be told about Novi Pazar, the administrative center of Sandžak, a multiethnic region of Serbia with a high proportion of Bosniak Muslims. The private International University of Novi Pazar was founded by a local Muslim religious leader in 2002, with support from the government in Belgrade, which at the time saw it as a good solution for the integration of Bosniak Muslims within the framework of the state. Two years later, however, after a change of government and political climate, the state founded a new university, the State University of Novi Pazar, withdrawing its support from the International University. The two universities continue to exist side by side, teaching similar programmes and, in theory, competing for the same population of students. Their rivalry reflects and reproduces the political, social and, not least, ethnic cleavages in Sandžak.

Universities in the Western Balkans are just some of the cases in which the links between higher education and social divisions can be seen most clearly. However, they are neither isolated nor unique: conflicts can persist and emerge across and outside ethnic and religious lines, sometimes simmering below the surface even in societies that, from the outside, appear peaceful and stable. This is why higher education should not only be reactive, responding to cleavages and conflicts once they become visible, but rather proactive, revealing and working to abolish the multiple and often hidden structures of power that reproduce inequalities. On the one hand, this can be done through policies that seek to ensure equal access to and representation in higher education institutions. On the other, it can also mean engagement in research and activism aimed at raising awareness of the mechanisms through which inequalities and injustice are perpetuated. This latter mission, however, requires that higher education institutions turn a critical eye on their own policies and practices, and examine the ways in which they are – perhaps unwittingly – reproducing the societal divisions that, in times of crisis, can easily evolve into open conflicts. Frequently, this is the hardest task of all.

—–

Jana Bacevic holds a PhD (2008) in Social Anthropology from the University of Belgrade. She has previously taught at the University of Belgrade and Singidunum University, and worked as a higher education expert on a number of projects aimed at developing education in the post-conflict societies of the Western Balkans. Her research interests lie at the intersection of sociology, anthropology, politics and the philosophy of knowledge, and her book, “From Class to Identity: Politics of Education Reforms in Former Yugoslavia”, is being published by CEU Press in 2013.