Until recently, Professor Marenbon writes, university strikes in Cambridge were a hardly noticeable affair. Life, he says, went on as usual. The ongoing industrial action by UCU members at the UK’s universities has changed all that. Dons, rarely concerned with the affairs of lesser mortals, seem to be up in arms. They are picketing, almost every day, in the wind and the snow; marching; shouting slogans. For Heaven’s sake, some are even dancing. Cambridge, as someone pointed out on Twitter, has not seen such upheaval since the university considered awarding Derrida an honorary degree.
This is possibly the best thing that has happened to UK higher education, at least since the end of the 1990s. Not that there’s much competition: this period, after all, brought us the introduction, then the removal, of tuition fee caps; the abolition of maintenance grants; the REF and the TEF; and, as its crowning (though short-lived) glory, the appointment of Toby Young to the Office for Students. Yet, for most of this period, academics’ opposition to these reforms conformed to ‘civilised’ ways of protest: writing a book, giving a lecture, publishing a blog post or an article in Times Higher Education, or, at best, complaining on Twitter. While most would agree that British universities have been under threat for decades, concerted effort to counter these reforms – with a few notable exceptions – remained the province of the people Professor Marenbon calls ‘amiable but over-ideological eccentrics’.
This is how we have truly let down our students. Resistance was left to student protests and occupations. Longer-lasting, transgenerational solidarity was all but absent: at the end of the day, professors retreated to their ivory towers, while precarious academics engaged in activism on the side, amid ever-increasing competition and the pressure to land a permanent job. Students picked up the tab: not only when it came to tuition fees, used to finance expensive accommodation blocks designed to attract more (tuition-paying) students, but also when it came to the quality of teaching and learning, increasingly delivered by an underpaid, overworked, and precarious labour force.
This is why the charge that teach-outs of dubious quality are replacing lectures comes across as particularly disingenuous. We are told that ‘although students are denied lectures on philosophy, history or mathematics, the union wants them to show up to “teach-outs” on vital topics such as “How UK policy fuels war and repression in the Middle East” and “Neoliberal Capitalism versus Collective Imaginaries”’. Although this is but one snippet of Cambridge UCU’s programme of teach-outs, the choice is illustrative.
The link between history and the UK’s foreign policy in the Middle East strikes me as obvious. Students in philosophy, politics or economics could do worse than a seminar on the development of neoliberal ideology (the event was initially scheduled as part of the Cambridge seminar in political thought). As for mathematics – anybody who, over the past weeks, has had to engage with the details of the actuarial calculations and projections tied to the USS pension scheme has had more than a crash refresher course: I dare say they learned more than they ever hoped they would.
Teach-outs, in this sense, are not a replacement for education “as usual”. They are a way to begin bridging the infamous divide between “town and gown”, both by being held in more open spaces, and by, for instance, discussing how the university’s lucrative development projects are impacting on the regional economy. They are not meant to make up for the shortcomings of higher education: if anything, they render them more visible.
What the strikes have made clear is that academics’ ‘life as usual’ is vice-chancellors’ business as usual. In other words, it is precisely the attitude of studied depoliticisation that allowed the marketisation of higher education to continue. Markets, after all, are presumably ‘apolitical’. Other scholars have expended considerable effort in showing how this assumption has been used to further policies whose results we are now seeing, among other places, in the reform of the pensions system. Rather than repeat their arguments, I would like to end with the words of another philosopher, Hannah Arendt, who understood well the ambiguous relationship between academia and politics:
‘Very unwelcome truths have emerged from the universities, and very unwelcome judgments have been handed down from the bench time and again; and these institutions, like other refuges of truth, have remained exposed to all the dangers arising from social and political power. Yet the chances for truth to prevail in public are, of course, greatly improved by the mere existence of such places and by the organization of independent, supposedly disinterested scholars associated with them.
This authentically political significance of the Academe is today easily overlooked because of the prominence of its professional schools and the evolution of its natural science divisions, where, unexpectedly, pure research has yielded so many decisive results that have proved vital to the country at large. No one can possibly gainsay the social and technical usefulness of the universities, but this importance is not political. The historical sciences and the humanities, which are supposed to find out, stand guard over, and interpret factual truth and human documents, are politically of greater relevance.’
In this sense, teach-outs, and industrial action in general, are a way for us to recognise our responsibility to protect the university from the undue incursion of political power, while acknowledging that such responsibility is in itself political. At this moment in history, I can think of no greater service to scholarship than that.
A serious line of division runs through my household. It does not concern politics, music, or even sports: it concerns the possibility of large-scale collapse of social and political order, which I consider very likely. Specific scenarios aside for the time being, let’s just say we are talking more human-made climate-change-induced breakdown involving possibly protracted and almost certainly lethal conflict over resources, than ‘giant asteroid wipes out Earth’ or ‘rogue AI takes over and destroys humanity’.
Ontological security or epistemic positioning?
It may be tempting to attribute the tendency towards catastrophic predictions to psychological factors rooted in individual histories. My childhood and adolescence took place alongside the multi-stage collapse of the country once known as the Socialist Federal Republic of Yugoslavia. First came the economic crisis, when the failure of ‘shock therapy’ to boost stalling productivity (surprise!) resulted in massive inflation; then social and political disintegration, as the country descended into a series of violent conflicts whose consequences went far beyond the actual front lines; and then actual physical collapse, as Serbia’s long involvement in wars in the region was brought to a halt by the NATO intervention in 1999, which destroyed most of the country’s infrastructure, including parts of Belgrade, where I was living at the time*. It makes sense to assume that this results in quite a different sense of ontological security than that which, say, the predictability of a middle-class English childhood would afford.
But does predictability actually work against the capacity to make accurate predictions? This may seem not only contradictory but also counterintuitive – any calculation of risk has to take into account not just the likelihood, but also the nature of the source of threat involved, and thus necessarily draws on the assumption of (some degree of) empirical regularity. However, what about events outside of this scope? A recent article by Faulkner, Feduzi and Runde offers a good formalization of this problem (the Black Swans and ‘unknown unknowns’) in the context of the (limited) possibility to imagine different outcomes (see table below). Of course, as Beck noted a while ago, the perception of ‘risk’ (as well as, by extension, any other kind of future-oriented thinking) is profoundly social: it depends on ‘calculative devices‘ and procedures employed by networks and institutions of knowledge production (universities, research institutes, think tanks, and the like), as well as on how they are presented in, for instance, literature and the media.
In The Great Derangement (probably the best book I read in 2017), Amitav Ghosh argues that this can explain, for instance, the surprising absence of literary engagement with the problem of climate change. The problem, he claims, is endemic to Western modernity: a linear vision of history cannot conceive of a problem that exceeds its own scale**. This isn’t the case only with ‘really big problems’ such as economic crises, climate change, or wars: it also applies to specific events such as elections or referendums. Of course, social scientists – especially those qualitatively inclined – tend to emphasise that, at best, we aim to explain events retroactively. Methodological modesty is good (and advisable), but avoiding thinking about the ways in which academic knowledge production is intertwined with the possibility of prediction is useless, for at least two reasons.
One is that, as reflected in the (by now overwrought and overdetermined) crisis of expertise and ‘post-truth’, social researchers increasingly find themselves in situations where they are expected to give authoritative statements about the future direction of events (for instance, about the impact of Brexit). Even if they disavow this form of positioning, the very idea of social science rests on the (however implicit) assumption that at least some mechanisms or classes of objects will exhibit the same characteristics across cases; consequently, the possibility of inference is implied, if not always practised. Secondly, given the scope of the challenges societies face at present, it seems ridiculous not to even attempt to engage with – and, if possible, refine – the capacity to think about how they will develop in the future. While there is quite a bit of research on individual predictive capacity and the ways in which collective reasoning can correct for cognitive bias, most of these models – given that they are usually based on experiments or simulations – cannot account for the way in which social structures, institutions, and cultures of knowledge production interact with the capacity to theorise, model, and think about the future.
The relationship between social, political, and economic factors, on the one hand, and knowledge (including knowledge about those factors), on the other, has been at the core of my work, including my current PhD. While it may seem minor compared to issues such as wars or revolutions, the future of universities offers a perfect case to study the relationship between epistemic positioning, positionality, and the capacity to make authoritative statements about reality: what Boltanski’s sociology of critique refers to as ‘complex externality’. One of the things it allowed me to realise is that while there is a good tradition of reflecting on positionality (or, in positivist terms, cognitive ‘bias’) in relation to categories such as gender, race, or class, we are still far from successfully theorising something we could call ‘ontological bias’: epistemic attachment to the object of research.
The postdoctoral project I am developing extends this question and aims to understand its implications in the context of generating and disseminating knowledge that can allow us to predict – make more accurate assessments of – the future of complex social phenomena such as global warming or the development of artificial intelligence. This question has, in fact, been informed by my own history, but in a slightly different manner than the one implied by the concept of ontological security.
Legitimation and prediction: the case of former Yugoslavia
The Socialist Federal Republic of Yugoslavia had relatively sophisticated and well-developed networks of social scientists, in which both of my parents were involved***. Yet, of all the philosophers, sociologists, political scientists etc. writing about the future of the Yugoslav federation, only one – to the best of my knowledge – predicted, in eerie detail, the political crisis that would lead to its collapse: Bogdan Denitch, whose Legitimation of a revolution: the Yugoslav case (1976) is, in my opinion, one of the best books about former Yugoslavia ever written.
A Yugoslav-American, Denitch was a professor of sociology at the City University of New York. He was also a family friend, a fact I considered of little significance (having only met him once, when I was four, and my mother and I were spending a part of our summer holiday at his house in Croatia; my only memory of it is being terrified of tortoises roaming freely in the garden), until I began researching the material for my book on education policies and the Yugoslav crisis. In the years that followed (I managed to talk to him again in 2012; he passed away in 2016), I kept coming back to the question: what made Denitch more successful in ‘predicting’ the crisis that would ultimately lead to the dissolution of former Yugoslavia than virtually anyone writing on Yugoslavia at the time?
Denitch had a pretty interesting trajectory. Born in 1929 to Croatian Serb parents, he spent his childhood in a series of countries (including Greece and Egypt), following his diplomat father; in 1946, the family emigrated to the United States (the fact that his father had been a civil servant in the previous government would have made it impossible for them to continue living in Yugoslavia after the Communist regime, led by Josip Broz Tito, formally took over). There, Denitch (in evident defiance of his upper-middle-class legacy) trained as a factory worker while studying for a degree in sociology at CUNY. He also joined the Democratic Socialist Alliance – one of the American socialist parties – of which he would remain a member (and later a functionary) for the rest of his life.
In 1968, Denitch was awarded a major research grant to study Yugoslav elites. The project was not without risks: while Yugoslavia was more open to ‘the West’ than other countries in Eastern Europe, visits by international scholars were strictly monitored. My mother recalls receiving a house visit from an agent of the UDBA, the Yugoslav secret police – not quite the KGB but you get the drift – who tried to elicit the confession that Denitch was indeed a CIA agent, and, in the absence of that, the promise that she would occasionally report on him****.
Despite these minor setbacks, the research continued: Legitimation of a revolution is one of its outcomes. In 1973, Denitch was awarded a PhD by Columbia University and started teaching at CUNY, eventually retiring in 1994. His last book, Ethnic nationalism: the tragic death of Yugoslavia, came out in the same year: a reflection on the conflict that was still going on at the time, and whose architecture he had foreseen with such clarity eighteen years earlier (the book is remarkably bereft of “told-you-so”-isms, and so warmly recommended for those wishing to learn more about Yugoslavia’s dissolution).
Did personal history, in this sense, have a bearing on one’s epistemic position, and, by extension, on the capacity to predict events? One explanation (prevalent in certain versions of popular intellectual history) would be that Denitch’s position as both a Yugoslav and an American allowed him to escape the ideological traps other scholars were more likely to fall into. Yugoslavs, presumably, would be at pains to prove socialism was functioning; Americans, on the other hand, perhaps egalitarian in theory but certainly suspicious of Communist revolutions in practice, would be looking to prove it wasn’t, at least not as an economic model. Yet this assumption hardly stands even the lightest empirical interrogation. At least up until the show trials of the Praxis philosophers, there was a lively critique of Yugoslav socialism within Yugoslavia itself; despite the mandatory coating of jargon, Yugoslav scholars were quite far from being uniformly bright-eyed and bushy-tailed about socialism. Similarly, quite a few American scholars were very much in favour of the Yugoslav model, eager, if anything, to show that market socialism was possible – that is, that it’s possible to have a relatively progressive social policy and still be able to afford nice things. Herein, I believe, lies the beginning of the answer as to why neither of these groups was able to predict the type or the scale of the crisis that would eventually lead to the dissolution of former Yugoslavia.
Simply put, both groups of scholars depended on Yugoslavia as a source of legitimation for their work, though for different reasons. For Yugoslav scholars, the ‘exceptionality’ of the Yugoslav model was the source of epistemic legitimacy, particularly in the context of international scientific collaboration: their authority was, in part at least, constructed on their identity and positioning as possessors of ‘local’ knowledge (Bockman and Eyal’s excellent analysis of the transnational roots of neoliberalism makes an analogous point about positioning in the context of the collaboration between ‘Eastern’ and ‘Western’ economists). In addition to this, many Yugoslav scholars were born and raised in socialism: while some of them did travel to the West, the opportunities were still scarce and many were subject to ideological pre-screening. In this sense, both their professional and their personal identity depended on the continued existence of Yugoslavia as an object; they could imagine different ways in which it could be transformed, but not really that it could be obliterated.
For scholars from the West, on the other hand, Yugoslavia served as a perfect experiment in mixing capitalism and socialism. Those more on the left saw it as a beacon of hope that socialism need not go hand-in-hand with Stalinist-style repression. Those who were more on the right saw it as proof that limited market exchange can function even in command economies, and deduced (correctly) that the promise of supporting failing economies in exchange for access to future consumer markets could be used as a lever to bring the Eastern Bloc in line with the rest of the capitalist world. If no one foresaw the war, it was because it played no role in either of these epistemic constructs.
This is where Denitch’s background would have afforded a distinct advantage. The fact his parents came from a Serb minority in Croatia meant he never lost sight of the salience of ethnicity as a form of political identification, despite the fact socialism glossed over local nationalisms. His Yugoslav upbringing provided him not only with fluency in the language(s), but a degree of shared cultural references that made it easier to participate in local communities, including those composed of intellectuals. On the other hand, his entire professional and political socialization took place in the States: this meant he was attached to Yugoslavia as a case, but not necessarily as an object. Not only was his childhood spent away from the country; the fact his parents had left Yugoslavia after the regime change at the end of World War II meant that, in a way, for him, Yugoslavia-as-object was already dead. Last, but not least, Denitch was a socialist, but one committed to building socialism ‘at home’. This means that his investment in the Yugoslav model of socialism was, if anything, practical rather than principled: in other words, he was interested in its actual functioning, not in demonstrating its successes as a marriage of markets and social justice. This epistemic position, in sum, would have provided the combination needed to imagine the scenario of Yugoslav dissolution: a sufficient degree of attachment to be able to look deeply into a problem and understand its possible transformations; and a sufficient degree of detachment to be able to see that the object of knowledge may not be there forever.
Onwards to the…future?
What can we learn from this story? Balancing attachment and detachment is, I think, one of the key challenges in any practice of knowing the social world. It’s always been there; it cannot be, in any meaningful way, resolved. But I think it will become more and more important as the objects – or ‘problems’ – we engage with grow in complexity and become increasingly central to the definition of humanity as such. Which means we need to get better at it.
(*) I rarely bring this up as I think it overdramatizes the point – Belgrade was relatively safe, especially compared to other parts of former Yugoslavia, and I had the fortune to never experience the trauma or hardship people in places like Bosnia, Kosovo, or Croatia did.
(**) As Jane Bennett noted in Vibrant Matter, this resonates with Adorno’s notion of non-identity in Negative Dialectics: a concept always exceeds our capacity to know it. We can see object-oriented ontology (e.g. Timothy Morton’s Hyperobjects) as the ontological version of the same argument: the sheer size of the problem deters us from even trying to grasp it in its entirety.
(***) This bit lends itself easily to the Bourdieusian “aha!” argument – academics breed academics, etc. The picture, however, is a bit more complex – I didn’t grow up with my father and, until about 16, had a very vague idea of what my mother did for a living.
(****) Legend has it my mother showed the agent the door and told him never to call on her again, prompting my grandmother – her mother – to buy funeral attire, assuming her only daughter would soon be thrown into prison and possibly murdered. Luckily, Yugoslavia was not really the Soviet Union, so this did not come to pass.
A woman needs a fridge of her own if she is to write theory. In fact, I’d wager a woman needs a fridge of her own if she is to write pretty much anything, but since what I am writing at the moment is (mostly) theory, let’s assume that it can serve as a metaphor for intellectual labour more broadly.
In her famous injunction to undergraduates at Girton College in Cambridge (the first residential college for women that offered education to degree level), Virginia Woolf stated that a woman needed two things in order to write: a room of her own, and a small independent income (Woolf settled on 500 pounds a year; as this website helpfully informed me, that would be £29,593 in today’s terms). In addition to the room and the income, a woman who wants to write, I want to argue, also needs a fridge. Not a shelf or two in a fridge in a kitchen in a shared house or at the end of the staircase; a proper fridge of her own. Let me explain.
The immateriality of intellect
Woolf’s broader point in A Room of One’s Own is that intellectual freedom and creativity require the absence of material constraints. In and of itself, this argument is not particularly exceptional: attempts to define the nature of intellectual labour have almost unfailingly centred on its rootedness in leisure – skholē – as the opportunity for peaceful contemplation, away from the vagaries of everyday existence. For the ancient Greeks, contemplation was opposed to the political (as in the everyday life of the polis): what we today think of as the ‘private’ was not even a candidate, being the domain of women and slaves, neither of whom were considered proper citizens. For Marx, it was the opposite of material labour, with its sweat, noise, and capitalist exploitation. But underpinning it all was the private sphere – that amorphous construct that, as feminist scholars have pointed out, includes the domestic and affective labour of care, cleaning, cooking, and, yes, the very act of biological reproduction. The capacity to distance oneself from these kinds of concerns thus became the sine qua non of scholarly reflection, particularly in the case of theōria, held to be contemplation in its pure(st) form. After all, to paraphrase Kant, it is difficult to ponder the sublime from too close.
This thread runs from Plato and Aristotle through Marx to Arendt, who made it the gist of her analysis of the distinction between vita activa and vita contemplativa; and onwards to Bourdieu, who zeroed in on the ‘scholastic reason’ (raison scolastique) as the source of Homo Academicus’ disposition to project the categories of scholarship – skholē – onto everyday life. I am particularly interested in the social framing of this distinction, given that I think it underpins a lot of contemporary discussions on the role of universities. But regardless of whether we treat it as virtue, a methodological caveat, or an interesting research problem, detachment from the material persists as the distinctive marker of the academic enterprise.
What about today?
So I think we can benefit from thinking about what would be the best way to achieve this absolution from the material for women who are trying to write today. One solution, obviously, would be to outsource the cooking and cleaning to a centralised service – like, for instance, College halls and cafeterias. This way, one would have all the time to write: away with the vile fridge! (It was anyway rather unseemly, poised as it was in the middle of one’s room). Yet, outsourcing domestic labour means we are potentially depriving other people of the opportunity to develop their own modes of contemplation. If we take into account that the majority of global domestic labour is performed by women, perfecting our scholarship would most likely be off the back of another Shakespeare’s (or, for consistency’s sake, let’s say Marx’s) sister. So, let’s keep the fridge, at least for the time being.
But wait, you will say, what about eating out – in restaurants and such? It’s fine you want to do away with outsourced domestic labour, but surely you wouldn’t scrap the entire catering industry! After all, it’s a booming sector of the economy (and we all know economic growth is good), and it employs so many people (often precariously and in not very nice conditions, but we are prone to ignore that during happy hour). Also, to be honest, it’s so nice to have food prepared by other people. After all, isn’t that what Simone de Beauvoir did, sitting, drinking and smoking (and presumably also eating) in cafés all day? This doesn’t necessarily mean we would need to do away with the fridge, but a shelf in a shared one would suffice – just enough to keep a bit of milk, some butter and eggs, fruit, perhaps even a bottle of rosé? Here, however, we face the economic reality of the present. Let’s do a short calculation.
£500 a year gets you very far…or not
The £29,593 Woolf proposes as a sufficient independent income comes from an inheritance. Those of us who are less fortunate and are entering the field of theory today can hope to obtain one of many scholarships. Mine is currently at £13,900 a year (no tax); ESRC-funded students get a bit more, £14,000. This means we fall well short of today’s equivalent of the 500-pounds-a-year sum Woolf suggested to the students at Girton. Starting from £14,000, and assuming that roughly £2,000 a year goes on things such as clothes, books, cosmetics, and ‘incidentals’ – for instance, travel to see one’s family, or medical costs (non-EU students are subject to something called the Immigration Health Surcharge, paid upfront at the point of application for a student visa, which varies between £150 and £200 per year, but doesn’t cover dental treatment, prescriptions, or eye tests – so much for “NHS tourism”) – this leaves us with roughly £1,000 per month. Out of this, accommodation costs anything between 400 and 700 pounds, depending on bills, council tax etc. – for a “room of one’s own”, that is, a room in a shared house or college accommodation – that, you guessed it, almost inevitably comes with a shared fridge.
So the money that’s left is supposed to cover eating in cafés, perhaps even an occasional glass of wine (it’s important to socialise with other writers, or just watch the world go by). Assuming we have £450 a month after paying rent and bills, this leaves us with a bit less than 15 pounds per day. This suffices for about a meal and a half daily in most cheap high-street eateries, provided you do not eat a lot, do not drink, and never have tea or coffee. Ever. Even at colleges, where food is subsidised, this would be barely enough. Remember: this means you never go out for a drink with friends or to the cinema, never buy presents, never pay for services: in short, it makes for a relatively boring and constrained life. This could make writing, unless you’re Emily Dickinson, somewhat difficult. Luckily, you have the Internet – that is, if it’s included in your bills. And you pray your computer does not break down.
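For the sceptics, the arithmetic above fits into a few lines of Python. The stipend and incidentals figures are the ones quoted in the text; the £550 for rent and bills is my own illustrative mid-point of the 400–700 range, chosen so the remainder matches the £450 a month used here:

```python
# Back-of-the-envelope budget for a UK PhD stipend, using the essay's figures.
stipend = 14_000                 # annual stipend (GBP), tax-free
incidentals = 2_000              # clothes, books, travel, medical costs per year
monthly = (stipend - incidentals) / 12

rent_and_bills = 550             # hypothetical mid-point of the 400-700 range
left_per_month = monthly - rent_and_bills
left_per_day = left_per_month / (365 / 12)   # average days in a month

print(f"after incidentals: £{monthly:.0f}/month")
print(f"after rent and bills: £{left_per_month:.0f}/month, about £{left_per_day:.2f}/day")
```

Pick a rent figure at the top of the range instead, and the daily remainder drops to under ten pounds.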
Well, you can always work, you say. If the money you’re given is not enough to provide the sort of lifestyle you want, go earn more! But there’s a catch. If you are in full-time education, you are only allowed to work part-time. If you are a foreign national, there are additional constraints. This means the amount of money you can earn is usually quite limited. And there are tradeoffs. You know all those part-time jobs that pay a lot, offer stability and future career progression, and that everyone is flocking towards? I don’t either. If you ever wondered where the seemingly inexhaustible supply of cheap labour at universities – sessional lecturers, administrative assistants, event managers, servers and the like – comes from, look around you: more likely than not, it’s hungry graduate students.
The poverty of student life
Increasingly, this is not in the Steve Jobs “stay hungry” sense. As I’ve argued recently, “staying hungry” has quite a different tone when, instead of a temporary excursion into relative deprivation (seen as part of the ‘character building’ education is supposed to be about), it reflects the very real threat of struggling to make ends meet well after graduation. Given the state of the economy and graduate debt, that is a threat faced by a growing proportion of young people (and, no surprise, women are much more likely to end up in precarious employment). Of course, you could always argue that many people have it much worse: you are (relatively) young, well educated, and likely have more cultural and social capital than the average person. Sure, you can get by. But remember – this isn’t about making it from one day to the next. What you’re trying to do is write. Contemplate. Comprehend the beauty (and, sometimes, ugliness) of the world in its entirety. Not wonder whether you’ll be able to afford the electricity bill.
This is why a woman needs to have her own fridge. If you want access to healthy, cheap food, you need to be able to buy it in greater quantities, so you don’t have to go to the supermarket every other day, and store it at home, so you can prepare it quickly and conveniently, as well as plan ahead. For the record, by healthy I do not mean quinoa waffles, duck eggs and shiitake mushrooms (not that there’s anything wrong with any of these, though I’ve never tried duck eggs). I mean the sort of food that keeps you full whilst not racking up your medical expenses further down the line. For this you need a fridge. Not half a vegetable drawer among opened cans of lager that some bro you happen to share a house with forgot to throw away months ago, but an actual fridge. Of your own. It doesn’t matter if it comes with a full kitchen – you can always share a stove, wait your turn for the microwave, and cooking (and eating) together can be a very pleasurable way of spending time. But keep your fridge.
But, you will protest, what about women who live with partners? Surely we want to share fridges with our loved ones! Well, good for you, go ahead. But you may want to make sure that it’s not always you remembering to buy the milk, not always you supplying fresh fruit and vegetables, not always you throwing away the food whose use-by date has long passed. That sharing doesn’t mean you pay half the household bills but still do more than half the work. For, whether we like it or not, research shows that in heterosexual partnerships women still perform a greater portion of domestic labour, not to mention the mental load of designing, organising, and dividing tasks. And yes, this impacts your ability to write. It’s damn difficult to follow a line of thought if you need to stop five times in order to take the laundry out, empty the bins, close the windows because it just started raining, pick up the mail that came through the door, and add tea to the shopping list – and that’s not even mentioning what happens if you have children on top of all this.
So no, a fridge cannot – and will not – solve the problem of gender inequality in academia, let alone gender inequality on a more general level (after all, academics are very, very privileged). What it can do, though, is redress the balance by reminding us that cooking, cleaning, and cutting up food are elements of life as much as citing, cross-referencing, and critique. It can begin to destroy, once and for all, the gendered (and classed) assumption that contemplation happens above and beyond the material, and that all reminders of its bodily manifestations – for instance, that we still need to eat whilst thinking – should be if not abolished entirely, then at least expelled beyond the margins of awareness: to communal kitchens, restaurants, kebab vans, anywhere where they do not disturb the sacred space of the intellect. So keep your income, get a room, and put a fridge in it. Then start writing.
Getting reacquainted with Bauman’s 1988 essay “Sociology and postmodernity”, I accidentally misread the first word of this quote as “mortality”. In the context of the writing of this piece, it would be easy to interpret this as a Freudian slip – yet, as slips often do, it betrays a deeper unease. If it is true that morality is a functional prerequisite of a finite world, it is even truer that such a world calls for mortality – the ultimate human experience of irreversibility. In the context of trans- and post-humanism, as well as the growing awareness of the fact that the world, as the place inhabited (and inhabitable) by human beings, can end, what can Bauman teach us about both?
In “Sociology and postmodernity”, Bauman assumes a position at the crossroads of two historical (social, cultural) periods: modernity and postmodernity. Turning away from the past to look towards the future, he offers thoughts on what a sociology adapted to the study of the postmodern condition would be like. Instead of a “postmodern sociology” as a mimetic representation of (even if a pragmatic response to) postmodernity, he argues for a sociology that attempts to give a comprehensive account of the “aggregate of aspects” that cohere into a new, consumer society: the sociology of postmodernity. This form of account eschews observing the new as a deterioration, or aberration, of the old, and instead aims to come to terms with the system whose contours Bauman would go on to develop in his later work: a system characterised by a plurality of possible worlds, and not necessarily a way to reconcile them.
The point in time at which he writes lends itself fortuitously to the argument of the essay. Not only did Legislators and interpreters, in which he reframes intellectuals as translators between different cultural worlds, come out a year earlier; the publication of “Sociology and postmodernity” briefly precedes 1989, the year that would indeed usher in a wholly new period in the history of Europe, including in Bauman’s native Poland.
On the one hand, he takes the long view back to post-war Europe, built, as it was, on the legacy of the Holocaust as a pathology of modernity, and on two approaches to preventing its repetition – market liberalism and political freedoms in the West, and planned economies and more restrictive political regimes in the Central and Eastern parts of the subcontinent. On the other, he engages with some of the dilemmas for the study of society that the approaching fall of the Berlin Wall and the eventual unification of those two hitherto separated worlds would open. In this sense, Bauman has the privilege of a version of Benjamin’s Angel of History that faces both ways. This probably helped him recognize the false dichotomy of consumer freedom and dictatorship over needs, which, as he stated, was quickly becoming the only imaginable alternative to the system – at least as far as the imagination was that of the system itself.
The present moment is not all that dissimilar from the one in which Bauman was writing. We regularly encounter pronouncements of the end of a whole host of things, among them history, the classical division of labour, standards of objectivity in reporting, nation-states, even – or so we hope – capitalism itself. While some of Bauman’s fears concerning postmodernity may, from the present perspective, seem overstated or even straightforwardly ridiculous, we are inhabiting a world of many posts – post-liberal, post-truth, post-human. Many think that this calls for a rethinking of how sociology can adapt itself to these new conditions: for instance, in a recent issue of the International Sociological Association’s Global Dialogue, Leslie Sklair considers what a new radical sociology, developed in response to the collapse of global capitalism, would be like.
It is as if sociology and the zeitgeist were involved in some strange pas de deux: changes in any domain of life (technology, political regime, legislation) almost instantaneously trigger calls for, if not the invention of new paradigms and approaches to its study, then at least a serious reconsideration of old ones.
I would like to suggest that one of the sources of the continued appeal of this – which Mike Savage brilliantly summarised as “epochal theorising” – is not so much the heralding of the new as the promise that there is an end to the present state of affairs. In order for a new ‘epoch’ to succeed, the old one needs to end. What Bauman warns about in the passage cited at the beginning is that in a world without finality – without death – there can be no morality. In T.S. Eliot’s lines from Burnt Norton: “If all time is eternally present / All time is unredeemable”. What we may read as Bauman’s fear, therefore, is not that worlds as we know them can (and will) end: it is that, whatever name we give to the present condition, it may go on reproducing itself forever. In other words, it is a vision of the future that looks just like the present, only there is more of it.
Which is worse? It is hard to tell. A rarely discussed side of epochal theorising is that it imagines a world in which the social sciences still have a role to play, if nothing else in providing a theoretical framing of, or an empirically informed running commentary on, its demise, and thus offers salvation from the existential anxiety of the present. The ‘ontological turn’ – from object-oriented ontology, to new materialisms, to post-humanism – reflects, in my view, the same tendency. If objects ‘exist’ in the same way as we do, if matter ‘matters’ in the same way (if not to the same degree) as, for instance, black lives matter, this provides temporary respite from the confines of our choices. Expanding the concept of agency so as to involve non-human actors may seem more complicated as a model of social change, but at least it absolves humans of the unique burden of historical responsibility – including that for the fate of the world.
The human (re)discovery of the world thus conveys not so much a newfound awareness of the importance of the lived environment as the desire to escape the solitude of thinking about the human (as Dawson also notes, all too human) condition. The fear of relativism that the postmodern ‘plurality’ of worlds brought about appears to have been preferable to the possibility that there is, after all, just the one world. If the latter is the case, the only escape from it lies, to borrow from Hamlet, in the undiscovered country from whose bourn no traveller returns: in other words, in death.
This impasse is perhaps felt most strongly in sociology and anthropology, because excursions into other worlds have been both the gist of their method and the foundation of their critical potential (including their self-critique, which focused on how these two elements combine in the construction of epistemic authority). The figure of the traveller to other worlds was more pronounced in the case of anthropology, at least at the time when it developed as the study of exotic societies on the fringes of colonial empires, but sociology is no stranger to visitation either: its others, and their worlds, are delineated by the sometimes less tangible boundaries of class, gender, race, or simply epistemic privilege. Bauman was among the theorists who recognized the vital importance of this figure in the construction of the foundations of European modernity, and he was thus also sensitive to its transformations in the context of postmodernity – exemplified, as he argued, in the contemporary human’s ambiguous position between “a perfect tourist” and a “vagabond beyond remedy”.
In this sense, the awareness that every journey has an end can inform the practice of social theory in ways that go beyond the need to pronounce new beginnings. Rather than using eulogies in order to produce more of the same thing – more articles, more commentary, more symposia, more academic prestige – perhaps we can see them as an opportunity to reflect on the always-unfinished trajectory of human existence, including our existence as scholars, and the responsibility that it entails. The challenge, in this case, is to resist the attractive prospect of escaping the current condition by ‘exit’ into another period, or another world – postmodern, post-truth, post-human, whatever – and remember that, no matter how many diverse and wonderful entities they may be populated with, these worlds are also human, all too human. This can serve as a reminder that, as Bauman wrote in his famous essay on heroes and victims of postmodernity, “Our life struggles dissolve, on the contrary, in that unbearable lightness of being. We never know for sure when to laugh and when to cry. And there is hardly a moment in life to say without dark premonitions: ‘I have arrived’”.
Last week, I finally got around to seeing Denial. It has many qualities and a few disadvantages – its attempt at hyperrealism treading on both – but I would like to focus on an aspect that most reviews I’ve read so far seem to have missed. In other words: mansplaining.
A brief contextualization. Lest I be accused of equating the Holocaust and mansplaining (I am not – similarity does not denote equivalence), my work deals with issues of expertise, fact, and public intellectualism; I have always found the Irving case interesting, for a variety of reasons (incidentally, I was also at Oxford during the famous event at the Oxford Union). At the same time, like, I suppose, every woman in academia and beyond with more agency than a doormat, I have, over the past year, become embroiled in countless arguments about what mansplaining is, whether it is really so widespread, whether it is done only by men (and what to call it when it is perpetrated by those who are not men), and, of course, that pseudo-liberal what-passes-as-an-attempt at outmaneuvering the issue: whether using the term ‘mansplaining’ blames men as a group and is as such essentialising and oppressive, just like the discourses ‘we’ (feminists, conveniently grouped under one umbrella) seek to condemn (otherwise known as a tu quoque argument).
Besides their logical flaws, what many of these attacks seem to have in common with the one David Irving launched on Deborah Lipstadt (and that Holocaust deniers routinely use) is the focus on evidence: how do we know that mansplaining occurs, and is not just some fabrication of a bunch of conceited females looking to get ahead despite their obvious lack of qualifications? Other uncanny similarities between the arguments of Holocaust deniers and those who question the existence of mansplaining temporarily aside, one of the indisputable qualities of Denial is that it provides multiple examples of what mansplaining looks like. It is, of course, a film, despite being based on a true story. Rather than presenting a downside, this allows for a concentrated portrayal of the practice – for those doubting its verisimilitude, I strongly recommend watching the film and deciding for yourself whether it resembles real-life situations. For those who do not doubt it: voilà, a handy cinematic case to present to those who prefer to plead ignorance as to what mansplaining ‘actually’ entails.
To begin with, the case portrayed in the film is a par excellence instance of mansplaining as a whole: after all, it is about a self-educated (male) historian who sues an academic historian (a woman) because she does not accept his ‘interpretation’ of World War II (namely, that the Holocaust did not happen) and, furthermore, dares to call him out on it. In the case (and the film), he sets out to explain to the (of course, male) judge and the public that Lipstadt (played by Rachel Weisz) is wrong and, furthermore, that her critique has seriously damaged his career (the underlying assumption being that he is entitled to lucrative publishing deals, while she, clearly, has to earn hers – exacerbated by his mockery of the fact that she sells books, whereas his, by contrast, are free). This ‘talking over’ and attempt to make it all about him (remember, he sues her) are brilliantly cast in the opening, when Irving (played by Timothy Spall) visits Lipstadt’s public talk and openly challenges her in the Q&A, ignoring her repeated refusal to engage with his arguments. Yet, it would be a mistake to locate the trope of mansplaining only in the relation between Irving and Lipstadt. On the contrary – just like the real thing – it is at its most insidious when it comes from those who are, as it were, ‘on our side’.
A good example is the first meeting of the defence team, where Lipstadt is introduced to the people working with her legal counsel, the famous Anthony Julius (Andrew Scott). There is a single woman on Julius’ team: Laura (Caren Pistorius), who, we are told, is a paralegal. Despite it being her first case, it seems she has developed a viable strategy – or at least so we are told by her boss, who, after announcing Laura’s brilliant contribution to the case, continues to talk over her – that is, to explain her thoughts without giving her an opportunity to explain them herself. In this sense, what at first seems like an act of mentoring support – passing the baton and crediting a junior staff member – becomes a classic act in which a man takes it upon himself to interpret the professional intervention of a female colleague, appropriating it in the process.
Cases of professional mansplaining abound throughout the film: in multiple scenes, lawyers explain the Holocaust, as well as the concept of denial, to Lipstadt despite her meek protests that she “has actually written a book about it”. Obvious irony aside, this serves as a potent reminder that women have to invoke professional credentials not in order to be recognized as experts, but in order to be recognized as equally valid participants in debate. By contrast, when it comes to the only difference in qualifications in the film that plays against Lipstadt – knowledge of the British legal system – Weisz’s character conveniently remains a mixture of ignorance and naïveté couched in Americanism. One would be forgiven for assuming that long-term involvement in a libel case, especially one that carries so much emotional and professional weight, would have prompted a university professor to acquaint herself with at least the basic rules of the legal system in which the case was processed, but then, of course, that would have stripped the male characters of the opportunity to shine the light of their knowledge against her supposed ignorance.
Of course, emotional involvement is, in the film, presented as a clear disadvantage when it comes to the case. While Lipstadt first assumes she will, and then repeatedly asks to be allowed to, testify, her legal team insists she would be too emotional a witness. The assumption that having an emotional reaction (even a quite expected one – it is, after all, the Holocaust we are talking about) and a cold, hard approach to ‘facts’ are mutually exclusive is played out succinctly in the scenes that take place at Auschwitz. While Lipstadt, clearly shaken (as anyone, Jewish or not, is bound to be when standing at the site of such a potent example of mass slaughter), asks the party to show respect for the victims, the head barrister Richard Rampton (Tom Wilkinson) is focused on calmly gathering evidence. The value of this only becomes obvious in the courtroom, where he delivers his coup de grâce, revealing that his calm pacing around the perimeter of Auschwitz II-Birkenau (which makes him arrive late and upsets everyone, Lipstadt in particular) was in fact a way of measuring the distance between the SS barracks and the gas chambers, allowing him to disprove Irving’s assertion that the gas chambers were built as air-raid shelters, and thus to tilt the whole case in favour of the defence.
The mansplaining triumph, however, happens even before this Sherlockian turn, in the scene in which Rampton visits Lipstadt in her hotel room (uninvited, unannounced) in order to, yet again, convince her that she should not testify or engage with Irving in any form. After he gently (patronisingly) persuades her that “What feels best isn’t necessarily what works best” (!), she, emotionally moved, agrees to “pass her conscience” to him – that is, to a man. By doing this, she abandons not only her own voice, but also the possibility of speaking for Holocaust survivors – the one survivor who appears as a character in the film being, poignantly, also female. In Lipstadt’s concession that silence is better because it “leads to victory”, it is not difficult to read the paradoxical (pseudo)pragmatic assertion that openly challenging male privilege works, in fact, against gender equality, because it provokes a counterreaction. Initially protesting her own silencing, Lipstadt comes to accept what her character in the script dubs “self-denial” as the only way to beat those who deny the Holocaust.
Self-denial: for instance, denying yourself food for fear of getting ‘fat’ (and thus unattractive for the male gaze); denying yourself fun for fear of being labeled easy or promiscuous (and thus undesirable as a long-term partner); denying yourself time alone for fear of being seen as selfish or uncaring (and thus, clearly, unfit for a relationship). Silence: for instance, letting men speak first for fear of being seen as pushy (and thus too challenging); for instance, not speaking up when other women are oppressed, for fear of being seen as too confrontational (and thus, of course, difficult); for instance, not reporting sexual harassment, for fear of retribution, shame, isolation (self-explanatory). In celebrating ‘self-denial’, the film, then, patently reinscribes the stereotype of the patient, silent female.
Obviously, there is value in refusing to engage with outrageous liars; equally, there are issues that should remain beyond discussion – whether the Holocaust happened being one of them. Yet selective silencing masquerading as strategy – note that Lipstadt is not allowed to speak (not even to the media), while Rampton communicates his contempt for Irving by not looking at him (thus denying him the ‘honour’ of the male gaze) – too often serves to reproduce the structural inequalities that can persist even under a legal system that purports to be egalitarian.
Most interestingly, the fact that a film that is manifestly about mansplaining manages to reproduce quite a few mansplaining tropes (and, I would argue, not always in a self-referential or ironic manner) serves as a poignant reminder of how deeply the ‘splaining complex is embedded not only in politics or academia, but also in cultural representations. This is something we need to remain acutely aware of in the age of ‘post-truth’ or ‘post-facts’. If resistance to lying politicians and the media is going to take the form of the (re)assertion of one, indisputable truth, and the concomitant legitimation of those who claim to know it – strangely enough, most often white, privileged men – then we’d better think of alternatives, and quickly.
The victory of the Leave campaign and Britain’s likely exit from the European Union present a similar challenge. Of course, in this case, everyone knew it might happen, but there are surprisingly few ideas of what the consequences will be – not on the short-term political level, where the scenarios seem pretty clear; but in terms of longer-term societal impact – either on the macro- or micro-sociological level.
Methodological debates temporarily aside, I want to argue that one of the things that prevent us from making (informed) predictions is that we’re afraid of what the future might hold. The progressive ethos that permeates the discipline can make it difficult to think of scenarios predicated on a different worldview. A similar bias kept social scientists from realizing that countries seen as examples of real socialism – like the Soviet Union, and particularly the former Yugoslavia – could ever fall apart, especially in a violent manner. The starry-eyed assumption that exit from the European Union could be a portent of a new era of progressive politics in the UK is a case in point. As much as I would like to see that happen, we need to seriously consider other possibilities – or, perhaps, that what the future has in store is beyond our darkest dreams. In recent years, there has been a resurgence of thinking about utopias as critical alternatives to neoliberalism. Alongside this, we need to start actively thinking about dystopias – not as a way of succumbing to despair, but as a way of using sociological imagination to understand both the societal causes of the trends we’re observing – nationalism, racism, xenophobia, and so on – and our own fear of them.
Clearly, a strong argument against making long-term predictions is the reputational risk – to ourselves and to the discipline – that this involves. If the failure of Marx’s prediction of the inevitability of capitalism’s collapse is still occasionally brought up as a critique of Marxism, offering longer-term forecasts in a context where the social sciences are increasingly held accountable to the public (i.e. policymakers) rightly seems tricky. But this is where the sociological community has a role to play. Instead of bemoaning the glory of bygone days, we can create spaces from which to consider possible scenarios – even if some of them are bleak. In the final instance, to borrow from Henshel: the future cannot be predicted, but futures can be invented.
Jana Bacevic is a PhD researcher in the Department of Sociology at the University of Cambridge. She tweets at @jana_bacevic.
[This post originally appeared on the website of REKOM, the initiative for the establishment of a reconciliation commission for the former Yugoslavia].
When speaking of the processes of facing the past and reconciliation within the context of violent conflict, education is often accorded a major role. Educational practices and discourses have the ability to reproduce or widen existing social inequalities, or even to create new divisions. The introduction of textbooks which have painted a “purified” picture of a nation’s participation in and responsibility for the war crimes perpetrated during the wars in the 1990s, or the abolition of educational programmes and classes taught in minority languages, are just some of the examples found in the former Yugoslavia. Such moves are usually linked with a repressive politics that existed before, during and sometimes after the conflict itself.
Because of that, reconciliation programmes are often aimed at achieving formal equality within institutions or an equal representation of differing views in public discourses. Such an approach is based on the idea that a change of the public paradigm is the necessary first step in coming to terms with the past. In this particular case, the process of reconciliation is being led by the political and social elites which influence the shaping of public opinion. Similar to the “trickle-down theory” in economics, the assumption is that a change in the official narrative through the institutions, including those in the educational field, will, in time, bring about a change in public awareness – that is, lead the rest of the population to face its traumatic past.
Although the influence of formal discourses cannot be neglected, it is important that we understand that the causes and consequences of conflict, and thus the prosecution of those responsible, usually depend on a whole array of social and economic factors. It is highly unlikely that critical narratives examining the past will find a fertile ground in the educational institutions of divided and isolated societies. In this respect, the textbooks are just the metaphorical tip of the iceberg. It bears repeating that all educational institutions in Bosnia and Herzegovina, from elementary schools to universities, are ethnically segregated. The situation is similar in Kosovo, where this institutional segregation is virtually complete – just like in the nineties, there are in practice two parallel systems in existence. The universities in Macedonia also reflect its constitutional make-up, based on the division of political power between its two largest ethnic groups. Even in more ethnically homogenous communities, such as those found in parts of Serbia or Croatia, the presence of religious education in school curricula – a subject which, in its present format, segregates students according to their faith – stands as a lasting symbol of the impact of identity-based politics on the education system.
The institutionalization of divisions rooted in the legacy of the conflict fought in the former Yugoslavia does not end with education, but instead pervades other relationships and activities as well, such as employment, freedom of movement, family structure and the creation of informal social networks. It goes without saying that the political parties in all the successor-states are, by and large, made up of those who have profited in some way from the breakup of Yugoslavia. The transition from socialist self-governance to neoliberal capitalism has served to further degrade the stability and independence of social institutions. Such a context fosters political ideologies such as chauvinism and nationalism, and breeds fear of all that is different. What we must therefore ask ourselves is, not just how to change the content and the paradigm of education in the former Yugoslavia, but also – who profits from it staying the way it is?
These questions require critical analysis, not just of the responsibility for the crimes perpetrated during the conflict in the former Yugoslavia, but also of the economic and political legacy of its breakup. This is a huge challenge, which implies dialogue between the different parts of society in each successor-state. Educational institutions, universities and science institutes in particular, can play a potentially major role in establishing such a dialogue. This implies, first and foremost, an agreement on what its rules and goals are – which Habermas considered a crucial element in the development of the public sphere. For as long as there is no such agreement in place, deliberations on contemporary history will remain fragmented along the lines of ideological affiliation or political belief. Education based on such interpretations of the past thus continues to serve as an instrument of the proliferation of the same (or at least similar) divisions which shaped the dynamics of the conflict following the breakup of the former Yugoslavia, rather than as a motor of change.
This, of course, does not mean that every change in education requires the whole social structure to be changed beforehand, but it does mean that these two elements go hand in hand. Although this change is very likely to be gradual, it is far more important to ensure that it is permanent. In the end, the educational narratives we are dealing with might brush up against the past, but they concern the future.
Jana Bacevic works on social theory and the relationships between knowledge (and education) and political agency. She is presently writing her PhD in sociology at the University of Cambridge, Great Britain, and has a PhD in anthropology from the University of Belgrade. She has worked as a Marie Curie Fellow at the University of Aarhus and taught at the Central European University in Budapest and Singidunum University in Belgrade. Her book “From Class to Identity: Politics of Education Reforms in Former Yugoslavia” was published in 2014 by Central European University Press.
[This post originally appeared on the blog of HEDDA (Higher Education Development Association) of the University of Oslo, Norway, on Dec 13, 2012].
In this entry of the thematic week on crisis, Jana Bacevic from the Department of Public Policy, Central European University (Budapest) examines higher education in the context of ethnic and religious divisions in recent Balkan history.
In situations of crisis – whether economic, environmental, or humanitarian – higher education is hardly the first thing to come to mind. Aid and development packages tend to focus on primary education, essential for teaching reading, writing and arithmetic, as well as for successful socialization in peer groups, and, in some cases, on secondary – usually vocational – education, supposed to enable people to work both during and in the immediate aftermath of the crisis. However, slowly but steadily, higher education is beginning to occupy a more prominent place in contexts of crisis. Why is this the case?
Critics would say higher education is a luxury, and that focus on higher education is hardly anything but empty rhetoric aimed at rallying support for the agendas of politicians or trade unions. However, there are many reasons why higher education should not be ignored, even in times of crisis. Issues and policies related to higher education hardly ever stay confined to the university campus, or even to the boundaries of nation-states, whether new or old.
Access to higher education is directly linked to access to work, income, and, to some extent, social and political participation. In this sense, who can access higher education, how, and under which conditions are questions that have explicit political consequences for human and minority rights, social stratification and (in)equality, and the overall quality of life. Higher education institutions do not only reflect the dominant ethos of a society; they also create and reproduce it. Politicians and policymakers know this, and this is why higher education can become such a politically charged issue.
The recent history of higher education in the successor states of the former Yugoslavia provides many examples of the interplay between higher education and political dynamics. Early in the conflict, two universities in Bosnia and Herzegovina were divided between ethnic groups. The Serbian staff and students of the University of Sarajevo founded the separate University of East Sarajevo in 1992. The University of Mostar was split between the Croatian part (University of Mostar, or “Sveučilište u Mostaru”) and the Muslim part (University of Mostar “Džemal Bijedić”). In Kosovo, the University of Prishtina was at the very center of political contestation between the two biggest ethnic groups, Albanians and Serbs. Following a series of Kosovo Albanian demonstrations at the end of the 1980s, the Serbian authorities forbade the university from accepting any more Albanian students. The result was a complete split of the academic sphere into two domains – the “official”, Serbian one, and the “parallel”, Albanian one, which existed outside of institutional frameworks.
After the NATO intervention in 1999, the Serbian students and staff fled to the northern part of the province, predominantly controlled by the central Serbian government, re-establishing the university as the “University of Prishtina temporarily located in Kosovska Mitrovica”. Meanwhile, Albanian students and staff returned to the premises of the university in Prishtina, developing a new system under close supervision of the international administration. Just like in Bosnia, the configuration of higher education today reflects the deep ethnic and social cleavages that are the legacy of the conflict.
Higher education can become a subject of political contestation even in the absence of large-scale armed conflict. For instance, one of the issues that precipitated the conflict between ethnic Albanians and Macedonian police in the Former Yugoslav Republic of Macedonia in 2001 was the demand of ethnic Albanian parties for a separate university in their own language. Following the de facto consociational arrangement provided by the terms of the Ohrid Framework Agreement peace treaty, the previously private Tetovo University was given public status in 2004. However, the same town was already home to the Southeast European University, founded in 2001 by the international community (primarily the OSCE) in order to support post-conflict development and foster integration of ethnic Albanian and ethnic Macedonian youth. Today, the two universities coexist, teaching similar programmes and even sharing staff, although they differ in their approach to the use of languages, as well as in the composition of the student body.
A similar story can be told about Novi Pazar, the administrative center of Sandžak, a multiethnic region of Serbia with a high proportion of Bosniak Muslims. The private International University of Novi Pazar was founded by a local Muslim religious leader in 2002, with support from the government in Belgrade, which, at the time, saw it as a good solution for the integration of Bosniak Muslims within the framework of the state. Two years later, however, after a change of government and political climate, the state founded a new university, named the State University of Novi Pazar, withdrawing support from the International University. The two universities continue to exist side by side, teaching similar programmes and, in theory, competing for the same population of students. Their rivalry reflects and reproduces the political, social and, not least of all, ethnic cleavages in Sandžak.
Universities in the Western Balkans are among the examples in which the links between higher education and social divisions can be seen most clearly. However, they are neither isolated nor unique: conflicts can persist and emerge across and outside of ethnic and religious lines, sometimes teeming below the surface even in societies that, from the outside, appear peaceful and stable. This is why higher education should not only be reactive, responding to cleavages and conflicts once they become visible, but proactive, revealing and working to dismantle the multiple and often hidden structures of power that reproduce inequalities. On the one hand, this can be done through policies that seek to ensure equal access to and representation in higher education institutions. On the other, it can also mean engagement in research and activism aimed at raising awareness of the mechanisms through which inequalities and injustice are perpetuated. This latter mission, however, requires that higher education institutions turn a critical eye towards their own policies and practices, and examine the ways in which they are – perhaps unwittingly – reproducing the societal divisions that, in times of crisis, can easily evolve into open conflicts. Frequently, this is the hardest task of all.
Jana Bacevic holds a PhD (2008) in Social Anthropology from the University of Belgrade. Previously, she taught at the University of Belgrade and Singidunum University and worked as a higher education expert on a number of projects aimed at developing education in the post-conflict societies of the Western Balkans. Her research interests lie at the intersection of sociology, anthropology, politics, and the philosophy of knowledge, and her book, “From Class to Identity: Politics of Education Reforms in Former Yugoslavia”, is being published by CEU Press in 2013.