Between legitimation and imagination: epistemic attachment, ontological bias, and thinking about the future

Some swans are…grey (Cambridge, August 2017)

 

A serious line of division runs through my household. It does not concern politics, music, or even sports: it concerns the possibility of large-scale collapse of social and political order, which I consider very likely. Specific scenarios aside for the time being, let’s just say we are talking more human-made climate-change-induced breakdown involving possibly protracted and almost certainly lethal conflict over resources, than ‘giant asteroid wipes out Earth’ or ‘rogue AI takes over and destroys humanity’.

Ontological security or epistemic positioning?

It may be tempting to attribute the tendency towards catastrophic predictions to psychological factors rooted in individual histories. My childhood and adolescence took place alongside the multi-stage collapse of the country once known as the Socialist Federal Republic of Yugoslavia. First came the economic crisis, when the failure of ‘shock therapy’ to boost stalling productivity (surprise!) resulted in massive inflation; then social and political disintegration, as the country descended into a series of violent conflicts whose consequences went far beyond the actual front lines; and then actual physical collapse, as Serbia’s long involvement in wars in the region was brought to a halt by the NATO intervention in 1999, which destroyed most of the country’s infrastructure, including parts of Belgrade, where I was living at the time*. It makes sense to assume that this results in quite a different sense of ontological security than the one that, say, the predictability of a middle-class English childhood would afford.

But does predictability actually work against the capacity to make accurate predictions? This may seem not only contradictory but also counterintuitive – any calculation of risk has to take into account not just the likelihood but also the nature of the source of threat involved, and thus necessarily draws on the assumption of (some degree of) empirical regularity. However, what about events outside this scope? A recent article by Faulkner, Feduzi and Runde offers a good formalization of this problem (Black Swans and ‘unknown unknowns’) in the context of the (limited) possibility of imagining different outcomes (see table below). Of course, as Beck noted a while ago, the perception of ‘risk’ (as well as, by extension, any other kind of future-oriented thinking) is profoundly social: it depends on ‘calculative devices’ and procedures employed by networks and institutions of knowledge production (universities, research institutes, think tanks, and the like), as well as on how they are presented in, for instance, literature and the media.

[Table: a typology of unknowns. From Faulkner, Feduzi and Runde, ‘Unknowns, Black Swans and the risk/uncertainty distinction’, Cambridge Journal of Economics 41(5), August 2017, 1279–1302]
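Stripped to its bare bones, the kind of two-dimensional classification the article develops can be sketched in code. This is an illustrative toy, not the authors' own formalization: the axes (whether an outcome can be imagined, and whether a probability can be attached to it) follow the spirit of their table, but the function and the category labels are my own shorthand.

```python
# Toy sketch of a two-dimensional classification of future events,
# loosely inspired by Faulkner, Feduzi and Runde (2017).
# The function and category labels are illustrative shorthand,
# not the authors' own terminology.

def classify(imagined: bool, probability_known: bool) -> str:
    """Classify an event by whether a decision-maker can imagine it
    and whether they can attach a probability to it."""
    if imagined and probability_known:
        return "risk (known known)"
    if imagined and not probability_known:
        return "uncertainty (known unknown)"
    if not imagined and probability_known:
        return "overlooked event"  # knowable in principle, but never considered
    return "Black Swan (unknown unknown)"

print(classify(imagined=True, probability_known=True))
print(classify(imagined=False, probability_known=False))
```

The interesting cell, for the argument here, is the last one: events that fall outside what an observer's position allows them to imagine at all.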

 

Unknown unknowns

In The Great Derangement (probably the best book I read in 2017), Amitav Ghosh argues that this social embeddedness can explain, for instance, the surprising absence of literary engagement with the problem of climate change. The problem, he claims, is endemic to Western modernity: a linear vision of history cannot conceive of a problem that exceeds its own scale**. This isn’t the case only with ‘really big problems’ such as economic crises, climate change, or wars: it also applies to specific cases such as elections or referendums. Of course, social scientists – especially those qualitatively inclined – tend to emphasise that, at best, we aim to explain events retroactively. Methodological modesty is good (and advisable), but refusing to think about the ways in which academic knowledge production is intertwined with the possibility of prediction helps no one, for at least two reasons.

The first is that, as reflected in the (by now overwrought and overdetermined) crisis of expertise and ‘post-truth’, social researchers increasingly find themselves in situations where they are expected to give authoritative statements about the future direction of events (for instance, about the impact of Brexit). Even if they disavow this form of positioning, the very idea of social science rests on the (no matter how implicit) assumption that at least some mechanisms or classes of objects will exhibit the same characteristics across cases; consequently, the possibility of inference is implied, if not always practised. The second is that, given the scope of challenges societies face at present, it seems ridiculous not to even attempt to engage with – and, if possible, refine – the capacity to think about how they will develop in the future. While there is quite a bit of research on individual predictive capacity and the ways collective reasoning can correct for cognitive bias, most of these models – given that they are usually based on experiments or simulations – cannot account for the way in which social structures, institutions, and cultures of knowledge production interact with the capacity to theorise, model, and think about the future.
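As a minimal illustration of the ‘collective correction’ point: averaging independent probability forecasts can never produce a worse Brier score than the forecasters' average individual score (a consequence of Jensen's inequality, since squared error is convex). The forecast numbers below are made up purely for illustration:

```python
# Toy illustration: averaging probability forecasts cannot do worse
# (in Brier score) than the average individual forecaster.
# The forecast values are invented for the sake of the example.

def brier(forecast: float, outcome: int) -> float:
    """Squared-error score for a probability forecast of a binary event."""
    return (forecast - outcome) ** 2

outcome = 1  # the event happened
forecasts = [0.2, 0.6, 0.9]  # three hypothetical forecasters

individual_scores = [brier(f, outcome) for f in forecasts]
mean_individual = sum(individual_scores) / len(individual_scores)

pooled = sum(forecasts) / len(forecasts)  # simple linear opinion pool
pooled_score = brier(pooled, outcome)

print(f"mean individual Brier: {mean_individual:.3f}")
print(f"pooled forecast Brier: {pooled_score:.3f}")
# By Jensen's inequality, pooled_score <= mean_individual always holds.
```

What such toy models cannot capture, as argued above, is how institutions and cultures of knowledge production shape which forecasts get made in the first place.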

The relationship between social, political, and economic factors, on the one hand, and knowledge (including knowledge about those factors), on the other, has been at the core of my work, including my current PhD. While it may seem minor compared to issues such as wars or revolutions, the future of universities offers a perfect case to study the relationship between epistemic positioning, positionality, and the capacity to make authoritative statements about reality: what Boltanski’s sociology of critique refers to as ‘complex externality’. One of the things it allowed me to realise is that while there is a good tradition of reflecting on positionality (or, in positivist terms, cognitive ‘bias’) in relation to categories such as gender, race, or class, we are still far from successfully theorising something we could call ‘ontological bias’: epistemic attachment to the object of research.

The postdoctoral project I am developing extends this question and aims to understand its implications in the context of generating and disseminating knowledge that can allow us to predict – make more accurate assessments of – the future of complex social phenomena such as global warming or the development of artificial intelligence. This question has, in fact, been informed by my own history, but in a slightly different manner than the one implied by the concept of ontological security.

Legitimation and prediction: the case of former Yugoslavia

The Socialist Federal Republic of Yugoslavia had relatively sophisticated and well-developed networks of social scientists, in which both of my parents were involved***. Yet, of all the philosophers, sociologists, political scientists etc. writing about the future of the Yugoslav federation, only one – to the best of my knowledge – predicted, in eerie detail, the political crisis that would lead to its collapse: Bogdan Denitch, whose Legitimation of a revolution: the Yugoslav case (1976) is, in my opinion, one of the best books about former Yugoslavia ever written.

A Yugoslav-American, Denitch was a professor of sociology at the City University of New York. He was also a family friend, a fact I considered of little significance (having only met him once, when I was four, and my mother and I were spending a part of our summer holiday at his house in Croatia; my only memory of it is being terrified of tortoises roaming freely in the garden), until I began researching the material for my book on education policies and the Yugoslav crisis. In the years that followed (I managed to talk to him again in 2012; he passed away in 2016), I kept coming back to the question: what made Denitch more successful in ‘predicting’ the crisis that would ultimately lead to the dissolution of former Yugoslavia than virtually anyone writing on Yugoslavia at the time?

Denitch had a pretty interesting trajectory. Born in 1929 to Serb parents from Croatia, he spent his childhood in a series of countries (including Greece and Egypt), following his diplomat father; in 1946, the family emigrated to the United States (the fact that his father had been a civil servant in the previous government would have made it impossible for them to continue living in Yugoslavia after the Communist regime, led by Josip Broz Tito, formally took over). There, Denitch (in evident defiance of his upper-middle-class legacy) trained as a factory worker while studying for a degree in sociology at CUNY. He also joined the Democratic Socialist Alliance – one of the American socialist parties – of which he would remain a member (and later a functionary) for the rest of his life.

In 1968, Denitch was awarded a major research grant to study Yugoslav elites. The project was not without risks: while Yugoslavia was more open to ‘the West’ than other countries in Eastern Europe, visits by international scholars were strictly monitored. My mother recalls receiving a house visit from an agent of the UDBA, the Yugoslav secret police – not quite the KGB but you get the drift – who tried to elicit the confession that Denitch was indeed a CIA agent, and, in the absence of that, the promise that she would occasionally report on him****.

Despite these minor setbacks, the research continued: Legitimation of a revolution is one of its outcomes. In 1973, Denitch was awarded a PhD by Columbia University and started teaching at CUNY, eventually retiring in 1994. His last book, Ethnic nationalism: the tragic death of Yugoslavia, came out that same year – a reflection on the conflict that was still going on at the time, and whose architecture he had foreseen with such clarity eighteen years earlier (the book is remarkably bereft of “told-you-so”-isms, and warmly recommended for those wishing to learn more about Yugoslavia’s dissolution).

Did personal history, in this sense, have a bearing on one’s epistemic position, and by extension, on the capacity to predict events? One explanation (prevalent in certain versions of popular intellectual history) would be that Denitch’s position as both a Yugoslav and an American would have allowed him to escape the ideological traps other scholars were more likely to fall into. Yugoslavs, presumably, would be at pains to prove socialism was functioning; Americans, on the other hand, perhaps egalitarian in theory but certainly suspicious of Communist revolutions in practice, would be looking to prove it wasn’t, at least not as an economic model. Yet this assumption hardly stands up to even the lightest empirical interrogation. At least up until the show trials of the Praxis philosophers, there was a lively critique of Yugoslav socialism within Yugoslavia itself; despite the mandatory coating of jargon, Yugoslav scholars were quite far from being uniformly bright-eyed and bushy-tailed about socialism. Similarly, quite a few American scholars were very much in favour of the Yugoslav model, eager, if anything, to show that market socialism was possible – that is, that it was possible to have a relatively progressive social policy and still be able to afford nice things. Herein, I believe, lies the beginning of the answer as to why neither of these groups was able to predict the type or the scale of the crisis that would eventually lead to the dissolution of former Yugoslavia.

Simply put, both groups of scholars depended on Yugoslavia as a source of legitimation of their work, though for different reasons. For Yugoslav scholars, the ‘exceptionality’ of the Yugoslav model was the source of epistemic legitimacy, particularly in the context of international scientific collaboration: their authority was, in part at least, constructed on their identity and positioning as possessors of ‘local’ knowledge (Bockman and Eyal’s excellent analysis of the transnational roots of neoliberalism makes an analogous point about positioning in the context of the collaboration between ‘Eastern’ and ‘Western’ economists). In addition, many Yugoslav scholars were born and raised in socialism: while some of them did travel to the West, the opportunities were still scarce and many were subject to ideological pre-screening. In this sense, both their professional and their personal identity depended on the continued existence of Yugoslavia as an object; they could imagine different ways in which it could be transformed, but not really that it could be obliterated.

For scholars from the West, on the other hand, Yugoslavia served as a perfect experiment in mixing capitalism and socialism. Those more on the left saw it as a beacon of hope that socialism need not go hand-in-hand with Stalinist-style repression. Those who were more on the right saw it as proof that limited market exchange can function even in command economies, and deduced (correctly) that the promise of supporting failing economies in exchange for access to future consumer markets could be used as a lever to bring the Eastern Bloc in line with the rest of the capitalist world. If no one foresaw the war, it was because it played no role in either of these epistemic constructs.

This is where Denitch’s background would have afforded a distinct advantage. The fact his parents came from a Serb minority in Croatia meant he never lost sight of the salience of ethnicity as a form of political identification, despite the fact socialism glossed over local nationalisms. His Yugoslav upbringing provided him not only with fluency in the language(s), but a degree of shared cultural references that made it easier to participate in local communities, including those composed of intellectuals. On the other hand, his entire professional and political socialization took place in the States: this meant he was attached to Yugoslavia as a case, but not necessarily as an object. Not only was his childhood spent away from the country; the fact his parents had left Yugoslavia after the regime change at the end of World War II meant that, in a way, for him, Yugoslavia-as-object was already dead. Last, but not least, Denitch was a socialist, but one committed to building socialism ‘at home’. This means that his investment in the Yugoslav model of socialism was, if anything, practical rather than principled: in other words, he was interested in its actual functioning, not in demonstrating its successes as a marriage of markets and social justice. This epistemic position, in sum, would have provided the combination needed to imagine the scenario of Yugoslav dissolution: a sufficient degree of attachment to be able to look deeply into a problem and understand its possible transformations; and a sufficient degree of detachment to be able to see that the object of knowledge may not be there forever.

Onwards to the…future?

What can we learn from the story? Balancing between attachment and detachment is, I think, one of the key challenges in any practice of knowing the social world. It’s always been there; it cannot be, in any meaningful way, resolved. But I think it will become more and more important as the objects – or ‘problems’ – we engage with grow in complexity and become increasingly central to the definition of humanity as such. Which means we need to be getting better at it.

 

———————————-

(*) I rarely bring this up as I think it overdramatizes the point – Belgrade was relatively safe, especially compared to other parts of former Yugoslavia, and I had the fortune to never experience the trauma or hardship people in places like Bosnia, Kosovo, or Croatia did.

(**) As Jane Bennett noted in Vibrant Matter, this resonates with Adorno’s notion of non-identity in Negative Dialectics: a concept always exceeds our capacity to know it. We can see object-oriented ontology (e.g. Timothy Morton’s Hyperobjects) as the ontological version of the same argument: the sheer size of the problem deters us from even attempting to grasp it in its entirety.

(***) This bit lends itself easily to the Bourdieusian “aha!” argument – academics breed academics, etc. The picture, however, is a bit more complex – I didn’t grow up with my father and, until about 16, had a very vague idea of what my mother did for a living.

(****) Legend has it my mother showed the agent the door and told him never to call on her again, prompting my grandmother – her mother – to buy funeral attire, assuming her only daughter would soon be thrown into prison and possibly murdered. Luckily, Yugoslavia was not really the Soviet Union, so this did not come to pass.

The biopolitics of higher education, or: what’s the problem with two-year degrees?

[Note: a shorter version of this post was published in Times Higher Education’s online edition, 26 December 2017]

The Government’s most recent proposal to introduce two-year (‘accelerated’) degrees has already attracted quite a lot of criticism. One aspect is student debt: given that universities will be allowed to charge up to £2,000 more for these ‘fast-track’ degrees, there are doubts about whether students will be able to afford them. Another concerns the lack of mobility: since the Bologna Process assumes comparability of degrees across European higher education systems, students in courses shorter than three or four years would find it very difficult to participate in Erasmus or other forms of student exchange. Last, but not least, many academics have argued that the idea of ‘accelerated’ learning is at odds with the nature of academic knowledge, and trivializes or debases the time and effort necessary for critical reflection.

However, perhaps the most curious element of the proposal is its similarity to the Diploma of Higher Education (DipHE), a two-year qualification proposed by Mrs Thatcher when she was Secretary of State for Education and Science. Of course, the DipHE had a more vocational character, meant to enable access equally to further education and to the labour market. In this sense, it was both a foundation degree and a finishing qualification. But there is no reason to believe those in new two-year programmes would not consider continuing their education through a ‘top-up’ year, especially if the labour market turns out not to be as receptive to their qualification as the proposal seems to hope. So the real question is: why introduce something that serves no obvious purpose – for the students or, for that matter, for the economy – and, furthermore, base it on resurrecting a policy that proved unpopular in 1972 and was abandoned soon after introduction?

One obvious answer is that the Conservative government is desperate for a higher education policy to match Labour’s proposal to abolish tuition fees (despite the fact that, no matter how commendable, abolishing tuition fees is little but a reversal of measures put in place by the last Labour government). But the case of higher education in Britain is more curious than that. If one sees policy as a set of measures designed to bring about a specific vision of society, Britain never had much of a higher education policy to begin with.

Historically, British universities evolved as highly autonomous units, which meant that the Government felt little need to regulate them until well into the 20th century. Until the 1960s, the University Grants Committee succeeded in maintaining the ‘gentlemanly conversation’ between the universities and the Government. The 1963 report of the Robbins Committee was thus the first serious step into higher education policy-making. Yet, despite the fact that the Robbins report was more complex than many who cite it approvingly give it credit for, its main contribution was to open the doors of universities to, in the memorable phrase, “all who qualify by ability and attainment”. What it sought to regulate was thus primarily who should access higher education – not necessarily how it should be done, nor, for that matter, what its purpose was.

Even the combined pressures of the economic crisis and an uneven rate of expansion in the 1970s and the 1980s did little to orient the government towards a more coherent strategy for higher education. This led Peter Scott to comment in 1982 that “so far as we have in Britain any policy for higher education it is the binary policy…[it] is the nearest thing we have to an authoritative statement about the purposes of higher education”. The ‘watershed’ moment of 1992, which abolished the division between universities and polytechnics, was, in that sense, less a policy and more an attempt to undo the previous forays into regulating the sector.

Two major reviews of higher education since Robbins, the Dearing report and the Browne review, represented little more than attempts to deal with the consequences of massification through, first, tying education more closely to the supposed needs of the economy, and, second, introducing tuition fees. The difference between Robbins and subsequent reports in terms of scope of consultation and collected evidence suggests there was little interest in asking serious questions about the strategic direction of higher education, the role of the government, and its relationship to universities. Political responsibility was thus outsourced to ‘the Market’, that rare point of convergence between New Labour and Conservatives – at best a highly abstract aggregate of unreliable data concerning student preferences, and, at worst, utter fiction.

Rather than a policy in the strict sense of the term, this latest proposal should be seen as another attempt at governing populations – what Michel Foucault called biopolitics. Of course, there is nothing wrong with the fact that people learn at different speeds: anyone who has taught in a higher education institution is more than aware that students have varying learning styles. But the neo-Darwinian tone of “highly motivated students hungry for a quicker pace of learning”, combined with the pseudo-widening-participation pitch of “mature students who have missed out on the chance to go to university as a young person”, neither acknowledges this nor actually engages with the need to enable multiple pathways into higher education. Rather, funneling students through a two-year degree and into the labour market is meant to ensure they swiftly become productive (and consuming) subjects.

 

People’s History Museum, Manchester

 

Of course, whether the labour market will actually have the need for these ‘accelerated’ subjects, and whether universities will have the capacity to teach them, remains an open question. But the biopolitics of higher education is never about the actual use of degrees or specific forms of learning. As I have shown in my earlier work on vocationalism and education for labour, this type of political technology is always about social control; in other words, it aims to prevent potentially unruly subjects from channeling their energy into forms of action that could be disruptive of the political order.

Education – in fact, any kind of education policy – is perfect in this sense because it is fundamentally oriented towards the future. It occupies the subject now, but transposes the horizon of expectation into the ever-receding future – future employment, future fulfillment, future happiness. The promise of quicker, that is, accelerated delivery into this future is a particularly insidious form of displacement of political agency: the language of certainty (“when most students are completing their third year of study, an accelerated degree student will be starting work and getting a salary”) is meant to convey that there is a job and salary awaiting, as it were, at the end of the proverbial rainbow.

The problem is not simply that such predictions (or promises) are based on empty rhetoric rather than any form of objective assessment of the ‘needs’ of the labour market. Rather, it is that the future needs of the labour market are notoriously difficult to assess, and even more so in periods of economic contraction. Two-year degrees, in this sense, are just a way to defer the compounding problems of inequality, unemployment, and social insecurity. Unfortunately, to date, no higher education qualification has proven capable of doing that.

If on a winter’s night a government: a tale of universities and the state with some reference to present circumstances

Imagine you were a government. I am not saying imagine you were THE government, or any particular government; interpretations are beyond the scope of this story. For the sake of illustration, let’s say you are the government of Cimmeria, the fictional country in Italo Calvino’s If on a winter’s night a traveler...

I’m not saying you – the reader – should necessarily identify with this government. But I was trained as an anthropologist; this means I think it’s important to understand why people – and institutions – act in particular contexts the way that they do. So, for the sake of the story, let’s pretend we are the government of Cimmeria.

Imagine you, the Cimmerian government, are intent on doing something really, really stupid, with possibly detrimental consequences. Imagine you are aware that there is no chance you can get away with this and still hold on to power. Somehow, however, you’re still hanging on, and it’s in your interest to go on doing that for as long as possible, until you come up with something better.

There is one problem. Incidentally, sometime in your long past, you developed places where people can learn, talk, and – among many other things – reflect critically on what you are doing. Let’s, for the sake of the story, call these places universities. Of course, universities are not the only places where people can criticise what you are doing. But they are plentiful, and people in them are many, and vocal. So it’s in your interest to make sure these places don’t stir trouble.

At this point, we require a little historical digression.

How did we get so many universities in the first place?

Initially, it wasn’t you who developed universities at all, they mostly started on their own. But you tolerated them, then grew to like them, and even started a programme of patronage. At times, you struggled with the church – churches, in fact – over influence on universities. Then you got yourself a Church, so you didn’t have to fight any longer.

Universities educated the people you could trust to rule with you: not all of them specializing in the art of government, of course, but skilled in polite conversation and, above all, understanding of the division of power in Cimmeria. You trusted these people so much that, even when you had to set up an institution to mediate your power – the Parliament – you gave them special representation.* Even when this institution had to set up a further body to mediate its relationship with the universities – the University Grants Committee, later to become the funding councils – these discussions were frequently described as an ‘in-house conversation’.

Some time later, you extended this favour to more people. You thought that, since education made them more fit to rule with you, the more educated they were, the more they should see the value of your actions. The form in which you extended it was cheaper and more practical: obviously, not everyone was fit to rule. Eventually, however, even these institutions started conforming to the original model, a curious phenomenon known as ‘academic drift’. You thought this was strange, but since they seemed intent on emulating each other, you did away with the binary model and brought in the Market. That’ll sort them out, you thought.

You occasionally asked them to work for you. You were always surprised, even hurt, when you found out they didn’t want to. You thought they were ridiculous, spoiled, ungrateful. Yet you carried on. They didn’t really matter.

Over the years, their numbers grew. Every once in a while, they would make some sort of fuss. They were very political. You didn’t really care; at the end of the day, all their students went on to become decent, tax-paying subjects, leaving days of rioting safely behind.

Until, one day, there were no more jobs. There was no more safety. Remember, you had cocked up, badly. Now you’ve got all of these educated people, disappointed, and angry, exactly at the time you need it least. You’ve got 99 problems but, by golly, you want academia not to be one.

So, if on a winter’s night a government should think about how to keep universities at bay while driving the country further into disarray…

Obviously, your first task is to make sure they are silent. God forbid all of those educated people would start holding you to account, especially at the same time! Historically, there are a few techniques at your disposal, but they don’t seem to fit very well. Rounding academics up and shipping them off into gulags seems a bit excessive. Throwing them in prison is bound not to prove popular – after all, you’re not Turkey. In fact, you’re so intent on communicating that you are not Turkey that you campaigned for leaving the Cimmeropean Union on the (fabricated) pretext that Turkey is about to join it.

Luckily, there is a strategy more effective than silencing. The exact opposite: making sure they talk. Not about Brexit – the elephant in the room – of course; not about how you are systematically depriving the poor and the vulnerable of any source of support. Certainly not, by any chance, about how you have absolutely no strategy, idea, or, for that matter, procedural skill for what is Cimmeria’s most important political transition in the last half-century. No, you have something much better at your disposal: make them talk about themselves.

One of the sure-fire ways to get them to focus on what happens within universities (rather than outside) is to point to the enemy within their own ranks. Their own management seems like the ideal object for this. Not that anyone likes their bosses anyway, but the problem here is particularly exacerbated by the fact that their bosses are overpaid, and some academics underpaid. Not all, of course; many academics get very decent sums. Yet questions of money or material security are traditionally snubbed in academia. For a set of convoluted historical and cultural reasons that we unfortunately do not have time to go into here, academics like to pretend they work for love rather than money – so much so that, when neophytes are recruited, they often do work for meagre sums, and can go on doing so for years. Resilience is seen as a sign of value; there is more than a nod to Weber’s analysis of the doctrine of predestination here. This, of course, does not apply only to universities but to capitalism as a whole – then again, universities have always been integrated into capitalism. They, however, like to imagine they are not. Because of this, the easiest way to keep them busy is to make them believe that they can get rid of capitalism by purging its representatives (ideally, some that embody its most hateful elements – e.g. Big Pharma) from the university. It is exactly by convincing them that capitalism can be expunged by getting rid of a person, a position, or even a salary figure that you ensure it remains alive and well (you like capitalism, also for a set of historical reasons we cannot go into at this point).

The other way to keep them occupied is to poke at the principles of university autonomy and academic freedom. You know these principles well; you defined them and enshrined them in law, not necessarily because you trusted universities (you did, but not for too long), but because you knew that they would forever be a reminder to scholars that their very independence from the state is predicated on their dependence on the state. Now, obviously, you do not want to poke at these principles too much: as we mentioned above, such gestures tend not to be very popular. However, they are so effective that even a superficially threatening act is guaranteed to get academics up in arms. A clumsily written, badly (or: ideally) timed letter, for instance. An injunction to ‘protect free speech’ can go a very long way. Even better, on top of all that, you’ve got Prevent, which doubles as an actual tool for securitization and surveillance, making sure academics are focused on what is going on inside rather than looking outside.

They often criticize you. They say you do not understand how universities work. Truth is, you don’t. You don’t have to; you never cared about the process, only about the outcome.

What you do understand, however, is politics – the subtle art of making people do what you want them to, or, in the absence of that, making sure they do not do something that could really unsettle you. Like organize. Or strike. Oops.

* The constituency of Combined English Universities existed until 1950.