Does academic freedom extend to social media?

There is a longer discussion about this that has been going on in the US, continental European, and many other parts of the academic/policy/legal/media complexes and their intersection. Useful points of reference are Magna Charta Universitatum (1988), in part developed to stimulate ‘transition’ of Central/Eastern European universities away from communism, and European University Association’s Autonomy Scorecard, which represents an interesting case study for thinking through tensions between publicly (state) funded higher education and principles of freedom and autonomy (Terhi Nokkala and I have analyzed it here). Discussions in the UK, however, predictably (though hardly always justifiably) transpose most of the elements, political/ideological categories, and dynamics from the US; in this sense, I thought an article I wrote a few years back – mostly about theorising complex objects and their transformation, but with extensive analysis of 2 (and a half) case studies of ‘controversies’ involving academics’ use of social media – could offer a good reference point. The article is available (Open Access!) here; the subheadings that engage with social media in particular are pasted below. If citing, please refer to the following:

Bacevic, J. (2018). With or without U? Assemblage theory and (de)territorialising the university, Globalisation, Societies and Education, 17:1, 78-91, DOI: 10.1080/14767724.2018.1498323

——————————————————————————————————–

Boundary disputes: intellectuals and social media

Illustrating a category mistake at the heart of Cartesian philosophy of mind, Gilbert Ryle famously described a hypothetical visitor to Oxford (Ryle 1949). This astonished visitor, Ryle argued, would go around asking whether the University was in the Bodleian library? The Sheldonian Theatre? The colleges? and so forth, all the while failing to understand that the University was not in any of these buildings per se. Rather, it was all of these combined, but also the visible and invisible threads between them: people, relations, books, ideas, feelings, grass; colleges and Formal Halls; sub fusc and port. It also makes sense to acknowledge that these components can be parts of other assemblages: someone can, for instance, equally be an Oxford student and a member of the Communist Party. ‘The University’ assembles these and agentifies them in specific contexts, but they exist beyond those contexts: port is produced and shipped before it becomes College port served at a Formal Hall. And while it is possible to conceive of boundary disputes revolving around port, more often they involve people.

The cases analysed below involve ‘boundary disputes’ that applied to intellectuals using social media. In both cases, the intellectuals were employed at universities; and, in both, their employment ceased because of their activity online. While in the press these disputes were usually framed around issues of academic freedom, they can rather be seen as instances of reterritorialization: redrawing of the boundaries of the university, and reassertion of its agency, in relation to digital technologies. This challenges the assumption that digital technologies serve uniquely to deterritorialise, or ‘unbundle’, the university as traditionally conceived.

The public engagement of those who authoritatively produce knowledge – in sociological theory traditionally referred to as ‘intellectuals’ – has an interesting history (e.g. Small 2002). It was only in the second half of the twentieth century that intellectuals became employed en masse by universities: with the massification of higher education and the rise of the ‘campus university’, in particular in the US, came what some saw as the ‘decline’ of the traditional, bohemian ‘public intellectual’ reflected in Mannheim’s (1936) concept of the ‘free-floating’ intelligentsia. Russell Jacoby’s The Last Intellectuals (1987) argues that this process of ‘universitisation’ led to the disappearance of the intellectual ferment that once characterised the American public sphere. With tenure, he claimed, came the loss of critical edge; intellectuals became tame and complacent, too used to the comfort of a regular salary and an office job. Today, however, the source of the decline is no longer the employment of intellectuals at universities, but its absence: precarity – that is, the insecurity and impermanence of employment – is seen as the major threat not only to public intellectualism, but to universities – or at least to the notion of knowledge as a public good – as a whole.

This suggests that there has been a shift in the coding of the relationship between intellectuals, critique and universities. In the first part of the twentieth century, the function of social critique was predominantly framed as independent of universities; in this sense, ‘public intellectuals’ were at least as likely to be writers, journalists, and other men (since they were predominantly men) of ‘independent means’ as they were to be academic workers. This changed in the second half of the twentieth century, with both the massification of higher education and the diversification of the social strata intellectuals were likely to come from. The desirability of university employment increased with the decreasing availability of permanent positions. In part because of this, precarity came to be framed as one of the main elements of the neoliberal transformation of higher education and research: insecurity of employment, in this sense, became the ‘new normal’ for people entering the academic profession in the twenty-first century.

Some elements of precarity can be directly correlated with processes of ‘unbundling’ (see Gehrke and Kezar 2015; Macfarlane 2011). In the UK, for instance, certain universities rely on platforms such as Teach Higher to provide the service of employing teaching staff, who deliver an increasing portion of courses. In this case, teaching associates and lecturers are no longer employees of the university; they are employed by the platform. Yet even when this is not the case, we can talk about processes of deterritorialization, in the sense in which the practice forms part of a broader weakening of the link between teaching staff and the university (cf. Hall 2016). It is not only the security of employment that changes in the process; universities, in this case, also own the products of teaching as practice – for instance, course materials – so that when staff depart, the university can continue to use this material for teaching, with someone else in charge of ‘delivery’.

A similar process is observable when it comes to ownership of the products of research. In the context of periodic research assessment and competitive funding, some universities have resorted to ‘buying’ – that is, offering highly competitive packages to – staff with a high volume of publications, in order to boost their REF scores. The UK research councils, and particularly the Stern Review (2016), include measures explicitly aimed at countering this practice; but these, in turn, harm early career researchers, who fear that institutional ‘ownership’ of their research output would create a problem for their employability in other institutions. What we can observe, then, is a disassembling of knowledge production, in which the relationship between universities, academics, and the products of their labour – whether teaching or research – is increasingly weakened, challenged, and reconstructed.

Possibly the most tenuous link, however, applies to neither teaching nor research, but to what is referred to as universities’ ‘Third mission’: public engagement (e.g. Bacevic 2017). While academics have to some degree always been engaged with the public – most visibly those who have earned the label of ‘public intellectual’ – the beginning of the twenty-first century has, among other things, seen a rise in the demand for the formalisation of universities’ contribution to society. In the UK, this contribution is measured as ‘impact’, which includes any application of academic knowledge outside of academia. While appearances in the media constitute only one of the possible ‘pathways to impact’, they have remained a relatively frequent form of engaging with the public. They offer the opportunity for universities to promote and strengthen their ‘brand’, but they also help academics gain reputation and recognition. In this sense, they can be seen as a form of extension; they position the university in the public arena, and forge links with communities outside of its ‘traditional’ boundaries. Yet, this form of engagement can also provoke rather bitter boundary disputes when things go wrong.

In recent years, the case of Steven Salaita, professor of Native American studies and American literature, became one of the most widely publicised disputes between academics and universities. In 2013, Salaita was offered a tenured position at the University of Illinois. His appointment was made public and was awaiting formal approval by the Board of Trustees, usually a matter of pure technicality once an appointment has been recommended by academic committees. At the time, Israel was conducting one of its campaigns of daily shelling in the Gaza Strip. Salaita tweeted: ‘Zionists, take responsibility: if your dream of an ethnocratic Israel is worth the murder of children, just fucking own it already. #Gaza’ (Steven Salaita on Twitter, 19 July 2014). In August 2014, Salaita was informed by the Chancellor that the University was withdrawing the offer, citing his ‘incendiary’ posts on Twitter (Dorf 2014; Flaherty 2015).

Scandal erupted in the media shortly afterwards. It turned out that several of the university’s wealthy donors, as well as a few students, had contacted members of the Board demanding that Salaita’s offer be revoked. The Chancellor justified her decision by saying that the objection to Salaita’s tweets concerned standards of ‘civility’, not the political opinion they expressed, but the discussions inevitably revolved around questions of identity, campus politics, and the degree to which they can be kept separate. This was exacerbated by a split within the American Association of University Professors, the closest thing the professoriate in the US has to a union: while the AAUP issued a statement of support for Salaita as soon as the news broke, Cary Nelson, the association’s former president and a prolific writer on issues of university autonomy and academic freedom, defended the Board’s decision. The reason? The protections awarded by the principle of academic freedom, Nelson claimed, extend only to tenured professors.

Very few people agreed with Nelson’s definition: eventually, the courts upheld Salaita’s case that the University of Illinois Board’s decision constituted breach of contract. He was awarded a hefty settlement (ten times the annual salary he would have been earning at Illinois), but was not reinstated. This points to serious limitations of using ‘academic freedom’ as an analytical concept. While university autonomy and academic freedom are principles invoked by academics in order to protect their activity, their application in academic and legal practice is, at best, open to interpretation. A detailed report by Karran and Mallinson (2017), for instance, shows that both the understanding and the level of legal protection of academic freedom vary widely across European countries. In the US, the principle is often framed as part of freedom of speech and thus protected under the First Amendment (Karran 2009); but, as we have seen, this does not in any way insulate it against widely differing interpretations of how it should be applied in practice.

While the Salaita case can be considered foundational in terms of making these questions central to a prolonged public controversy as well as a legal dispute, navigating the terrain in which such controversies arise has become progressively more complicated. Carrigan (2016) and Lupton (2014) note that almost everyone, to some degree, is already a ‘digital scholar’. While most human resources departments, as well as graduate programmes, increasingly offer workshops or courses on ‘using social media’ or ‘managing your identity online’, the issue is clearly not just one of the right tool or skill. Inevitably, it comes down to the question of boundaries: that is, what ‘counts as’ public engagement in the ‘digital university’, and why? How is academic work seen, evaluated, and recognised? Last, but not least, who decides?

More than questions of accountability or definitions of academic freedom, these controversies cannot be seen separately from questions of ontology – that is, questions about what entities are composed of, as well as how they act. This brings us back to assemblages: what counts as being a part of the university – and to what degree – and what does not? Does an academic’s activity on social media count as part of their ‘public’ engagement? Does it count as academic work, and should it be valued – or, alternatively, judged – as such? Do the rights (and protections) of academic freedom extend beyond the walls of the university, and in what cases? Last, but not least, which elements of the university exercise these rights, and which parts can refuse to extend them?

The case of George Ciccariello-Maher, until recently a professor of politics and global studies at Drexel University, offers an illustration of how these questions impact practice. On Christmas Day 2016, Ciccariello-Maher tweeted ‘All I want for Christmas is white genocide’, an ironic take on certain forms of right-wing critique of racial equality. Drexel University, which had been closed over the Christmas vacation, belatedly caught up with the ire the tweet had provoked among conservative users of Twitter, and issued a statement saying that ‘While the university recognises the right of its faculty to freely express their thoughts and opinions in public debate, Professor Ciccariello-Maher’s comments are utterly reprehensible, deeply disturbing and do not in any way reflect the values of the university’. After the ironic nature of the concept of ‘white genocide’ was repeatedly pointed out, both by Ciccariello-Maher himself and by some of his colleagues, the university apologised, but did not withdraw its statement.

In October 2017, the University placed Ciccariello-Maher on administrative leave, after his tweets about white supremacy as the cause of the Las Vegas shooting provoked a similar outcry among right-wing users of Twitter.1 Drexel cited safety concerns as the main reason for the decision – Ciccariello-Maher had been receiving racist abuse, including death threats – but it was obvious that his public profile was becoming too much to handle. Ciccariello-Maher resigned on 31 December 2017. His statement read: ‘After nearly a year of harassment by right-wing, white supremacist media and internet trolls, after threats of violence against me and my family, my situation has become unsustainable’.2 However, it indirectly contained a criticism of the university’s failure to protect him: in an earlier opinion piece, published right after the Las Vegas controversy, Ciccariello-Maher wrote that ‘[b]y bowing to pressure from racist internet trolls, Drexel has sent the wrong signal: That you can control a university’s curriculum with anonymous threats of violence. Such cowardice notwithstanding, I am prepared to take all necessary legal action to protect my academic freedom, tenure rights and most importantly, the rights of my students to learn in a safe environment where threats don’t hold sway over intellectual debate’.3 The fact that, three months later, he no longer deemed it safe to continue doing that from within the university suggests that something had changed in the positioning of the university – in this case, Drexel – as a ‘bulwark’ against attacks on academic freedom.

Forms of capital and lines of flight

What do these cases suggest? In a deterritorialised university, the link between academics, their actions, and the institution becomes weaker. In the US, tenure is supposed to codify a stronger version of this link: hence Nelson’s attempt to justify Salaita’s dismissal as a consequence of the fact that he did not have tenure at the University of Illinois, and thus the institutional protection of academic freedom did not extend to his actions. Yet there is a clear sense of the ‘stretching’ of universities’ responsibilities or jurisdiction. Before the widespread use of social media, it was easier to distinguish between utterances made in the context of teaching or research, and others made – often quite literally – off-campus. This doesn’t mean that there were no controversies; however, the concept of academic freedom could be applied as a ‘rule of thumb’ to discriminate between forms of engagement that counted as ‘academic work’ and those that did not. In a fragmented and pluralised public sphere, and with the growing insecurity of academic employment, this concept is clearly no longer sufficient, if it ever was.

Of course, one might claim that in this particular case it would suffice to define the boundaries of academic freedom by conclusively limiting it to tenured academics. But that would not answer questions about the form or method of those encounters. Do academics tweet in a personal, or in a professional, capacity? Is it easy to distinguish between the two? While some academics have taken to disclaimers specifying the capacity in which they are engaging (e.g. ‘tweeting in a personal capacity’ or ‘personal views/do not express the views of the employer’), this only obscures the complex entanglement of individual, institution, and forms of engagement. This means that, in thinking about the relationship between individuals, institutions, and their activities, we have to take into account the direction in which capital travels. This brings us back to lines of flight.

The most obvious form of capital in motion here is symbolic. Intellectuals such as Salaita and Ciccariello-Maher gain large numbers of followers and visibility on social media in part because of their institutional position; in turn, universities encourage (and may even require) staff to list their public engagement activities and media appearances on their profile pages, as this increases the visibility of the institution. Salaita had been a respected and vocal critic of Israel’s policy and politics in the Middle East for almost a decade before being offered a job at the University of Illinois. Ciccariello-Maher’s Drexel profile page listed his involvement as

 … a media commentator for such outlets as The New York Times, Al Jazeera, CNN Español, NPR, the Wall Street Journal, the Washington Post, the Los Angeles Times and the Christian Science Monitor, and his opinion pieces have run in the New York Times’ Room for Debate, The Nation, The Philadelphia Inquirer and Fox News Latino.4

One would be forgiven for thinking that, until the unfortunate tweet, the university supported and even actively promoted Ciccariello-Maher’s public profile.

The ambiguous nature of symbolic capital is illustrated by the case of another controversial public intellectual, Slavoj Žižek. The renowned ‘Elvis of philosophy’ is not readily associated with an institution; however, he in fact holds three institutional positions. Žižek is a fellow of the Institute of Philosophy and Social Theory of the University of Ljubljana, teaches at the European Graduate School, and, most recently, has been appointed International Director of the Birkbeck Institute for the Humanities. The Institute’s web page describes his appointment:

Although courted by many universities in the US, he resisted offers until the International Directorship of Birkbeck’s Centre came up. Believing that ‘Political issues are too serious to be left only to politicians’, Žižek aims to promote the role of the public intellectual, to be intellectually active and to address the larger public.5

Yet, Žižek quite openly flaunts what comes across as a principled anti-institutional stance. Not long ago, a YouTube video in which he dismisses having to read students’ essays as ‘stupid’ attracted quite a degree of opprobrium.6 On the one hand, of course, what Žižek says in the video can be seen as yet another form of attention-seeking, or a testimony to the capacity of new social media to make anything and everything go ‘viral’. Yet what makes it exceptional is exactly its unexceptionality: Žižek is known for voicing opinions that are bound to prove controversial, or at least to tread on the boundary of political correctness, and it is no big secret that most academics do not find the work of essay-reading and marking particularly rewarding. But, unlike Žižek, they are not in a position to say so. Trumpeting disregard for one’s job on social media would probably seriously endanger it for most academics. As we saw in the cases of Salaita and Ciccariello-Maher, universities were quick to sanction opinions that were far less directly linked to teaching. The fact that Birkbeck was not bothered by this – in fact, it could be argued that this attitude contributed to the appeal of having Žižek, who had previously resisted ‘courting’ by universities in the US – serves as a reminder that symbolic capital has to be seen in relation to other possible ‘lines of flight’.

These processes cannot be seen as simply arising from tensions between individual freedom on the one side and institutional regulation on the other. The tenuous boundaries of the university become more visible in relation to lines of flight that combine persons and different forms of capital: economic, political, and symbolic. The Salaita controversy, for instance, is a good illustration of the ‘entanglement’ of the three. Within the political context – that is, the longer Israeli-Palestinian conflict, and especially the role of the US within it – and within a specific set of economic relationships – that is, the fact that US universities are to a great degree reliant on funds from their donors – Salaita’s statement became coded as a symbolic liability, rather than an asset. This runs counter to the way his previous statements were coded: instead of channelling symbolic capital towards the university, it resulted in the threat of economic capital ‘fleeing’ in the opposite direction, in the sense of donors withholding it from the university. In the case of Ciccariello-Maher, from the standpoint of the university, the individual quite literally acts as a nodal point of intersection between different ‘lines of flight’: on the one hand, the channelling of symbolic capital generated through his involvement as an influential political commentator towards the institution; on the other, the possible ‘breach’ of the integrity (and physical safety) of staff and students as its constituent parts via threats of physical violence against Ciccariello-Maher.

All of this suggests that deterritorialization can be seen as positive and even actively supported; until, of course, the boundaries of the institution become too porous, in which case the university swiftly reterritorialises. In the case of the University of Illinois, the threat of withdrawn support from donors was sufficient to trigger the reterritorialization process by redrawing the boundaries of the university, symbolically leaving Salaita outside them. In the case of Ciccariello-Maher, it would be possible to claim that agency was distributed in the sense in which it was his decision to leave; yet, a second look suggests that it was also a case of reterritorialization inasmuch as the university refused to guarantee his safety, or that of his students, in the face of threats of white supremacist violence or disruption.

This also serves to illustrate why ‘unbundling’ as a concept is not sufficient to theorise the processes of assembling and disassembling that take place in (or on the same plane as) the contemporary university. Public engagement sits on a boundary: it is neither fully inside the university, nor is it ‘outside’ by virtue of taking place in the environment of traditional or social media. The impossibility of conclusively situating it ‘within’ or ‘without’ is precisely what hints at the arbitrary nature of boundaries. The contours of an assemblage thus become visible in such ‘boundary disputes’ as the controversies surrounding Salaita and Ciccariello-Maher or, alternatively, in their relative absence in the case of Žižek. While unbundling starts from the assumption that these boundaries are relatively fixed, and that it is only components that change (more specifically, that are included or excluded), assemblage theory allows us to reframe entities as instantiated through processes of territorialisation and deterritorialization, thus challenging the degree to which specific elements are framed (or coded) as elements of an assemblage.

Conclusion: towards a new political economy of assemblages

Reframing universities (and, by extension, other organisations) as assemblages thus allows us to shift attention to the relational nature of the processes of knowledge production. Contrary to narratives of the university’s ‘decline’, we can instead talk about a more variegated ecology of knowledge and expertise, in which the identity of particular agents (or actors) is not exhausted by their position with(in) or without the university, but rather performed through a process of generating, framing, and converting capitals. This calls for a longer and more elaborate study of the contemporary political economy (and ecology) of knowledge production, which would need to take into account multiple other actors and networks – from the more obvious, such as Twitter, to the less ‘tangible’ ones these afford, such as differently imagined audiences for intellectual products.

This also brings attention back to the question of economies of scale. Certainly, not all assemblages exist on the same plane. The university is a product of multiple forces, political and economic, global and local, but they do not necessarily operate on the same scale. For instance, we can talk about the relative importance of geopolitics in a changing financial landscape, but not about the impact of, say, digital technologies on ‘The University’ in absolute terms. Similarly, talking about effects of ‘neoliberalism’ makes sense only insofar as we recognise that ‘neoliberalism’ itself stands for a confluence of different and frequently contradictory forces. Some of these ‘lines of flight’ may operate in ways that run counter to the prior states of the object in question – for instance, by channelling funds, prestige, or ideas away from the institution. The question of (re)territorialisation, thus, inevitably becomes the question of the imaginable as well as actualised boundaries of the object; in other words, when is an object no longer an object? How can we make boundary-work integral to the study of the social world, and of the ways we go about knowing it?

This line of inquiry connects with a broader sociological tradition of the study of boundaries, as the social process of delineation between fields, disciplines, and their objects (e.g. Abbott 2001; Lamont 2009; Lamont and Molnár 2002). But it also brings in another philosophical, or, more precisely, ontological, question: how do we know when a thing is no longer the same thing? This applies not only to universities, but also to other social entities – states, regimes, companies, relationships, political parties, and social movements. The social definition of entities is always community-specific and thus in a sense arbitrary; similarly, how the boundaries of entities are conceived and negotiated has to draw on a socially-defined vocabulary that conceptualises certain forms of (dis-)assembling as potentially destructive to the entity as a whole. From this perspective, understanding how entities come to be drawn together (assembled), how their components gain significance (coding), and how their relations are strengthened or weakened (territorialisation) is a useful tool in thinking about beginnings, endings, and resilience – all of which become increasingly important in the current political and historical moment.

The transformation of processes of knowledge production intensifies all of these dynamics, and the ways in which they play out in universities. While these processes certainly contribute to the unbundling of the university’s different functions, the analysis presented in this article shows that the university remains a potent agent in the social world – though what the university is composed of can certainly differ. In this sense, while pronouncements of the ‘death’ of universities should be seen as premature, this serves as a potent reminder that understanding change depends, to a great extent, not only on how we conceptualise the mechanisms that drive it, but also on how we view the elements that make up the social world. The tendency to posit fixed and durable boundaries of objects – which I have elsewhere referred to as ‘ontological bias’7 – has, therefore, important implications for both scholarship and practice. This article hopes to have made a contribution towards questioning the boundaries of the university as one among these objects.

——————–

If you’re interested in reading more about these tensions, I also recommend Mark Carrigan’s ‘Social Media for Academics’ (Sage).

How to think about theory (interview with Mark Carrigan, 28 June 2018)

In June 2018, Mark Carrigan interviewed me and a few other people on the question of what social theory is. The original interview was published on the Social Theory Applied website. I’m sharing it here only lightly edited, as it still reflects to a good degree what I think about the labour of theorizing, as well as what I call ‘the social life of concepts’, which is my approach to doing and teaching theory.

MC: What is theory?

JB: The million dollar question, isn’t it? I think theory can mean quite a few things – Abend (2008) has listed a few – but rather than reiterate that, I’d focus on two interpretations of the concept that are crucial to my work. One is that it is a language, or a vocabulary, for making sense of social reality; like all languages, it allows for improvisation, but also has rules and procedures that regulate how and under what conditions specific statements make sense. It is also a practice: that is, the practice of growing, developing, and engaging with concepts in that language. This is why Arendt’s discussion of the concept as theorein is not opposed to practice as a whole, but rather comprises action, though one that entails a different idea of engagement.

MC: Why is it important?

JB: This is also why I believe theory is important – I think that a meta-language, and a language about that language, is necessary in order to ensure we can have a meaningful conversation about social matters – and when I say “we”, I do not mean only scientists. Social life by definition involves some level of reduction of complexity: how we go about reducing that complexity has direct implications for how we go about dealing with other people and our environment. This is also why I think it is fruitless to separate social and political theory. Take the concept of class, for instance: it can – it does – mean different things to different people. We need to have both a language in which to make these concepts meaningfully talk to each other, and a routinized social practice for doing so.

MC: What role does it play in your work?

JB: One of the corollaries of my training in both sociology and anthropology is that I find it difficult to sustain discussions about theory that do not engage with how actual people go about using these concepts – the exegetic tone of “did Marx really mean to say this…” or “why Bourdieu’s concept of social capital is that….” is, in my view, both too canonical and insufficiently exciting.

Theory need not be scholastic. One of the elements I got interested in when I was doing my first PhD, for instance, was the concept of the ‘romantic relationship’ – there were different attempts to theorise it (inversion of historical abstraction of property/inheritance rights, subjugation of women, emancipation from gender roles, cultural expression of ‘hard-wired’ preferences, and so on), but fewer attempts to see how these interpretations ‘sit’ with people’s ideas and practices. Reality does not ‘naturally’ fit into a specific theoretical framework. Rather than trying to make it do so, I decided to put these different theoretical lenses into conversation, to see how a particular empirical case could illuminate their commonalities, differences, and possible overlaps.

My second PhD – which is on the role of critique in and of higher education – pretty much repeated this movement, but took it one step further. It asks what difference people’s knowledge makes in how they go about approaching things (including their own situation). Some of this knowledge is theoretical, both in the sense in which it is imbued by concepts derived from theory (as in Giddens’ double hermeneutic), and in the sense in which the question of the link between knowledge and action is in itself theoretically informed. There’s a whole bunch of nested epistemic double binds in there, and that’s what I find so attractive! In philosophical terms, I am aiming to bridge the gap between [critical] realist and pragmatist accounts of the production of knowledge and its role in social reality. I’ve found speculative realism to be a potentially useful tool in doing so, but it is a signpost rather than a church.

I always strive to work simultaneously *on* and *with* theory; someone recently described this as “theoretically hybrid”, which I think was a nice way of saying that I was inclined to bastardize each and every concept I ever came across. But I think this is what the job of the theorist is about. I understand some people prefer to work within the confines of a single theoretical tradition, sometimes dogmatically so; but this has never been my choice. I have very little reverence for principled fidelity to specific theoretical frameworks. Theories are worldviews; this means they need to be challenged.

MC: What would this routinised social practice look like? Is this something social theorists are uniquely qualified to do? How do we ensure this challenge happens? There are lots of obvious mechanisms within the academy which militate against this.

JB: Well, I think routinised social practice is what happens in the teaching of sociology and other social science disciplines; it also happens at conferences, reading groups, etc. – such as the Theory stream at the British Sociological Association’s annual conference. The problem is that these practices are often sequestered from other bits of theorising. For instance, feminist theory is rarely treated as part of ‘mainstream’ social theory; the same goes for postcolonial theory and theories of race, though it seems this is finally beginning to change.

This reproduces, as your question suggests, one of the worst tendencies in academia (and beyond): theories about and by educated Western white men are treated as ‘theory’, while almost any theory that departs from even one of these categories is automatically a ‘special case’ – as if feminist theory applied only to women, and theories of race only to people of colour. As someone whose induction into theory happened initially through the combination of social anthropology (where questions of identity and difference are pretty much front and centre) and philosophy of science (which acknowledged quite a while ago that all claims to knowledge – including theoretical knowledge – are socially grounded), I find this almost incomprehensible – or, rather, I find that explanations for it go back to the elements of academia we do not particularly like: racism, sexism, Euro- or (not always mutually exclusively) Anglo-centrism, etc.

Social theorists, on the whole, have not been very good at talking about this. This means that the challenge tends to happen in isolated contexts – and a lot of mainstream social theory carries on with ‘business as usual’. Making it more central requires, I think, a lot of concerted effort. Some of this is personal – for instance, I make a point of always calling out these practices when I spot them, and very adamantly resist the ‘pigeonholing’ in which, e.g., women’s theoretical claims are routinely repackaged or treated as empirical. For example: a man writing on the privatisation of enterprises is seen as contributing to Marxist theory, but a woman writing on the gendered division of labour is either writing ‘about women’ (sic!) or about household labour.

When I was writing my first PhD, about relationships, people often said ‘oh’, as if it were a ‘light’ topic, or as if it pertained only to practices of social mobility in a post-socialist context, where my fieldwork was. Giddens’ ‘pure relationship’, on the other hand – which, incidentally, is a concept I did my best to write against – was not taken as only representative of the lived experience of transnational bourgeois mobile academics. This will sound a bit Gramscian, but a lot of theoretical claims made by ‘academic celebrities’ that are routinely taken seriously are often little but the extrapolation of their privilege. Yet, clearly, that is not the problem in and of itself – everyone writes themselves into the theories they develop. The problem is treating some of these as reflections of universal, God-given truth, and others as ‘about women’ or ‘about race’. It’s the culture of condescension towards women and minorities that really needs to change.

Obviously, calling it out is not enough: I think we need strong organisational and institutional support for this. One of academia’s performative contradictions – one that I am particularly dedicated to exploring – is that collective practices often work precisely against this. So, we can have a workshop or panel on sexism, racism, or colonialism in social theory, but actually challenging these practices – including in their ‘everyday’ guises – takes a lot of courage, but also a lot of solidarity. It cannot happen outside of challenging the whole culture of fear that currently pervades academia but which, I hope, the UCU strikes have started chipping away at.

The other thing we can do is provide spaces where these conversations can take place. For instance, the Social Theory summer school we ran at the University of Cambridge in 2016 was developed exactly to surmount this tendency towards ‘cloistered’ (well, of all words!) theorising. To step outside of the retreat of academic positions, seminars, self-rewarding research grants panels, etc., and ask: what is it that doing theory actually entails? Is it anything other than an attempt to justify our own (academic and non-academic) privilege by casually namedropping Foucault or Durkheim? I think this is the question we really need to answer.

How to revise theory

These are some of the slides I have developed for this year’s revision lecture for my students on Modern and Contemporary Sociological Theory at Durham. I am posting them here as they may be a useful pedagogical resource for thinking through teaching – not only social (or sociological) theory but also other kinds of social and political thought.

These slides are meant to help students revise and prepare for exams – note that this is not the extensive engagement we seek to encourage in essays, and does not represent the way teaching or revising theory is approached in other modules (or the other half of this module) at Durham. If you are using these (or similar) slides in your own teaching I’d be keen to hear from you!

This is the introductory slide that describes the ‘4C’ approach to revision:

(1) Context: specify the social, historical and political context of theories;

(2) Content: discuss how they approach different elements of social ontology and epistemology (note that this is a longer discussion);

(3) Contribution: discuss how they contributed to sociological knowledge, and addressed and challenged preceding/existing theories;

(4) Critique: how have other (or later) theories challenged or deconstructed the theories you are summarizing?

This is an example of how to do this for Critical Race Theory and theories of intersectionality (as difficult as it is to reduce all of this to one slide!)

And here are two more: decolonial and postcolonial theory, and (some of the) contemporary feminist theories, performativity and affect.

Night(mare) in Michaelmas*: or, an academic Halloween tale

Halloween, as the tradition goes, is the time when the curtain between the two worlds opens. Of course, in anthropology you learn that this is not a tradition at all – they are all invented, it just depends how long ago. This Halloween, however, I would like to tell you a story about boundaries between worlds, and about those who stand, simultaneously, on both sides.

  1. Straw (wo)men

Scarecrow, effigy, straw man: they are remarkably similar. Made of dried grass, leaves, and branches, sometimes dressed in rags, but rarely with recognizable personal characteristics. Personalizing is the province of Voodoo dolls, or those who use them, dark magic, and violence, which can sometimes be serious and political. Yet, they are all unmistakeably human: in this sense, they serve to attune us to the ordinariness – the unremarkability – of everyday violence.

Scarecrows stand on ‘our’ side, and guard our world – that is, the world that relies on agricultural production – against ‘theirs’ (of crows, other birds, and non-human animals: they are, we are told, enemies). The sympathy and even pity we feel for scarecrows (witness The Wizard of Oz) shields us from knowledge that scarecrows bear the disproportionate brunt of the violence we do to Others, and to other worlds. We make the scarecrow the object of crows’ fear and hatred, so that it protects us from what we do not want to acknowledge: that our well-being, and our food, comes only at the cost of destroying others’.

Effigies are less unambiguously ‘ours’. Regardless of whether they are remnants of *actual* human sacrifice (evidence for this is somewhat thin), they belong both to ‘their’ world and ‘ours’. ‘Theirs’ is the non-human world of fire, ash, and whatever remains once human artifices burn down. ‘Ours’ is the world of ritual, collectivity, of the safe reinstatement of order. Effigies are thus simultaneously dead and alive. We construct them, but not to keep the violence – of Others, and towards Others, like with scarecrows – at bay; we construct them in order to restrain and absorb the violence directed towards our own kind. When we burn effigies, we aim to destroy what is evil, rotten, and polluting amongst ourselves. This is why effigies are such a threatening political symbol: they always herald violence in our midst.

Straw men, by contrast, are neither scarecrows nor effigies: we construct them so that we may – selfishly – live. A ‘straw man’ argument is one we use in order to make it easier to win. We do not engage with actual critique, or possible shortfalls, of our own reasoning: instead, we construct an imaginary opponent to make ourselves appear stronger. This is why it makes no sense to fear straw men, though there are good reasons to be suspicious of those who fashion them all too often. They do not cross boundaries between worlds: they belong fully, and exclusively, to this one.

Straw men are not the stuff of horror. Similarly, there is no reason to fear the scarecrow, unless you are a crow. Effigies, however, are different.

2. Face(mask) to face(mask)

Universities in the UK insist on face-to-face teaching, despite the legal challenge from the University and College Union, protests from individual academics, as well as the by now overwhelming evidence that there is no way to make classrooms fully ‘Covid-secure’. The justification for this has usually taken the form ‘students expect *some* face-to-face teaching’. This, I believe, means university leadership fears that students (or, more likely, their parents, possibly encouraged by the OfS and/or The Daily Mail) would request tuition fee reimbursements if all teaching were to shift online. A more coherent interpretation of the stubborn insistence on f2f teaching is that shifting teaching online would mean many students would elect not to live in student accommodation. Student accommodation, in turn, is not only a major source of profit (and employment) for universities, but also for private landlords, businesses, and different kinds of services in cities that happen to have a significant student population.

In essence, then, f2f teaching serves to secure two sources of income, both disproportionately benefitting the propertied class. In this sense, it remains completely irrelevant who teaches face-to-face or, indeed, what is taught. This is obvious from the logic of guaranteeing face-to-face provision in all disciplines, not only those that might have a demonstrable need for some degree of physical co-presence (I’m thinking of those that use laboratories, or work with physical materials). The content, delivery, and, above all, rationale for maintaining face-to-face teaching remain unjustified. “They” (students?) expect to see “us” (teachers?) in flesh, blood, and, of course, facemask – which we hope will prevent the airborne particles of Coronavirus from infecting us, and thus keep us from getting ill, suffering consequences, and potentially dying.

That this kind of risk would be an acceptable price for perfunctorily parading behind Perspex screens can only seem odd if we believe that what is involved in face-to-face teaching is us as human beings and individuals. But it is not: when we walk into the classroom, we are not individual academics, teachers, thinkers, writers, or whatever else we may be. We are the ‘face’ of ‘face-to-face’ teaching. We are the effigies.

3. On institutional violence

On Monday, I am teaching a seminar in social theory. Under ‘normal’ circumstances, this would mean leading small group discussions on activities, and readings, that students have engaged with. Under these circumstances, it will mean groups of socially distanced students trying to discuss the readings while struggling to hear each other through face masks. Given that I struggle to communicate ‘oat milk flat white’ from behind a mask, I have serious doubts that I will manage to convey particularly sophisticated insights into social theory.

But this does not matter: I am not there as a lecturer, as a human being, as a theorist. I am there to sublimate the violence that we are all complicit in. This violence concerns not only the systematic exposure to harm created by the refusal to acknowledge the risks of cramming human beings unnecessarily into closed spaces during the pandemic of an airborne disease, but also forms of violence specific to higher education. The sporadic violence of the curriculum, still overwhelmingly white, male, and colonial (incidentally, I am teaching exactly such a session). More importantly, it includes the violence that we tacitly accept when we overlook the fact that ‘our’ universities subsist on student fees, and that fees are themselves products of violence. The capital that fees depend on is either a product of exploitation in the past, or of student debt, and thus of exploitation in the future.

When I walk into the classroom on Monday, I will want my students to remember that every lecturer stands on the boundary between two worlds, simultaneously dead and alive. Sure, we all hope everyone makes it out of there alive, but that’s not the point: the point is how close to the boundary we get. When I walk into the classroom on Monday, I will remind my students that what they see is not me, but the effigy constructed to obscure the violence of the intersection between academic and financial capital. When I walk into the classroom on Monday, I will want my students to know that the boundary between two worlds is very, very thin, and not only on Halloween.

* Michaelmas, for those who do not know, is the name of the autumn (first) term of the academic year at Oxford, Cambridge, and, incidentally, Durham.

Women and space

Mum in space

Recently, I saw two portrayals of women* in space: Proxima, starring Eva Green as the female member of the crew training for the first mission to Mars, and Away, starring Hilary Swank as the commander of the crew on the first mission to Mars (disclaimer: I have only seen the first two episodes of Away, so I’m not sure what happens in the rest of the series). Both would have been on my to-watch list even under normal circumstances; I grew up on science fiction, and, like any woman who, in Rebecca West’s unsurpassed formulation, expresses opinions that distinguish her from a doormat, I have spent a fair bit of time thinking about gender, achievement, and leadership. This time, an event coloured my perception of both: my mum’s death in October.

My mother was 80; she died of complications related to metastatic cancer, which had started as breast cancer but had at this point spread to her liver. She had dealt with cancer intermittently since 2009; had had a double mastectomy and repeated chemotherapy/radiation at relatively regular intervals since – in 2011, 2015, 2018 and, finally, 2020 – the last one stopping shortly after it started, as it became evident that it could not reverse the course of Mum’s illness and was, effectively, making it worse.

As anyone living with this kind of illness knows, it’s always a long game of predicting and testing, waiting for the next one to come up; it’s possible that the cancer that eventually killed my mum was missed because of flaky screening in November, or because of delays at the height of the pandemic. What matters is that, by the time they discovered it during a regular screening in June, it was already too late.

What matters is that, because living with this sort of illness entails living in segments of time between two appointments, two screenings, two test results, we had kind of expected this. We had time to prepare. My mother had time to prepare. I had time to prepare. What also matters is that I was able to travel, to leave the country in time to see my mother still alive, despite the fact that at that time the Home Office had been sitting on my Tier 2 visa application since the start of July, and on the request to expedite it on compassionate grounds for three weeks. This matters, because many other women are not so lucky as to have the determination to call the Home Office visa processing centre three times, the cultural capital to contact their MP when it seemed like time was running out, nor, for that matter, an MP (also a woman) who took on the case. It matters, because I was able to be there for the last two weeks of my mother’s life. I was there when she died.

But this essay isn’t about me, or my mum. It’s about women, and the stars.

Women and the stars

Every story about the stars is, in essence, a story of departure from Earth, and thus a story of separation, and thus a story of leaving, and what’s left behind. This doesn’t mean that these themes need to be parsed via the tired dichotomy of the ‘masculine-proactive-transcendent’ principle pitted against the ‘feminine-grounded-immanent’, but they often are, and both Proxima and Away play out this tension.

For those who have not seen either, Proxima and Away are about women who are travelling into space. Proxima’s Sarah (Eva Green) is the French member of the international crew of astronauts spending a year at the International Space Station in preparation for the first mission to Mars; Away’s Emma (Hilary Swank) is the commander of the crew on the first mission itself.

The central tension develops along two vectors: the characters’ relation to their male partners (Sarah’s – ex – Thomas, Emma’s Matt); and their relationship to their daughters – Sarah’s Stella, and Emma’s Alexis (‘Lex’). While the relationship to their partners is not irrelevant, it is obvious that the mother-daughter relationship is central to the plot. Nor is it accidental that both – and the only – children are girls: in this sense, the characters’ relationship to their daughters is not only the relationship to the next generation of women, it is also the relationship to their ‘little’ selves. In this sense, the daughters’ desire for their mothers to return – or to stay, to never leave for the stars – is also a reflection of the mothers’ own desire to give up, to stay in the comfort of the ground, the Earth, the safe (if suffocating) embrace of family relations and gender roles, in which ‘She’ is primarily, after all, a Mother.

It is interesting that both characters, in Proxima and in Away, find similar ‘solutions’ – or workarounds – for this central tension. In Proxima, Sarah leaves her daughter, but betrays her own commitment by violating pre-flight quarantine regulations, sneaking out the night before departure to take her daughter to see the rocket from up close. In Away, Emma decides to return from the pre-flight Lunar base after her husband has a heart attack, only to be persuaded to stay, both by the (slowly recovering) husband and, more importantly, by the daughter, who – at the last minute – realizes the importance of the mission and says she wants her mum to stay, rather than return to Earth. The guilt both women feel over ‘abandoning’ their daughters (and thus their own traditional roles) is thus compensated or resolved by inspiring the next generation of women to ‘look at the stars’: to aim higher, and to prioritize transcendence at the cost of immanence, even when the price is pain.

We might scoff at the simple(ish) juxtaposition of Earth and the stars, but the essence of that tension is still there, no matter how we choose to frame it. It is the basic tension explored in Simone de Beauvoir’s existentialist philosophy – the tension between being-for-oneself and being-in-relation-to-others. It’s the unforgiving push and pull that leads so many women to take on disproportionate amounts of emotional, care, and organizational labour. It’s a tension you can’t resolve, no matter how queer, trans, or childless you are. Even outside of ‘traditional’ gender roles, women are still judged first and foremost on their ability to conceive and retain relationships; research on women leaders, for instance, shows they are required to consistently demonstrate a ‘collective’ spirit of the sort not expected of their male counterparts.

A particularly brutal version of this tension presented itself in the months before my mother died. I was stuck in England, not able to leave before my Tier 2 visa was approved, and her condition was getting worse. The Home Office was already behind its 8-week timeframe due to the pandemic; the official guidance – confirmed by the University – was that, if I chose to leave the country before the decision had been made, not only would I automatically forfeit my application, I would also be banned for a year from re-entering the country, and for a further year from re-applying for the same sort of permit. In essence, this meant I was choosing between my job – which I love – and my mother, whom I loved too.

Luckily, I never had to make this choice; after a lot of intervention, my visa came through, and I was able to travel. I am not sure what kind of decision I would have made.

Mum and daughter in space

I saw Proxima in August, shortly after moving from Cambridge to Durham to start my job at the University. It was only the second time I was able to cry after having learned of my mum’s most recent, and final, diagnosis. I saw Away after returning from the funeral in early October, having acquired a Netflix account in a vague attempt at ‘self-care’ that didn’t involve reading analytic philosophy.

My mum saw neither, and I am not sure if she would have recognized herself in them. Hers was a generation of transcendence, buttressed by post-war recovery and socialism’s early successes in eradicating gender inequality. She introduced me to science fiction, but it was primarily Arthur C. Clarke, Isaac Asimov, and Stanisław Lem, my mum having no problems recognizing herself in the characters of Dave Bowman, Hari Seldon, or Rohan from ‘The Invincible’. Of course, as I was growing up, neither did I: it was only after I had already reached a relatively advanced career stage – and, it warrants mentioning, in particular after I began full-time living in the UK – that I started realizing how resolute the steel grip of patriarchy is in trying to make sure we never reach for the stars.

My mother famously said that she never considered herself a feminist, but had led a feminist life. By this, she meant that she had an exceptional career including a range of leadership positions, first in research, then in political advising, and finally in diplomacy; and that she had a child – me – as a single mother, without a partner involved. What she didn’t stop to think about was that, throughout this process, she had the support not only of two loving parents (both of my grandparents had already retired when I was born), but also of socialist housing, childcare, and education policies. I would point this fact out to her on the rare occasions when she would bring up her one remaining regret, which was that I chose not to have children. Though certainly aided by the fact I never felt the desire to, this decision was buttressed by my belief (and observation) that, no matter how dedicated, egalitarian, etc. etc. a partner can be, it is always mothers who end up carrying a greater burden of childcare, organization, and planning. I hope that she, in the end, understood this decision.

In one of the loveliest messages** I got after my mum died, a friend wrote that he believed my mum was now a star watching over me. As much as I would like to think that, if anything, the experience of a death has resolutely convinced me there is no ‘thereafter’, no space, place, or plane where we go after we die.

But I’m still watching the stars.  

This image is, sadly, a pun that’s untranslatable into English; sorry

* For avoidance of doubt, trans women are women

**Throughout the period, I’ve received absolutely stellar messages of love and support. Among these, it warrants saying, quite a few came from men, but those that came from women were exceptional in striking the balance between giving me space to think my own thoughts and sit with my own grief, while also making sure I knew I could rely on their support if I wanted to. This kind of balance, I think, comes partly out of having to always negotiate being-for-oneself and being-for-others, but there is a massive lesson in solidarity right in there.

The King’s Two(ish) Bodies

Contemporary societies, as we know, rest on calculation. From the establishment of statistics, which was essential to the construction of the modern state, to double-entry bookkeeping as the key accounting technique for ‘rationalizing’ capitalism and colonial trade, the capacity to express quality (or qualities, to be more precise) through numbers is at the core of the modern world.

From a sociological perspective, this capacity involves a set of connected operations. One is valuation, the social process through which entities (things, beings) come to (be) count(ed); the other is commensuration, or the establishment of equivalence: what counts as or for what, and under what circumstances. Marion Fourcade specifies three steps in this process: nominalization, the establishment of ‘essence’ (properties); cardinalization, the establishment of quantity (magnitude); and ordinalization, the establishment of relative position (e.g. position on a scale defined by distance from other values). While, as Mauss has demonstrated, none of these processes are unique to contemporary capitalism – barter, for instance, involves both cardinalization and commensuration – they are both amplified by and central to the operation of global economies.

Given how central the establishment of equivalence is to contemporary capitalism, it is not a little surprising that we seem so palpably bad at it. How else to explain the fact that, on the day when 980 people died from Coronavirus, the majority of UK media focused on the fact that Boris Johnson was recovering in hospital, reporting in excruciating detail the films he would be watching? While some joked about excessive concern for the health of the (secular) leader as reminiscent of the doctrine of ‘The King’s Two Bodies’, others seized the metaphor and ran with it – unironically.

Briefly (and somewhat reductively – please go somewhere else if you want to quibble, political theory bros), ‘King’s Two Bodies’ is a concept in political theology by which the state is composed of two ‘corporeal’ entities – the ‘body politic’ (the population) and the ‘body natural’ (the ruler)*. This principle allows the succession of political power even after the death of the ruler, reflected in the pronouncement ‘The King is Dead, Long Live the King’. From this perspective, the claim that 980 < 1 may seem justified. Yet, there is something troubling about this, even beyond basic principles of decency. Is there a large enough number that would disturb this balance? Is it irrelevant whose lives those are?

Formally, most liberal democratic societies forbid the operation of a principle of equivalence that values some human beings as lesser than others. This is most clearly expressed in universal suffrage, where one person (or, more specifically, one political subject) equals one vote; on the global level, it is reflected in the principle of human rights, which asserts that all humans have a certain set of fundamental and unalienable rights simply as a consequence of being human. All members of the set ‘human’ have equal value, just by being members of that set: in Badiou’s terms, they ‘count for one’.

Yet, liberal democratic societies also regularly violate these principles. Sometimes, unproblematically so: for instance, we limit the political and some other rights of children and young people until they become of ‘legal age’, which is usually the age at which they can vote; until that point, they count as ‘less than one’. Sometimes, however, the consequences of differential valuation of human beings are much darker. Take, for instance, the migrants who are regularly left to drown in the Mediterranean or treated as less-than-human in detention centres; or the NHS doctors and nurses – especially BAME doctors and nurses – whose exposure to Coronavirus gets less coverage than that of politicians, celebrities, or royalty. In the political ontology of contemporary Britain, some lives are clearly worth less than others.

The most troubling implication of the principle by which the body of the ruler is worth more than a thousand (ten thousand? forty thousand?) of ‘his’ subjects, then, is not its ‘throwback’ to mediaeval political theology: it is its meaning for politics here and now. The King’s Two Bodies, after all, is a doctrine of equivalence: the totality of the body politic (state) is worth as much as the body of the ruler. The underlying operation is 1 = 1. This is horribly disproportionate, but it is an equivalence nonetheless: both the ruler and the population, in this sense, ‘count for one’. From this perspective, the death of a sizeable portion of that population cannot be irrelevant: if the body politic is somewhat diminished, the doctrine of King’s Two Bodies suggests that the power of the ‘ruler’ is somewhat diminished too. By implication, the current political ontology of the British state currently rests not on the principle of equivalence, but on a zero-sum game: losses in population do not diminish the power of the ruler, but rather enlarge it. And that is a dangerous, dangerous form of political ontology.

*Hobbes’ Leviathan is often seen as the perfect depiction of this principle; it is possible to quibble with this reading, but the cover image for this post – here’s the credit to its creator on Twitter – is certainly the best possible reflection on the shift in contemporary forms of political power in the aftermath of the Covid-19 pandemic.

Why you’re never working to contract

During the last #USSstrike, on non-picketing days, I practiced working to contract. Working to contract is part of the broader strategy known as ASOS – action short of a strike – and it means fulfilling your contractual obligations, but not more than that. Together with many other UCU members, I will be moving to ASOS from Thursday. But how does one actually practice ASOS in neoliberal academia?

 

I am currently paid to work 2.5 days a week. Normally, I am in the office on Thursdays and Fridays, and sometimes half a Monday or Tuesday. The rest of the time, I write and plan my own research, supervise (that’s Cambridgish for ‘teaching’), or attend seminars and reading groups. Last year, I was mostly writing my dissertation; this year, I am mostly panickedly filling out research grant and job applications, for fear of being without a position when my contract ends in August.

Yet I am also, obviously, not ‘working’ only when I do these things. The books that I read are, more often than not, related to what I am writing, teaching, or just thinking about. Often, I will read ‘theory’ books at all times of day (a former partner once raised the issue of the excess of Marx on the bedside table), but the same can apply to science fiction (or any fiction, for that matter). Films I watch will make it into courses. Even time spent on Twitter occasionally yields important insights, including links to articles, events, or just the general mood of a certain category of people.

I am hardly exceptional in this sense. Most academics work much more than their contracted hours. Estimates vary from 45 to as much as 100 hours/week; regardless of what counts as a ‘realistic’ assessment, the majority of academics report not being able to finish their expected workload within a 37.5-40hr working week. Working on weekends is ‘industry standard’; there is even a dangerous overwork ethic. Yet increasingly, academics have begun to unite around the unsustainability of a system in which we increasingly feel overwhelmed and underpaid, with mental and other health issues on the rise. This is why rising workloads are one of the key elements of the current wave of UCU strikes. It has also led to the coining of a parallel hashtag: #ExhaustionRebellion. It seems like the culture is slowly beginning to shift.

From Thursday onwards, I will be on ASOS. I look forward to it: being precarious sometimes makes not working almost as exhausting as working. Yet, the problem with the ethic of overwork is not only that it is unsustainable, or that it is directly harmful to the health and well-being of individuals, institutions, and the environment. It is also that it is remarkably resilient: and it is resilient precisely because it relies on some of the things academics value the most.

Marx’s theory of value* tells us that the origins of exploitation in industrial capitalism lie in the fact that workers do not have ownership over the means of production; thus, they are forced to sell their labour. Those who own the means of production, on the other hand, are driven by the need to keep capital flowing, for which they need profit. Thus, they are naturally inclined to pay their workers as little as possible, as long as that is sufficient to actually keep them working. For most universities, a steady supply of newly minted graduate students, coupled with seemingly unpalatable working conditions in most other branches of employment, means they are well positioned to drive wages further down (in the UK, by 17.5% in real terms since 2009).

This, however, is where the usefulness of classical Marxist theory stops. It is immediately obvious that many of the conditions of late 19th-century industrial capitalism no longer apply. To begin with, most academics own the most important means of production: their minds. Of course, many academics use and require relatively expensive equipment, or work in teams where skills are relatively distributed. Yet, even in the most collective of research teams and the most collaborative of labs, the one ingredient that is absolutely necessary is precisely human thought. In social sciences and humanities, this is even more the case: while a lot of the work we do happens in libraries, or in seminars, or through conversations, ultimately – what we know and do rests within us**.

Neither, for that matter, can academics simply be written off as unwitting victims of ‘false consciousness’. Even if the majority could conceivably have been unaware of the direction or speed of the transformation of the sector in the 1990s or the early 2000s, after last year’s industrial action this is certainly no longer the case. Nor is this true only of those who are disproportionately affected by its dual face of exploitation and precarity: even academics on secure contracts and in senior positions increasingly view changes to the sector as harmful not only to their younger colleagues, but to themselves. If nothing else, what the USS strikes achieved was to help the critique of neoliberalism, marketization and precarity migrate from the pages of left-leaning political periodicals and critical theory seminars into mainstream media discourse. Knowing that current conditions of knowledge production are exploitative, however, does not necessarily translate into knowing what to do about them.

This is why contemporary academic knowledge production is better characterized as extractive or rentier capitalism. Employers, in most cases, do not own – certainly not exclusively – the means of production of knowledge. What they do instead is provide the setting or platform through which knowledge can be valorized, certified, and exchanged; and charge a hefty rent in the process (this is one part of what tuition fees are about). This ‘platform’ can include anything from degrees to learning spaces; from labs and equipment to email servers and libraries. It can also be adjusted, improved, fitted to suit the interests of users (or consumers – in this case, students); this is what endless investment in buildings is about.

The cunning of extractive capitalism lies in the fact that it does not, in fact, require workers to do very much. You are a resource: in industrial capitalism, your body is a resource; in cognitive capitalism, your mind is a resource too. In extractive capitalism, it gets even better: there is almost nothing you do – not a single aspect of your thoughts, feelings, or actions – that the university cannot turn into profit. Reading Marxist theory on the side? It will make it into your courses. Interested in politics? Your awareness of social inequalities will be reflected in your teaching philosophy. Involved in community action? It will be listed in your online profile under ‘public engagement and impact’. It gets better still: even your critique of extractive, neoliberal conditions of knowledge production can be used to generate value for your employer – just make sure it is published in the appropriate journals, and before the REF deadline.

This is the secret to the remarkable resilience of extractive capitalism. It feeds on exactly what academics love most: on the desire to know more, to explore, to learn. This is, possibly, one of the most basic human needs past the point of food, shelter, and warmth. The fact that the system is designed to make access to all of the latter dependent on being exploited for the former speaks, I think, volumes (it also makes The Matrix look like less of a metaphor and more of an early blueprint, with technology just waiting to catch up). This makes ‘working to contract’ quite tricky: even if you pack up and leave your office at 16.38 on the dot, Monday to Friday, your employer will still be monetizing your labour. You are probably, even if unwittingly, helping them do so.

What, then, are we to do? It would obviously be easy to end with a vague call a las barricadas, conveniently positioned so as to boost one’s political cred. Not infrequently, my own work’s been read in this way: as if it ‘reminds academics of the necessity of activism’ or (worse) ‘invites to concrete action’ (bleurgh). Nothing could be farther from the truth: I absolutely disagree with the idea that critical analysis somehow magically transmigrates into political action. (In fact, why we are prone to mistaking one for the other is one of the key topics of my work, but this is an ASOS post, so I will not be writing about it). In other words, what you will do – tomorrow, on (or off?) the picket line, in a bit over a week, in the polling booth, in the next few months, when you are asked to join this or that committee or to review a junior colleague’s tenure/promotion folder – is your problem and yours alone. What this post is about, however, is what to do when you’re on ASOS.

Therefore, I want to propose a collective reclaiming of the life of the mind. Too much of our collective capacity – for thinking, for listening, for learning, for teaching – is currently absorbed by institutions that turn it, willy-nilly, into capital. We need to re-learn to draw boundaries. We need thinking, learning, and caring to become independent of the process that turns them into profit. There are many ways to do it – and many have been tried before: workers’ and cooperative universities; social science centres; summer schools; and, last but not least, our own teach-outs and picket line pedagogy. But even when these are not happening, we need to seriously rethink how we use the one resource that universities cannot replace: our own thoughts.

So from Thursday next week, I am going to be reclaiming my own. I will do the things I usually do – read; research; write; teach and supervise students; plan and attend meetings; analyse data; attend seminars; and so on – until 4.40. After that, however, my mind is mine – and mine alone.

 

*Rest assured that the students I teach get treated to a much more sophisticated version of the labour theory of value (Soc1), together with variations and critiques of Marxism (Soc2), as well as ontological assumptions of heterodox vs. ‘neoclassical’ economics (Econ8). If you are an academic bro, please resist the urge to try to ‘explain’ any of these as you will both waste my time and not like the result. Meanwhile, I strongly encourage you to read the *academic* work I have published on these questions over the past decade, which you can find under Publications.

**This is one of the reasons why some of the most interesting debates about knowledge production today concern ownership, copyright, or legal access. I do not have time to enter into these debates in this post; for a relatively recent take, see here.

Knowing neoliberalism

(This is a companion/’explainer’ piece to my article, ‘Knowing Neoliberalism’, published in July 2019 in Social Epistemology. While it does include a few excerpts from the article, if using it, please cite and refer to the original publication. The very end of this post explains why).

What does it mean to ‘know’ neoliberalism?

What does it mean to know something from within that something? This question formed the starting point of my (recently defended) PhD thesis. ‘Knowing neoliberalism’ summarizes some of its key points. In this sense, the main argument of the article is epistemological — that is, it is concerned with the conditions (and possibilities, and limitations) of (human) knowledge — in particular when produced and mediated through (social) institutions and networks (which, as some of us would argue, is always). More specifically, it is interested in a special case of that knowledge — that is, what happens when we produce knowledge about the conditions of the production of our own knowledge (in this sense, it’s not ‘about universities’ any more than, say, Bourdieu’s work was ‘about universities’ and it’s not ‘on education’ any more than Latour’s was on geology or mining. Sorry to disappoint).

The question itself, of course, is not new – it appears, in various guises, throughout the history of Western philosophy, particularly in the second half of the 20th century with the rise (and institutionalisation) of different forms of theory that earned the epithet ‘critical’ (including the eponymous work of philosophers associated with the Frankfurt School, but also other branches of Marxism, feminism, postcolonial studies, and so on). My own theoretical ‘entry points’ came from a longer engagement with Bourdieu’s work on sociological reflexivity and Boltanski’s work on critique, mediated through Arendt’s analysis of the dichotomy between thinking and acting and De Beauvoir’s ethics of ambiguity; a bit more about that here. However, the critique of neoliberalism that originated in universities in the UK and the US in the last two decades – including intellectual interventions I analysed in the thesis – lends itself as a particularly interesting case to explore this question.

Why study the critique of neoliberalism?

  • Critique of neoliberalism in academia is an enormously productive genre. The number of books, journal articles, special issues, not to mention ‘grey’ academic literature such as reviews or blogs (in the ‘Anglosphere’ alone), has grown exponentially since the mid-2000s. Originating in anthropological studies of ‘audit culture’, the genre now includes at least one dedicated book series (Palgrave’s ‘Critical University Studies’, which I’ve mentioned in this book review), as well as people dedicated to establishing ‘critical university studies‘ as a field of its own (for the avoidance of doubt, I do not associate my work with this strand, and while I find the delineation of academic ‘fields’ interesting as a sociological phenomenon, I have serious doubts about the value and validity of field proliferation — which I’ve shared in many amicable discussions with colleagues in the network). At the start of my research, I referred to this as the paradox of the proliferation of critique and the relative absence of resistance; the article, in part, tries to explain this paradox through an examination of what happens if and when we frame neoliberalism as an object of knowledge — or, in formal terms, an epistemic object.
  • This genre of critique is, and has been, highly influential: the tropes of the ‘death’ of the university or the ‘assault’ on the academia are regularly reproduced in and through intellectual interventions (both within and outside of the university ‘proper’), including far beyond academic neoliberalism’s ‘native’ context (Australia, UK, US, New Zealand). Authors who present this kind of critique, while most frequently coming from (or being employed at) Anglophone universities in the ‘Global North’, are often invited to speak to audiences in the ‘Global South’. Some of this, obviously, has to do with the lasting influence of colonial networks and hierarchies of ‘global’ knowledge production, and, in particular, with the durability of ‘White’ theory. But it illustrates the broader point that the production of critique needs to be studied from the same perspective as the production of any sort of knowledge – rather than as, somehow, exempt from it. My work takes Boltanski’s critique of ‘critical sociology’ as a starting point, but extends it towards a different epistemic position:

Boltanski primarily took issue with what he believed was the unjustified reduction of critical properties of ‘lay actors’ in Bourdieu’s critical sociology. However, I start from the assumption that professional producers of knowledge are not immune to the epistemic biases to which they suspect their research subjects to be susceptible…what happens when we take forms and techniques of sociological knowledge – including those we label ‘critical’ and ‘reflexive’ – to be part and parcel of, rather than opposed to or in any way separate from, the same social factors that we assume are shaping epistemic dispositions of our research subjects? In this sense, recognising that forms of knowledge produced in and through academic structures, even if and when they address issues of exploitation and social (in)justice, are not necessarily devoid of power relations and epistemic biases, seems a necessary step in situating epistemology in present-day debates about neoliberalism. (KN, p. 4)

  • This, at the same time, is what most of the sources I analysed in my thesis have in common: by and large, they locate sources of power – including neoliberal power – always outside of their own scope of influence. As I’ve pointed out in my earlier work, this means ‘universities’ – which, in practice, often means ‘us’, academics – are almost always portrayed as being on the receiving end of these changes. Not only is this profoundly unsociological – literally every single take on human agency in the past 50-odd years, from Foucault through to Latour and from Giddens through to Archer, recognizes that ‘we’ (including as epistemic agents) have some degree of influence over what happens – it is also profoundly unpolitical, as it outsources agency to variously conceived ‘others’ (as I’ve argued here) while avoiding the tricky elements of our own participation in the process. This is not to repeat the tired dichotomy of complicity vs. resistance, which is another not particularly innovative reading of the problem. What the article asks, instead, is: what kind of ‘purpose’ does the systematic avoidance of questions of ambiguity and ambivalence serve?

What does it aim to achieve?

The objective of the article is not, by the way, to say that the existing forms of critique (including other contributions to the special issue) are ‘bad’ or that they can somehow be ‘improved’. Least of all is it to say that if we just ‘corrected’ our theoretical (epistemological, conceptual) lens we would finally be able to ‘defeat neoliberalism’. The article, in fact, argues the very opposite: that as long as we assume that ‘knowing’ neoliberalism will somehow translate into ‘doing away’ with neoliberalism we remain committed to the (epistemologically and sociologically very limited) assumption that knowledge automatically translates into action.

(…) [the] politically soothing, yet epistemically limited assumption that knowledge automatically translates into action…not only omit(s) to engage with precisely the political, economic, and social elements of the production of knowledge elaborated above, [but] eschews questions of ambiguity and ambivalence generated by these contradictions…examples such as doctors who smoke, environmentalists who fly around the world, and critics of academic capitalism who nonetheless participate in the ‘academic rat race’ (Berliner 2016) remind us that knowledge of the negative effects of specific forms of behaviour is not sufficient to make them go away (KN, p. 10)

(If it did, there would be no critics of neoliberalism who exploit their junior colleagues, critics of sexism who nonetheless reproduce gendered stereotypes and dichotomies, or critics of academic hierarchy who evaluate other people on the basis of their future ‘networking’ potential. And yet, here we are).

What is it about?

The article approaches ‘neoliberalism’ from several angles:

Ontological: What is neoliberalism? It is quite common to see neoliberalism as an epistemic project. Yet, does the fact that neoliberalism changes the nature of the production of knowledge and even what counts as knowledge – and, eventually, becomes itself a subject of knowledge – give us grounds to infer that the way to ‘deal’ with neoliberalism is to frame it as an object (of knowledge)? Is the way to ‘destroy’ neoliberalism to ‘know it’ better? Does treating neoliberalism as an ideology – that is, as something that masses can be ‘enlightened’ about – translate into the possibility to wield political power against it?

(Plot spoiler: my answer to the above questions is no).

Epistemological: What does this mean for the ways we can go about knowing neoliberalism (or, for that matter, any element of ‘the social’)? My work, which is predominantly in social theory and the sociology of knowledge (no, I don’t work ‘on education’ and my research is not ‘about universities’), overlaps substantially with social epistemology – the study of the way social factors (regardless of how we conceive of them) shape the capacity to make knowledge claims. In this context, I am particularly interested in how they influence reflexivity, as the capacity to make knowledge claims about our own knowledge – including knowledge of ‘the social’. Enter neoliberalism.

What kind of epistemic position are we occupying when we produce an account of the neoliberal conditions of knowledge production in academia? Is one acting more like the ‘epistemic exemplar’ (Cruickshank 2010) of a ‘sociologist’, or a ‘lay subject’ engaged in practice? What does this tell us about the way in which we are able to conceive of the conditions of the production of our own knowledge about those conditions? (KN, p. 4)

(Yes, I know this is a bit ‘meta’, but that’s how I like it).

Sociological: How do specific conditions of our own production of knowledge about neoliberalism influence this? As a sociologist of knowledge, I am particularly interested in relations of power and privilege reproduced through institutions of knowledge production. As my work on the ‘moral economy’ of Open Access with Chris Muellerleile argued, the production of any type of knowledge cannot be analysed as external to its conditions, including when the knowledge aims to be about those conditions.

‘Knowing neoliberalism’ extends this line of argument by claiming we need to engage seriously with the political economy of critique. It offers some of the places we could look for such clues: for instance, the political economy of publishing. The same goes for networks of power and privilege: whose knowledge is seen as ‘translatable’ and ‘citeable’, and whose knowledge can be treated as an empirical illustration:

Neoliberalism offers an overarching diagnostic that can be applied to a variety of geographical and political contexts, on different scales. Whose knowledge is seen as central and ‘translatable’ in these networks is not independent from inequalities rooted in colonial exploitation, maintaining a ‘knowledge hierarchy’ between the Global North and the Global South…these forms of interaction reproduce what Connell (2007, 2014) has dubbed ‘metropolitan science’: sites and knowledge producers in the ‘periphery’ are framed as sources of ‘empirical’, ‘embodied’, and ‘lived’ resistance, while the production of theory, by and large, remains the work of intellectuals (still predominantly White and male) situated in prestigious universities in the UK and the US. (KN, p. 9)

This, incidentally, is the only part of the article that deals with ‘higher education’. It is very short.

Political: What does this mean for different sorts of political agency (and actorhood) that can (and do) take place in neoliberalism? What happens when we assume that (more) knowledge leads to (more) action? (apart from a slew of often well-intentioned but misconceived policies, some of which I’ve analysed in my book, ‘From Class to Identity’). The article argues that effecting a cognitive slippage between the two parts of Marx’s Eleventh Thesis – that is, assuming that interpreting the world will itself lead to changing it – is what contributes to the ‘paradox’ of the overproduction of critique. In other words, we become more and more invested in ‘knowing’ neoliberalism – e.g. producing books and articles – and less invested in doing something about it. This, obviously, is neither a zero-sum game (and it shouldn’t be) nor an old-fashioned call on academics to drop laptops and start mounting barricades; rather, it is a reminder that acting as if there were an automatic link between knowledge of neoliberalism and resistance to neoliberalism tends to leave the latter in its place.

(Actually, maybe it is a call to start mounting barricades, just in case).

Moral: Is there an ethically correct or more just way of ‘knowing’ neoliberalism? Does answering these questions enable us to generate better knowledge? My work – especially the part that engages with the pragmatic sociology of critique – is particularly interested in the moral framing and justification of specific types of knowledge claims. Rather than aiming to provide the ‘true’ way forward, the article asks what kind of ideas of ‘good’ and ‘just’ are invoked/assumed through critique? What kind of moral stance does ‘gnossification’ entail? To steal the title of this conference, when does explaining become ‘explaining away’ – and, in particular, what is the relationship between ‘knowing’ something and framing our own moral responsibility in relation to something?

The full answer to the last question, unfortunately, will take more than one publication. The partial answer the article hints at is that, while having a ‘correct’ way of ‘knowing’ neoliberalism will not ‘do away’ with neoliberalism, we can and should invest in more just and ethical ways of ‘knowing’ altogether. It should hardly need repeating that the evidence of widespread sexual harassment in academia, not to mention deeply entrenched casual sexism, racism, ableism, ethnocentrism, and xenophobia, suggests ‘we’ (as academics) are not as morally impeccable as we like to think we are. Thing is, no-one is. The article hopes to have made a small contribution towards giving us the tools to understand why, and how, this is the case.

I hope you enjoy the article!

——————————————————-

P.S. One of the rather straightforward implications of the article is that we need to come to terms with multiple reasons for why we do the work we do. Correspondingly, I thought I’d share a few that inspired me to do this ‘companion’ post. When I first started writing/blogging/Tweeting about the ‘paradox’ of neoliberalism and critique in 2015, this line of inquiry wasn’t very popular: most accounts smoothly reproduced the ‘evil neoliberalism vs. poor us little academics’ narrative. This has also been the case with most people I’ve met in workshops, conferences, and other contexts I have participated in (I went to quite a few as part of my fieldwork).

In the past few years, however, more analyses seem to converge with mine on quite a few analytical and theoretical points. My initial surprise at the fact that they seem not to directly engage with any of these arguments — in fact, were occasionally very happy to recite them back at me, without acknowledgement, attribution or citation — was somewhat clarified through reading the work on gendered citation practices. At the same time, it provided a very handy illustration for exactly the type of paradox described here: namely, while most academics are quick to decry the precarity and ‘awful’ culture of exploitation in the academia, almost as many are equally quick to ‘cite up’ or act strategically in ways that reproduce precisely these inequalities.

The other ‘handy’ way of appropriating the work of other people is to reduce the scope of their arguments, ideally representing them as an empirical illustration with limited purchase in a specific domain (‘higher education’, ‘gender’, ‘religion’), while hijacking the broader theoretical point for yourself (I have heard a number of other people — most often, obviously, women and people of colour — describe a very similar thing happening to them).

This post is thus a way of clarifying exactly what the argument of the article is, in, I hope, language that is simple enough even if you’re not keen on social ontology, social epistemology, social theory, or, actually, anything social (couldn’t blame you).

PPS. In the meantime, I’ve also started writing an article on how precisely these forms of ‘epistemic positioning’ are used to limit and constrain the knowledge claims of ‘others’ (women, minorities, etc.) in academia: if you have any examples you would like to share, I’m keen to hear them!

Existing while female

 

Space

 

The most threatening spectacle to the patriarchy is a woman staring into space.

I do not mean in the metaphorical sense, as in a woman doing astronomy or astrophysics (or maths or philosophy), though all of these help, too. Just plainly sitting, looking into some vague mid-point of the horizon, for stretches of time.

I perform this little ‘experiment’ at least once per week (more often, if possible; I like staring into space). I wholly recommend it. There are a few simple rules:

  • You can look at the passers-by (a.k.a. ‘people-watching’), but try to avoid eye contact longer than a few seconds: people should not feel that they are particular objects of attention.
  • If you are sitting in a café, or a restaurant, you can have a drink, ideally a tea or coffee. That’s not to say you shouldn’t enjoy your Martini cocktails or glasses of Chardonnay, but images of women cradling tall glasses of the alcoholic drink of their choice have been very successfully appropriated by both capitalism and patriarchy, for distinct though compatible purposes.
  • Don’t look at your phone. If you must check the time or messages it’s fine, but don’t start staring at it, texting, or browsing.
  • Don’t read (a book, a magazine, a newspaper). If you have a particularly interesting or important thought feel free to scribble it down, but don’t bury your gaze behind a notebook, book, or a laptop.

 

Try doing this for an hour.

What this ‘experiment’ achieves is that it renders visible the simple fact of existing. As a woman. Even worse, it renders visible the process of thinking. Simultaneously inhabiting an inner space (thinking) and public space (sitting), while doing little else to justify your existence.

NOT thinking-while-minding-children, as in ‘oh isn’t it admirrrrable that she manages being both an academic and a mom’.

NOT any other form of ‘thinking on our feet’ that, as Isabelle Stengers and Vinciane Despret (and Virginia Woolf) noted, was the constitutive condition for most thinking done by women throughout history.

The important thing is to claim space to think, unapologetically and in public.

Depending on place and context, this usually produces at least one of the following reactions:

  • Waiting staff, especially if male, will become increasingly attentive, repeatedly inquiring whether (a) I am alright (b) everything was alright (c) I would like anything else (yes, even if they are not trying to get you to leave, and yes, I have sat in the same place with friends, and this didn’t happen)
  • Men will try to catch my eye
  • Random strangers will start repeatedly glancing and sometimes staring in my direction.

 

I don’t think my experience in this regard is particularly exceptional. Yes, there are many places where women couldn’t even dream of sitting alone in public without risking things much worse than uncomfortable stares (I don’t advise attempting this experiment in such places). Yes, there are places where staring into a book/laptop/phone, ideally with headphones on, is the only way to avoid being approached, chatted up, or harassed by men. Yet, even in wealthy, white, urban, middle-class, ‘liberal’ contexts, women who display signs of being afflicted by ‘the life of the mind’ are still somehow suspect. For what this signals is that it is, actually, possible for women to have an inner life not defined by relation to men, if not particular men, then at least in the abstract.

 

Relations

‘Is it possible to not be in relation to white men?’, asks Sara Ahmed, in a brilliant essay on intellectual genealogies and institutional racism. The short answer is yes, of course, but not as long as men are in charge of drawing the family tree. Philosophy is a clear example. Two of my favourite philosophers, De Beauvoir and Arendt, are routinely positioned in relation to, respectively, Sartre and Heidegger (and, in Arendt’s case, to a lesser degree, Jaspers). While, in the case of De Beauvoir, this could be, to a degree, justified – after all, they were intellectual and writing partners for most of Sartre’s life – the narrative is hardly balanced: it is always Simone who is seen in relation to Jean-Paul, not the other way round*.

In a bit of an ironic twist, De Beauvoir’s argument in The Second Sex that a woman exists only in relation to a man seems to have been adopted as a stylistic prescription for narrating intellectual history (I recently downloaded an episode of In Our Time on De Beauvoir only to discover, in frustration, that it repeats exactly this pattern). Another example is the philosopher G. E. M. Anscombe, whose work is almost exclusively described in terms of her interpretation of Wittgenstein (she was also married to the philosopher Peter Geach, which doesn’t help). A great deal of Anscombe’s writing does not deal with Wittgenstein, but that is, somehow, passed over, at least in non-specialist circles. What also gets passed over is that, in any intellectual partnership or friendship, ideas flow in both directions. In this case, the honesty and generosity of women’s acknowledgments (and occasional overstatements) of intellectual debt tends to be taken as evidence of the incompleteness of female thinking; as if there couldn’t, possibly, be a thought in their ‘pretty heads’ that had not been placed there by a man.

Anscombe, incidentally, had a predilection for staring at things in public. Here’s an excerpt from the Introduction to Vol. 2 of her collected philosophical papers, Metaphysics and the Philosophy of Mind:

“The other central philosophical topic which I got hooked on without realising it was philosophy, was perception (…) For years I would spend time in cafés, for instance, staring at objects saying to myself: ‘I see a packet. But what do I really see? How can I say that I see here anything more than a yellow expanse?’” (1981: viii).

But Wittgenstein, sure.

Nature

Nature abhors a vacuum – if by ‘nature’ we mean the rationalisation of patriarchy, and if by ‘vacuum’ we mean the horrifying prospect of women occupied by their own interiority, irrespective of how mundane or elevated its contents. In Jane Austen’s novels, young women are regularly reminded that they should seem usefully occupied – embroidering, reading (but not too much, and ideally out loud, for everyone’s enjoyment), playing an instrument, singing – whenever young gentlemen come for a visit. The underlying message is, of course, that young gentlemen are not going to want to marry ‘idle’ women. The only justification for women’s existence is their value as (future) wives, and thus their reproductive capital: everything else – including forms of internal life that do not serve this purpose – is worthless.

Clearly, one should expect things to improve once women are no longer reduced to men’s property, or the function of wives and mothers. Clearly, they haven’t. In Motherhood, Sheila Heti offers a brilliant diagnosis of how the very question of having children bears down differently on women:

It suddenly seemed like a huge conspiracy to keep women in their thirties—when you finally have some brains and some skills and experience—from doing anything useful with them at all. It is hard to when such a large portion of your mind, at any given time, is preoccupied with the possibility—a question that didn’t seem to preoccupy the drunken men at all (2018: 98).

Rebecca Solnit points out the same problem in The Mother of All Questions: no matter what a woman does, she is still evaluated in relation to her performance as a reproductive engine. One of the messages of the insidious ‘lean-in’ kind of feminism is that it’s OK to not be a wife and a mother, as long as you are remarkably successful as a businesswoman, a political leader, or an author. Obviously, ‘ideally’, both. This keeps women stressed, overworked, and thus predictably willing to tolerate absolutely horrendous working conditions (hello, academia) and partnerships. Men can be mediocre and still successful (again, hello, academia); women, in order to succeed, have to be outstanding. Worse, they have to keep proving their outstandingness; ‘pure’ existence is never enough.

To refuse this – to refuse to justify one’s existence through a retrospective or prospective contribution to either particular men (wife of, mother of, daughter of), their institutions (corporation, family, country), or the vaguely defined ‘humankind’ (which, more often than not, is an extrapolation of these categories) – is thus to challenge the washed-out but seemingly undying assumption that a woman is somehow a less worthy version of a man. It is to subvert the myth that shaped and constrained so many, from Austen’s characters to Woolf’s Shakespeare’s sister: that to exist, a woman has to be useful; that inhabiting an interiority is something to be performed in secret (which means away from the eyes of the patriarchy); that, ultimately, women’s existence needs to be justified. If not by providing sex, childbearing, and domestic labour, then at least indirectly, by consuming goods and services that rely on the underpaid (including domestic) labour of other women, from fashion to iPhones and from babysitting to nail salons. Sometimes, if necessary, also by writing Big Books: but only so they can be used by men who see in them the reflection of their own (imagined) glory.

Death

Heti recounts another story, about her maternal grandmother, Magda, imprisoned in a concentration camp during WWII. One day, Nazi soldiers came to the women’s barracks and asked for volunteers to help with cooking, cleaning and scrubbing in the officers’ kitchen. Magda stepped forward; as Heti writes, ‘they all did’. Magda was not selected; she was lucky, as it soon transpired that those women were not taken to the kitchen, but rather raped by the officers and then killed.

I lingered over the sentence ‘they all did’ for a long time. What would it mean for more women to not volunteer? To not accept endlessly proving one’s own usefulness, in cover letters, job interviews, student feedback forms? To simply exist, in space?

I think I’ll just sit and think about it for a while.

[Photo: Hannah Starkey]

(The photo is by the British photographer Hannah Starkey, who has a particular penchant for capturing women inhabiting their own interiority. Thank you to my partner who first introduced me to her work, the slight irony being that he interrupted me in precisely one such moment of contemplation to tell me this).

*I used to make a point of asking the students taking Social Theory to change ‘Sartre’s partner Simone de Beauvoir’ in their essays to ‘de Beauvoir’s partner Jean-Paul Sartre’ and see if it begins to read differently.

Area Y: The Necropolitics of Post-Socialism

This summer, I spent almost a month in Serbia and Montenegro (yes, these are two different countries, despite the New York Times still refusing to acknowledge this). This is about seven times as long as my visits usually last. The two principal reasons are that my mother, who lives in Belgrade, is ill, and that I was planning to get a bit of time to quietly sit and write my thesis on the Adriatic coast of Montenegro. How the latter turned out in light of the former I leave to the imagination (tl;dr: not well). It did, however, give me ample time to reflect on the post-socialist condition, which I haven’t done in a while, and to get outside Belgrade, to which I normally confine my brief visits.

The way in which the perverse necro/bio-politics of post-socialism obtains – in my mother’s illness, in the landscape, and in the socio-material – fits almost too perfectly into what has for years been the dominant style of writing about places that used to be behind the Iron Curtain (or, in the case of Yugoslavia, on its borders). Social theory’s favourite ruins – the ruins of socialism – are repeatedly re-valorised through being dusted off and resurrected as yet another alter-world to provide the mirror image to the here and now (the here and now, obviously, being capitalism). During the Cold War, the Left had its alter-image in the Soviet Union; now, the antidote to neoliberalism is provided not through the actual ruins of real socialism – that would be a tad too much to handle – but through the re-invention of the potential of socialism to provide, in the tellingly polysemic title of MoMA’s recently opened exhibition on architecture in Yugoslavia, concrete utopias.

Don’t get me wrong: I would love to see the exhibition, and I am sure that it offers much to learn, especially for those who did not have the dubious privilege of having grown up on both sides of socialism. It’s not the absence of nuance that makes me nauseous in encounters with socialist nostalgia: a lot of it, as a form of cultural production, is made by well-meaning people and is, in some cases, incredibly well researched. It’s that resurrecting the hipsterified golems of post-socialism serves little purpose other than to underline their ontological status as a source of comparison for the West – cannon fodder for imaginaries of a world so bereft of hope that it would rather replay its past dreams than face the potential waking nightmare of its future.

It’s precisely this process that leaves them unable to die, much like the ghosts/apparitions/copies in Lem’s (and Tarkovsky’s) Solaris, and in VanderMeer’s Southern Reach trilogy. In VanderMeer’s books, members of the eleventh expedition (or, rather, their copies) who return to the ‘real world’ after exposure to Area X develop cancer and die pretty quickly. Life in post-socialism is very much this: shadows or copies of former people confusedly going about their daily business, or revisiting the places that once made sense to them – places which, sometimes, they have to purchase back as repackaged ‘post-socialism’; in this sense, the parable of Roadside Picnic/Stalker as the perennial museum of post-communism is really prophetic.

The necropolitical profile of these parts of former Yugoslavia is, in fact, pretty unexceptional. For years, research has shown that rapid privatisation increases mortality, even when controlling for other factors. Obviously, the state still feigns perfunctory care for the elderly, but healthcare is cumbersome, inefficient and, in most cases, barely palliative. Smoking and heavy drinking are de rigueur: in winter, Belgrade cafés and pubs turn into proper smokehouses. Relatedly, vegetarianism is still often, if benevolently, ridiculed. Fossil fuel extraction is ubiquitous. According to this report from 2014, Serbia had the second highest rate of premature deaths due to air pollution in Europe. And that is not even getting close to the Thing That Can’t Be Talked About – the environmental effects of the NATO intervention in 1999.

An apt illustration comes as I travel to Western Serbia to give a talk at the anthropology seminar at Petnica Science Centre, where I used to work between 2000 and 2008. Petnica is a unique institution that developed in the 1980s and 1990s as part science camp, part extracurricular interdisciplinary research institute, where electronics researchers would share tables in the canteen with geologists, and physicists would talk (arguably, not always agreeing) to anthropologists. Founded in part by the Young Researchers of Serbia (then Yugoslavia), a forward-looking environmental exploration and protection group, the place used to flaunt its green credentials. Today, it is funded by the state – and fully branded by the Oil Industry of Serbia. The latter is Serbian only in name, having become a subsidiary of the Russian fossil fuel giant Gazpromneft. What could arguably be dubbed Serbia’s future research elite is thus raised to take for granted the ubiquity of fossil fuels, not only as a source of energy but, literally, as what runs the facilities in which they work.

These researchers can still consider themselves lucky. The other part of the Serbian economy that actually works consists of the factories – or, rather, production facilities – of multinational companies. In these companies, workers are given 12-hour shifts, banned from unionising, and, as a series of relatively recent reports revealed, issued with adult diapers so as to render toilet breaks unnecessary.

As Elizabeth Povinelli argued, following Achille Mbembe, geontopower – the production of life and nonlife, and of the distinction between them, including what is allowed to live and what is left to die – is the primary mode of the exercise of power in late liberalism. A less frequently examined way of sustaining the late-liberal order is the production of semi-dependent semi-peripheries. Precisely because they are not the world’s slums, and because they are not former colonies, they receive comparatively little attention. Instead, they are mined for resources (human and inhuman). That the interaction between the two regularly produces outcomes guaranteed to deplete the former is of little relevance. The reserves, unlike those of fossil fuels, are almost endless.

The Serbian government does its share in ensuring that the supply of cheap labour never runs out, launching endless campaigns to stimulate reproduction. It seems to be working: babies are increasingly the ‘it’ accessory in cafés and bars. Officially, stimulating the birth rate is meant to offset the ‘cost’ of pensions, which the IMF insists should not increase. Unofficially, of course, the easiest way to adjust for this is to make sure pensioners are left behind. Much like the current hype about its legacy, the necropolitics of post-socialism operates primarily by foregrounding its Instagrammable elements and hiding the ugly, non-productive ones.

Much like in VanderMeer’s Area X, the knowledge that the border is advancing can be a mixed blessing: as Danowski and Viveiros de Castro argued in a different context, the end of the world comes more easily to those for whom the world has already ended, more than once. Not unlike what Scranton argued in Learning to Die in the Anthropocene, this – rather than sanitised dreams of a utopian future – is perhaps the one thing worth resurrecting from post-socialism.

The paradox of resistance: critique, neoliberalism, and the limits of performativity

The critique of neoliberalism in academia is almost as old as its object. Paradoxically, it is the only element of the ‘old’ academia that seems to be thriving amid steadily worsening conditions: as I’ve argued in this book review, hardly a week goes by without a new book, volume, or collection of articles denouncing the neoliberal onslaught or ‘war’ on universities and, no less frequently, announcing their (untimely) death.

What makes the proliferation of critique of the transformation of universities particularly striking is the relative absence – at least until recently – of sustained modes of resistance to the changes it describes. While the UCU strike in reaction to the changes to universities’ pension scheme offers some hope, by and large, resistance has much more often taken the form of a book or blog post than of a strike, demo, or occupation. Relatedly, given the level of agreement among academics about the general direction of these changes, engagement with developing long-term, sustainable alternatives to exploitative modes of knowledge production has been surprisingly scattered.

It was this relationship between the abundance of critique and the paucity of political action that initially got me interested in arguments and forms of intellectual positioning in what is increasingly referred to as the ‘[culture] war on universities’. Of course, the question of the relationship between critique and resistance – or knowledge and political action – concerns much more than the future of English higher education, and reaches into the constitutive categories of Western political and social thought (I’ve addressed some of this in this talk). In this post, however, my intention is to focus on its implications for how we can conceive of critique in and of neoliberal academia.

Varieties of neoliberalism, varieties of critique?

While critiques of neoliberalism in academia tend to converge around the causes as well as the consequences of this transformation, this doesn’t mean there is no theoretical variation. Marxist critique, for instance, tends to emphasise the changes in the working conditions of academic staff, increased exploitation, and the growing commodification of knowledge. It usually identifies precarity as the problem that prevents academics from exercising the form of political agency – labour organizing – that is seen as the primary source of potential resistance to these changes.

Poststructuralist critique, most of it drawing on Foucault, tends to focus on the changing status of knowledge, which is increasingly portrayed as a private rather than a public good. The reframing of knowledge in terms of economic growth is further tied to measurement – reduction to a single, unitary, comparable standard – and to competition, which is meant to ensure maximum productivity. This also gives rise to mechanisms of constant assessment, such as the TEF and the REF, captured in the phrase ‘audit culture’. Academics, in this view, become undifferentiated objects of assessment, which is used not only to instill fear but also to keep them in constant competition with each other, in the hope of the eventual conferral of ‘tenure’ or permanent employment, through which they can be constituted as full subjects with political agency.

Last, but not least, the type of critique that can broadly be referred to as ‘new materialist’ shifts the source of political power directly to the instruments of measurement and sorting, such as algorithms, metrics, and Big Data. In the neoliberal university, the argument goes, there is no need for anyone to even ‘push the button’; metrics run on their own, with the social world already so imbricated with them that resistance becomes difficult, if not entirely impossible. The source of political agency, in this sense, becomes the ‘humanity’ of academics – what Arendt called ‘mere’ and Agamben ‘bare’ life. A significant portion of new materialist critique, in this vein, focuses on emotions and affect in the neoliberal university, as if to underscore the contrast between the lived and felt experiences of academics on the one hand, and the inhumanity of algorithms or their ‘human executioners’ on the other.

Despite possibly divergent theoretical genealogies, these forms of critique seem to move in the same direction. Namely, the object or target of critique becomes increasingly elusive, murky, and de-differentiated: but, strangely enough, so does the subject. As power grows opaque (or, in Foucault’s terms, ‘capillary’), the source of resistance shifts from a relatively defined position or identity (workers or members of the academic profession) into a relatively amorphous concept of humanity, or precarious humanity, as a whole.

Of course, there is nothing particularly original in the observation that neoliberalism has eroded traditional grounds for solidarity, such as union membership. Wendy Brown’s Undoing the Demos and Judith Butler’s Notes towards a performative theory of assembly, for instance, address the possibilities for political agency – including cross-sectional approaches such as that of the Occupy movement – in view of this broader transformation of the ‘public’. Here, however, I would like to engage with the implications of this shift in the specific context of academic resistance.

Nerdish subject? The absent centre of [academic] political ontology

The academic political subject – hence the pun on Žižek – is profoundly haunted by its Cartesian legacy: the distinction between thinking and being and, by extension, between subject and object. This is hardly surprising: critique is predicated on thinking about the world, which proceeds through ‘apprehending’ the world as distinct from the self; but the self is also predicated on thinking about that world. Though they may have disagreed on many other things, Boltanski and Bourdieu – both of whom feature prominently in my work – converge on the importance of this element for understanding the academic predicament: Bourdieu calls it the scholastic fallacy, Boltanski complex exteriority.

Nowhere is the Cartesian legacy of critique more evident than in its approach to neoliberalism. From Foucault onwards, academic critique has approached neoliberalism as an intellectual project: the product of a ‘thought collective’, or a small group of intellectuals, initially concentrated in the Mont Pèlerin Society, from which they went on to ‘conquer’ not only economics departments but also, more importantly, centres of political power. Critique, in other words, projects back onto neoliberalism its own way of coming to terms with the world: knowledge. From here, the Weberian assumption that ideas precede political action is transposed onto forms of resistance: the more we know about how neoliberalism operates, the better we will be able to resist it. This is why, as neoliberalism proliferates, the books, journal articles, etc. that seek to ‘denounce’ it multiply as well.

Speech acts: the lost hyphen

The fundamental notion of critique, in this sense, is (J. L. Austin‘s and Searle’s) notion of speech acts: the assumption that words can have effects. What gets lost in dropping the hyphen in speech(-)acts is a very important bit of the theory of performativity: namely, the conditions under which speech does constitute effective action. This is why Butler, in Performative Agency, draws attention to Austin’s emphasis on perlocution: speech-acts that are effective only under certain circumstances. In other words, it is not enough to exclaim “Universities are not for sale! Education is not a commodity! Students are not consumers!” for this to become the case. This raises the question: who is going to bring this about? What are the conditions under which it can be realized? In other words: who has the power to act in ways that can make this claim true?

What critique stumbles over, then, is thinking its own agency within these conditions, rather than painting them as if they were somehow on the ‘outside’ of critique itself. Butler recognizes this:

“If this sort of world, what we might be compelled to call ‘the bad life’, fails to reflect back my value as a living being, then I must become critical of those categories and structures that produce that form of effacement and inequality. In other words, I cannot affirm my own life without critically evaluating those structures that differentially value life itself [my emphasis]. This practice of critique is one in which my own life is bound up with the objects that I think about” (2015: 199).

In simpler terms: my position as a political subject is predicated on the practice of critique, which entails reflecting on the conditions that make my life difficult (or unbearable). Yet those conditions are in part what constitutes my capacity to engage in critique in the first place, as the practice of thinking (critically) is, especially in the case of academic critique, inextricably bound up with the practices, institutions, and – not least importantly – economies of academic knowledge production. In formal terms, critique resembles Russell’s paradox: a set that both is and is not a member of itself.
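For reference, this is the standard formal statement of the paradox the analogy draws on (a general logical fact, not specific to this argument): define the set of all sets that are not members of themselves, and asking whether that set contains itself yields a contradiction either way.

```latex
R = \{\, x \mid x \notin x \,\}
\qquad\text{whence}\qquad
R \in R \iff R \notin R
```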

Living with (Russell) paradoxes

This is why the academic critique of neoliberalism has no problem thinking about governing rationalities, the exploitation of workers in Chinese factories, or VCs’ salaries: practices that it perceives as outside of itself, or in which it can conceive of itself as an object. But it faces serious problems when it comes to thinking of itself as a subject and, even more, acting in this context, as this – at least according to its own standards – means reflecting on all the practices that make it ‘complicit’ in exactly what it aims to expunge, or criticize.

This means coming to terms with the fact that neoliberalism is the Research Excellence Framework, but neoliberalism is also when you discuss ideas for a super-cool collaborative project. Neoliberalism is the requirement to submit all your research outputs to the faculty website, but neoliberalism is also the pride you feel when your most recent article is Tweeted about. Neoliberalism is the incessant corporate emails about ‘wellbeing’, but it is also the craft beer you have with your friends in the pub. This is why, in the seemingly interminable debates about the ‘validity’ of neoliberalism as an analytical term, both sides are right: yes, on the one hand, the term is vague and can seemingly be applied to any manifestation of power, but, on the other, it does cover everything, which means it cannot be avoided either.

This is exactly the sort of ambiguity – the fact that things can be two different things at the same time – that critique in neoliberalism needs to come to terms with. This could possibly help us move beyond the futile iconoclastic gesture of revealing the ‘true nature’ of things, expecting that action will naturally follow from this (Martijn Konings’ Capital and Time has a really good take on the limits of ‘ontological’ critique of neoliberalism). In this sense, if there is something critique can learn from neoliberalism, it is the art of speculation. If economic discourses are performative, then, by definition, critique can be performative too. This means that futures can be created – but the assumption that ‘voice’ is sufficient to create the conditions under which this can be the case needs to be dispensed with.

Theory as practice: for a politics of social theory, or how to get out of the theory zoo

[These are my thoughts/notes for “The Practice of Social Theory”, a summer school Mark Carrigan and I are running at the Department of Sociology of the University of Cambridge from 4 to 6 September 2017.]

Revival of theory?

It seems we are witnessing something akin to a revival of theory, or at least of an interest in it. In 2016, the British Journal of Sociology published Swedberg’s “Before theory comes theorizing, or how to make social sciences more interesting”, a longer version of the journal’s 2015 Annual Public Lecture, followed by responses from – among others – Krause, Schneiderhan, Tavory, and Karleheden. A string of recent books – including Matt Dawson’s Social Theory for Alternative Societies, Alex Law’s Social Theory for Today, and Craig Browne’s Critical Social Theory, to name but a few – set out to consider the relevance or contribution of social theory to understanding contemporary social problems. This is in addition to a renewal of interest in the biography or contemporary relevance of social-philosophical schools such as Existentialism (1, 2) and the Frankfurt School (1, 2).

To a degree, this revival happens on the back of the challenges posed to the status of theory by the rise of data science, leading Lizardo and Hay to mount defenses of the value and contributions of theory to sociology and international relations, respectively. In broader terms, however, it addresses the question of the status of the social sciences – and, by extension, academic knowledge – more generally; as such, it brings us back to the justification of expertise, a question of particular relevance in the current political context.

The meaning of theory

Sure enough, theory has many meanings (Abend, 2008), and consequently many forms in which it is practiced. However, one characteristic that seems to be shared across the board is that it is part of (under)graduate training, after which it gets bracketed off in the form of “the theory chapter” of dissertations/theses. In this sense, theory is framed as foundational to socialization into a particular discipline but, at the same time, rarely revisited – at least not explicitly – after the initial demonstration of aptitude. In other words, rather than something one keeps doing, theory becomes something that is ‘done with’. The exception, of course, are those who decide to make theory the centre of their intellectual pursuits; however, “doing theory” in this sense all too often becomes limited to the exegesis of existing texts (what Krause refers to as ‘theory a’ and Abend as ‘theory 4’), which leads to competition among theorists for the best interpretation of “what theorist x really wanted to say”, or, alternatively, to the application of existing concepts to new observations or ‘problems’ (‘theory b’ and ‘c’, in Krause’s terms). Either way, the field of social theory resembles less the groves of Plato’s Academy and more a zoo in which different species (‘Marxists’, ‘critical realists’, ‘Bourdieusians’, ‘rational-choice theorists’) dwell in their respective enclosures or fight with members of the same species for dominance of a circumscribed domain.

[Image: competitive behaviour among social theorists]

This summer school started from the ambition to change that: to go beyond rivalries or allegiances to specific schools of thought, and think about what doing theory really means. I often told people that wanting to do social theory was a major reason why I decided to do a second PhD; but what was this about? I did not say ‘learn more’ about social theory (my previous education provided a good foundation), ‘teach’ social theory (though supervising students at Cambridge is really good practice for this), read, or even write social theory (though, obviously, this was going to be a major component). While all of these are essential elements of becoming a theorist, the practice of social theory certainly isn’t reducible to them. Here are some of the other aspects I think we need to bear in mind when we discuss the return, importance, or practice of theory.

Theory is performance

This may appear self-evident once the focus shifts to ‘doing’, but we rarely talk about what practicing theory is meant to convey – that is, about theorising as a performative act. Some elements of this are not difficult to establish: doing theory usually means identification with a specific group, or form of professional or disciplinary association. Most professional societies have committees, groups, and specific conference sessions devoted to theory – but that does not mean theory is exclusively practiced within them. In addition to belonging, theory also signifies status. In many disciplines, theoretical work has for years been held in high esteem; the flipside, of course, is that ‘theoretical’ is often taken to mean too abstract or divorced from everyday life, something that became a more pressing problem with the decline of funding for the social sciences and the concomitant expectation to make them socially relevant. While the status of theory is a longer (and separate) topic, one that has been discussed at length in the history of sociology and other social sciences, it bears repeating that asserting one’s work as theoretical is always a form of positioning: it serves to define the standing of both the speaker and (sometimes implicitly) other contributors. This brings to mind that…

Theory is power

Not everyone gets to be treated as a theorist: it is also a question of recognition and, thus, of political (and other) forms of power. ‘Theoretical’ discussions are usually held between men (mostly, though not exclusively, white men); interventions from women, people of colour, and persons outside centres of epistemic power are often interpreted as empirical illustrations or, at best, as contributions to ‘feminist’ or ‘race’ theory*. Raewyn Connell wrote about this in Southern Theory, and initiatives such as Why is my curriculum white? and Decolonizing curriculum in theory and practice have brought it to the forefront of university struggles, but it speaks to the larger point made by Spivak: the majority of mainstream theory treats the ‘subaltern’ only as empirical or ethnographic illustration of the theories developed in the metropolis.

The problem here is not only (or primarily) that of representation, in the sense in which theory thus generated fails to accurately depict the full scope of social reality, or experiences and ideas of different people who participate in it. The problem is in a fundamentally extractive approach to people and their problems: they exist primarily, if not exclusively, in order to be explained. This leads me to the next point, which is that…

Theory is predictive

A good illustration for this is offered by pundits’ and political commentators’ surprise at events in the last year: the outcome of the Brexit referendum (Leave!), the US elections (Donald Trump!), and last but not least, the UK General Election (surge in votes for Corbyn!). Despite differences in how these events are interpreted, in most cases they convey that, as one pundit recently confessed, nobody has a clue about what is going on. Does this mean the rule of experts really is over, and, with it, the need for general theories that explain human action? Two things are worth taking into account.

To begin with, social-scientific theories enter the public sphere in a form that’s not only simplified, but also distilled into ‘soundbites’ or clickbait adapted to the presumed needs and preferences of the audience, usually omitting all the methodological or technical caveats they normally come with. For instance, the results of opinion polls or surveys are taken to present clear predictions, rather than reflections of general statistical tendencies; reliability is rarely discussed. Nor are social scientists always innocent victims of this media spin: some actively work on increasing their visibility or impact, and thus – perhaps unwittingly – contribute to the sensationalisation of social-scientific discourse. Second, and this can’t be put delicately, some of these theories are just not very good. ‘Nudgery’ and ‘wonkery’ often rest on not particularly sophisticated models of human behaviour; which is not to say that they do not work – they can – but rather that the theoretical assumptions underlying these models are rarely accessible to scrutiny.

Of course, it doesn’t take a lot of imagination to figure out why this is the case: it is easier to believe that selling vegetables in attractive packaging can solve the problem of obesity than to invest in long-term policy planning and research on decision-making that has consequences for public health. It is also easier to believe that removing caps to tuition fees will result in universities charging fees distributed normally from lowest to highest, than to bother reading theories of organizational behaviour in different economic and political environments and try to understand how this maps onto the social structure and demographics of a rapidly changing society. In other words: theories are used to inform or predict human behaviour, but often in ways that reinforce existing divisions of power. So, just in case you didn’t see this coming…

Theory is political

All social theories are about constraints, including those that are self-imposed. From Marx to Freud and from Durkheim to Weber (and many non-white, non-male theorists who never made it into ‘the canon’), theories are about what humans can and cannot do; they are about how relatively durable relations (structures) limit and enable how they act (agency). Politics is, fundamentally, about the same thing: things we can and things we cannot change. We may denounce Bismarck’s definition of politics as the art of the possible as insufficiently progressive, but – at the risk of sounding obvious – understanding how (and why) things stay the same is fundamental to understanding how to go about changing them. The history of social theory, among other things, can be read as a story about shifting the boundaries of what was considered fixed and immutable, on the one hand, and constructed – and thus subject to change – on the other.

In this sense, all social theory is fundamentally political. This isn’t to license bickering over different historical materialisms, or to stimulate fantasies – so dear to intellectuals – of ‘speaking truth to power’. Nor should theories be understood as weapons in the ‘war of time’, despite Debord’s poetic formulation: this is but the flipside of intellectuals’ dream of domination, in which their thoughts (i.e. themselves) inspire the masses to revolt, usually culminating in their own ascendance to a position of power (thus conveniently cutting out the middleman in ‘speaking truth to power’, as they become the prime bearers of both).

Theory is political in a much simpler sense, in which it is about society and the elements that constitute it. As such, it has to be about understanding what it is that those we think of as society think, want, and do, even – and possibly, especially – when we do not agree with them. Rather than aiming to ‘explain away’ people, or fit their behaviour into pre-defined social models, social theory needs to learn to listen to – to borrow a term from politics – its constituents. This isn’t to argue for a (not particularly innovative) return to grounded theory, or ethnography (despite the fact that both are relevant and useful). At the risk of sounding pathetic, perhaps the next step in the development of social theory is to really make it a form of social practice – that is, make it be with the people, rather than about the people. I am not sure what this would entail, or what it would look like; but I am pretty certain it would be a welcome element of building a progressive politics. In this sense, doing social theory could become less of the practice of endlessly revising a blueprint for a social theory zoo, and more of a project of getting out from behind its bars.

*The tendency to interpret women’s interventions as if they are inevitably about ‘feminist theory’ (or, more frequently, as if they always refer to empirical examples) is a trend I have been increasingly noticing since moving into sociology, and definitely want to spend more time studying. This is obviously not to say there aren’t women in the field of social theory, but rather that gender (and race, ethnicity, and age) influence the level of generality at which one’s claims are read, thus reflecting the broader tendency to see universality and Truth as coextensive with the figure of the male and white academic.

Zygmunt Bauman and the sociologies of end times

[This post was originally published at the Sociological Review blog’s Special Issue on Zygmunt Bauman, 13 April 2017]

“Morality, as it were, is a functional prerequisite of a world with an in-built finality and irreversibility of choices. Postmodern culture does not know of such a world.”

Zygmunt Bauman, Sociology and postmodernity

Getting reacquainted with Bauman’s 1988 essay “Sociology and postmodernity”, I accidentally misread the first word of this quote as “mortality”. In the context of the writing of this piece, it would be easy to interpret this as a Freudian slip – yet, as slips often do, it betrays a deeper unease. If it is true that morality is a functional prerequisite of a finite world, it is even truer that such a world calls for mortality – the ultimate human experience of irreversibility. In the context of trans- and post-humanism, as well as the growing awareness of the fact that the world, as the place inhabited (and inhabitable) by human beings, can end, what can Bauman teach us about both?

In Sociology and postmodernity, Bauman assumes a position at the crossroads of two historical (social, cultural) periods: modernity and postmodernity. Turning away from the past to look towards the future, he offers thoughts on what a sociology adapted to the study of the postmodern condition would be like. Instead of a “postmodern sociology” as a mimetic representation of (even if a pragmatic response to) postmodernity, he argues for a sociology that attempts to give a comprehensive account of the “aggregate of aspects” that cohere into a new, consumer society: the sociology of postmodernity. This form of account eschews the observation of the new as a deterioration, or aberration, of the old, and instead aims to come to terms with the system whose contours Bauman would go on to develop in his later work: a system characterised by a plurality of possible worlds, and not necessarily a way to reconcile them.

The point in time in which he writes lends itself fortuitously to the argument of the essay. Not only did Legislators and interpreters, in which he reframes intellectuals as translators between different cultural worlds, come out a year earlier; the publication of Sociology and postmodernity briefly precedes 1989, the year that will indeed usher in a wholly new period in the history of Europe, including in Bauman’s native Poland.

On the one hand, he takes the long view back to post-war Europe, built, as it was, on the legacy of the Holocaust as a pathology of modernity, and on two approaches to preventing its repetition – market liberalism and political freedoms in the West, and planned economies and more restrictive political regimes in the Central and Eastern parts of the subcontinent. On the other, he engages with some of the dilemmas for the study of society that the approaching fall of the Berlin Wall and the eventual unification of those two hitherto separated worlds was going to open. In this sense, Bauman really has the privilege of a two-faced version of Benjamin’s Angel of History. This probably helped him recognize the false dichotomy of consumer freedom and dictatorship over needs, which, as he stated, was quickly becoming the only imaginable alternative to the system – at least as far as the imagination was that of the system itself.

The present point of view is not all too dissimilar from the one in which Bauman was writing. We regularly encounter pronouncements of an end of a whole host of things, among them history, classical distribution of labour, standards of objectivity in reporting, nation-states, even – or so we hope – capitalism itself. While some of Bauman’s fears concerning postmodernity may, from the present perspective, seem overstated or even straightforwardly ridiculous, we are inhabiting a world of many posts – post-liberal, post-truth, post-human. Many think that this calls for a rethinking of how sociology can adapt itself to these new conditions: for instance, in a recent issue of International Sociological Association’s Global Dialogue, Leslie Sklair considers what a new radical sociology, developed in response to the collapse of global capitalism, would be like.

It is as if sociology and the zeitgeist were involved in some weird pas-de-deux: changes in any domain of life (technology, political regime, legislation) almost instantaneously trigger calls for, if not the invention of new, then a serious reconsideration of old paradigms and approaches to its study.

I would like to suggest that one of the sources of continued appeal of this – which Mike Savage brilliantly summarised as epochal theorising – is not so much the heralding of the new, as the promise that there is an end to the present state of affairs. In order for a new ‘epoch’ to succeed, the old one needs to end. What Bauman warns about in the passage cited at the beginning is that in a world without finality – without death – there can be no morality. In T.S. Eliot’s lines from Burnt Norton: If all time is eternally present, all time is irredeemable. What we may read as Bauman’s fear, therefore, is not that worlds as we know them can (and will) end: it is that, whatever name we give to the present condition, it may go on reproducing itself forever. In other words, it is a vision of the future that looks just like the present, only there is more of it.

Which is worse? It is hard to tell. A rarely discussed side of epochal theorising is that it imagines a world in which social sciences still have a role to play, if nothing else, in providing a theoretical framing or empirically-informed running commentary of its demise, and thus offers salvation from the existential anxiety of the present. The ‘ontological turn’ – from object-oriented ontology, to new materialisms, to post-humanism – reflects, in my view, the same tendency. If objects ‘exist’ in the same way as we do, if matter ‘matters’ in the same way (if not in the same degree) in which, for instance, black lives matter, this provides temporary respite from the confines of our choices. Expanding the concept of agency so as to involve non-human actors may seem more complicated as a model of social change, but at least it absolves humans from the unique burden of historical responsibility – including that for the fate of the world.

Human (re)discovery of the world, thus, conveys less a newfound awareness of the importance of the lived environment than the desire to escape the solitude of thinking about the human (as Dawson also notes, all too human) condition. The fear of relativism that the postmodern ‘plurality’ of worlds brought about appears to have been preferable to the possibility that there is, after all, just the one world. If the latter is the case, the only escape from it lies, to borrow from Hamlet, in the country from whose bourn no traveller has ever returned: in other words, in death.

This impasse is perhaps felt strongest in sociology and anthropology because excursions into other worlds have been both the gist of their method and the foundations of their critical potential (including their self-critique, which focused on how these two elements combine in the construction of epistemic authority). The figure of the traveller to other worlds was more pronounced in the case of anthropology, at least at the time when it developed as the study of exotic societies on the fringe of colonial empires, but sociology is no stranger to visitation either: its others, and their worlds, delineated by sometimes less tangible boundaries of class, gender, race, or just epistemic privilege. Bauman was among theorists who recognized the vital importance of this figure in the construction of the foundations of European modernity, and thus also sensitive to its transformations in the context of postmodernity – exemplified, as he argued, in contemporary human’s ambiguous position: between “a perfect tourist” and a “vagabond beyond remedy”.

In this sense, the awareness that every journey has an end can inform the practice of social theory in ways that go beyond the need to pronounce new beginnings. Rather than using eulogies in order to produce more of the same thing – more articles, more commentary, more symposia, more academic prestige – perhaps we can see them as an opportunity to reflect on the always-unfinished trajectory of human existence, including our existence as scholars, and the responsibility that it entails. The challenge, in this case, is to resist the attractive prospect of escaping the current condition by ‘exit’ into another period, or another world – postmodern, post-truth, post-human, whatever – and remember that, no matter how many diverse and wonderful entities they may be populated with, these worlds are also human, all too human. This can serve as a reminder that, as Bauman wrote in his famous essay on heroes and victims of postmodernity, “Our life struggles dissolve, on the contrary, in that unbearable lightness of being. We never know for sure when to laugh and when to cry. And there is hardly a moment in life to say without dark premonitions: ‘I have arrived’”.

Boundaries and barbarians: ontological (in)security and the [cyber?] war on universities

Prologue

One Saturday in late January, I go to the PhD office at the Department of Sociology at the University of Cambridge’s New Museums site (yes, PhD students shouldn’t work on Saturdays, and yes, we do). I swipe my card at the main gate of the building. Nothing happens.

I try again, and again, and still nothing. The sensor stays red. An interaction with a security guard who seems to appear from nowhere conveys there is nothing wrong with my card; apparently, there has been a power outage and the whole system has been reset. A rather distraught-looking man from the Department of History and Philosophy of Science appears around the corner, insisting on being let back inside the building, where he had left a computer on with, he claims, sensitive data. The very amicable security guard apologises. There’s nothing he can do to let us in. His card doesn’t work, either, and the system has to be manually reset from the computers inside each departmental building.

You mean the building no one can currently access, I ask.

I walk away (after being assured the issue would be resolved on Monday) plotting sci-fi campus novels in which Skynet is not part of a Ministry of Defense, but of a university; rogue algorithms claim GCSE test results; and classes are rescheduled in a way that sends engineering undergrads to colloquia in feminist theory, and vice versa (the distances one’s mind will go to avoid thinking about impending deadlines)*. Regretfully pushing prospective pitches to fiction publishers aside (temporarily)**, I find the incident particularly interesting for the perspective it offers on how we think about the university as an institution: its spatiality, its materiality, its boundaries, and the way its existence relates to these categories – in other words, its social ontology.

War on universities?

Critiques of the current transformation of higher education and research in the UK often frame it as an attack, or ‘war’, on universities (this is where the first part of the title of my thesis comes from). Exaggeration for rhetorical purposes notwithstanding, being ‘under attack’ suggests that it is possible to distinguish the University (and the intellectual world more broadly) from its environment, in this case at least in part populated by forces that threaten its very existence. Notably, this distinction remains almost untouched even in policy narratives (including those that seek to promote public engagement and/or impact) that stress the need for universities to engage with the (‘surrounding’) society, which tend to frame this imperative as ‘going beyond the walls of the Ivory Tower’.

The distinction between universities and society has a long history in the UK: the university’s built environment (buildings, campuses, gates) and rituals (dress, residence requirements/‘keeping term’, conventions of language) were developed to reflect the separateness of education from ordinary experience, enshrined in the dichotomies of intellectual vs. manual labour, active life vs. ‘life of the mind’ and, not least, Town vs. Gown. Of course, with the rise of ‘redbrick’ and, later, ‘plateglass’ universities, this distinction became somewhat less pronounced. Rather than in terms of blurring, however, I would like to suggest we need to think of this as a shift in scale: the relationship between ‘Town’ and ‘Gown’, after all, is embedded in the broader framework of distinctions between urban and suburban, urban and rural, regional and national, national and global, and the myriad possible forms of hybridisation between these (recent work by Addie, Keil and Olds, as well as Robertson et al., offers very good insights into issues related to theorising scale in the context of higher education).

Policing the boundaries: relational ontology and ontological (in)security

What I find most interesting, in this setting, is the way in which boundaries between these categories are maintained and negotiated. In sociology, the negotiation of boundaries in academia has been studied in detail by, among others, Michèle Lamont (in How Professors Think, as well as in an overview by Lamont and Molnár), Thomas Gieryn (both in Cultural Boundaries of Science and a few other texts), and Andrew Abbott in The Chaos of Disciplines (and, of course, in sociologically-inclined philosophy of science before that, including Feyerabend’s Against Method, Lakatos’ work on research programmes, and Kuhn’s on scientific revolutions). Social anthropology has an even longer-standing obsession with boundaries, symbolic as well as material – Mary Douglas’ work, in particular, as well as Augé’s Non-Places, offers a good entry point, converging with sociology on the ground of a neo-Durkheimian reading of the distinction between the sacred and the profane.

My interest in the cultural framing of boundaries goes back to my first PhD, which explored the construal of the category of the (romantic) relationship through the delineation of its difference from other types of interpersonal relations. The concept resurfaced in research on public engagement in UK higher education: here, the negotiation of boundaries between ‘inside’ (academics) and ‘outside’ (different audiences), as well as between different groups within the university (e.g. administrators vs. academics), becomes evident through practices of engaging in the dissemination and, sometimes, coproduction of knowledge (some of this is in my contribution to this volume). The thread that runs through these cases is the importance of positioning in relation to a (relatively) specified Other; in other words, a relational ontology.

It is not difficult to see the role of negotiating boundaries between ‘inside’ and ‘outside’ in the concept of ontological security (e.g. Giddens, 1991). Recent work in IR (e.g. Ejdus, 2017) has shifted the focus from Giddens’ emphasis on social relations to the importance of the stability of material forms, including buildings. I think we can extend this to universities: in this case, however, it is not (only) the building itself that is ‘at risk’ (this can be observed in the intensified securitisation of campuses, both through material structures such as gates and card-only entrances, and through modes of surveillance such as Prevent – see e.g. Gearon, 2017), but also the materiality of the institution itself. While the MOOC hype may have (thankfully) subsided (though not disappeared), there is the ubiquitous social media, which, as quite a few people have argued, tests the salience of the distinction between ‘inside’ and ‘outside’ (I’ve written a bit about digital technologies as mediating the boundary between universities and the ‘outside world’ here, as well as in an upcoming article in a Globalisation, Societies and Education special issue that deals with reassembling knowledge production with/out the university).

Barbarians at the gates

In this context, it should not be surprising that many academics fear digital technologies: anything that tests the material/symbolic boundaries of our own existence is bound to be seen as troubling/dirty/dangerous. This brings to mind Kavafy’s poem (and J.M. Coetzee’s novel) Waiting for the Barbarians, in which an outpost of the Empire prepares for the attack of ‘the barbarians’ – that, in fact, never arrives. The trope of the university as a bulwark against and/or at danger of descending into barbarism has been explored by a number of writers, including Thorstein Veblen and, more recently, Roy Coleman. Regardless of the accuracy or historical stretchability of the trope, what I am most interested in is its use as a simultaneously diagnostic and normative narrative that frames and situates the current transformation of higher education and research.

As the last line of Kavafy’s poem suggests, barbarians represent ‘a kind of solution’: a solution for the otherwise unanswered question of the role and purpose of universities in the 21st century, which began to be asked ever more urgently with the post-war expansion of higher education, only to be shut down by the integration/normalization of the soixante-huitards in what Boltanski and Chiapello have recognised as contemporary capitalism’s almost infinite capacity to appropriate critique. Disentangling this dynamic is key to understanding contemporary clashes and conflicts over the nature of knowledge production. Rather than locating dangers to the university firmly beyond the gates, then, perhaps we could use the current crisis to think about how we perceive, negotiate, and preserve the boundaries between ‘in’ and ‘out’. Until we have a space to do that, I believe we will continue building walls only to realise we have been left on the wrong side.

(*) I have a strong interest in campus novels, both for PhD-related and unrelated reasons, as well as a long-standing interest in Sci-Fi, but with the exception of DeLillo’s White Noise can think of very few works that straddle both genres; would very much appreciate suggestions in this domain!

(**) I have been thinking for a while about a book that would be a spin-off from my current PhD that would combine social theory, literature, and critical cultural political economy, drawing on similarities and differences between critical and magical realism to look at universities. This can be taken as a sketch for one of the chapters, so all thoughts and comments are welcome.

Against academic labour: foraging in the wildlands of digital capitalism

Central Park, NYC, November 2013

I am reading a book called “The Slow Professor: Challenging the Culture of Speed in the Academy”, by two Canadian professors, Maggie Berg and Barbara Seeber. Published earlier in 2016, to (mostly) wide critical acclaim, it critiques the changing conditions of knowledge production in academia, in particular those associated with the expectation to produce more and at faster rates (also known as ‘acceleration’). As an antidote, as the Slow Professor Manifesto appended to the Preface suggests, faculty should resist the corporatisation of the university by adopting the principles of the Slow Movement (as in Slow Food etc.) in their professional practices.

While the book is interesting, the argument is not particularly exceptional in the context of the expanding genre of diagnoses of the ‘end’ or ‘crisis’ of the Western university. The origins of the genre could be traced to Bill Readings’ 1996 ‘University in Ruins’ (though, of course, one could always stretch the lineage back to 1918 and Veblen’s ‘The Higher Learning in America’; predecessors in Britain include E.P. Thompson’s ‘Warwick University Ltd.’ (1970) and Halsey’s ‘The Decline of Donnish Dominion’ (1992)). Among contemporary representatives of the genre are Nussbaum’s ‘Not for Profit: Why Democracy Needs the Humanities’ (2010), Collini’s ‘What Are Universities For?’ (2012), and Giroux’s ‘Neoliberal Attack on Higher Education’ (2013), to name but a few; in other words, there is no shortage of works documenting how the transformation of the conditions of academic labour fundamentally threatens the role and function of universities in Western societies – and, by extension, the survival of these societies themselves.

I would like to say straight away that I do not, for a single moment, dispute or doubt the toll that the transformation of the conditions of academic labour is taking on those who are employed at universities. Having spent the past twelve years researching the politics of academic knowledge, and most of those years working in higher education in a number of different countries, I have encountered hardly a single academic or student not pressured, threatened, or at the very least insecure about their future employment. What I want to argue, instead, is that the critique of the transformation of knowledge production that focuses on academic labour is no longer sufficient. Concomitantly, the critique of time – as in labour time – isn’t either.

In lieu of labour, I suggest we could think of what academics do as foraging. By this I do not in any way mean to trivialize union struggles that focus on working conditions for faculty or the position of students; these are and continue to be very important, and I have always been proud to support them. However, unfortunately, they cannot capture the way knowledge has already changed. This is not only due to the growing academic ‘precariat’ (or ‘cognitariat’): while the absence of stable or full-time employment has been used to inform both analyses and specific forms of political action on both sides of the Atlantic, they still frame the problem as fundamentally dependent on academic labour. While this may for the time being represent a good strategy in the political sense, it creates a set of potential contradictions in the conceptual.

For one, labour implies the concept of use: Marx’s labour theory of value postulates that this is what allows it to be exchanged for something (money, favours). Yet we as academics are often the first to point out that a lot of knowledge is not directly useful: for every paradigmatic scientist in a white lab coat who cures cancer, there is the equally paradigmatic bookworm reading 18th-century poetry (bear with me, it’s that time of the year when clichés abound). Trying to measure their value by the same or even a similar standard risks slipping into the pathologies of impact, or, worse, vague statements about the necessity of the social sciences and humanities for democracy, freedom, and human rights (despite personal sympathy for the latter argument, it warrants mentioning that the link between democratic regimes and academic freedom is historically contingent, rather than causal).

Second, framing what academics do as labour makes it very difficult to avoid embracing some form of measurement of output. This isn’t always related to quantity: one can also measure the quality of publications (e.g., by rating them in relation to the impact factors of journals they were published in). Often, however, the ideas of productivity and excellence go hand in hand. This contributes to the proliferation of academic writing – not all of which is exceptional, to say the very least – and, in turn, creates incentives to produce both more and better (‘slow’ academia is underpinned by the argument that taking more time creates better writing).

This also points to why the critique of the conditions of knowledge production is so focused on the notion of time. As long as creating knowledge is primarily defined as a form of labour, it depends on socially and culturally defined cycles of production and consumption. Advocating ‘slowness’, thus, does not amount to the critique of the centrality of time to capitalist production: it just asks for more of it.

The concept of foraging, by contrast, is embedded in a different temporal cycle: seasonal, rather than annual or REF-able. This isn’t some sort of neo-primitivist glorification of the supposed forms of sustenance of humanity’s forebears before the (inevitable) fall from grace; it is, rather, a more precise description of how knowledge works. To this end, we could say most academics forage anyway: they collect bits and scraps of ideas and information, and turn them into something that can be consumed (if only by other academics). Some academics will discover new ‘edible’ things, either by trial and error or by learning from (surveying) the population that lives in the area, and introduce these to other academics. Often, however, this does not amount to creating something entirely new or original, as much as to the recombination of existing flavours. This is why it is not abundance as such but diversity that plays a role in how interesting an environment a university, city, or region will become.

However, unlike labour, foraging is not ‘naturally’ given to the creation of surplus: while foraged food can be stored, most of it is collected and prepared more or less in relation to the needs of those who eat it. Similarly, it is also by default somewhat undisciplined: foragers must keep an eye out for the plants and other foodstuffs that may be useful to them. This does not mean that it does not rely on tradition, or that it is not susceptible to prejudice – often, people will ignore or attribute negative properties to forms of food that they are unfamiliar with, much like academics ignore or fear disciplines or approaches that do not form part of their ‘tribe’ or school of thought.

As appealing as it may sound, foraging is not a romanticized, or, worse, sterile vision of what academics do. Some academics, indeed, labour. Some, perhaps, even invent. But increasing numbers are actually foraging: hunting for bits and pieces, some of which can be exchanged for other stuff – money, prestige – thus allowing them to survive another winter. This isn’t easy: in the vast digital landscape, knowing how to spot ideas and thoughts that will have traction – and especially those that can be exchanged – requires continued focus and perseverance, as well as a lot of previously accumulated knowledge. Making a mistake can be deadly, perhaps not in the literal sense, but certainly as far as reputation is concerned.

So, workers of all lands, happy New Year, and spare a thought for the foragers in the wildlands of digital capitalism.

We are all postliberals now: teaching Popper in the era of post-truth politics

Adelaide, South Australia, December 2014

Late in the morning after the US election, I am sitting down to read student essays for the course on social theory I’m supervising. This part of the course involves the work of Popper, Kuhn, Lakatos, and Feyerabend, and its application in the social sciences. The essay question is: do theories need to be falsifiable, and how do we choose between competing theories if they aren’t? The first part is a standard essay question; I added the second a bit more than a week ago, interested to see how students would think about criteria of verification in the absence of an overarching regime of truth.

This is one of my favourite topics in the philosophy of science. When I was a student at the University of Belgrade, feeling increasingly out of place in the post-truth and intensely ethnographic though anti-representationalist anthropology, the Popper-Kuhn debate in Criticism and the Growth of Knowledge held the promise that, beyond classification of elements of material culture of the Western Balkans, lurked bigger questions of the politics and sociology of knowledge (paradoxically, this may have been why it took me very long to realize I actually wanted to do sociology).

I was Popper-primed well before that, though: the principle of falsification is integral to the practice of parliamentary-style academic debating, in which the task of the opposing team(s) is to ‘disprove’ the motion. In the UK, this practice is usually associated with debate societies such as the Oxford and Cambridge Unions, but it is widespread in the US as well as the rest of the world; during my undergraduate studies, I was an active member of the Yugoslav (now Serbian) Universities Debating Network, known as Open Communication. Furthermore, Popper’s political ideas – especially those in The Open Society and Its Enemies – formed the ideological core of the Open Society Foundation, founded by the billionaire George Soros to promote democracy and civil society in Central and Eastern Europe.

In addition to debate societies, the Open Society Foundation supported and funded a greater part of civil society activism in Serbia. At the time, most of it was conceived as opposition to the regime of Slobodan Milošević, a one-time-banker-turned-politician who ascended to power in the wake of the dissolution of the Socialist Federal Republic of Yugoslavia. Milošević played a major role in the conflicts in its former republics, simultaneously plunging Serbia deeper into an economic and political crisis exacerbated by international isolation and sanctions, culminating in the NATO intervention in 1999. Milošević’s rule ended in a coup following a disputed election in 2000.

I had been part of the opposition from the earliest moment conceivable, skipping classes in secondary school to go to anti-government demos in 1996 and 1997. The day of the coup – 5 October 2000 – should have been my first day at university, but, together with most students and staff, I was at what would turn out to be the final public protest, which ended in the storming of the Parliament. I swallowed quite a bit of tear gas, twice in situations I expected not to get out of alive (or at the very least unharmed), but somehow made it to a friend’s house, where, together with her mom and grandma, we sat in the living room and watched one of Serbia’s hitherto banned TV and radio stations – the then-oppositional B92 – come back on air. This is when we knew it was over.

Sixteen years and little more than a month later, I am reading students’ essays on truth and falsehood in science. This, by comparison, is a breeze, and it’s always exciting to read different takes on the issue. Of course, over the course of my undergraduate studies, my own appreciation of Popper was replaced by excitement at the discovery of Kuhn – and the concomitant realization of the inertia of social structures, which, just like normal science, are incredibly slow to change – followed by light perplexity at Lakatos (research programmes seemed equal parts reassuring and inherently volatile – not unlike political coalitions). At the end, obviously, came infatuation with Feyerabend: like every self-respecting former liberal, I reckoned myself a methodological (and not only methodological) anarchist.

Unsurprisingly, most of the essays I read exhibit the same trajectory. Popper is, quite obviously, passé: his critique of Marxism (and other forms of historicism) not particularly useful, his idea of falsificationism too strict a criterion for demarcation; his association with the ideologues of neoliberalism probably did not help much either.

Except that… this is what Popper has to say:

It is undoubtedly true that we have a more direct knowledge of the ‘inside of the human atom’ than we have of physical atoms; but this knowledge is intuitive. In other words, we certainly use our knowledge of ourselves in order to frame hypotheses about some other people, or about all people. But these hypotheses must be tested, they must be submitted to the method of selection by elimination.

(The Poverty of Historicism, 127)

Our knowledge of ourselves: for instance, our knowledge that we could never, ever, elect a racist, misogynist reality TV star as president of one of the world’s superpowers. That we would never vote to leave the European Union, despite the fact that, like all supranational entities, it has flaws – but look at how much it invests in our infrastructure. Surely – as Popper would argue – we are rational animals: and rational animals would not do anything that puts them in unnecessary danger.

Of course, we are correct. The problem, however, is that we have forgotten about the second part of Popper’s claim: we use knowledge of ourselves to form hypotheses about other people. For instance: since we understand that a rich businessman is not likely to introduce economic policies that harm the elite, the poor would never vote for him. For instance: since we remember the victims of Nazism and fascism, everyone must understand how frail is the liberal consensus in Europe.

This is why academia came to be “shocked” by Trump’s victory, just like it was shocked by the outcome of the Brexit referendum. This is also the key to the question of why polls “failed” to predict either of these outcomes. Perhaps we were too focused on extrapolating our assumptions to other people, and not enough on checking whether they hold.

By failing to understand that the world is not composed of left-leaning liberals with a predilection for social justice, we commit, time and again, what Bourdieu termed the scholastic fallacy – the propensity to attribute the categories of our own thinking to those we study. Alternatively, and much worse, we deny them common standards of rationality: voters whose political choices differ from ours are then cast as uneducated, deluded, suffering from false consciousness. And even if they’re not, they must be a small minority, right?

Well, as far as hypotheses are concerned, that one has definitely failed. Maybe it’s time we started considering alternatives.

One more time with [structures of] feeling: anxiety, labour, and social critique in/of the neoliberal academia

Florence, April 2013

Last month, I attended the symposium on Anxiety and Work in the Accelerated Academy, the second in the Accelerated Academy series that explores the changing scapes of time, work, and productivity in academia. Given that my research is fundamentally concerned with the changing relationships between universities and publics, and the concomitant reframing of the subjectivity, agency, and reflexivity of academics, I naturally found the question of the intersection of academic labour and time relevant. One particular bit resonated for a long time: in her presentation, Maggie O’Neill from the University of York suggested anxiety has become the primary structure of feeling in the neoliberal academia. Having found myself, in the period leading up to the workshop, increasingly reflecting on structures of feeling, I was intrigued by the salience of the concept. Is there a place for theoretical concepts such as this in research on the transformations of knowledge production in contemporary capitalism, and if so, where?

All the feels

“Structure of feeling” may well be one of those ideas whose half-life far outlasted their initial purview. Raymond Williams introduced it in a brief chapter in Marxism and Literature, contributing to carving out what would become known as the distinctly British take on the relationship between “base” and “superstructure”: cultural studies. In that chapter, he writes:

Specific qualitative changes are not assumed to be epiphenomena of changed institutions, formations, and beliefs, or merely secondary evidence of changed social and economic relations between and within classes. At the same time they are from the beginning taken as social experience, rather than as ‘personal’ experience or as the merely superficial or incidental ‘small change’ of society. They are social in two ways that distinguish them from reduced senses of the social as the institutional and the formal: first, in that they are changes of presence (while they are being lived this is obvious; when they have been lived it is still their substantial characteristic); second, in that although they are emergent or pre-emergent, they do not have to await definition, classification, or rationalization before they exert palpable pressures and set effective limits on experience and on action. Such changes can be defined as changes in structures of feeling. (Williams, 1977:130).

Williams thus introduces structures of feeling as a form of social diagnostic; he posits it against the more durable but also more formal concepts of ‘world-view’ or ‘ideology’. Indeed, the whole chapter is devoted to the critique of the reificatory tendencies of Marxist social analysis: the idea of things (or ideas) being always ‘finished’, always ‘in the past’, in order for them to be subjected to analytical scrutiny. The concept of “structure of feeling” is thus invoked in order to keep tabs on social change and capture the perhaps less palpable elements of transformation as they are happening.

Emotions and the scholastic disposition

Over the past few years, the discourse of feelings has certainly become more prominent in academia. Just last week, Cambridge’s Festival of Ideas featured a discussion on the topic, framing it within issues of free speech and trigger warnings on campus. While the debate itself has a longer history in the US, it has begun to attract more attention in the UK – most recently in relation to challenging colonial legacies at both Oxford and Cambridge.

Despite multiple nuances of political context and the complex interrelation between imperialism and higher education, the debate in the media predominantly plays out in dichotomies of ‘thinking’ and ‘feeling’. Opponents tend to pit trigger warnings or the “culture of offence” against the concept of academic freedom, arguing that today’s students are too sensitive and “coddled”, which, in their view, runs against the very purpose of university education. From this perspective, education is about ‘cultivating’ feelings: exercising control over them, submerging them under the strict institutional structures of the intellect.

Feminist scholars, in particular, have extensively criticised this view for its reductionist properties and, not least, its propensity to translate into institutional and disciplinary policies that seek to exclude everything framed as ‘emotional’, bodily, or material (and, by association, ‘feminine’) from academic knowledge production. But the cleavage runs deeper. Research in social sciences is often framed in the dynamic of ‘closeness’ and ‘distancing’, ‘immersion’ and ‘purification’: one first collects data by aiming to be as close as possible to the social context of the object of research, but then withdraws from it in order to carry out analysis. While approaches such as grounded theory or participatory methods (cl)aim to transcend this boundary, its echoes persist in the structure of presentation of academic knowledge (for instance, the division between data and results), as well as the temporal organisation of graduate education (for instance, the idea that the road to PhD includes a period of training in methods and theories, followed by data collection/fieldwork, followed by analysis and the ‘writing up’ of results).

The idea of ‘distanced reflection’ is deeply embedded in the history of academic knowledge production. In Pascalian Meditations, Bourdieu relates it to the concept of skholē – the scholarly disposition – predicated on the distinction between intellectual and manual labour. In other words, in order for reflection to exist, it needed to be separated from the vagaries of everyday existence. One of its radical manifestations is the idea of the university as a monastic community. Oxford and Cambridge, for instance, were explicitly constructed on this model, giving rise to animosities between ‘town’ and ‘gown’: the concerns of the ‘lay’ folk were thought to be diametrically opposed to those of the educated. While arguably less prominent in (most) contemporary institutions of knowledge production, the dichotomy is still unproblematically transposed into concepts such as the “university’s contribution to society”, which assume universities are distinct from society, or at least that their interests are radically different from those of “the society” – raising obvious questions about who, in fact, this society is.

Emotions, reason, and critique

Paradoxically, perhaps, one of the strongest reverberations of the idea is to be found in the domain of social critique. On the one hand, this sounds counter-intuitive – after all, critical social science should be about abandoning the ‘veneer’ of neutrality and engaging with the world in all of its manifestations. However, establishing the link between social science and critique rests on something that Boltanski, in his critique of Bourdieu’s sociology of domination, calls the metacritical position:

For this reason we shall say that critical theories of domination are metacritical in order. The project of taking society as an object and describing the components of social life or, if you like, its framework, appeals to a thought experiment that consists in positioning oneself outside this framework in order to consider it as a whole. In fact, a framework cannot be grasped from within. From an internal perspective, the framework coincides with reality in its imperious necessity. (Boltanski, 2011:6-7)

Academic critique, in Boltanski’s view, requires assuming a position of exteriority. A ‘simple’ form of exteriority rests on description: it requires ‘translation’ of lived experience (or practices) into categories of text. However, passing the kind of moral judgements critical theory rests on calls for, he argues, a different form of distancing: complex exteriority.

In the case of sociology, which at this level of generality can be regarded as a history of the present, with the result that the observer is part of what she intends to describe, adopting a position of exteriority is far from self-evident… This imaginary exit from the viscosity of the real initially assumes stripping reality of its character of implicit necessity and proceeding as if it were arbitrary (as if it could be other than it is or even not be);

This “exit from the viscosity of the real” (a lovely phrase!) proceeds in two steps. The first takes the form of “control of desire”, that is, procedural distancing from the object of research. The second is the act of judgement by which a social order is ‘ejected’, seen in its totality, and as such evaluated from the outside:

In sociology the possibility of this externalization rests on the existence of a laboratory – that is to say, the employment of protocols and instructions respect for which must constrain the sociologist to control her desires (conscious or unconscious). In the case of theories of domination, the exteriority on which critique is based can be called complex, in the sense that it is established at two different levels. It must first of all be based on an exteriority of the first kind to equip itself with the requisite data to create the picture of the social order that will be submitted to critique. A metacritical theory is in fact necessarily reliant on a descriptive sociology or anthropology. But to be critical, such a theory also needs to furnish itself, in ways that can be explicit to very different degrees, with the means of passing a judgement on the value of the social order being described. (ibid.)

Critique: inside, outside, in-between?

To what degree could we say that this categorisation applies to the current critique of the conditions of knowledge production in academia? After all, most of those who criticize the neoliberal transformation of higher education and research are academics. In this sense, it makes sense to question the degree to which they can lay claim to a position of exteriority. More problematically (or interestingly), however, it is also questionable to what degree a position of exteriority is achievable at all.

Boltanski draws attention to this problem by emphasising the distinction between the cognition – awareness – of ‘ordinary’ actors and that of sociologists (or other social scientists), the latter presumably being able to perceive structures of domination that the subjects of their research do not:

Metacritical theories of domination tackle these asymmetries from a particular angle – that of the miscognition by the actors themselves of the exploitation to which they are subject and, above all, of the social conditions that make this exploitation possible and also, as a result, of the means by which they could stop it. That is why they present themselves indivisibly as theories of power, theories of exploitation and theories of knowledge. By this token, they encounter in an especially vexed fashion the issue of the relationship between the knowledge of social reality which is that of ordinary actors, reflexively engaged in practice, and the knowledge of social reality conceived from a reflexivity reliant on forms and instruments of totalization – an issue which is itself at the heart of the tensions out of which the possibility of a social science must be created (Boltanski, 2011:7)

Hotel Academia: you can check out any time you like, but you can never leave?

How does one go about thinking about the transformation of the conditions of knowledge production when one is at the same time reflexively engaged in practice and relying on the reflexivity provided by sociological instruments? Is it at all possible? The feelings of anxiety, to this end, could be provoked exactly by this lack of opportunity to step aside – to disembed oneself from the academic life and reflect on it at the leisurely pace of skholē. On the one hand, this certainly has to do with the changing structure and tempo of academic life – acceleration and demands for increased output: in this sense, anxiety is a reaction to the changes perceived and felt, the feeling that the ground is no longer stable, like a sense of vertigo. On the other hand, however, this feeling of decentredness could be exactly what contemporary critique calls for.

The challenge, of course, is how to turn this “structure of feeling” into something that has analytical as well as affective power – and can transform the practice itself. Stravinsky’s Rite of Spring, I think, is a wonderful example of this. As music, it is fundamentally disquieting: its impact derives primarily from the fact that it disrupted what were, at the time, the expectations of the (musical) genre, and, in the process, rewrote them.

In other words, anxiety could be both creative and destructive. This, however, is not some broad call to “embrace anxiety”. There is a clear and pertinent need to understand the way in which the transformations of working conditions – everywhere, and also in the context of knowledge production – are influencing the sense of self and what is commonly referred to as mental health or well-being.

However, in this process, there is no need to externalise anxiety (or other feelings): that is, to frame it as if caused by forces outside of, or completely independent from, human influence, including within academia itself (for instance, government policies, or political changes at the supranational level). Conversely, there is no need to completely internalise it, in the sense of ascribing it to the embodied experience of individuals only. If feelings occupy the unstable ‘middle ground’ between institutions and individuals, this is the position from which they will have to be thought. If anxiety is an interpretation of the changes in the structures of knowledge production, its critique cannot but stem from the same position. This position is not ‘outside’, but rather ‘in-between’; insecure and thought-provoking, but no less potent for that.

Which, come to think of it, may be what Williams was trying to say all along.

All the feels

This poster drew my attention while I was working in the library of Cambridge University a couple of weeks ago:


For a while now, I have been fascinated with the way in which the language of emotions, or affect, has penetrated public discourse. People ‘love’ all sorts of things: the way a film uses interior light, the icing on a cake, their friend’s new hairstyle. They ‘hate’ Donald Trump, the weather, next door neighbours’ music. More often than not, conversations involving emotions would not be complete without mentioning online expressions of affect, such as ‘likes’ or ‘loves’ on Facebook or on Twitter.

Of course, the presence of emotions in human communication is nothing new. Even ‘ordinary’ statements – such as, for instance, “it’s going to rain tomorrow” – frequently entail an affective dimension (most people would tend to get at least slightly disappointed at the announcement). Yet, what I find peculiar is that the language of affect is becoming increasingly present not only in communication mediated by non-human entities, but also in communication directed at non-human entities. Can you really ‘love’ a library? Or be ‘friends’ with your local coffee place?

This isn’t in any way to concede ground to techno-pessimists who blame social media for ‘declining’ standards in human communication, nor even to express concern over the ways in which affective ‘reaction’ buttons allow the tracking of online behaviour (privacy is always a problem, and ‘unmediated’ communication largely a fiction). Even if face-to-face interaction is qualitatively different from online interaction, there is nothing to support the claim that this makes it inherently more valuable or, indeed, ‘real’ (see “IRL fetish” [i]). It is the social and cultural framing of these emotions, and, especially, the way social sciences think about them – the social theory of affect, if you wish – that concerns me here.

Fetishism and feeling

So what is different about ‘loving’ your library as opposed to, say, ‘loving’ another human being? One possible way of going about this is to interpret expressions of emotion directed at or through non-human entities as ‘shorthand’ for those aimed at other human beings. The kernel of this idea is contained in Marx’s concept of commodity fetishism: emotion, or affect, directed at an object obscures the all-too-human (in his case, capital) relationship behind it. In this sense, ‘liking’ your local coffee place would be an expression of appreciation for the people who work there, for the way they make double macchiato, or just for the times you spent there with friends or other significant others. In human-to-human communication, things would be even more straightforward: generally speaking, ‘liking’ someone’s status updates, photos, or Tweets would signify appreciation of/for the person, agreement with, or general interest in, what they’re saying.

But what if it is actually the inverse? What if, in ‘liking’ something on Facebook or on Twitter, the human-to-human relationship is, in fact, epiphenomenal to the act? The prime currency of online communication is thus the expenditure of (emotional) energy, not the relationship that it may (or may not) establish or signify. In this sense, it is entirely irrelevant whether one is liking an inanimate object (or concept), or a person. Likes or other forms of affective engagement do not constitute any sort of human relationship; the only thing they ‘feed’ is the network itself. The network, at the same time, is not an expression, reflection, or (even) simulation of human relationships: it is the primary structure of feeling.

All hail…

Yuval Noah Harari’s latest book, Homo Deus, puts the issue of emotions at the centre of the discussion of the relationship between human and AI. In a review in The Guardian, David Runciman writes:

“Human nature will be transformed in the 21st century because intelligence is uncoupling from consciousness. We are not going to build machines any time soon that have feelings like we have feelings: that’s consciousness. Robots won’t be falling in love with each other (which doesn’t mean we are incapable of falling in love with robots). But we have already built machines – vast data-processing networks – that can know our feelings better than we know them ourselves: that’s intelligence. Google – the search engine, not the company – doesn’t have beliefs and desires of its own. It doesn’t care what we search for and it won’t feel hurt by our behaviour. But it can process our behaviour to know what we want before we know it ourselves. That fact has the potential to change what it means to be human.”

On the surface level, this makes sense. Algorithms can measure our ‘likes’ and other emotional reactions and combine them into ‘meaningful’ patterns – e.g., correlate them with specific background data (age, gender, location), time of day, etc. – and, on the basis of this, predict how we will act (click, shop) in specific situations. However, does this amount to ‘knowledge’? In other words, if machines cannot have feelings – and Harari seems adamant that they cannot – how can they actually ‘know’ them?
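To make the point about counting concrete, here is a toy sketch (entirely invented for illustration – not any real platform’s system, and the feature names are my own assumptions) of the kind of ‘knowledge’ such correlation produces: a predictor that estimates the probability of a ‘like’ from contextual features, and which is, by construction, indifferent to what, or who, is doing the liking.

```python
from collections import defaultdict

def train(events):
    """Count 'like' rates per (age_band, hour) context.

    events: iterable of (age_band, hour, liked) tuples.
    """
    counts = defaultdict(lambda: [0, 0])  # context -> [likes, total]
    for age_band, hour, liked in events:
        c = counts[(age_band, hour)]
        c[0] += int(liked)
        c[1] += 1
    return counts

def predict(counts, age_band, hour):
    """Estimated probability that this context produces a 'like'."""
    likes, total = counts[(age_band, hour)]
    return likes / total if total else 0.5  # no data: fall back to ignorance

# Two behaviourally identical streams: one from a person, one scripted.
human = [("25-34", 21, True), ("25-34", 21, True), ("25-34", 9, False)]
bot   = [("25-34", 21, True), ("25-34", 21, True), ("25-34", 9, False)]

# The model's 'knowledge' of the two sources is indistinguishable:
assert predict(train(human), "25-34", 21) == predict(train(bot), "25-34", 21)
```

The sketch predicts perfectly well within its own terms, yet it has no access to whether the ‘like’ expresses anything at all – which is exactly the gap between counting and knowing that the question above turns on.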

Frege on Facebook

This comes close to a philosophical problem I’ve been trying to get a grip on recently: the Frege-Geach (alternatively, the embedding, or Frege-Geach-Searle) problem. It comprises two steps. The first is to claim that there is a qualitative difference between moral and descriptive statements – for instance, between saying “It is wrong to kill” and “It is raining”. Most humans, I believe, would agree with this. The second is to observe that there is no basis for claiming this sort of difference on the grounds of sentence structure alone, which then leads to the problem of explaining its source: how do we know there is one? In other words, how can it be that moral and descriptive terms have exactly the same sort of semantic properties in complex sentences, even though they have different kinds of meaning? Where does this difference stem from?

The argument can be extended to feelings: how do we know that there is a qualitative difference between statements such as “I love you” and “I eat apples”? Or loving someone and ‘liking’ an online status? From a formal (syntactic) perspective, there isn’t. More interestingly, however, there is no reason why machines should not be capable of such a form of expression. In this sense, there is no way to reliably establish that likes coming from a ‘real’ person and, say, a Twitterbot, are qualitatively different. As humans, of course, we would claim to know the difference, or at least be able to spot it. But machines cannot. There is nothing inherent in the expression of online affect that would allow algorithms to distinguish between, say, the act of ‘loving’ the library and the act of loving a person. Knowledge of emotions, in other words, is not reducible to counting, even if counting takes increasingly sophisticated forms.

How do you know what you do not know?

The problem, however, is that humans do not have superior knowledge of emotions, their own or other people’s. I am not referring to situations in which people are unsure or ‘confused’ about how they feel [ii], but rather to the limited language – forms of expression – available to us. The documentary “One More Time With Feeling”, which I saw last week, engages with this issue in a way I found incredibly resonant. Reflecting on the loss of his son, Nick Cave relates how the words that he or the people around him could use to describe the emotions seemed equally misplaced, maladjusted, and superfluous (until the film comes back into circulation, Amanda Palmer’s review, which addresses a similar question, is here) – not because they couldn’t reflect it accurately, but because there was no necessary link between them and the structure of feeling at all.

Clearly, the idea that language does not reflect, but rather constructs – and thus also constrains – human reality is hardly new: Wittgenstein, Lacan, and Rorty (to name but a few) have offered different interpretations of how and why this is the case. What I found particularly poignant about the way Cave frames it in the film is that it questions the whole ontology of emotional expression. It’s not just that language acts as a ‘barrier’ to the expression of grief; it is the idea of the continuity of the ‘self’ supposed to ‘have’ those feelings that’s shattered as well.

Love’s labour’s lost (?): between practice and theory

This brings back some of my fieldwork experiences from 2007 and 2008, when I was doing a PhD in anthropology, writing on the concept of romantic relationships. Whereas most of my ‘informants’ – research participants – could engage in lengthy elaboration of the criteria they use in choosing (‘romantic’) partners (as well as, frequently, the reasons why they wouldn’t designate someone as a partner), when it came to emotions their narratives could frequently be reduced to one word: love (it wasn’t for lack of expressive skills: most were highly educated). It was framed as a binary phenomenon: either there or not there. At the time, I was more interested in the way their (elaborated) narratives reflected or coded markers of social inequality – for instance, class or status. Recently, however, I have been going back more to their inability (or unwillingness) to elaborate on the emotion that supposedly underpins, or at least buttresses, those choices.

Theoretical language is not immune to these limitations. For instance, whereas the social sciences have made significant steps in deconstructing notions such as ‘man’, ‘woman’, ‘happiness’, and ‘family’, we are still miles away from seriously examining concepts such as ‘love’, ‘hate’, or ‘fear’. Moira Weigel’s and Eva Illouz’s work are welcome exceptions to the rule: Weigel uses the feminist concept of emotional labour to show how the responsibility for maintaining relationships tends to be unequally distributed between men and women, and Illouz demonstrates how modern notions of dating come to define the subjectivity and agency of persons in ways conducive to the reproduction of capitalism. Yet, while both do a great job of highlighting the social aspects of love, they avoid engaging with its ontological basis. This leaves the back door open for an old-school dualism that either assumes there is an (a- or pre-social?) ‘basis’ to human emotions, which can be exploited or ‘harvested’ through relationships of power; or, conversely, that all emotional expression is defined by language, and thus its social construction is the only thing worth studying. It’s almost as if ‘love’ is the last construct left standing, and we’re all too afraid to disenchant it.

For a relational ontology

A relational ontology of human emotions could, in principle, aspire to dethrone this nominalist (or, possibly worse, truth-proceduralist) notion of love in favour of one that sees it as a by-product of relationality. This isn’t to claim that ‘love’ is epiphenomenal: to the degree to which it is framed as a motivating force, it becomes part and parcel of the relationship itself. However, not seeing it as central to this inquiry would hopefully allow us to work on diversifying the language of emotions. Instead of using a single marker (even one as polysemic as ‘love’) for the relationship with one’s library and with one’s significant other, we could start thinking about the ways in which they are (or are not) the same thing. This isn’t, of course, to sanctify ‘live’ human-to-human emotion: I am certain that people can feel ‘love’ for pets, places, or the deceased. Yet calling it all ‘love’ and leaving it at that is a pretty shoddy way of going about feelings.

Furthermore, a relational ontology of human emotions would mean treating all relationships as unique. This isn’t, to be clear, a pseudoanarchist attempt to deny standards of or responsibility for (inter)personal decency; and even less a default glorification of long-lasting relationships. Most relationships change over time (as do people inside them), and this frequently means they can no longer exist; some relationships cannot coexist with other relationships; some relationships are detrimental to those involved in them, which hopefully means they cease to exist. Equally, some relationships are superficial, trivial, or barely worth a mention. However, this does not make them, analytically speaking, any less special.

This also means they cannot be reduced to the same standard, nor measured against each other. This, of course, runs against one of capitalism’s dearly held assumptions: that all humans are comparable and, thus, mutually replaceable. This assumption is vital not only for the reproduction of labour power, but also, for instance, for the practice of dating [iii], whether online or offline. Moving towards a relational concept of emotions would allow us to challenge this notion. In this sense, ‘loving’ a library is problematic not because the library is not a human being, but because ‘love’, just like other human concepts, is a relatively bad proxy. Contrary to what pop songs would have us believe, it’s never the answer, and, quite possibly, not even the question.

Some Twitter wisdom for the end….

————————————————————————–

[i] Thanks go to Mark Carrigan who sent this to me.

[ii] While I am very interested in the question of self-knowledge (or self-ignorance), for some reason, I never found this particular aspect of the question analytically or personally intriguing.

[iii] Over the past couple of years, I’ve had numerous discussions on the topic of dating with friends, colleagues, but also acquaintances and (almost) strangers (the combination of having a theoretical interest in the topic and not being in a relationship seems to be particularly conducive to becoming involved in such conversations, whether one wants to or not). I feel compelled to say that my critique of dating (and the concomitant refusal to engage in it, at least as far as its dominant social forms go) does not, in any way, imply a criticism of people who do. There is quite a long list of people whom I should thank for helping me clarify this, but instead I promise to write another, longer post on the topic, as well as, finally, develop that app :).

Out of place? On Pokémon, foxes, and critical cultural political economy

Isle of Wight, August 2016

Last week, I attended the Second International Conference in Cultural Political Economy, organized by the Centre for Globalisation, Education and Social Futures at the University of Bristol. It was through working with Susan Robertson and other folk at the Graduate School of Education, where I had spent parts of 2014 and 2015 as a research fellow, that I was first introduced to cultural political economy.

The inaugural conference last year took place in Lancaster, so it was a great opportunity to both meet other people working within this paradigm and do a bit of hiking in the Lake District. This year, I was particularly glad to be in Bristol – the city that, to a great degree, comes closest to ‘home’, and where – having spent the majority of those two years not really living anywhere – I felt I kind of belonged. The conference’s theme – “Putting culture in its place” – held, for me, in this sense, a double meaning: it was both about critically assessing the concept of culture in cultural political economy, and about being in a particular place from which to engage in doing just that.

Cultural political economy (CPE) unifies (or hybridises) approaches from cultural studies with those from (Marxist) political economy, in order to address the challenges of the growing complexity (and possible incommensurability, or what Jessop refers to as in/compossibility) of elements of global capitalism. Of course, as Andrew Sayer pointed out, the ‘cultural’ streak in political economy can be traced all the way back to Marx, if not indeed to Aristotle. Developing it as a distinct approach, then, needs to be understood both genealogically – as a way to reconcile two strong traditions in British sociology – and politically, inasmuch as it aspires to make up for what some authors have described as cultural studies’ earlier disregard of the economic, without, at the same time, reverting to the old dichotomies of base/superstructure.

Whereas it would be equal parts wrong, pretentious, and not particularly useful to speak of “the” way of doing cultural political economy – indeed, one of its strongest points, in my view, is that it has so far successfully eschewed the theoretical and institutional ossification that seems to be an inevitable corollary of having (or building) ‘disciples’, in both senses: as students, and as followers of a particular disciplinary approach – what it emphasises is the interrelationship between the ‘cultural’ (as identities, materialities, civilisations, or, in Jessop and Sum’s – to date the most elaborate – view, processes of meaning-making), the political, and the economic, whilst avoiding reducing any one of them to another. Studying how these interact over time, then, can help us understand how specific configurations (or ‘imaginaries’) of capitalism – for instance, competitiveness and the knowledge-based economy – come into being.

My relationship to CPE is somewhat ambiguous. CPE is grounded in the ontology of critical realism, which, ceteris paribus, comes closest to my own views of reality [*]. Furthermore, having spent a good portion of the past ten years researching knowledge production in a variety of regional and historical contexts, the observation that factors we call ‘cultural’ play a role in each makes sense to me, both intuitively and analytically. On the other hand, being trained in anthropology means I am highly suspicious of the reifying and exclusionary potential of concepts such as ‘culture’ and, especially, ‘civilisation’ (in ways which, I would like to think, go beyond the (self-)righteousness immanent in many of their critiques on the Left). Last, but not least, despite a strong sense of solidarity with a number of identity-based causes, my experience in working in post-conflict environments has led me to believe that politics of identity, almost inevitably, fails to be progressive.[†]

For these reasons, the presentation I gave at the conference aimed at clarifying the different uses of the concept of ‘culture’ (and, to a lesser degree, ‘civilisation’) in cultural political economy, and at discussing their political implications. To begin with, it might make sense to put culture through the 5W1H of journalistic inquiry. What is culture (or, what is its ontology)? Who is it – in other words, when we say that ‘culture does things’, how do we define agency? Where is it – in other words, how does it extend in space, and how do we know where its boundaries are? When is it – what is its temporal dimension, and why does it seem easiest to define when it has either already passed, or is at least ‘in decline’, a label that seems especially readily applied to Western civilisation? How is it (applied as an analytical concept)? This last bit is particularly relevant, as ‘culture’ sometimes appears in social research as a cause, sometimes as a mediating force (in positivist terms, an ‘intervening variable’), and sometimes as an outcome, or consequence. Of course, the standard response is that it is, in fact, all of these; but instead of foreclosing the debate, this just opens up the question of WHY: if culture is indeed everything (or can be everything), what is its value as an analytical term?

A useful metaphor for thinking about the different meanings of ‘culture’ could be the game of Pokémon Go. It figures equally as an entity (in the case of Pokémon, the entities are largely fictional, but this is of lesser importance – many entities we identify as culturally significant, deities for instance, are fictional too); as a system of rules and relationships (for instance, those governing the game, as well as online and offline relationships between players); as a cause of behaviour (in positivist terms, an independent variable); and as an indicator (for instance, Pokémon Go is taken as a sign of globalization, alienation, revolution [in gaming], etc.). The photos in the presentation reflect some of these uses, and they are from Bristol: the first is a Pikachu caught in Castle Park (no, not mine :)); the other is from an event in July, when Bristol Zoo was forced to close because too many people turned up for a Pokémon lure party. This brings in the political economy of the game; however, just as in CPE, the ‘lifeworld’ of Pokémon Go cannot be reduced to it, despite the fact that it would not exist without it. So, when we go ‘hunting’ for culture, where should we look?

Clarifying the epistemic uses of the concept of culture serves not only to prevent treating culture as what Archer has referred to as ‘epiphenomenal’, or what Rojek & Urry have (in a brilliantly scathing review) characterised as ‘decorative’, but primarily to avoid what Woolgar & Pawluch dubbed ‘ontological gerrymandering’. Ontological gerrymandering refers to conceptual sliding in definitions of social problems, and consists of “making problematic the truth status of certain states of affairs selected for analysis and explanation, while backgrounding or minimizing the possibility that the same problems apply to assumptions upon which the analysis depends. (…) Some areas are portrayed as ripe for ontological doubt and others portrayed as (at least temporarily) immune to doubt”[‡].

In the worst of cases, ‘culture’ lends itself to exactly this sort of use – at one moment almost an ‘afterthought’ of the more foundational processes related to politics and the economy; at another, foundational itself, at the very root of the transformations we see in everyday life; and, at yet other moments, mediating, as if a ‘lens’ that refracts reality. Of course, the different concepts and uses of the term have been dissected and discussed at length in social theory; however, in research, just as in practice, ‘culture’ frequently resurfaces as a black box that can be conveniently proffered to explain elements not attributable (or reducible) to other factors.

This is important not only for theoretical but also, and possibly more, for political reasons. Culture is often seen as a space of freedom, expression, and experimentation. The line from which I borrow the title of my talk – “When I hear the word culture” – is an example of a right-wing reaction to exactly that sort of concept. Variously misattributed to Göring, Goebbels, or even Hitler, the line actually comes from Schlageter, a play by Hanns Johst, written in Germany in 1933, which celebrates Nazi ideology. At some point, one of the characters breaks into a longish rant on why he hates the concept of culture – he sees it as ‘lofty’, ‘idealistic’, and in many ways distant from what he perceives to be ‘real struggles’, guns and ammo – which crescendos in the famous “When I hear the word culture, I release the safety on my Browning”. This idea of ‘culture’ as fundamentally opposed to the vagaries of material existence has informed many anti-intellectualist movements; but, equally importantly, it has also penetrated the reaction to them, resulting in the often unreflexive glorification of ‘folk’ poetry, drama, or art as almost instantaneously effective expressions of resistance to anti-intellectualism.

Yet, in contemporary political discourse, the concept of culture has been appropriated equally by the left and the right: witness the ‘culture wars’ in the US, or the more recent use of the term to describe social divisions in the UK. Rather than disappearing, political struggles, I believe, will increasingly be framed in terms of culture. The ‘burkini ban’ in France is one case. Some societies deal with cultural diversity differently, at least on the face of it. New Zealand, where I did a part of my research, is a bicultural society. Its universities are founded on the explicit recognition of the concept of mātauranga Māori, which implies the existence of fundamentally culturally different epistemologies. This, of course, raises a number of other interesting issues; but they are issues we should be prepared to face.

While we are becoming better at dealing with culture and with the economy, it remains a challenge to translate these insights into the political. An obvious case where we are failing at this is knowledge production itself – cultural political economy is very well suited to analysing the transformation of universities under neoliberalism, yet none the wiser – or more effective – in tackling these challenges in ways that provide a lasting political alternative.

——-

Later that evening, I go see two of my closest friends from Bristol. Walking back to the flat where I’m staying – right between Clifton and Stokes Croft – I run across a fox. Foxes are not particularly exceptional in Bristol, but I still remember my first encounter with one, as I was walking across Cotham side in 2014: I thought it was a large cat at first, and it was only the tail that gave it away. Having grown up in a highly urbanised environment, I cannot help but see encounters with wildlife as somewhat magical. They are, to me, visitors from another world, creatures temporarily inhabiting the same plane of existence, but subject to different motivations and rules of behaviour: in other words, completely alien. This particular night, this particular fox crosses the road and goes through the gates of Cotham School, which I find so patently symbolic that I am reluctant to share it for fear of being accused of peddling clichés.

And this, of course, marks the return of culture in full force. As a concept, it is constructed in opposition to ‘nature’; as a practice, its primary role is to draw boundaries – between the sacred and the profane, the living and the dead, the civilised and the wild. I know – from my training in anthropology, if nothing else – that the fascination with this particular encounter stems from the feeling of its being ‘out of place’: foxes in Bristol are magical because they transgress boundaries – in this case, between the ‘cultured’, human world and ‘nature’, the outer world.

I walk on, and right around St. Matthew’s church, there is another one. This one stops, actually, and looks at me. “Hey”, I say, “Hello, fox”. It waits for about six seconds, and then slowly turns around and disappears through the hedge.

I wish I could say that there was sense in that stare, or that I was able to attribute purpose to it. There was none, and this is what made it so poignant. The ultimate indecipherability of its gaze made me realise I was as much out of place as the fox was. From its point of view, I was as immaterial and as transgressive as it was from mine: a creature from another realm, temporarily inhabiting the same plane, but ultimately of no interest. And there it was, condensed in one moment: what it means to be human, what it means to be somewhere, what it means to belong – and the fragility, precariousness, and eternal incertitude that come with it.

[*] In truth, I’m still planning to write a book that hybridises magical realism with critical realism, but this is not the place to elaborate on that particular project.

[†] I’ve written a bit on the particular intersection of class- and identity-based projects in From Class to Identity; the rich literature on liberalism, multiculturalism, and the politics of recognition is impossible to summarise here, but the Stanford Encyclopedia of Philosophy has a decent overview under the entry “Identity Politics”.

[‡] I am grateful to Federico Brandmayr who initially drew my attention to this article.

Do we need academic celebrities?


[This post originally appeared on the Sociological Review blog on 3 August, 2016].

Why do we need academic celebrities? In this post, I would like to extend the discussion of academic celebrities from the focus on these intellectuals’ strategies, or ‘acts of positioning’, to what makes them possible in the first place, in the sense of Kant’s ‘conditions of possibility’. In other words, I want to frame the conversation in the broader framework of a critical cultural political economy. This is based on a belief that, if we want to develop an understanding of knowledge production that is truly relational, we need to analyse not only what public intellectuals or ‘academic celebrities’ do, but also what makes, maintains, and, sometimes, breaks, their wider appeal, including – not least importantly – our own fascination with them.

To begin with, an obvious point is that academic stardom necessitates a transnational audience, and a global market for intellectual products. As Peter Walsh argues, academic publishers play an important role in creating and maintaining such a market; Mark Carrigan and Eliran Bar-El remind us that celebrities like Giddens or Žižek are very good at cultivating relationships with that side of the industry. However, in order for publishers to operate at an even minimal profit, someone needs to buy the product. Simply put, public intellectuals necessitate a public.

While intellectual elites have always been to some degree transnational, two trends associated with late modernity are, in this sense, of paramount importance. One is the expansion and internationalization of higher education; the other is the supremacy of English as the language of global academic communication, coupled with the growing digitalization of the process and products of intellectual labour. Despite the fact that access to knowledge still remains largely inequitable, they have contributed to the creation of an expanded potential ‘customer base’. And yet – just like in the case of MOOCs – the availability or accessibility of a product is not sufficient to explain (or guarantee) interest in it. Regardless of whether someone can read Giddens’ books in English, or is able to watch Žižek’s RSA talk online, their arguments, presumably, still need to resonate: in other words, there must be something that people derive from them. What could this be?

In ‘The Existentialist Moment’, Patrick Baert suggests that the global popularity of existentialism can be explained by the success of Sartre – and of other philosophers who came to be identified with it, such as de Beauvoir and Camus – in connecting core concepts of existentialist philosophy, such as choice and responsibility, to the concerns of post-WWII France. To some degree, this analysis could be applied to contemporary academic celebrities – Giddens and Bauman wrote about the problems of late or liquid modernity, and Žižek frequently comments on the contradictions and failures of liberal democracy. It is not difficult to see how they would strike a chord with the concerns of a liberal, educated, Western audience. Yet, just as in the case of Sartre, this doesn’t mean their arguments are always presented in the most palatable manner: Žižek’s writing is complex to the point of obscurantism, and Bauman is no stranger to ‘thick description’. Of the three, Giddens’ work is probably the most accessible, although this might have more to do with good editing and academic English’s predilection for short sentences than with the simplicity of the ideas themselves. Either way, it could be argued that reading their work requires a relatively advanced understanding of the core concepts of social theory and philosophy, and the patience to plough through at times arcane language – all at seemingly little or no direct benefit to the audience.

I want to argue that the appeal of star academics has very little to do with their ideas or the ways in which they are framed, and more to do with the combination of charismatic authority they exude, and the feeling of belonging, or shared understanding, that the consumption of their ideas provides. Similarly to Weber’s priests and magicians, star academics offer a public performance of the transfiguration of abstract ideas into concrete diagnosis of social evils. They offer an interpretation of the travails of late moderns – instability, job insecurity, surveillance, etc. – and, at the same time, the promise that there is something in the very act of intellectual reflection, or the work of social critique, that allows one to achieve a degree of distance from their immediate impact. What academic celebrities thus provide is – even if temporary – (re)‘enchantment’ of the world in which the production of knowledge, so long reserved for the small elite of the ‘initiated’, has become increasingly ‘profaned’, both through the massification of higher education and the requirement to make the stages of its production, as well as its outcomes, measurable and accountable to the public.

For the ‘common’ (read: Western, left-leaning, highly educated) person, the consumption of these celebrities’ ideas offers something akin to the combination of a music festival and a mindfulness retreat: opportunity to commune with the ‘like-minded’ and take home a piece of hope, if not for salvation, then at least for temporary exemption from the grind of neoliberal capitalism. Reflection is, after all, as Marx taught us, the privilege of the leisurely; engaging in collective acts of reflection thus equals belonging to (or at least affinity with) ‘the priesthood of the intellect’. As Bourdieu noted in his reading of Weber’s sociology of religion, laity expect of religion “not only justifications of their existence that can offer them deliverance from the existential anguish of contingency or abandonment, [but] justification of their existence as occupants of a particular position in the social structure”. Thus, Giddens’ or Žižek’s books become the structural or cultural equivalent of the Bible (or Qur’an, or any religious text): not many people know what is actually in them, even fewer can get the oblique references, but everyone will want one on the bookshelf – not necessarily for what they say, but because of what having them signifies.

This helps explain why people flock to hear Žižek or, for instance, Yanis Varoufakis, another leftist star intellectual. In public performances, their ideas are distilled to the point of simplicity, and conveniently latched onto something the public can relate to. At the Subversive Festival in Zagreb, Croatia in 2013, for instance, Žižek propounded the concept of ‘love’ as a political act. Nothing new, one might say – but who in the audience would not want to believe their crush has the potential to turn into an act of political subversion? These intellectuals’ utterances therefore represent ‘speech acts’ in quite a literal sense of the term: not because they are truly (or consequentially) performative, but because they offer the public the illusion that listening (to them) and speaking (about their work) represents, in itself, a political act.

From this perspective, the mixture of admiration, envy and resentment with which these celebrities are treated in the academic establishment represents a reflection of their evangelical status. Those who admire them quarrel about the ‘correct’ interpretation of their works and vie for the status of the nominal successor, which would, of course, also feature ritualistic patricide – which may be the reason why, although surrounded by followers, so few academic celebrities actually elect one. Those who envy them monitor their rise to fame in hope of emulating it one day. Those who resent them, finally, tend to criticize their work for intellectual ‘baseness’, an argument that is in itself predicated on the distinction between academic (and thus ‘sacred’) and popular, ‘common’ knowledge.

Many are, of course, shocked when their idols turn out not to be ‘original’ thinkers channeling divine wisdom, but plagiarists or serial repeaters. Yet there is very little to be surprised by; academic celebrities, after all, are creatures of flesh and blood. Discovering their humanity and thus their ultimate fallibility – in other words, the fact that they cheat, copy, rely on unverified information, etc. – reminds us that, in the final instance, knowledge production is work like any other. In other words, it reminds us of our own mortality. And yet, acknowledging it may be the necessary step in dismantling the structures of rigid, masculine, God-like authority that still permeate academia. In this regard, it makes sense to kill your idols.

What after Brexit? We don’t know, and if we did, we wouldn’t dare say

[This post originally appeared on the Sociological Review blog, Sunday 3rd July, 2016]

In dark times
Will there also be singing?
Yes, there will be singing
About the dark times.

– Bertolt Brecht

Sociologists are notoriously bad at prediction. The collapse of the Soviet Union is a good example – not only did no one (or almost no one) predict it would happen, it also challenged social theory’s dearly-held assumptions about the world order and the ‘nature’ of both socialism and capitalism. When the next big ‘extraneous’ shocks to the Western world – 9/11 and the 2008 economic crisis – hit, we were almost as unprepared: save for a few isolated voices, no one foresaw either the events or the full scale of their consequences.

The victory of the Leave campaign and Britain’s likely exit from the European Union present a similar challenge. Of course, in this case, everyone knew it might happen, but there are surprisingly few ideas of what the consequences will be – not on the short-term political level, where the scenarios seem pretty clear; but in terms of longer-term societal impact – either on the macro- or micro-sociological level.

Of course, anyone but the direst of positivists will be quick to point out that sociology does not predict events – it can, at best, aim to explain them retroactively (for example). Public intellectuals have already offered explanations for the referendum result, ranging from the exacerbation of xenophobia due to austerity, to a lack of awareness of what the EU does. However, as Will Davies’ more in-depth analysis suggests, how these come together is far from obvious. While it is important to work on understanding them, the fact that we are at a point of intensified morphogenesis – or multiple critical junctures – means we cannot stand aside and wait until they unfold.

Methodological debates temporarily aside, I want to argue that one of the things preventing us from making (informed) predictions is that we’re afraid of what the future might hold. The progressive ethos that permeates the discipline can make it difficult to think of scenarios predicated on a different worldview. A similar bias kept social scientists from realizing that countries seen as examples of real socialism – like the Soviet Union and, particularly, former Yugoslavia – could ever fall apart, especially in a violent manner. The starry-eyed assumption that exit from the European Union could be a portent of a new era of progressive politics in the UK is a case in point. As much as I would like to see it happen, we need to seriously consider other possibilities – or, perhaps, that what the future has in store is beyond our darkest dreams. In the past years, there has been a resurgence of thinking about utopias as critical alternatives to neoliberalism. Alongside this, we need to actively start thinking about dystopias – not as a way of succumbing to despair, but as a way of using the sociological imagination to understand both the societal causes of the trends we’re observing – nationalism, racism, xenophobia, and so on – and our own fear of them.

Clearly, a strong argument against making long-term predictions is the reputational risk – to ourselves and the discipline – this involves. If the failure of Marx’s prediction of the inevitability of capitalism’s collapse is still occasionally brought up as a critique of Marxism, offering longer-term forecasts in the context where social sciences are increasingly held accountable to the public (i.e. policymakers) rightfully seems tricky. But this is where the sociological community has a role to play. Instead of bemoaning the glory of bygone days, we can create spaces from which to consider possible scenarios – even if some of them are bleak. In the final instance, to borrow from Henshel – the future cannot be predicted, but futures can be invented.

Jana Bacevic is a PhD researcher in the Department of Sociology at the University of Cambridge. She tweets at @jana_bacevic.

Europe of Knowledge: Paradoxes and Challenges


[This article originally appeared in the Federation of Young European Greens’ ‘Youth Emancipation’ publication]

The Bologna process was a step towards creating a “Europe of Knowledge” where ideas and people could travel freely throughout Europe. Yet, this goal is threatened by changes to the structure of the higher education sector and perhaps by the nature of academia itself.

“The Europe of knowledge” is a phrase one can hardly avoid hearing today. It includes the goal of building the European Higher Education Area through the Bologna process; the aim of making mobility a reality for many young (and not only young) people through programs of the European Commission such as Erasmus; and numerous scientific cooperation programmes aimed at boosting research and innovation. The European Commission has committed to ensuring that up to 20% of young people in the European Union will be academically mobile by 2020. The number of universities, research institutes, think tanks and other organizations whose mission is to generate, spread and apply knowledge seems to be growing by the minute. As information technologies continue to develop, knowledge becomes more readily available to a growing number of individuals across the world. In a certain sense, Europe is today arguably more “knowledgeable” than it ever was in the past.

And yet, this picture masks deeper tensions below the surface. Repeated student protests across Europe show that the transformation of European higher education and research entails, as Guy Neave [1] once diplomatically put it, an “inspiring number of contradictions”. This text will outline some of these contradictions or, as I prefer to call them, paradoxes, and then point to the main challenges they generate – challenges that will not only have to be answered if the “Europe of knowledge” is ever to become anything but a catchy slogan, but will also continue to pop up in the long process of transforming it into a political reality for all Europeans.

Paradoxes: Commercialisation, Borders and the Democratic Deficit

Although a “Europe of knowledge” hints at a shared space where everyone has the same (or similar) access and right to participate in the creation and transmission of knowledge, this is hardly the case. To begin with, Europe is not without borders; some of them face outward, but many run inside. A number of education and research initiatives distinguish between people and institutions based on whether they are from the EU – despite the fact that 20 of the 47 countries that make up the European Higher Education Area are not EU member states. European integration in higher education and research may have simplified, but has not removed, obstacles to the free circulation of knowledge: for many students, researchers and scholars who are not citizens of the EU, mobility entails lengthy visa procedures, stringent criteria for obtaining residence permits, and reporting requirements that not only resemble surveillance, but can also directly interfere with their learning.

Another paradox of the Europe of knowledge is that the massification and globalization of higher education have, in many cases, led to the growing construction of knowledge as a commodity – something that can be bought or sold. The privatisation of education and research has not only changed the entire ethos of knowledge production, it has also brought very tangible consequences for the financing of higher education (with tuition fees becoming both higher and a more prominent way of paying for education), access to knowledge (with scholarly publishers increasingly charging exorbitant prices for both access and publishing), and working conditions for those in academia (with short-term and precarious modes of employment becoming more prevalent). On a more paradigmatic level, it has led to the instrumentalisation of knowledge – its valorisation only or primarily in terms of its contribution to economic growth, and the consequent devaluation of other, more “traditional” purposes, such as self-awareness, development and intellectual pursuit for its own sake, which some critics associate with the Humboldtian model of the university.

It is possible to see these paradoxes and contradictions as inevitable parts of global transformations, and thus to accept their consequences as unavoidable. This text argues, however, that it is still possible to use knowledge to fight for a better world – but that doing so entails a number of tough challenges. The ensuing section outlines some of them.

Challenges: Equality and the Conservativism of Academia

Probably the biggest challenge is to ensure that knowledge contributes to the equality of opportunities and chances for everyone. This should not translate into political clichés, or remain limited to policies that try to raise the presence or visibility of underrepresented populations in education and research. Recognizing inequalities is a first step, but changing them is a far more complex endeavour than it may at first appear. Sociologists of education have shown that one of the main purposes of education – and especially higher education – is to distinguish between those who have it and those who do not, conferring on the former higher economic and social status. In other words, education reproduces social inequalities not only because it is unfair at the point of entry, but also because it is supposed to create social stratification. Subverting social inequalities in education, then, can only work if it becomes part of a greater effort to eliminate or minimise inequalities based on class, status, income or power. Similarly, research aimed only at economic competitiveness – not to mention military supremacy – can hardly contribute to making a more equal or peaceful world. As long as knowledge remains a medium of power, it will continue to serve the purpose of maintaining the status quo.

This brings us to the key challenge in thinking about knowledge. In theory as well as in practice, knowledge always rests somewhere on the slippery ground between reproduction and innovation. On the one hand, one of the primary tasks of education, as the main form of knowledge transmission, is to integrate people into society – teaching them to read, write and count, as well as to “fit” within the broader social structure. In this sense, all education is essentially conservative: it is focused on preserving human societies rather than changing them. On the other hand, knowledge is also there to change the world: both in the conventional sense of the development of science and technology, and in the more challenging sense of awareness of what it means to be human, and what its implications and consequences are – including, but not limited to, the consequences of technological development. The latter task, traditionally entrusted to the social sciences and humanities, is to always doubt, challenge, and “disrupt” the dominant or accepted modes of thinking.

The balance between these two “faces” of knowledge is very delicate. In times of scarcity or crisis, the uses of knowledge too easily slip into the confines of reproduction – ensuring that human societies preserve themselves, usually with power relationships and inequalities intact, and not infrequently at the expense of others, including our own environment. Conversely, a one-sided emphasis on the uses of knowledge for development can obscure the conditions of sustainability, as insights from environmental research and activism have demonstrated numerous times. The challenge, then, lies in maintaining both of these aspects, without allowing either to assume a dominant role.

Conclusion

These paradoxes and challenges are just a fraction of the changes now facing higher education and research in Europe. Yet without knowing what they and their consequences are, action will remain lost in the woods of technical jargon and petty “turf wars” between different movements, factions, disciplines and institutions. The higher education and research policies developed in Europe today to a large extent try to smooth over these conflicts and tensions by coating them in a neutral language that promises equality, efficiency and prosperity. Checking and probing the meaning of these terms is a task for the future.


[1] Neave, G. (2002). Anything Goes: Or, How the Accommodation of Europe’s Universities to European Integration Integrates an Inspiring Number of Contradictions. Tertiary Education and Management, 8(3), 181–197. ISSN 1358-3883.