Half a world away – 18 December

Exhaustion takes many forms, some less destructive than others. With term over and many things left on a rather long ‘to do’ list, I return to these ‘work notes’ with a sense of regret – that I did not manage to keep to my aim of writing regularly, and that quite so many things feel left undone. Nevertheless, not writing was a form of self-protection – a way of combating feelings of exhaustion. Writing had come to feel like ‘yet another thing’. It may well be possible to turn that sense around and make it something productive but, honestly, it just felt pragmatically better to let some things slip. I am very tired, for a number of reasons, and I recognise exhaustion in many other colleagues (not just ‘academics’) across my institution and more broadly. Working in academia in the final month of 2018 is fraught, as many can attest and as documented by our trade union and in the pages of professional publications. As I reflect upon not maintaining these ‘work notes’ and on the final term of this calendar year, I want to offer some thoughts about negotiating ‘exhaustion’ in academia.

A ‘permanent’ position in academia is a privilege, even when it (often) doesn’t feel like one. It brings choices and some freedoms, alongside (over time) growing responsibilities. When a university is functioning as we (historically) expect, we are, more-or-less, free to structure aspects of our work around our lives. For a number of reasons I chose to commit to commuting around 80 miles (roughly an hour and a half) each way. I am able to ask for timetable adjustments and to compress my hours to accommodate childcare. These are measures that are simply not widely available to other workers. There is no requirement to be in my office outside of term-time, or even outside of timetabled and/or contractually required commitments. Many of us work in all sorts of places. Nevertheless, such apparent freedom and choice comes with a host of accompanying issues that, if you are like me, can be quite hard to negotiate. I want to make two points about this privilege, and about how ill-prepared I have felt to negotiate it, in relation to exhaustion. The first concerns how we make choices, and how this relates to the character of working as a lecturer. The second concerns commuting.

As I reflect upon now being in my current job for the longest period in my career so far, I cannot help thinking that academics are really poorly prepared, in terms of professional development and training, for the choices we are able, and required, to make. My experience of academia is that you are largely left to get on with it on your own. There is no ‘team’, in the sense of the other kinds of work I’ve done – administrative office work (circa the late 1990s) and web development (circa the mid-2000s). We do not necessarily have to regularly negotiate with colleagues about how to conduct work together, unless we do fairly involved team teaching. We have meetings, of course, but in my career to date these do not appear to go hand-in-hand with weekly or monthly cycles of work in the way they can in other areas of work. So, we must make individual decisions about what to prioritise, what work to say ‘yes’ or ‘no’ to, and in whose interests we can or should act.

These sorts of issues tend to come out in relation to specific sorts of work when they’re discussed on blogs, in professional publications and so on. For example, much of the ‘how to write’ literature is concerned with time management and the sorts of choices we can or should make. Commentators and advisors often place the onus on the individual, even when, in the same argument, the evils of ‘neoliberalism’ or other articulations of individualism and self-interest/personal gain are bemoaned. Of course, it is true that much of the manner in which we are addressed by institutions, government policy and professional organisations is as autonomous individual academics – and, indeed, some of that involves pitting us against one another as ‘entrepreneurial’ competitors (for funding, status and so on). Nevertheless, it seems to me that we rarely talk about how to take decisions in solidarity, while attending to self-care and building a sustainable career. Choices, when faced alone and required frequently, can be exhausting.

With a finite number of universities and the jobs spread across them, we cannot always live and work in the same place. Academics are, in some senses, fortunate to be able to choose. Nevertheless, commuting is really tiring. Even when it does not involve driving and the public transport works, sustained travel is tiring. You have to be prepared – you need to plan and be alive to timetables and so on. You have to give over a small amount of background concentration to your travel. In circumstances where your choices are limited – say, one train per hour – you have to make decisions about contingency: how early should you be, just in case? When the transit systems are less than reliable, it can mean carrying a permanent low-level anxiety about being able to get home for children and so on. When the systems do not function it can be very stressful – asking colleagues to apologise to students when you won’t arrive on time, or not getting home until late, takes an emotional toll.

I have no easy answers about making productive or sustainable choices, beyond suggesting that I think we need to consciously make time for negotiating the choices we must make. Dealing with our autonomy in academia, however free or restricted it might be, is work – I have been slow to recognise this. Perhaps, to do it effectively, we need to actively acknowledge this, give it proper time and consideration, and (kindly) hold ourselves to account for the choices we then make. To be a ‘critical’, ‘radical’ or other flavour of autonomous and responsible intellectual worker (contra ‘neoliberalism’ and so on) should not, I suggest, mean being in some way chaotic or avoiding choice. Neither should it mean that we take on more responsibility than we can or should be expected to handle (you can choose to say ‘no’ productively). Rather, I increasingly feel the need to find ways to make those choices in solidarity – in a way that minimises exhaustion, both for ourselves and for others. Perhaps this simply means we should allow ourselves to take time.

Half a world away – R.E.M.

A genealogy of theorising information technology, through Simondon [video]

Glitched image of a mural of Prometheus giving humans fire, in Freiberg

This post follows from the video of Bernard Stiegler talking about Simondon’s ‘notion’ of information, in relation to his reading of Simondon and others’ theorisation of technogenesis. That paper was a keynote at the conference ‘Culture & Technics: The Politics of Du Mode‘, held by the University of Kent’s Centre for Critical Thought. It is worth highlighting that the whole conference is available on YouTube.

In particular, see the panel session with Anne Sauvagnargues and Yuk Hui discussing the genealogy of Simondon’s thought (as articulated in his two perhaps best-known books). For those interested in (more-or-less) French philosophies of technology, largely of the twentieth century, this is a fascinating and actually quite accessible discussion.

Sauvagnargues discusses the historical and institutional climate/context of Simondon’s work, and Hui excavates (in a sort of archaeological manner) some of the key assumptions and intellectual histories of Simondon’s theorisation of individuation, information and technics.

Bernard Stiegler on the notion of information and its limits

Bernard Stiegler being interviewed

I have only just seen this via the De Montfort Media and Communications Research Centre Twitter feed. The above video is Bernard Stiegler’s ‘keynote’ (it can’t have been a big conference?) at the University of Kent Centre for Critical Thought conference on the politics of Simondon’s On the Mode of Existence of Technical Objects.

In engaging with Simondon’s theory (or, in his terms, ‘notion’) of information, Stiegler reiterates some of the key elements of his Technics and Time: exosomatisation and tertiary retention as the principal tendencies of an originary technics that, in turn, has the character of a pharmakon – something that, in more recent work, Stiegler articulates in relation to the contemporary epoch (the anthropocene) as a (thermodynamic-style) tension between entropy and negentropy. Stiegler’s argument is, I think, that Simondon misses this pharmacological character of information. In arguing this out, Stiegler riffs on some of the more recent elements of his project (the trilogy of ‘As’ – the anthropocene, attention and automation) which characterise the contemporary tendency towards proletarianisation: a loss of knowledge and of the capacities to remake the world.

It is interesting to see this weaving together of various elements of his project over the last twenty or more years, both in relation to his engagement with Simondon’s work (a current minor trend in ‘big’ theory) and in relation to what seems to me to be the moral-philosophical character of Stiegler’s project, in terms of his diagnosis of the anthropocene and his call for a ‘neganthropocene’.

Sophia – show robots and deception

Hanson Robotics' "Sophia"

…when first we practise to deceive…

Walter Scott

Prof Noel Sharkey has written a thoughtful, informative and entertaining piece for Forbes (so, for a general audience) that does some unpacking of ‘Sophia’ with reference to the history of ‘show robots’ (such as the Westinghouse show robots of the mid-twentieth century, like Elektro, and of course Honda’s Asimo). It’s worth reading the piece in full but here are a couple of choice clips:

Sophia is not the first show robot to attain celebrity status. Yet accusations of hype and deception have proliferated about the misrepresentation of AI to public and policymakers alike. In an AI-hungry world where decisions about the application of the technologies will impact significantly on our lives, Sophia’s creators may have crossed a line. What might the negative consequences be? To get answers, we need to place Sophia in the context of earlier show robots.

The tradition extends back to the automata precursors of robots in antiquity. Moving statues were used in the temples of ancient Egypt and Greece to create the illusion of a manifestation of the gods. Hidden puppeteers pulled ropes and spoke with powerful booming voices emitted from hidden tubes. This is not so different from how show robots like Sophia operate today to create the illusion of a manifestation of AI.

For me, the biggest problem with the hype surrounding Sophia is that we have entered a critical moment in the history of AI where informed decisions need to be made. AI is sweeping through the business world and being delegated decisions that impact significantly on people’s lives, from mortgage and loan applications to job interviews, to prison sentences and bail guidance, to transport and delivery services, to medicine and care.

It is vitally important that our governments and policymakers are strongly grounded in the reality of AI at this time and are not misled by hype, speculation, and fantasy. It is not clear how much the Hanson Robotics team are aware of the dangers that they are creating by appearing on international platforms with government ministers and policymakers in the audience.

Sing the body electric… robots in music videos

Still from the video for All Is Full of Love by Björk

I recently saw the Chemical Brothers’ new-ish video for the song “Free Yourself”, featuring androids/robots apparently going feral and raving in a warehouse, and it made me consciously think about something I’ve known for some time: there are quite a few music videos with ‘robots’ in them.

So, here’s a very partial collection:

AI as organ -on|-ology

KITT, the 'intelligent' car from the TV show Knight Rider

Let’s begin with a postulate: there is either no “AI” – artificial intelligence – or every intelligence is, in fact, in some way artificial (following a recent talk by Bernard Stiegler). In doing so we commence from the observation that intelligence is not peculiar to one body; it is shared. A corollary is that there is no (‘human’) intelligence without artifice, insofar as we need to exteriorise thought (or what Stiegler refers to as ‘exosomatisation’) for that intelligence to function – as language, in writing and through tools – and that this is inherently collective. Further, we might argue that there is no AI. Taking that suggestion forward, we can say that there are, rather, forms of artificial (or functional) stupidity, following Alvesson & Spicer (2012: p. 1199), insofar as such systems inculcate forms of incapacity: “characterised by an unwillingness or inability to mobilize three aspects of cognitive capacity: reflexivity, justification, and substantive reasoning”. Following Alvesson & Spicer [& Stiegler], we might argue that such forms of stupidity are necessary passage points in our sense-making in/of the world, and thus are not morally ‘wrong’ or ‘negative’. Instead, the forms of functional stupidity that derive from technology/techniques are a form of pharmakon – both enabling and disabling in various registers of life.

Given such a postulate, we might categorise “AI” in particular ways. We might identify ‘AI’ not as ‘other’ to the ‘human’ but rather as part of our extended (exosomatic) capacities of reasoning and sense. This would be to think of AI ‘organologically’ (again following Stiegler) – as part of our widening, collective ‘organs’ of action in the world. We might also identify ‘AI’ as an organising rationale in and of itself – a kind of ‘organon’ (following Aristotle). In this sense “AI” (the discipline, its institutions and the outcome of their work [‘an AI’]) is an organisational framework for certain kinds of action, through particular forms of reasoning.

It would be tempting (in geographyland and across particular bits of the social sciences) to frame all of this as stemming from, or in terms of, an individual figure: ‘the subject’. In such an account, technology (AI) is a supplement that ‘the human subject’ precedes. ‘The subject’, in such an account, is the entity to which things get done by AI, but also the entity ultimately capable of action. Likewise, such an account might figure ‘the subject’ and its ‘other’ (AI) in terms of moral agency/patiency. However, for this postulate such a framing would be unhelpful (I would also add that thinking in terms of ‘affect’, especially through neuro-talk, would be just as unhelpful). If we think about AI organologically then we are prompted to think about the relation between what is figured as ‘the human’ and ‘AI’ (and the various other things that might be of concern in such a story) as ‘parasitic’ (in Derrida’s sense) – it is a reciprocal (and, in Stiegler’s terms, ‘pharmacological’) relation with no a priori preceding entity. ‘Intelligence’ (and ‘stupidity’ too, of course) in such a formulation proceeds from various capacities for action/inaction.

If we don’t/shouldn’t think about Artificial Intelligence through the lens of the (‘sovereign’) individual ‘subject’ then we might look for other frames of reference. I think there are three recent articles/blogposts that may be instructive.

First, here’s David Runciman in the LRB:

Corporations are another form of artificial thinking machine, in that they are designed to be capable of taking decisions for themselves. Information goes in and decisions come out that cannot be reduced to the input of individual human beings. The corporation speaks and acts for itself. Many of the fears that people now have about the coming age of intelligent robots are the same ones they have had about corporations for hundreds of years.

Second, here’s Jonnie Penn riffing on Runciman in The Economist:

To reckon with this legacy of violence, the politics of corporate and computational agency must contend with profound questions arising from scholarship on race, gender, sexuality and colonialism, among other areas of identity.

A central promise of AI is that it enables large-scale automated categorisation. Machine learning, for instance, can be used to tell a cancerous mole from a benign one. This “promise” becomes a menace when directed at the complexities of everyday life. Careless labels can oppress and do harm when they assert false authority.

Finally, here’s (the outstanding) Lucy Suchman discussing the ways in which figuring complex systems of ‘AI’-based categorisations as somehow exceeding our understanding does particular forms of political work that need questioning and resisting:

The invocation of Project Maven in this context is symptomatic of a wider problem, in other words. Raising alarm over the advent of machine superintelligence serves the self-serving purpose of reasserting AI’s promise, while redirecting the debate away from closer examination of more immediate and mundane problems in automation and algorithmic decision systems. The spectacle of the superintelligentsia at war with each other distracts us from the increasing entrenchment of digital infrastructures built out of unaccountable practices of classification, categorization, and profiling. The greatest danger (albeit one differentially distributed across populations) is that of injurious prejudice, intensified by ongoing processes of automation. Not worrying about superintelligence, in other words, doesn’t mean that there’s nothing about which we need to worry.

As critiques of the reliance on data analytics in military operations are joined by revelations of data bias in domestic systems, it is clear that present dangers call for immediate intervention in the governance of current technologies, rather than further debate over speculative futures. The admission by AI developers that so-called machine learning algorithms evade human understanding is taken to suggest the advent of forms of intelligence superior to the human. But an alternative explanation is that these are elaborations of pattern analysis based not on significance in the human sense, but on computationally-detectable correlations that, however meaningless, eventually produce results that are again legible to humans. From training data to the assessment of results, it is humans who inform the input and evaluate the output of the black box’s operations. And it is humans who must take the responsibility for the cultural assumptions, and political and economic interests, on which those operations are based and for the life-and-death consequences that already follow.

All of these quotes more-or-less exhibit my version of what an ‘organological’ take on AI might look like. Likewise, they illustrate the ways in which we might bring to bear a form of analysis that seeks to understand ‘intelligence’ as having ‘stupidity’ as a necessary component (it’s a pharmakon, see?), which in turn can be functional (following Alvesson & Spicer). In this sense, the framing of ‘the corporation’ from Runciman and Penn is instructive – AI qua corporation (as a thing, as a collective endeavour [a ‘discipline’]) has ‘parasitical’ organising principles through which the pharmacological tendencies of intelligence-stupidity play out.

I suspect this would also resonate strongly with Feminist Technology Studies approaches (following Judy Wajcman in particular) to thinking about contemporary technologies. An organological approach situates the knowledges that go towards and result from such an understanding of intelligence-stupidity. Likewise, to resist figuring ‘intelligence’ foremost in terms of the sovereign and universal ‘subject’ also resists the elision of difference. An organological approach as put forward here can (perhaps should[?]) also be intersectional.

That’s as far as I’ve got in my thinking aloud. I welcome any responses/suggestions and may return to this again.

If you’d like to read more on how this sort of argument might play out in terms of ‘agency’, I blogged about it a little while ago.

ADD. If this sounds a little like the ‘extended mind‘ (of Clark & Chalmers) or various forms of ‘extended self’ theory then it sort of is. What’s different is the starting assumptions: here, we’re not assuming a given (a priori) ‘mind’ or ‘self’. In Stiegler’s formulation the ‘internal’ isn’t realised until the external is apprehended: the mental interior is only recognised as such with the advent of the technical exterior. This is the aporia of the origin of ‘the human’ that Stiegler and Derrida diagnose, and that gives rise to the theory of ‘originary technics’. The interior and exterior, and with them the contemporary understanding of the experience of being ‘human’ and what we understand to be technology, are mutually co-constituted – and continue to be so [more here]. I choose to render this in a more-or-less epistemological, rather than ontological, manner – I am not so much interested in the categorisation of ‘what there is’ as in ‘how we know’.

Bernard Stiegler’s Age of Disruption – out soon

Bernard Stiegler being interviewed

Out next year with Polity, this is one of the earlier of Stiegler’s ‘Anthropocene’ books (in terms of publication in French; see also The Neganthropocene), explicating quite a few of the themes that come out in the interviews I’ve had a go at translating over the past three years (see: “The time saved through automation must be given to the people”; “How to survive disruption”; “Stop the Uberisation of society!“; and “Only by planning a genuine future can we fight Daesh“). Of further interest, to some, is that it also contains a dialogue with Jean-Luc Nancy (another Derrida alumnus). The book is translated by the excellent Daniel Ross.

Details on the Polity website. Here’s the blurb:

Half a century ago Horkheimer and Adorno argued, with great prescience, that our increasingly rationalised and Westernised world was witnessing the emergence of a new kind of barbarism, thanks in part to the stultifying effects of the culture industries. What they could not foresee was that, with the digital revolution and the pervasive automation associated with it, the developments they had discerned would be greatly accentuated and strengthened, giving rise to the loss of reason and to the loss of the reason for living. Individuals are overwhelmed by the sheer quantity of digital information and the speed of digital flows, and profiling and social media satisfy needs before they have even been expressed, all in the service of the data economy. This digital reticulation has led to the disintegration of social relations, replaced by a kind of technological Wild West, in which individuals and groups find themselves increasingly powerless, driven by their lack of agency to the point of madness.

How can we find a way out of this situation? In this book, Bernard Stiegler argues that we must first acknowledge our era as one of fundamental disruption and detachment. We are living in an absence of epokhē in the philosophical sense, by which Stiegler means that we have lost our noetic method, our path of thinking and being. Weaving in powerful accounts from his own life story, including struggles with depression and time spent in prison, Stiegler calls for a new epokhē based on public power. We must forge new circuits of meaning outside of the established algorithmic routes. For only then will forms of thinking and life be able to arise that restore meaning and aspiration to the individual.

Concluding with a substantial dialogue between Stiegler and Jean-Luc Nancy in which they reflect on techniques of selfhood, this book will be of great interest to students and scholars in social and cultural theory, media and cultural studies, philosophy and the humanities generally.

Popular automative imagination (some novels)

Twiki the robot from Buck Rogers

I’ve had about six months of reading various versions of speculative/science fiction, after not having read in that genre for a little while… so here’s a selection of books I’ve read (almost exclusively on an ereader), chosen more-or-less by following the ‘people who read [a] also read [b]’ lists.

I’m not sure these books necessarily offer any novel insights, but they do respond to the current milieu of imagining automation (AI, big data, platforming, robots, surveillance capitalism and so on) and in that sense are a sort of very partial (and weird) guide to that imagination and the sorts of visions being promulgated.

I’d like to write more but I don’t have the time or energy, so this is more-or-less a place-holder for trying to say something more interesting at a later date… I do welcome other suggestions though! Especially less conventionally Western ones.

ADD. Jennie Day kindly shared a recent blogpost by David Murakami Wood in which he makes some recommendations for SF books. Some of these may be of interest if you’re looking for wider recommendations. In particular, I agree with his recommendation of Okorafor’s “Lagoon“, which is a great novel.