CFP AAG 2020 – ‘New geographies of automation?’


I’d welcome submissions, questions or any form of interest for the proposed session I outline below.

My aim with this session is to continue a conversation that has arisen in geography and beyond about as wide a range of tropes of automation as possible. Papers needn't be empirical per se, or about actually existing automation; they could equally address the rationales, promises or visions for automation. Likewise, automation has been around for a while, so historical geographies of automation, in agriculture for example, or policies for automation that have been tried and failed, would also be welcome.

There are all sorts of ways that ‘automation’ has been packaged in other rubrics, such as ‘smart’ things, cities and so on, or perhaps become a ‘fig leaf’ or ‘red herring’ to cover for unscrupulous activities, such as iniquitous labour practices.

I guess what I’m driving at is – I welcome any and all ideas relevant to the broad theme!

CFP: New Geographies of Automation?

Denver, USA, 6-10 April 2020

Organiser: Sam Kinsley (Exeter).

Abstract deadline: 16th October 2019.

This session invites papers that respond to the variously promoted or forewarned explosion of automation and the apparent transformations of culture, economy, labour and workplace we are told will ensue. Papers are sought from any and all branches of geography to investigate what contemporary geographies of automation may or should look like, how we are, could or should be doing them, and perhaps to question the grandiose rhetoric of alarmism/boosterism in current debates.

Automation has lately become a renewed focus of hyperbolic commentary in print and online. We are warned by some of the ‘rise of the robots’ (Ford 2015) sweeping away whole sectors of employment, or by others exhorted to strive towards ‘fully automated luxury communism’ (Srnicek & Williams 2015). Beyond the hyperbole it is possible to trace longer lineages of geographies of automation. Studies of the industrialisation of agriculture (Goodman & Watts 1997); Fordist/post-Fordist systems of production (Harvey 1989); shifts to globalisation (Dicken 1986) and (some) post-industrial societies (Clement & Myles 1994) stand testament to the range of work that has addressed the theme of automation in geography. Indeed, in the last decade geographers have begun to draw out specific geographical contributions to debates surrounding ‘digital’ automation. In similar if somewhat divergent ways, geographers have paid closer attention to: the apparent automation of labour and workplaces (Bissell & Del Casino 2017); encounters with apparently autonomous ‘bots’ (Cockayne et al. 2017); the interrogation of automation in governance and surveillance across a range of scales (Amoore 2013, Kitchin & Dodge 2011); the integration of AI techniques into spatial analysis (Openshaw & Openshaw 1997); and the processing of ‘big’ data in order to discern things about, or control, people (Leszczynski 2015).

This session invites papers that consider contemporary discussions, movements and propositions of automation from a geographical perspective (in the broadest sense).

Examples of topics might include (but are certainly not limited to):

  • AI, machine learning and cognitive work
  • Boosterism and tales of automation
  • Gender, race and AI
  • Labour and work
  • Autonomy, agency and law-making
  • Robotics and the everyday
  • Automation and workplace governance
  • Techno-bodily relations
  • Mobilities and materialities
  • Governance and surveillance

I intend to organise at least one paper session, depending on the quantity and quality of submissions. If you would like to propose a paper presentation, please email an abstract of 250 words to me by 16th October.

If you would also like to participate in a special issue on this topic I welcome expressions of interest.

Ballet Robotique – popular representations of automation

Warehouse robots moving packages

In between doing other things I am trying to maintain a little progress with work on The Automative Imagination. Recently I’ve been looking at (largely Anglophone and/or global North/West) representations of robots or automatons in cinema. There are some funny examples (I posted a few music-video representations some time ago) and it is interesting how humour, satire and artistic representations are an enduring way of getting to grips with whatever we think ‘robots’ might be.

So, for your consideration, I have posted below two pieces I have found recently and find interesting. I’ll try to write more on this in the near future.

The Automatic Motorist (1911)

Ballet Robotique (1982)

Automated lettuce

A robot arm holding a lettuce plant

Following on from the earlier post about recurring stories, here are two headlines more-or-less reporting the same story. The first is from the Daily Mail newspaper in 1965. As often appears to be the case for that paper, the innovation is framed as some kind of national threat. The second is from the tech news website Engadget in 2018.

"Warning! Automated lettuce" - a headline from the Daily Mail, 1965

They are essentially the same story. Different technologies are invoked, and perhaps different orders of sophistication are implied (or achieved), but more-or-less the same outcome follows – people do less work in preparing lettuces for sale.

I don’t really have time to add anything to the analysis I’ve already offered on this sort of story but I wanted to post this while I was still thinking about it.

Robots that are repeatedly coming, still

Industrial factory robot arms

In the second series of Tim Harford’s Fifty Things That Made the Modern Economy there is an interesting trend of highlighting how some of the ‘things’ tell wider stories about automation. There are two points I’d pull out here.

First, there’s the issue of job or task displacement. Harford argues that, for example, spreadsheets automate certain elements of accountancy but make accountancy so much more efficient that more accountancy takes place. Quite a nice, concise story about automation. This is indicative of a wider argument that often gets made about automation, perhaps in contradistinction to the ‘robots are stealing jobs’ hysteria – that automation may involve technology replacing people in certain tasks but often results in new tasks, or new forms of work (e.g. in the WEF ‘Future of Jobs Report 2018‘).

Second, there’s the issue of us being told by those with particular interests in automation and robotics that robots are about to replace a particular kind of work. This is a story that gets trotted out rather a lot. ‘The robots are coming’ is a phrase often repeated in newspaper and web headlines. There are a host of ‘packages’ for modern, and not-so-modern, news programmes about a ‘new’ machine that is going to replace a particular kind of worker. Harford gives a great example right at the end of the programme about bricks. We get through a lot of bricks, and laying them into walls and building those walls into buildings is labour-intensive. There is a ‘new’ robot to displace that work: Construction Robotics‘ Semi-Automated Mason (SAM – great name, eh?) works alongside builders to speed up building walls (video below).

The thing is – this is not actually new. As Harford points out in the ‘bricks‘ programme, this is a story that has been told before. In the 1960s Pathé news reported on a remarkably similar mechanical system: the ‘motor mason’ (video below).

We can see then that in Harford’s popular economics podcast, 50 Things, automation is a common theme – just as it is in wider discussions about social and political-economic ‘progress’. Yet it also nicely demonstrates some recurring tropes. First, there are now fairly established narratives about automation in relation to ‘jobs’ that are told in different ways, depending upon your political or theoretical persuasion – job ‘replacement’ and/or ‘creation’. Second, there is a common subsequent narrative when the ‘replacement’ story is playing out – that of the clever machine that is going to do a particular worker, such as a bricklayer, out of their job. Here we also see how that narrative can keep being repeated: the robot is always coming but, perhaps, never quite arriving.

"The robots are coming" headline from the Guardian in 1986
"The robots are coming" headline from the Guardian in 2019

Bernard Stiegler on disruption & stupidity in education & politics – podcast

Bernard Stiegler being interviewed

Via the Museu d’Art Contemporani de Barcelona.

On the Ràdio Web MACBA website there is a podcast interview with the philosopher Bernard Stiegler, part of a series to ‘Reimagine Europe’. It covers many of the major themes that have preoccupied Stiegler for the last ten years (if not longer). You can download the podcast as an MP3 for free. Please find the blurb and a link below.

In his books and lectures, Stiegler presents a broad philosophical approach in which technology becomes the starting point for thinking about living together and individual fulfilment. All technology has the power to increase entropy in the world, and also to reduce it: it is potentially a poison or cure, depending on our ability to distil beneficial, non-toxic effects through its use. Based on this premise, Stiegler proposes a new model of knowledge and a large-scale contributive economy to coordinate an alliance between social agents such as academia, politics, business, and banks. The goal, he says, is to create a collective intelligence capable of reversing the planet’s self-destructive course, and to develop a plan – within an urgent ten-year time-frame – with solutions to the challenges of the Anthropocene, robotics, and the increasing quantification of life.

In this podcast Bernard Stiegler talks about education and smartphones, translations and linguists, about economic war, climate change, and political stupidity. We also chat about pharmacology and organology, about the erosion of biodiversity, the vital importance of error, and the Neganthropocene as a desirable goal to work towards, ready to be constructed.

Timeline
00:00 Contributory economy: work vs proletarianization
05:21 Our main organs are outside of our body
07:45 Reading and writing compose the republic
12:49 Refounding Knowledge 
15:03 Digital pharmakon 
18:28 Contributory research. Neganthropy, biodiversity and diversification
24:02 The need of an economic peace
27:24 The limits of micropolitics
29:32 Macroeconomics and Neganthropic bifurcation
36:55 Libido is fidelity
42:33 A pharmacological critique of acceleration
46:35 Degrowth is the wrong question

Sophia – show robots and deception

Hanson Robotics' "Sophia"

…when first we practise to deceive…

Walter Scott

Prof Noel Sharkey has written a thoughtful, informative and entertaining piece for Forbes (so, for a general audience) that does some unpacking of ‘Sophia’ with reference to the history of ‘show robots’ (such as the Westinghouse show robots of the mid-C20, like Elektro, and of course Honda’s Asimo). It’s worth reading the piece in full but here are a couple of choice clips:

Sophia is not the first show robot to attain celebrity status. Yet accusations of hype and deception have proliferated about the misrepresentation of AI to public and policymakers alike. In an AI-hungry world where decisions about the application of the technologies will impact significantly on our lives, Sophia’s creators may have crossed a line. What might the negative consequences be? To get answers, we need to place Sophia in the context of earlier show robots.


The tradition extends back to the automata precursors of robots in antiquity. Moving statues were used in the temples of ancient Egypt and Greece to create the illusion of a manifestation of the gods. Hidden puppeteers pulled ropes and spoke with powerful booming voices emitted from hidden tubes. This is not so different from how show robots like Sophia operate today to create the illusion of a manifestation of AI.

For me, the biggest problem with the hype surrounding Sophia is that we have entered a critical moment in the history of AI where informed decisions need to be made. AI is sweeping through the business world and being delegated decisions that impact significantly on peoples lives from mortgage and loan applications to job interviews, to prison sentences and bail guidance, to transport and delivery services to medicine and care.


It is vitally important that our governments and policymakers are strongly grounded in the reality of AI at this time and are not misled by hype, speculation, and fantasy. It is not clear how much the Hanson Robotics team are aware of the dangers that they are creating by appearing on international platforms with government ministers and policymakers in the audience.

Sing the body electric… robots in music videos

Still from the video for All is Love by Bjork

I recently saw the Chemical Brothers’ new-ish video for the song “Free Yourself”, featuring androids/robots apparently going feral and raving in a warehouse, and it made me consciously think about something I’ve known for some time – there are quite a few music videos with ‘robots’ in them.

So, here’s a very partial collection:

AI as organ -on|-ology

Kitt the 'intelligent' car from the TV show Knight Rider

Let’s begin with a postulate: there is either no “AI” – artificial intelligence – or every intelligence is, in fact, in some way artificial (following a recent talk by Bernard Stiegler). In doing so we commence from an observation that intelligence is not peculiar to one body, it is shared. A corollary is that there is no (‘human’) intelligence without artifice, insofar as we need to exteriorise thought (or what Stiegler refers to as ‘exosomatisation’) for that intelligence to function – as language, in writing and through tools – and that this is inherently collective. Further, we might argue that there is no AI. Taking that suggestion forward, we can say that there are, rather, forms of artificial (or functional) stupidity, following Alvesson & Spicer (2012: p. 1199), insofar as they inculcate forms of incapacity: “characterised by an unwillingness or inability to mobilize three aspects of cognitive capacity: reflexivity, justification, and substantive reasoning”. Following Alvesson & Spicer [& Stiegler] we might argue that such forms of stupidity are necessary passage points through our sense-making in/of the world, thus are not morally ‘wrong’ or ‘negative’. Instead, the forms of functional stupidity that derive from technology/techniques are a form of pharmakon – both enabling and disabling in various registers of life.

Given such a postulate, we might categorise “AI” in particular ways. We might identify ‘AI’ not as ‘other’ to the ‘human’ but rather a part of our extended (exosomatic) capacities of reasoning and sense. This would be to think of AI ‘organologically’ (again following Stiegler) – as part of our widening, collective, ‘organs’ of action in the world. We might also identify ‘AI’ as an organising rationale in and of itself – a kind of ‘organon’ (following Aristotle). In this sense “AI” (the discipline, institutions and the outcome of their work [‘an AI’]) is/are an organisational framework for certain kinds of action, through particular forms of reasoning.

It would be tempting (in geographyland and across particular bits of the social sciences) to frame all of this as stemming from, or in terms of, an individual figure: ‘the subject’. In such an account, technology (AI) is a supplement that ‘the human subject’ precedes. ‘The subject’, in such an account, is the entity to which things get done by AI, but also the entity ultimately capable of action. Likewise, such an account might figure ‘the subject’ and its ‘other’ (AI) in terms of moral agency/patiency. However, for this postulate such a framing would be unhelpful (I would also add that thinking in terms of ‘affect’, especially through neuro-talk, would be just as unhelpful). If we think about AI organologically then we are prompted to think about the relation between what is figured as ‘the human’ and ‘AI’ (and the various other things that might be of concern in such a story) as ‘parasitic’ (in Derrida’s sense) – it’s a reciprocal (and, in Stiegler’s terms, ‘pharmacological’) relation with no a priori preceding entity. ‘Intelligence’ (and ‘stupidity’ too, of course) in such a formulation proceeds from various capacities for action/inaction.

If we don’t/shouldn’t think about Artificial Intelligence through the lens of the (‘sovereign’) individual ‘subject’ then we might look for other frames of reference. I think there are three recent articles/blogposts that may be instructive.

First, here’s David Runciman in the LRB:

Corporations are another form of artificial thinking machine, in that they are designed to be capable of taking decisions for themselves. Information goes in and decisions come out that cannot be reduced to the input of individual human beings. The corporation speaks and acts for itself. Many of the fears that people now have about the coming age of intelligent robots are the same ones they have had about corporations for hundreds of years.

Second, here’s Jonnie Penn riffing on Runciman in The Economist:

To reckon with this legacy of violence, the politics of corporate and computational agency must contend with profound questions arising from scholarship on race, gender, sexuality and colonialism, among other areas of identity.
A central promise of AI is that it enables large-scale automated categorisation. Machine learning, for instance, can be used to tell a cancerous mole from a benign one. This “promise” becomes a menace when directed at the complexities of everyday life. Careless labels can oppress and do harm when they assert false authority. 

Finally, here’s (the outstanding) Lucy Suchman discussing the ways in which figuring complex systems of ‘AI’-based categorisations as somehow exceeding our understanding does particular forms of political work that need questioning and resisting:

The invocation of Project Maven in this context is symptomatic of a wider problem, in other words. Raising alarm over the advent of machine superintelligence serves the self-serving purpose of reasserting AI’s promise, while redirecting the debate away from closer examination of more immediate and mundane problems in automation and algorithmic decision systems. The spectacle of the superintelligentsia at war with each other distracts us from the increasing entrenchment of digital infrastructures built out of unaccountable practices of classification, categorization, and profiling. The greatest danger (albeit one differentially distributed across populations) is that of injurious prejudice, intensified by ongoing processes of automation. Not worrying about superintelligence, in other words, doesn’t mean that there’s nothing about which we need to worry.
As critiques of the reliance on data analytics in military operations are joined by revelations of data bias in domestic systems, it is clear that present dangers call for immediate intervention in the governance of current technologies, rather than further debate over speculative futures. The admission by AI developers that so-called machine learning algorithms evade human understanding is taken to suggest the advent of forms of intelligence superior to the human. But an alternative explanation is that these are elaborations of pattern analysis based not on significance in the human sense, but on computationally-detectable correlations that, however meaningless, eventually produce results that are again legible to humans. From training data to the assessment of results, it is humans who inform the input and evaluate the output of the black box’s operations. And it is humans who must take the responsibility for the cultural assumptions, and political and economic interests, on which those operations are based and for the life-and-death consequences that already follow.

All of these quotes more-or-less exhibit my version of what an ‘organological’ take on AI might look like. Likewise, they illustrate the ways in which we might bring to bear a form of analysis that seeks to understand ‘intelligence’ as having ‘stupidity’ as a necessary component (it’s a pharmakon, see?), which in turn can be functional (following Alvesson & Spicer). In this sense, the framing of ‘the corporation’ from Runciman and Penn is instructive – AI qua corporation (as a thing, as a collective endeavour [a ‘discipline’]) has ‘parasitical’ organising principles through which play out the pharmacological tendencies of intelligence-stupidity.

I suspect this would also resonate strongly with Feminist Technology Studies approaches (following Judy Wajcman in particular) to thinking about contemporary technologies. An organological approach situates the knowledges that go towards and result from such an understanding of intelligence-stupidity. Likewise, to resist figuring ‘intelligence’ foremost in terms of the sovereign and universal ‘subject’ also resists the elision of difference. An organological approach as put forward here can (perhaps should[?]) also be intersectional.

That’s as far as I’ve got in my thinking-aloud, I welcome any responses/suggestions and may return to this again.

If you’d like to read more on how this sort of argument might play out in terms of ‘agency’ I blogged a little while ago.

ADD. If this sounds a little like the ‘extended mind‘ (of Clark & Chalmers) or various forms of ‘extended self’ theory then it sort of is. What’s different is the starting assumptions: here, we’re not assuming a given (a priori) ‘mind’ or ‘self’. In Stiegler’s formulation the ‘internal’ isn’t realised until the external is apprehended: mental interior is only recognised as such with the advent of the technical exterior. This is the aporia of origin of ‘the human’ that Stiegler and Derrida diagnose, and that gives rise to the theory of ‘originary technics’. The interior and exterior, and with them the contemporary understanding of the experience of being ‘human’ and what we understand to be technology, are mutually co-constituted – and continue to be so [more here]. I choose to render this in a more-or-less epistemological, rather than ontological, manner – I am not so much interested in the categorisation of ‘what there is’ as in ‘how we know’.

Popular automative imagination (some novels)

Twiki the robot from Buck Rogers

I’ve had about six months of reading various versions of speculative/science fiction after not having read in that genre for a little while… so here’s a selection of books I’ve read (almost exclusively on an ereader) that have more-or-less been selected following the ‘people who read [a] also read [b]’ lists.

I’m not sure these books necessarily offer any novel insights but they do respond to the current milieu of imagining automation (AI, big data, platform-ing, robots, surveillance capitalism etc etc) and in that sense are a sort of very partial (and weird) guide to that imagination and the sorts of visions being promulgated.

I’d like to write more but I don’t have the time or energy so this is more or less a place-holder for trying to say something more interesting at a later date… I do welcome other suggestions though! Especially less conventionally Western ones.

ADD. Jennie Day kindly shared a recent blogpost by David Murakami Wood in which he recommends a number of SF books. Some of these may be of interest if you’re looking for wider reading. In particular, I second his recommendation of Okorafor’s “Lagoon“, which is a great novel.