Bernard Stiegler on disruption & stupidity in education & politics – podcast

Bernard Stiegler being interviewed

Via Museu d’Art Contemporani de Barcelona.

On the Ràdio Web MACBA website there is a podcast interview with philosopher Bernard Stiegler as part of a series to ‘Reimagine Europe’. It covers many of the major themes that have preoccupied Stiegler for the last ten years (if not longer). You can download the podcast as an mp3 for free. Please find the blurb and a link below.

In his books and lectures, Stiegler presents a broad philosophical approach in which technology becomes the starting point for thinking about living together and individual fulfilment. All technology has the power to increase entropy in the world, and also to reduce it: it is potentially a poison or cure, depending on our ability to distil beneficial, non-toxic effects through its use. Based on this premise, Stiegler proposes a new model of knowledge and a large-scale contributive economy to coordinate an alliance between social agents such as academia, politics, business, and banks. The goal, he says, is to create a collective intelligence capable of reversing the planet’s self-destructive course, and to develop a plan – within an urgent ten-year time-frame – with solutions to the challenges of the Anthropocene, robotics, and the increasing quantification of life.

In this podcast Bernard Stiegler talks about education and smartphones, translations and linguists, about economic war, climate change, and political stupidity. We also chat about pharmacology and organology, about the erosion of biodiversity, the vital importance of error, and the Neganthropocene as a desirable goal to work towards, ready to be constructed.

Timeline
00:00 Contributory economy: work vs proletarianization
05:21 Our main organs are outside of our body
07:45 Reading and writing compose the republic
12:49 Refounding Knowledge 
15:03 Digital pharmakon 
18:28 Contributory research. Neganthropy, biodiversity and diversification
24:02 The need for economic peace
27:24 The limits of micropolitics
29:32 Macroeconomics and Neganthropic bifurcation
36:55 Libido is fidelity
42:33 A pharmacological critique of acceleration
46:35 Degrowth is the wrong question

“Merger” by Keiichi Matsuda – automation, work and ‘replacement’

A still from the 360-degree video "Merger" by Keiichi Matsuda
“With automation disrupting centuries-old industries, the professional must reshape and expand their service to add value. Failure is a mindset. It is those who empower themselves with technology who will thrive.
“Merger is a new film about the future of work, from cult director/designer Keiichi Matsuda (HYPER-REALITY). Set against the backdrop of AI-run corporations, a tele-operator finds herself caught between virtual and physical reality, human and machine. As she fights for her economic survival, she finds herself immersed in the cult of productivity, in search of the ultimate interface. This short film documents her last 4 minutes on earth.”

I came across the most recent film by Keiichi Matsuda, which concerns a possible future of work, with the protagonist embedded in an (aesthetically Microsoft-style) augmented reality of screen-surfaces, and in which the narrative denouement is a sort of trans-human ‘uploading’ moment.

I like Matsuda’s work. I think he skilfully and playfully provokes particular sorts of conversations, mostly about what we used to call ‘immersion’ and the nature of mediation. This has, predictably, happened in terms of human vs. AI vs. eschatology (etc.) sorts of narratives in various outlets (e.g. The Verge). The first time I encountered his work was at a Passenger Films event at which Rob Kitchin talked about theorisations of mediation in relation to both Matsuda’s work and the (original) Disney film ‘Tron‘.

What is perhaps (briefly) interesting here are two things:

  1. The narrative is a provocative short story that asks us to reflect upon how our world of work and technological development get us from now (the status quo) to an apparent future state of affairs, which carries with it certain kinds of ethical, normative and political contentions. So, this is a story that piggybacks the growing narrative of ‘post-work’ or widespread automation of work by apparently ‘inhuman’ technologies (i.e. A.I.) that provokes debate about the roles of ‘technology’ and ‘work’ and what it means to be ‘human’. Interestingly, this (arguably) places “Merger” in the genre of ‘fantasy’ rather than ‘science fiction’ – it is, after all, an eschatological story (I don’t see this final point as a negative). I suppose it could also be seen as a fictional suicide note but I’d rather not dwell on that…
  2. The depiction of the interface and the interaction with the technology-world of the protagonist – and indeed the depiction of these within a 360-degree video – are as important as the story to what the video is signifying. By which I mean – like the videos I called ‘vision videos’ back in 2009/10 (which (in some cases) might be called ‘design fiction’ or ‘diegetic prototypes’) – this video is also trying to show you and perhaps sell you the idea of a technology (Matsuda recently worked for Leap Motion). As I and others have argued – the more familiar audiences are with prospective/speculative technologies the more likely we are (perhaps) to sympathise with their funding/production/marketing and ultimately to adopt them.

Call for papers: Geography of/with A.I

Still from the video for All is Love by Bjork

I very much welcome any submissions to this call for papers for the proposed session for the RGS-IBG annual conference (in London in late-August) outlined below. I also welcome anyone getting in touch to talk about possible papers or ideas for other sorts of interventions – please do get in touch.

Call for papers:

We are variously being invited to believe that (mostly Global North, Western) societies are on the cusp, or in the early stages, of another industrial revolution led by “Artificial Intelligence” – as many popular books (e.g. Brynjolfsson and McAfee 2014) and reports from governments and management consultancies alike will attest (e.g. PWC 2018, UK POST 2016). The goal of this session is to bring together a discussion explicitly focusing on the ways in which geographers already study (with) ‘Artificial Intelligence’ and to, perhaps, outline ways in which we might contribute to wider debates concerning ‘AI’.

There is widespread, inter-disciplinary analysis of ‘AI’ from a variety of perspectives, from embedded systematic bias (Eubanks 2017, Noble 2018) to the kinds of under-examined rationales and work through which such systems emerge (e.g. Adam 1998, Collins 1993) and further to the sorts of ethical-moral frameworks that we should apply to such technologies (Gunkel 2012, Vallor 2016). In similar, if somewhat divergent, ways, geographers have variously been interested in the ways in which (apparently) autonomous algorithms or sociotechnical systems are integrated into decision-making processes (e.g. Amoore 2013, Kwan 2016); encounters with apparently autonomous ‘bots’ (e.g. Cockayne et al. 2017); the integration of AI techniques into spatial analysis (e.g. Openshaw & Openshaw 1997); and the processing of ‘big’ data in order to discern things about, or control, people (e.g. Leszczynski 2015). These conversations appear, in conference proceedings and academic outputs, to rarely converge; nevertheless, there are many ways in which geographical research does and can continue to contribute to these contemporary concerns.

The invitation of this session is to contribute papers that make explicit the ways in which geographers are (already) contributing to research on and with ‘AI’, to identify research questions that are (perhaps) uniquely geographical in relation to AI, and to thereby advance wider inter-disciplinary debates concerning ‘AI’.

Examples of topics might include (but are certainly not limited to):

  • A.I and governance
  • A.I and intimacy
  • Artificially intelligent mobilities
  • Autonomy, agency and the ethics of A.I
  • Autonomous weapons systems
  • Boosterism and ‘A.I’
  • Feminist and intersectional interventions in/with A.I
  • Gender, race and A.I
  • Labour, work and A.I
  • Machine learning and cognitive work
  • Playful A.I
  • Science fiction, spatial imaginations and A.I
  • Surveillance and A.I

Please send submissions (titles, abstracts (250 words) and author details) to: Sam Kinsley by 31st January 2019.

A genealogy of theorising information technology, through Simondon [video]

Glitched image of a mural of Prometheus giving humans' fire in Freiberg

This post follows from the video of Bernard Stiegler talking about Simondon’s ‘notion’ of information, in relation to his reading of Simondon and others’ theorisation of technogenesis. That paper was a keynote at the conference ‘Culture & Technics: The Politics of Du Mode‘, held by the University of Kent’s Centre for Critical Thought. It is worth highlighting that the whole conference is available on YouTube.

In particular, the panel session with Anne Sauvagnargues and Yuk Hui discussing the genealogy of Simondon’s thought (as articulated in his two perhaps best-known books) is worth watching. For those interested in (more-or-less) French philosophies of technology (largely in the 20th century) this is a fascinating and actually quite accessible discussion.

Sauvagnargues discusses the historical and institutional climate/context of Simondon’s work and Yuk excavates (in a sort of archaeological manner) some of the key assumptions and intellectual histories of Simondon’s theorisation of individuation, information and technics.

Bernard Stiegler on the notion of information and its limits

Bernard Stiegler being interviewed

I have only just seen this via the De Montfort Media and Communications Research Centre Twitter feed. The above video is Bernard Stiegler’s keynote (can’t have been a big conference?) at the University of Kent Centre for Critical Thought conference on the politics of Simondon’s On the Mode of Existence of Technical Objects.

In engaging with Simondon’s theory (or in his terms ‘notion’) of information, Stiegler reiterates some of the key elements of his Technics and Time in relation to exosomatisation and tertiary retention being the principal tendency of an originary technics that, in turn, has the character of a pharmakon, which, in more recent work, Stiegler articulates in relation to the contemporary epoch (the Anthropocene) as the (thermodynamic-style) tension between entropy and negentropy. Stiegler’s argument is, I think, that Simondon misses this pharmacological character of information. In arguing this out, Stiegler riffs on some of the more recent elements of his project (the trilogy of ‘As’) – the Anthropocene, attention and automation – which characterise the contemporary tendency towards proletarianisation, a loss of knowledge and capacities to remake the world.

It is interesting to see this weaving together of various elements of his project over the last twenty(+) years both: in relation to his engagement with Simondon’s work (a current minor trend in ‘big’ theory), and: in relation to what seems to me to be a moral philosophical character to Stiegler’s project, in terms of his diagnosis of the anthropocene and a call for a ‘neganthropocene’.

Published> A very public cull – the anatomy of an online issue public

Twitter

I am pleased to share that an article I co-authored with Rebecca Sandover (1st author) and Steve Hinchliffe has finally been published in Geoforum. I would like to congratulate my co-author Rebecca Sandover for this achievement – the article went through a lengthy review process but is now available as an open access article. You can read the whole article, for free, on the Geoforum website. To get a sense of the argument, here is the abstract:

Geographers and other social scientists have for some time been interested in how scientific and environmental controversies emerge and become public or collective issues. Social media are now key platforms through which these issues are publicly raised and through which groups or publics can organise themselves. As media that generate data and traces of networking activity, these platforms also provide an opportunity for scholars to study the character and constitution of those groupings. In this paper we lay out a method for studying these ‘issue publics’: emergent groupings involved in publicising an issue. We focus on the controversy surrounding the state-sanctioned cull of wild badgers in England as a contested means of disease management in cattle. We analyse two overlapping groupings to demonstrate how online issue publics function in a variety of ways – from the ‘echo chambers’ of online sharing of information, to the marshalling of agreements on strategies for action, to more dialogic patterns of debate. We demonstrate the ways in which digital media platforms are themselves performative in the formation of issue publics and that, while this creates issues, we should not retreat into debates around the ‘proper object’ of research but rather engage with the productive complications of mapping social media data into knowledge (Whatmore, 2009). In turn, we argue that online issue publics are not homogeneous and that the lines of heterogeneity are neither simple nor to be expected and merit study as a means to understand the suite of processes and novel contexts involved in the emergence of a public.

(More) Gendered imaginings of automata

My Cayla Doll

A few more bits on how automation gets gendered in particular kinds of contexts and settings. In particular, the identification of ‘home’ or certain sorts of intimacy with certain kinds of domestic or caring work that then gets gendered is something that has been increasingly discussed.

Two PhD researchers I am lucky enough to be working with, Paula Crutchlow (Exeter) and Kate Byron (Bristol), have approached some of these issues from different directions. Paula has had to wrangle with this in a number of ways in relation to the Museum of Contemporary Commodities but it was most visible in the shape of Mikayla, the hacked ‘My Friend Cayla’ doll. Kate is doing some deep dives on the sorts of assumptions that are embedded into the doing of AI/machine learning through the practices of designing, programming and so on. They are not, of course, alone. Excellent work by folks like Kate Crawford, Kate Devlin and Gina Neff (below) informs all of our conversations and work.

Here’s a collection of things that may provoke thought… I welcome any further suggestions or comments 🙂

Alexa, does AI have gender?


Alexa is female. Why? As children and adults enthusiastically shout instructions, questions and demands at Alexa, what messages are being reinforced? Professor Neff wonders if this is how we would secretly like to treat women: ‘We are inadvertently reproducing stereotypical behaviour that we wouldn’t want to see,’ she says.

Prof Gina Neff in conversation with Ruth Abrahams, OII.

Predatory Data: Gender Bias in Artificial Intelligence

It has been reported that female-sounding assistive chatbots regularly receive sexually charged messages. It was recently cited that five percent of all interactions with Robin Labs, whose bot platform helps commercial drivers with routes and logistics, is sexually explicit. The fact that the earliest female chatbots were designed to respond to these suggestions deferentially or with sass was problematic as it normalised sexual harassment.

Vidisha Mishra and Madhulika Srikumar – Predatory Data: Gender Bias in Artificial Intelligence

The Gender of Artificial Intelligence

Chart showing that the gender of artificial intelligence (AI) is not neutral
The gendering, or not, of chatbots, digital assistants and AI movie characters – Tyler Schnoebelen

Consistently representing digital assistants as female hard-codes a connection between a woman’s voice and subservience.

Stop Giving Digital Assistants Female Voices – Jessica Nordell, The New Republic

“The good robot”

Anki Vector personal robot

A fascinating and very evocative example of the ‘automative imagination’ in action in the form of an advertisement for the “Vector” robot from a company called Anki.

How to narrate or analyse such a robot? Well, there are lots of the almost-archetypical figures of ‘robot’ or automation. The cutesy and non-threatening pseudo-pet that the Vector invites us to assume it is, marks the first. This owes a lot to Wall-E (also, the robots in Batteries Not Included and countless other examples) and the doe-eyed characterisation of the faithful assistant/companion/servant. The second is the all-seeing surveillant machine uploading all your data to “the cloud”. The third is the two examples of quasi-military monsters with shades of “The Terminator”, with a little bit of helpless baby jeopardy for good measure. Finally, the brief nod to HAL 9000, and the flip of the master/slave that it represents, completes a whistle-stop tour of pop culture understandings of ‘robots’, stitched together in order to sell you something.

I assume that the Vector actually still does the kinds of surveillance it is sending up in the advert, but I have no evidence – there is no publicly accessible copy of the terms & conditions for the operation of the robot in your home. However, in an advertorial on ‘Robotics Business Review‘, there is a quote that sort of pushes one to suspect that Vector is yet another device that on the face of it is an ‘assistant’ but is also likely to be hoovering up everything it can about you and your family’s habits in order to sell that data on:

“We don’t want a person to ever turn this robot off,” Palatucci said. “So if the lights go off and it’s on your nightstand and he starts snoring, it’s not going to work. He really needs to use his sensors, his vision system, and his microphone to understand the context of what’s going on, so he knows when you want to interact, and more importantly, when you don’t.”

If we were to be cynical we might ask: why else would it need to be able to do all of this?

Anki Vector “Alive and aware”

Regardless, the advert is a useful example of how the bleed from fictional representations of ‘robots’ into contemporary commercial products we can take home – and perhaps even what we might think of as camouflage for the increasingly prevalent ‘extractive‘ business model of in-home surveillance.

“Decolonizing Technologies, Reprogramming Education” HASTAC 2019 call

Louise Bourgeois work of art

This looks interesting. Read the full call here.

Call for Proposals

On 16-18 May 2019, the Humanities, Arts, Science, and Technology Alliance and Collaboratory (HASTAC), in partnership with the Institute for Critical Indigenous Studies at the University of British Columbia (UBC) and the Department of English at the University of Victoria (UVic), will be guests on the traditional, ancestral, and unceded territory of the hən̓q̓əmin̓əm̓-speaking Musqueam (xʷməθkʷəy̓əm) people, facilitating a conference about decolonizing technologies and reprogramming education.

Deadline for proposals is Monday 15 October 2018.

Submit a proposal. Please note: This link will take you to a new website (HASTAC’s installation of ConfTool), where you will create a new user account to submit your proposal. Proposals may be submitted in English, French, or Spanish.


Conference Theme

The conference will hold up and support Indigenous scholars and knowledges, centering work by Indigenous women and women of colour. It will engage how technologies are, can be, and have been decolonized. How, for instance, are extraction technologies repurposed for resurgence? Or, echoing Ellen Cushman, how do we decolonize digital archives? Equally important, how do decolonial and anti-colonial practices shape technologies and education? How, following Kimberlé Crenshaw, are such practices intersectional? How do they correspond with what Grace Dillon calls Indigenous Futurisms? And how do they foster what Eve Tuck and Wayne Yang describe as an ethic of incommensurability, unsettling not only assumptions of innocence but also discourses of reconciliation?

With these investments, HASTAC 2019: “Decolonizing Technologies, Reprogramming Education” invites submissions addressing topics such as:

  • Indigenous new media and infrastructures,
  • Self-determination and data sovereignty, accountability, and consent,
  • Racist data and biased algorithms,
  • Land-based pedagogy and practices,
  • Art, history, and theory as decolonial or anti-colonial practices,
  • Decolonizing the classroom or university,
  • Decolonial or anti-colonial approaches involving intersectional feminist, trans-feminist, critical race, and queer research methods,
  • The roles of technologies and education in the reclamation of language, land, and water,
  • Decolonial or anti-colonial approaches to technologies and education around the world,
  • Everyday and radical resistance to dispossession, extraction, and appropriation,
  • Decolonial or anti-colonial design, engineering, and computing,
  • Alternatives to settler heteropatriarchy and institutionalized ableism in education,
  • Unsettling or defying settler geopolitics and frontiers,
  • Trans-Indigenous activism, networks, and knowledges, and
  • Indigenous resurgence through technologies and education.