Bernard Stiegler on disruption & stupidity in education & politics – podcast

Bernard Stiegler being interviewed

Via Museu d’Art Contemporani de Barcelona.

On the Ràdio Web MACBA website there is a podcast interview with philosopher Bernard Stiegler, part of a series to ‘Reimagine Europe’. It covers many of the major themes that have preoccupied Stiegler for the last ten years (if not longer). You can download the pod as an mp3 for free. Please find the blurb and a link below.

In his books and lectures, Stiegler presents a broad philosophical approach in which technology becomes the starting point for thinking about living together and individual fulfilment. All technology has the power to increase entropy in the world, and also to reduce it: it is potentially a poison or cure, depending on our ability to distil beneficial, non-toxic effects through its use. Based on this premise, Stiegler proposes a new model of knowledge and a large-scale contributive economy to coordinate an alliance between social agents such as academia, politics, business, and banks. The goal, he says, is to create a collective intelligence capable of reversing the planet’s self-destructive course, and to develop a plan – within an urgent ten-year time-frame – with solutions to the challenges of the Anthropocene, robotics, and the increasing quantification of life.

In this podcast Bernard Stiegler talks about education and smartphones, translations and linguists, about economic war, climate change, and political stupidity. We also chat about pharmacology and organology, about the erosion of biodiversity, the vital importance of error, and the Neganthropocene as a desirable goal to work towards, ready to be constructed.

Timeline
00:00 Contributory economy: work vs proletarianization
05:21 Our main organs are outside of our body
07:45 Reading and writing compose the republic
12:49 Refounding Knowledge 
15:03 Digital pharmakon 
18:28 Contributory research. Neganthropy, biodiversity and diversification
24:02 The need for economic peace
27:24 The limits of micropolitics
29:32 Macroeconomics and Neganthropic bifurcation
36:55 Libido is fidelity
42:33 A pharmacological critique of acceleration
46:35 Degrowth is the wrong question

A genealogy of theorising information technology, through Simondon [video]

Glitched image of a mural of Prometheus giving fire to humans, in Freiberg

This post follows from the video of Bernard Stiegler talking about Simondon’s ‘notion’ of information, in relation to his reading of Simondon and others’ theorisation of technogenesis. That paper was a keynote at the conference ‘Culture & Technics: The Politics of Du Mode’, held by the University of Kent’s Centre for Critical Thought. It is worth highlighting that the whole conference is available on YouTube.

In particular, the panel session with Anne Sauvagnargues and Yuk Hui discussing the genealogy of Simondon’s thought (as articulated in his two perhaps best-known books) is worth watching. For those interested in (more-or-less) French philosophies of technology (largely in the 20th century), this is a fascinating and actually quite accessible discussion.

Sauvagnargues discusses the historical and institutional climate/context of Simondon’s work, and Yuk Hui excavates (in a sort of archaeological manner) some of the key assumptions and intellectual histories of Simondon’s theorisation of individuation, information and technics.

Bernard Stiegler on the notion of information and its limits

Bernard Stiegler being interviewed

I have only just seen this via the De Montfort Media and Communications Research Centre Twitter feed. The above video is Bernard Stiegler’s ‘keynote’ (can’t have been a big conference?) at the University of Kent Centre for Critical Thought conference on the politics of Simondon’s On the Mode of Existence of Technical Objects.

In engaging with Simondon’s theory (or, in his terms, ‘notion’) of information, Stiegler reiterates some of the key elements of his Technics and Time: exosomatisation and tertiary retention as the principal tendency of an originary technics that, in turn, has the character of a pharmakon. In more recent work, Stiegler articulates this in relation to the contemporary epoch (the Anthropocene) as a (thermodynamic-style) tension between entropy and negentropy. Stiegler’s argument is, I think, that Simondon misses this pharmacological character of information. In arguing this out, Stiegler riffs on some of the more recent elements of his project (the trilogy of ‘As’) – the Anthropocene, attention and automation – which characterise the contemporary tendency towards proletarianisation: a loss of knowledge and of the capacities to remake the world.

It is interesting to see this weaving together of various elements of his project over the last twenty(+) years, both in relation to his engagement with Simondon’s work (a current minor trend in ‘big’ theory) and in relation to what seems to me to be a moral-philosophical character to Stiegler’s project, in terms of his diagnosis of the Anthropocene and his call for a ‘Neganthropocene’.

AI as organ -on|-ology

Kitt the 'intelligent' car from the TV show Knight Rider

Let’s begin with a postulate: there is either no “AI” – artificial intelligence – or every intelligence is, in fact, in some way artificial (following a recent talk by Bernard Stiegler). In doing so we commence from the observation that intelligence is not peculiar to one body; it is shared. A corollary is that there is no (‘human’) intelligence without artifice, insofar as we need to exteriorise thought (or what Stiegler refers to as ‘exosomatisation’) for that intelligence to function – as language, in writing and through tools – and this exteriorisation is inherently collective. Further, we might argue that there is no AI. Taking that suggestion forward, we can say that there are, rather, forms of artificial (or functional) stupidity, following Alvesson & Spicer (2012: p. 1199), insofar as such systems inculcate forms of incapacity: “characterised by an unwillingness or inability to mobilize three aspects of cognitive capacity: reflexivity, justification, and substantive reasoning”. Following Alvesson & Spicer [& Stiegler], we might argue that such forms of stupidity are necessary passage points in our sense-making in/of the world, and thus are not morally ‘wrong’ or ‘negative’. Instead, the forms of functional stupidity that derive from technology/techniques are a form of pharmakon – both enabling and disabling in various registers of life.

Given such a postulate, we might categorise “AI” in particular ways. We might identify ‘AI’ not as ‘other’ to the ‘human’ but rather as a part of our extended (exosomatic) capacities of reasoning and sense. This would be to think of AI ‘organologically’ (again following Stiegler) – as part of our widening, collective ‘organs’ of action in the world. We might also identify ‘AI’ as an organising rationale in and of itself – a kind of ‘organon’ (following Aristotle). In this sense “AI” (the discipline, institutions and the outcome of their work [‘an AI’]) is/are an organisational framework for certain kinds of action, through particular forms of reasoning.

It would be tempting (in geographyland and across particular bits of the social sciences) to frame all of this as stemming from, or in terms of, an individual figure: ‘the subject’. In such an account, technology (AI) is a supplement that ‘the human subject’ precedes. ‘The subject’, in such an account, is the entity to which things get done by AI, but also the entity ultimately capable of action. Likewise, such an account might figure ‘the subject’ and its ‘other’ (AI) in terms of moral agency/patiency. However, for this postulate such a framing would be unhelpful (I would also add that thinking in terms of ‘affect’, especially through neuro-talk, would be just as unhelpful). If we think about AI organologically then we are prompted to think about the relation between what is figured as ‘the human’ and ‘AI’ (and the various other things that might be of concern in such a story) as ‘parasitic’ (in Derrida’s sense) – it is a reciprocal (and, in Stiegler’s terms, ‘pharmacological’) relation with no a priori preceding entity. ‘Intelligence’ (and ‘stupidity’ too, of course) in such a formulation proceeds from various capacities for action/inaction.

If we don’t/shouldn’t think about Artificial Intelligence through the lens of the (‘sovereign’) individual ‘subject’ then we might look for other frames of reference. I think there are three recent articles/blogposts that may be instructive.

First, here’s David Runciman in the LRB:

Corporations are another form of artificial thinking machine, in that they are designed to be capable of taking decisions for themselves. Information goes in and decisions come out that cannot be reduced to the input of individual human beings. The corporation speaks and acts for itself. Many of the fears that people now have about the coming age of intelligent robots are the same ones they have had about corporations for hundreds of years.

Second, here’s Jonnie Penn riffing on Runciman in The Economist:

To reckon with this legacy of violence, the politics of corporate and computational agency must contend with profound questions arising from scholarship on race, gender, sexuality and colonialism, among other areas of identity.
A central promise of AI is that it enables large-scale automated categorisation. Machine learning, for instance, can be used to tell a cancerous mole from a benign one. This “promise” becomes a menace when directed at the complexities of everyday life. Careless labels can oppress and do harm when they assert false authority. 

Finally, here’s (the outstanding) Lucy Suchman discussing the ways in which figuring complex systems of ‘AI’-based categorisations as somehow exceeding our understanding does particular forms of political work that need questioning and resisting:

The invocation of Project Maven in this context is symptomatic of a wider problem, in other words. Raising alarm over the advent of machine superintelligence serves the self-serving purpose of reasserting AI’s promise, while redirecting the debate away from closer examination of more immediate and mundane problems in automation and algorithmic decision systems. The spectacle of the superintelligentsia at war with each other distracts us from the increasing entrenchment of digital infrastructures built out of unaccountable practices of classification, categorization, and profiling. The greatest danger (albeit one differentially distributed across populations) is that of injurious prejudice, intensified by ongoing processes of automation. Not worrying about superintelligence, in other words, doesn’t mean that there’s nothing about which we need to worry.
As critiques of the reliance on data analytics in military operations are joined by revelations of data bias in domestic systems, it is clear that present dangers call for immediate intervention in the governance of current technologies, rather than further debate over speculative futures. The admission by AI developers that so-called machine learning algorithms evade human understanding is taken to suggest the advent of forms of intelligence superior to the human. But an alternative explanation is that these are elaborations of pattern analysis based not on significance in the human sense, but on computationally-detectable correlations that, however meaningless, eventually produce results that are again legible to humans. From training data to the assessment of results, it is humans who inform the input and evaluate the output of the black box’s operations. And it is humans who must take the responsibility for the cultural assumptions, and political and economic interests, on which those operations are based and for the life-and-death consequences that already follow.

All of these quotes more-or-less exhibit my version of what an ‘organological’ take on AI might look like. Likewise, they illustrate the ways in which we might bring to bear a form of analysis that seeks to understand ‘intelligence’ as having ‘stupidity’ as a necessary component (it’s a pharmakon, see?), which in turn can be functional (following Alvesson & Spicer). In this sense, the framing of ‘the corporation’ from Runciman and Penn is instructive – AI qua corporation (as a thing, as a collective endeavour [a ‘discipline’]) has ‘parasitical’ organising principles through which the pharmacological tendencies of intelligence-stupidity play out.

I suspect this would also resonate strongly with Feminist Technology Studies approaches (following Judy Wajcman in particular) to thinking about contemporary technologies. An organological approach situates the knowledges that go towards and result from such an understanding of intelligence-stupidity. Likewise, to resist figuring ‘intelligence’ foremost in terms of the sovereign and universal ‘subject’ also resists the elision of difference. An organological approach as put forward here can (perhaps should[?]) also be intersectional.

That’s as far as I’ve got in my thinking-aloud, I welcome any responses/suggestions and may return to this again.

If you’d like to read more on how this sort of argument might play out in terms of ‘agency’, I blogged about it a little while ago.

ADD. If this sounds a little like the ‘extended mind’ (of Clark & Chalmers) or various forms of ‘extended self’ theory then it sort of is. What’s different are the starting assumptions: here, we’re not assuming a given (a priori) ‘mind’ or ‘self’. In Stiegler’s formulation the ‘internal’ isn’t realised until the external is apprehended: the mental interior is only recognised as such with the advent of the technical exterior. This is the aporia of the origin of ‘the human’ that Stiegler and Derrida diagnose, and that gives rise to the theory of ‘originary technics’. The interior and exterior, and with them the contemporary understanding of the experience of being ‘human’ and what we understand to be technology, are mutually co-constituted – and continue to be so [more here]. I choose to render this in a more-or-less epistemological, rather than ontological, manner – I am not so much interested in the categorisation of ‘what there is’ as in ‘how we know’.

HKW Speaking to Racial Conditions Today [video]

racist facial recognition

This video of a panel session at HKW, entitled “Speaking to Racial Conditions Today”, is well worth watching.

Follow this link (the video is not available for embedding here).

Inputs, discussions, Mar 15, 2018. With Zimitri Erasmus, Maya Indira Ganesh, Ruth Wilson Gilmore, David Theo Goldberg, Serhat Karakayali, Shahram Khosravi, Françoise Vergès

Reblog> Internet Addiction – watch “Are We All Addicts Now?” video


Via Tony Sampson. Looks interesting >

This topic has been getting a lot of TV/press coverage here in the UK. Here’s a video of a symposium discussing artistic and critical theory strategies of resistance to ‘internet addiction’, and the book Are We All Addicts Now? Convened at Central St Martins, London on 7th Nov 2017. Introduced by Ruth Catlow with talks by Katriona Beales, Feral Practice, Emily Rosamond and myself…

@KatrionaBeales @FeralPractice @TonyDSpamson @EmilyRosamond & @furtherfield

Event > Data Feminism with Lauren Klein (at KCL)

Melba Roy

This event looks really interesting!

Via Pip Thornton.

Data Feminism

Lauren Klein, Assistant Professor, Georgia Tech

How might we draw on feminist critical thought to reimagine data practices and data work? Join us for a public talk with Lauren Klein (Assistant Professor, Georgia Tech) to discuss her recent work on data feminism. Hosted by Jonathan Gray at the Department for Digital Humanities at King’s College London.

With their ability to depict hundreds, thousands, and sometimes even millions of relationships at a single glance, visualizations of data can dazzle, inform, and persuade. It is precisely this power that makes it worth asking: “Visualization by whom? For whom? In whose interest? Informed by whose values?” These are some of the questions that emerge from what we call data feminism, a way of thinking about data and its visualization that is informed by the past several decades of feminist critical thought. Data feminism prompts questions about how, for instance, challenges to the male/female binary can also help challenge other binary and hierarchical classification systems. It encourages us to ask how the concept of invisible labor can help to expose the invisible forms of labor associated with data work. And it points to how an understanding of affective and embodied knowledge can help to expand the notion of what constitutes data and what does not. Using visualization as a starting point, this talk works backwards through the data-processing pipeline in order to show how a feminist approach to thinking about data not only exposes how power and privilege presently operate in visualization work, but also suggests how different design principles can help to mitigate inequality and work towards justice.

Lauren Klein is an assistant professor in the School of Literature, Media, and Communication at Georgia Tech, where she also directs the Digital Humanities Lab. With Matthew Gold, she edits Debates in the Digital Humanities (University of Minnesota Press), a hybrid print/digital publication stream that explores debates in the field as they emerge. Her literary monograph, Matters of Taste: Eating, Aesthetics, and the Early American Archive, is forthcoming from Minnesota in Spring 2019. She is also at work on two new projects: Data Feminism, co-authored with Catherine D’Ignazio and under contract with MIT Press, which distills key lessons from feminist theory into a set of principles for the design and interpretation of data visualizations; and Data by Design, which provides an interactive history of data visualization from the eighteenth century to the present.

Reblog> Lecture by Stelarc and discussion in DesignLab, U. Twente

Peter-Paul Verbeek blogged this, looks good! >>

Lecture by Stelarc and discussion in DesignLab

Thursday 24th of May from 4-7 pm

Location: DesignLab Universiteit Twente

The Australian performance artist Stelarc has visually probed and acoustically amplified his body and is well known for his pioneering work and ideas about extending the capabilities of the human body with technology. From 17th of May until 19th of August, Tetem is presenting the exhibition StickMan by Stelarc. During his stay in Enschede, Stelarc will give an extensive lecture in DesignLab about his pioneering performances and installations – for which he uses prosthetics, robotics, medical instruments, suspension, VR, biotechnology and internet to investigate the psychological and physical limitations of the body.

The event will be introduced by Frank Kresin (managing director of DesignLab). After Stelarc’s lecture, there will be a discussion with Stelarc, Herman van der Kooij (professor in Biomechatronics and Rehabilitation Technology and director of Wearable Robotics Lab) and Peter-Paul Verbeek (professor of Philosophy of Technology and co-director of DesignLab). During the discussion, led by moderator Wilja Jurg (director Tetem), we will explore the scientific, social and ethical implications of wearable robotics.

This event is organized in collaboration with DesignLab as part of StickMan exhibition in Tetem. DesignLab is a creative and cross-disciplinary ecosystem at the University of Twente, connecting science and society through design: https://www.utwente.nl/en/designlab/.

Short description of StickMan:

The StickMan is a minimal but full-body exoskeleton that algorithmically actuates the artist with six degrees of freedom. 64 possible combinations of gestures are generated. Sensors on StickMan generate sounds that augment the pneumatic noise and register the limb movements. A ring of speakers circulates the sounds, immersing the audience in an acoustic landscape as an extension of StickMan’s body.

The StickMan is an anthropomorphized, programmable motion and sound machine which functions not only with the body connected, but also as an installation by itself. A smaller replica of StickMan enables visitors to record and play back their choreography by bending the limbs into a sequence of positions, which also inadvertently composes the sounds generated.

StickMan is shown for the first time in Europe. The smaller replica of StickMan was made especially for the exhibition in Tetem.

For more information and events visit:

http://www.tetem.nl/portfolio/stickman/

Brian Cox, cyberpunk

Man with a colander on his head attached to electrodes

Doing public comms of science is hard, and it’s good to have people trying to make things accessible, and to excite and interest people in finding things out about the world… but it can tip over into being daft pretty easily.

Here’s the great D:ream-er Brian Cox going all cyberpunk on brain/mind uploads… (note the lad raising his eyes to the ceiling at 0:44 🙂 )

This made me wonder how Hubert Dreyfus would attempt to dispel the d:ream (don’t all groan at once!), as the ‘simulation of brains/minds’ is precisely the version of AI that Dreyfus was critiquing in the 1970s. If you’re interested in further discussion of ‘mind uploading’, rather than my flippant remarks, see John Danaher’s writing on this on his excellent blog.