A great piece by Ruben Van de Ven, stemming from his artwork of the same name, published on the Institute of Network Cultures site. Van de Ven, in a similar vein to Will Davies, deconstructs the logic of ‘affective’ computing and sentiment analysis, and their application to what has been termed the ‘attention economy’. The article does a really good job of demonstrating how the knowledge claims, and the epistemologies (perhaps ontologies too), that are at work behind these technologies are (of course) deeply political in their application. Very much worth reading! (snippet below).
I particularly like these bits copied below, but please read the whole post.
…I imagine what it may be like to arrive on the scene of a driverless car crash, and the kinds of maps I’d draw to understand what happened. Scenario planning is one way in which ‘unthinkable futures’ may be planned for.
The ‘scenario’ is a phenomenon that became prominent during the Korean War, and through the following decades of the Cold War, allowing the US army to plan its strategy in the event of nuclear disaster. Peter Galison describes scenarios as a “literature of future war” “located somewhere between a story outline and ever more sophisticated role-playing war games”, “a staple of the new futurism”. Since then, scenario planning has been adopted by a range of organisations, and features in the modelling of risk and the identification of errors. Galison cites the Boston Group as having written a scenario – their very first one – in which feminist epistemologists, historians, and philosophers of science running amok might present a threat to the release of radioactive waste from the Cold War (“A Feminist World, 2091”).
The applications of the Trolley Problem to driverless car crashes are a sort of scenario-planning exercise. Now familiar to most readers of mainstream technology reporting, the Trolley Problem is presented as a series of hypothetical situations with different outcomes, derived from pitting consequentialism against deontological ethics. Trolley Problems are constructed as either/or scenarios in which a single choice must be made.
What the Trolley Problem scenario and the applications of machine learning in driving suggest is that we’re seeing a shift in how ethics is being constructed: from accounting for crashes after the fact, to pre-empting them (though the automotive industry has been using computer-simulated crash modelling for over twenty years); from ethics that is about values, or reasoning, to ethics as based on datasets of correct responses, and, crucially, to ethics as the outcome of software engineering. Specifically in the context of driverless cars, there is a shift from ethics as a framework of “values for living well and dying well”, as Gregoire Chamayou puts it, to a framework for “killing well”, or ‘necroethics’.
Perhaps the unthinkable scenario to confront is that ethics is not a machine-learned response, nor an end-point, but a series of socio-technical, technical, human, and post-human relationships, ontologies, and exchanges. These challenging and intriguing scenarios are yet to be mapped.
Coincidentally, in the latest Machine Ethics podcast (which I participated in a while ago), Joanna Bryson discusses these issues about the bases for deriving ethics in relation to AI, which is quite interesting.
From a Conversation piece on Huxley, dystopia and how we might think about Facebook etc. in relation to Huxley’s “College of Emotional Engineering”, this concise evocation of his understanding of ‘neuroculture’ is interesting:
The origins of neuroculture begin in early anatomical drawings and subsequent neuron doctrine in the late 1800s. This was the first time that the brain was understood as a discontinuous network of cells connected by what became known as synaptic gaps. Initially, scientists assumed these gaps were connected by electrical charges, but later revealed the existence of neurochemical transmissions. Brain researchers went on to discover more about brain functionality and subsequently started to intervene in underlying chemical processes.
On one hand, these chemical interventions point to possible inroads to understanding some crucial issues, relating to mental health, for example. But on the other, they warn of the potential of a looming dystopian future. Not, as we may think, defined by the forceful invasive probing of the brain in Room 101, but via much more subtle intermediations.
There’s lots going on, much of it creative and interesting – so if you’re in Exeter or nearby: come and visit!
Two immediate things this week:
RIGHT NOW!: help re-create the internet in paper with artist Louise Ashcroft from 11–2 in the Exeter University Forum.
TOMORROW: sign up to do a data walkshop with Alison Powell from the LSE on Saturday from 10–1. Places have to be booked; the Eventbrite page is here: https://www.eventbrite.com/e/mocc-data-walkshop-tickets-24464719635
87 Fore St,
Open 10:00–18:00, Weds–Sat, 4th–21st May.
An interesting post by Mark Purcell on his paper at the AAG:
There are slides from the talks plus an interesting commentary by Rupert Alcock now posted to the Behaviour Change and Psychological Governance website.
Both Clive Barnett and I will be speaking at the sixth seminar in the Behaviour Change & Psychological Governance series (funded by the ESRC) which is being held at the University of Bristol on the 14th of December.
I’ll be revisiting some of the things Patrick Crogan and I wrote about back in 2012 in the themed issue of Culture Machine we co-edited, following an ESF-funded conference on attention in 2010. Moving forward, I will probably be a little more critical in my thinking about what constitutes the process of valuing attention (critically reflecting on why a Labour Theory of Value might not quite fit), and will think (with the work of Bernard Stiegler) about how the socio-technical systems that attempt to economise (something that gets called) attention are a kind of pharmakon (an indeterminacy, originating from the idea of a drug as both poison and cure). By looking at some examples I hope to offer some suggestions about how we might understand what’s going on (in networked technology systems in particular) when an attempt is made to place a financial value on ‘attention’.
This provocative article “Got a French flag on your Facebook profile picture? Congratulations on your corporate white supremacy” on The Independent‘s website makes for a compelling read.
I think it is worth reading alongside the excellent letter from Paris by Judith Butler posted to the Verso blog, “Mourning becomes the law” – also a must-read, really:
Mourning seems fully restricted within the national frame. The nearly 50 dead in Beirut from the day before are barely mentioned, and neither are the 111 in Palestine killed in the last weeks alone, or the scores in Ankara. Most people I know describe themselves as “at an impasse”, not able to think the situation through. One way to think about it may be to come up with a concept of transversal grief, to consider how the metrics of grievability work, why the cafe as target pulls at my heart in ways that other targets cannot. It seems that fear and rage may well turn into a fierce embrace of a police state. I suppose this is why I prefer those who find themselves at an impasse. That means that this will take some time to think through. It is difficult to think when one is appalled. It requires time, and those who are willing to take it with you.
A few snippets from The Independent article:
So you want to show solidarity with France – specifically, with those killed in Paris this weekend. If you’re a British person who wants to do that because you feel sympathy and sadness for people who are brutally massacred, regardless of their nationality, then fine. I just hope that you also change your profile picture to a different country’s flag every time people are wrongly killed as the result of international conflicts – for example, during the attack on Beirut in Lebanon just the day before.
Flags are politically and historically charged symbols (just look at the infamous and aptly self-styled Isis flag itself), symbolising states and representing influence, power, segregation, borders, nationalism and identity – some of the most commonly held reasons for armed conflict. It’s important, before overlaying a flag on your smiling face, to think about this.
I’m guessing you didn’t feel moved to drape yourself in the Tricolore [sic] until Facebook pushed that option out to you, possibly even until you saw how many people had already snapped it up. But paint-by-numbers solidarity when it’s foisted on you by one of the most powerful companies in the world is simply not the way to help a traumatised nation in shock after murder.
I’d just add that apparently the tricolor overlay implemented by Facebook has a setting that allows the user to automatically switch it off after a given length of time… how convenient.
There has, of course, been some interesting academic and journalistic discussion of what has been referred to as ‘recreational grieving’ and ‘mourning sickness’ that is cognate to this argument, but the article above puts in sharper relief complex issues concerning the kinds of imaginative geographies that are being (re)produced and performed in response to the incredibly sad and horrific events that took place in Paris last weekend and their aftermath… something Derek Gregory has also written about on his blog.
This looks interesting: