Choose how you feel, you have seven options

A great piece by Ruben Van de Ven stemming from his artwork of the same name, published on the Institute of Network Cultures site. Van de Ven, in a similar vein to Will Davies, deconstructs the logic of ‘affective’ computing and sentiment analysis and their application to what has been termed the ‘attention economy’. The article does a really good job of demonstrating how the knowledge claims, and the epistemologies (perhaps ontologies too), at work behind these technologies are (of course) deeply political in their application. Very much worth reading! (snippet below).

 ‘Weeks ago I saw an older woman crying outside my office building as I was walking in. She was alone, and I worried she needed help. I was afraid to ask, but I set my fears aside and walked up to her. She appreciated my gesture, but said she would be fine and her husband would be along soon. With emotion enabled (Augmented Reality), I could have had far more details to help me through the situation. It would have helped me know if I should approach her. It would have also let me know how she truly felt about my talking to her.’

FOREST HANDFORD

This is how Forest Handford, a software developer, outlines his ideal future for a technology that has emerged over recent years. It is known as emotion analysis software, emotion detection, emotion recognition or emotion analytics. One day, Handford hopes, the software will aid in understanding the other’s genuine, sincere, yet unspoken feelings (‘how she truly felt’). Technology will guide us through a landscape of emotions, just as satellite navigation guides us to destinations unknown to us: we blindly trust the route that is plotted out for us. But in a world of digitized emotions, what does it mean to feel 63% surprised and 54% joyful?
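To make that last question a little more concrete, here is a minimal, purely illustrative Python sketch of the kind of output such emotion-analysis tools tend to produce: independent per-category confidence scores that need not add up to anything meaningful. The seven category names and the numbers are hypothetical, not drawn from any particular vendor’s API.

```python
# Purely illustrative: a hypothetical emotion-analysis result for one video frame.
from dataclasses import dataclass

@dataclass
class EmotionScores:
    """Independent per-category confidences; they need not sum to 1."""
    anger: float
    contempt: float
    disgust: float
    fear: float
    joy: float
    sadness: float
    surprise: float

    def dominant(self) -> str:
        """Return the label with the highest score, as such tools typically do."""
        scores = vars(self)
        return max(scores, key=scores.get)

# The '63% surprised and 54% joyful' reading from the article, rendered as data:
frame = EmotionScores(anger=0.02, contempt=0.01, disgust=0.01,
                      fear=0.05, joy=0.54, sadness=0.08, surprise=0.63)
print(frame.dominant())  # -> 'surprise'
```

The point of the toy example is simply that ‘63% surprised and 54% joyful’ are scores over a fixed menu of categories (the ‘seven options’ of Van de Ven’s title), not measurements of how someone truly feels.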

Please take the time to read the whole article.

Reblog> Accident tourist – driverless cars and ethics

Automated taxi figure in the 1990 film Total Recall

An interesting and well-written piece over on Cyborgology by Maya from the Tactical Technology Collective (amongst many other things!)

I particularly like these bits copied below, but please read the whole post.

Accident Tourist: Driverless car crashes, ethics, machine learning

…I imagine what it may be like to arrive on the scene of a driverless car crash, and the kinds of maps I’d draw to understand what happened. Scenario planning is one way in which ‘unthinkable futures’ may be planned for.

The ‘scenario’ is a phenomenon that became prominent during the Korean War, and through the following decades of the Cold War, to allow the US army to plan its strategy in the event of nuclear disaster. Peter Galison describes scenarios as a “literature of future war”, “located somewhere between a story outline and ever more sophisticated role-playing war games”, and “a staple of the new futurism”. Since then scenario planning has been adopted by a range of organisations, and features in the modelling of risk and the identification of errors. Galison cites the Boston Group as having written a scenario – their very first one – in which feminist epistemologists, historians, and philosophers of science running amok might present a threat to the release of radioactive waste from the Cold War (“A Feminist World, 2091”).

The applications of the Trolley Problem to driverless car crashes are a sort of scenario planning exercise. Now familiar to most readers of mainstream technology reporting, the Trolley Problem is presented as a series of hypothetical situations with different outcomes derived from a pitting of consequentialism against deontological ethics. Trolley Problems are constructed as either/or scenarios where a single choice must be made.

[…]

What the Trolley Problem scenario and the applications of machine learning in driving suggest is that we’re seeing a shift in how ethics is being constructed: from accounting for crashes after the fact, to pre-empting them (though the automotive industry has been using computer-simulated crash modeling for over twenty years); from ethics that is about values, or reasoning, to ethics as based on datasets of correct responses, and, crucially, of ethics as the outcome of software engineering. Specifically in the context of driverless cars, there is the shift from ethics as a framework of “values for living well and dying well”, as Grégoire Chamayou puts it, to a framework for “killing well”, or ‘necroethics’.

Perhaps the unthinkable scenario to confront is that ethics is not a machine-learned response, nor an end-point, but a series of socio-technical, technical, human, and post-human relationships, ontologies, and exchanges. These challenging and intriguing scenarios are yet to be mapped.
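To make concrete what ‘ethics as based on datasets of correct responses’ might look like in practice, here is a deliberately crude, purely hypothetical sketch: a nearest-neighbour lookup over labelled crash scenarios that spits out an ‘answer’ with no values or reasoning attached. The features, labels and numbers are all invented for illustration; this is the logic the passage above criticises, not any real system.

```python
# A deliberately crude sketch of 'ethics' reduced to a dataset of labelled
# scenarios plus a learned lookup. Everything below is invented for illustration.
from collections import Counter

# Hypothetical 'training data': (passengers_at_risk, pedestrians_at_risk,
# pedestrian_is_jaywalking) -> the response someone has labelled as 'correct'.
labelled_scenarios = [
    ((1, 1, False), "brake"),
    ((1, 3, False), "swerve"),
    ((2, 1, True), "brake"),
    ((1, 2, True), "brake"),
]

def machine_learned_response(scenario, k=3):
    """Return the majority label among the k most 'similar' stored scenarios."""
    def distance(a, b):
        # Naive similarity: sum of absolute feature differences.
        return sum(abs(float(x) - float(y)) for x, y in zip(a, b))
    nearest = sorted(labelled_scenarios, key=lambda item: distance(item[0], scenario))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# An 'ethical' answer produced with no reasoning, values or context attached.
print(machine_learned_response((1, 2, False)))  # -> 'brake'
```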

Coincidentally, in the latest Machine Ethics podcast (which I participated in a while ago), Joanna Bryson discusses these issues about the bases for deriving ethics in relation to AI, which is quite interesting.

Tony Sampson on neuroculture

From a piece in The Conversation on Huxley, dystopia and how we might think about Facebook etc. in relation to Huxley’s “College of Emotional Engineering”, this concise evocation of Sampson’s understanding of ‘neuroculture’ is interesting:

The origins of neuroculture begin in early anatomical drawings and subsequent neuron doctrine in the late 1800s. This was the first time that the brain was understood as a discontinuous network of cells connected by what became known as synaptic gaps. Initially, scientists assumed these gaps were connected by electrical charges, but later revealed the existence of neurochemical transmissions. Brain researchers went on to discover more about brain functionality and subsequently started to intervene in underlying chemical processes.

Interpretation of Cajal’s anatomy of a Purkinje neuron, by Dorota Piekorz.

On one hand, these chemical interventions point to possible inroads to understanding some crucial issues, relating to mental health, for example. But on the other, they warn of the potential of a looming dystopian future. Not, as we may think, defined by the forceful invasive probing of the brain in Room 101, but via much more subtle intermediations.

The Museum of Contemporary Commodities in Exeter

The Museum of Contemporary Commodities

Just a quick note to let you know that the brilliant Paula Crutchlow has brought “The Museum of Contemporary Commodities” (MoCC) to Exeter for the majority of May.

There’s lots going on, much of it creative and interesting – so if you’re in Exeter or nearby: come and visit!

Two immediate things this week:

RIGHT NOW!: help re-create the internet in paper with artist Louise Ashcroft from 11-2 in the Exeter University Forum.

TOMORROW: sign up to do a data walkshop with Alison Powell from the LSE on Saturday from 10-1. Places have to be booked, and the Eventbrite page is here https://www.eventbrite.com/e/mocc-data-walkshop-tickets-24464719635

Please do visit the MoCC website for lots more events and activities taking place this month and visit the shop:

87 Fore St,
Exeter
EX4 6RT

Open 10:00-18:00, Weds-Sat, 4th-21st May.

Reblog> Everyday Code

An interesting post by Mark Purcell on his paper at the AAG:

Everyday Code

Here is the text from my talk at the AAG conference last week. It was for a really great session organized by Joe Shaw and Mark Graham (who are at the Oxford Internet Institute) on “An Informational Right to the City”.

Everyday Code: The Right to Information and Our Struggle for Democracy

Introduction

Henri Lefebvre proposed a right to information, and he thought that right must be associated with a right to the city. I want to urge us to understand both those rights in the context of Lefebvre’s wider political project. That wider project was the struggle for self-management, what Lefebvre often called “autogestion,” and what I prefer to call democracy.

Lefebvre articulates his wider political vision in terms of what he called a “new contract of citizenship between State and citizen.”

Read the full post.

A politics and economics of attention

Just before Christmas (and just before injuring myself) I took part in an ESRC-funded seminar concerning the politics and economics of attention, alongside my colleague Clive Barnett and others.

Jessica Pykett very kindly invited me to talk on the back of the theme issue of Culture Machine that Patrick Crogan and I co-edited back in 2012.

There are slides from the talks plus an interesting commentary by Rupert Alcock now posted to the Behaviour Change and Psychological Governance website.

My slides are on ResearchGate

Event> The Politics and Economics of Attention (14/12)

Both Clive Barnett and I will be speaking at the sixth seminar in the Behaviour Change & Psychological Governance series (funded by the ESRC) which is being held at the University of Bristol on the 14th of December.

I’ll be revisiting some of the things Patrick Crogan and I wrote about back in 2012 in the themed issue of Culture Machine we co-edited following an ESF-funded conference on attention in 2010. My move forward is to be a little more critical in my thinking about what constitutes the process of valuing attention (critically reflecting on why a Labour Theory of Value might not quite fit) and to think (with the work of Bernard Stiegler) about how the socio-technical systems that attempt to economise (something that gets called) attention are a kind of pharmakon (an indeterminacy, originating from the idea of a drug as both poison and cure). By looking at some examples I hope to offer some suggestions about how we might understand what’s going on (in networked technology systems in particular) when an attempt is made to place a financial value on ‘attention’.

The problematic imaginative geographies of collective ‘grieving’ on social media

This provocative article “Got a French flag on your Facebook profile picture? Congratulations on your corporate white supremacy” on The Independent‘s website makes for a compelling read.

I think it is worth reading alongside the excellent letter from Paris by Judith Butler posted to the Verso blog “Mourning becomes the law” – also a must-read, really:

Mourning seems fully restricted within the national frame. The nearly 50 dead in Beirut from the day before are barely mentioned, and neither are the 111 in Palestine killed in the last weeks alone, or the scores in Ankara. Most people I know describe themselves as “at an impasse”, not able to think the situation through. One way to think about it may be to come up with a concept of transversal grief, to consider how the metrics of grievability work, why the cafe as target pulls at my heart in ways that other targets cannot. It seems that fear and rage may well turn into a fierce embrace of a police state. I suppose this is why I prefer those who find themselves at an impasse. That means that this will take some time to think through. It is difficult to think when one is appalled. It requires time, and those who are willing to take it with you.

A few snippets from The Independent article:

So you want to show solidarity with France – specifically, with those killed in Paris this weekend. If you’re a British person who wants to do that because you feel sympathy and sadness for people who are brutally massacred, regardless of their nationality, then fine. I just hope that you also change your profile picture to a different country’s flag every time people are wrongly killed as the result of international conflicts – for example, during the attack on Beirut in Lebanon just the day before.

Flags are politically and historically charged symbols (just look at the infamous and aptly self-styled Isis flag itself), symbolising states and representing influence, power, segregation, borders, nationalism and identity – some of the most commonly held reasons for armed conflict. It’s important, before overlaying a flag on your smiling face, to think about this.

I’m guessing you didn’t feel moved to drape yourself in the Tricolore [sic] until Facebook pushed that option out to you, possibly even until you saw how many people had already snapped it up. But paint-by-numbers solidarity when it’s foisted on you by one of the most powerful companies in the world is simply not the way to help a traumatised nation in shock after murder.

I’d just add that apparently the tricolor overlay implemented by Facebook has a setting that allows the user to automatically switch it off after a given length of time… how convenient.

There has, of course, been some interesting academic and journalistic discussion of what has been referred to as ‘recreational grieving’ and ‘mourning sickness’ that is cognate to this argument, but the article above puts into sharper relief the complex issues concerning the kinds of imaginative geographies that are being (re)produced and performed in response to the incredibly sad and horrific events that took place in Paris last weekend and their aftermath… something Derek Gregory has also written about on his blog.

CFP> Streams of Consciousness: Data, Cognition and Intelligent Devices, Apr 2016

This looks interesting:

Streams of Consciousness

Data, Cognition and Intelligent Devices

21st and 22nd of April 2016

Call for Papers


“What’s on your mind?” This is the question to which every Facebook user now responds. Millions of users sharing their thoughts in one giant performance of what Clay Shirky once called “cognitive surplus”. Contemporary media platforms aren’t simply a stage for this cognitive performance. They are more like directors, staging scenes, tweaking scripts, working to get the best or fully “optimized” performance. As Katherine Hayles has pointed out, media theory has long taken for granted that we think “through, with and alongside media”. Pen and paper, the abacus, and modern calculators are obvious cases in point, but the list quickly expands and with it longstanding conceptions of the Cartesian mind dissolve away. Within the cognitive sciences, cognition is now routinely described as embodied, extended, and distributed. They too recognize that cognition takes place beyond the brain, in between people, between people and things, and combinations thereof. The varieties of specifically human thought, from decision-making to reasoning and interpretation, are now considered one part of a broader cognitive spectrum shared with other animals, systems, and intelligent devices.

Today, the technologies we mostly think through, with and alongside are computers. We routinely rely on intelligent devices for any number of operations, but this is no straightforward “augmentation”. Our cognitive capacities are equally instrumentalized, plugged into larger cognitive operations from which we have little autonomy. Our cognitive weaknesses are exploited and manipulated by techniques drawn from behavioural economics and psychology. If Vannevar Bush once pondered how we would think in the future, he received a partial response in Steve Krug’s best-selling book on web usability: Don’t Make Me Think! Streams of Consciousness aims to explore cognition, broadly conceived, in an age of intelligent devices. We aim to critically interrogate our contemporary infatuation with specific cognitive qualities – such as “smartness” and “intelligence” – while seeking to genuinely understand the specific forms of cognition that are privileged in our current technological milieu. We are especially interested in devices that mediate access to otherwise imperceptible forms of data (too big, too fast), so that it can be acted upon in routine or novel ways.

Topics of the conference include but are not limited to:

  • data and cognition
  • decision-making technologies
  • algorithms, AI and machine learning
  • visualization, perception
  • sense and sensation
  • business intelligence and data exploration
  • signal intelligence and drones
  • smart and dumb things
  • choice and decision architecture
  • behavioural economics and design
  • technologies of nudging
  • interfaces
  • bodies, data, and (wearable) devices
  • optimization
  • web and data analytics (including A/B and multivariate testing)

Please submit individual abstracts of no more than 300 words. Panel proposals are also welcome; these should also be 300 words and should include individual abstracts. The deadline for submissions is Friday the 18th of December and submissions should be made to cimconf@warwick.ac.uk. Accepted submissions will be notified by the 20th of January 2016.
Streams of Consciousness is organised by Nathaniel Tkacz and Ana Gross. The event is supported by the Economic and Social Research Council.