“Ready Lawnmower Player Man One”, or VR’s persistent future

For the last couple of years I have given a lecture that takes the “Virtual/Real” distinction and deconstructs it using various bits of geographical theory. I also talk about the enduring trope of the eschatological narrative of VR, tying it back to a little bit of history, mainly focusing on how this has been propagated in pop culture. This year, for fun, I have decided to be a little more creative than resorting to my usual trick of showing the trailer for Lawnmower Man or Ready Player One – I’ve made a quick mashup of the two, which I think shows quite nicely how the underlying narrative of the ‘virtual’ being somehow counterposed to the ‘real’, but also in some way ‘crossing over’, is an enduring theme.

The other enduring theme here is that the forms of gameplay drawn upon are characterised as highly masculine (and normatively heterosexual – no queering of the digital here, sadly), which is something I will try to also blog about sometime…

Other film examples might include:

“Merger” by Keiichi Matsuda – automation, work and ‘replacement’

A still from the 360-degree video "Merger" by Keiichi Matsuda
“With automation disrupting centuries-old industries, the professional must reshape and expand their service to add value. Failure is a mindset. It is those who empower themselves with technology who will thrive.
“Merger is a new film about the future of work, from cult director/designer Keiichi Matsuda (HYPER-REALITY). Set against the backdrop of AI-run corporations, a tele-operator finds herself caught between virtual and physical reality, human and machine. As she fights for her economic survival, she finds herself immersed in the cult of productivity, in search of the ultimate interface. This short film documents her last 4 minutes on earth.”

I came across the most recent film by Keiichi Matsuda, which concerns a possible future of work, with the protagonist embedded in an (aesthetically Microsoft-style) augmented reality of screen-surfaces, and in which the narrative denouement is a sort of trans-human ‘uploading’ moment.

I like Matsuda’s work. I think he skilfully and playfully provokes particular sorts of conversations, mostly about what we used to call ‘immersion’ and the nature of mediation. This has, predictably, happened in terms of human vs. AI vs. eschatology (etc. etc.) sorts of narratives in various outlets (e.g. the Verge). The first time I encountered his work was at a Passenger Films event at which Rob Kitchin talked about theorisations of mediation in relation to both Matsuda’s work and the (original) Disney film ‘Tron‘.

What is perhaps (briefly) interesting here are two things:

  1. The narrative is a provocative short story that asks us to reflect upon how our world of work and technological development gets us from now (the status quo) to an apparent future state of affairs, which carries with it certain kinds of ethical, normative and political contentions. So, this is a story that piggybacks the growing narrative of ‘post-work’ or widespread automation of work by apparently ‘inhuman’ technologies (i.e. A.I.) and provokes debate about the roles of ‘technology’ and ‘work’ and what it means to be ‘human’. Interestingly, this (arguably) places “Merger” in the genre of ‘fantasy’ rather than ‘science fiction’ – it is, after all, an eschatological story (I don’t see this final point as a negative). I suppose it could also be seen as a fictional suicide note but I’d rather not dwell on that…
  2. The depiction of the interface and the interaction with the technology-world of the protagonist – and indeed the depiction of these within a 360-degree video – are as important as the story to what the video is signifying. By which I mean – like the videos I called ‘vision videos’ back in 2009/10 (which (in some cases) might be called ‘design fiction’ or ‘diegetic prototypes’) – this video is also trying to show you, and perhaps sell you, the idea of a technology (Matsuda recently worked for Leap Motion). As I and others have argued – the more familiar audiences are with prospective/speculative technologies, the more likely we are (perhaps) to sympathise with their funding/production/marketing and ultimately to adopt them.

Talk – Plymouth, 17 Oct: ‘New geographies of automation?’

Rachael in the film Blade Runner

I am looking forward to visiting Plymouth (tomorrow), the 17th October, to give a Geography department research seminar. It’s been nearly twenty years (argh!) since I began my first degree, in digital art, at Plymouth, so I’m looking forward to returning. I’ll be talking about a couple of aspects of ‘The Automative Imagination’ under a slightly different title – ‘New geographies of automation?’ The talk will take in archival BBC and newspaper automation anxieties, management consultant magical thinking (and the ‘Fourth Industrial Revolution’), gendered imaginings of domesticity (with the Jetsons amongst others) and some slightly under-cooked (at the moment) thoughts about ‘agency’ (what kinds of ‘beings’ or ‘things’ can do what kinds of action).

Do come along if you’re free and happen to be in the glorious gateway to the South West that is Plymouth.

CFP> Intelligent Futures: automation, AI & cognitive ecologies

statue of a man holding his head with his right hand

This looks like an interesting conference. Also – the keynote is Prof. Joanna Zylinska who really is both an excellent researcher and a wonderful speaker.

Call For Papers

Intelligent Futures: Automation, AI and Cognitive Ecologies

A Postgraduate Conference supported by CHASE DTP and Sussex Humanities Lab

1–2 October 2018, University of Sussex (UK)

CHASE DTP and the Sussex Humanities Lab (University of Sussex) seek to engage doctoral and early-career researchers working on philosophical, cultural and literary approaches to Artificial Intelligence. The aim of the event is to bring scholars from the humanities into discussion with their peers from the social sciences, informatics and engineering, psychology and the life sciences. The conference will promote critical and speculative engagements with questions of technical cognition, with special emphasis on sustainability and the emergence of new planetary ecologies of thought.

We are looking for papers addressing a wide range of approaches to AI. These could include, but need not be limited to, the following:

  • Natural and technical cognition
  • Automation
  • Planetary computing
  • Artificial Lives and Digital Selves
  • Narrative, Meaning and Images of the Future
  • Materiality of Memory
  • Sustainability and Technology

Please send a short abstract (250 words) for a 20-minute paper to intelligentfutures@sussex.ac.uk by 15 August 2018.

Conference Organising Committee:

Programme Chairs: M. Beatrice Fazi (Sussex) and Michael Jonik (Sussex)

CHASE Chair: Rob Witts (Sussex)

Administrative Assistance and Website: Gabriel Chin (Sussex)

Conference Website:

http://intelligentfutures.org/

Some more A.I. links

Twiki the robot from Buck Rogers

This post contains some tabs I have had open in my browser for a while that I’m pasting here both to save them in a place I may remember to look and to share them with others that might find them of interest. I’m afraid I don’t have time, at present, to offer any cogent commentary or analysis – just simply to share…

Untold A.I. – “What stories are we not telling ourselves about A.I.?”, Christopher Noessel: An interesting attempt to survey popular sci-fi stories of A.I., compare them to contemporary A.I. research manifestos, and identify where we might not be telling ourselves stories about the things people are actually trying to do.

The ethics of crashes with self-driving cars: A roadmap, Sven Nyholm: A two-part series of papers [one and two ($$) / one and two (open)] published in Philosophy Compass concerning how to think through the ethical issues associated with self-driving cars. Nyholm recently talked about this with John Danaher on his podcast.

WEF on the Toronto Declaration and the “cognitive bias codex”: A post on the World Economic Forum’s website about “The Toronto Declaration on Machine Learning” on guiding principles for protecting human rights in relation to automated systems. As part of the post they link to a nice diagram about cognitive bias – the ‘cognitive bias codex‘.

RSA report on public engagement with AI: “Our new report, launched today, argues that the public needs to be engaged early and more deeply in the use of AI if it is to be ethical. One reason why is because there is a real risk that if people feel like decisions about how technology is used are increasingly beyond their control, they may resist innovation, even if this means they could lose out on benefits.”

Artificial Unintelligence, Meredith Broussard: “In Artificial Unintelligence, Meredith Broussard argues that our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous amount of poorly designed systems. We are so eager to do everything digitally—hiring, driving, paying bills, even choosing romantic partners—that we have stopped demanding that our technology actually work.”

Data-driven discrimination: a new challenge for civil society: A blogpost on the LSE ‘Impact of Soc. Sci.’ blog: “Having recently published a report on automated discrimination in data-driven systems, Jędrzej Niklas and Seeta Peña Gangadharan explain how algorithms discriminate, why this raises concerns for civil society organisations across Europe, and what resources and support are needed by digital rights advocates and anti-discrimination groups in order to combat this problem.”

‘AI and the future of work’ – talk by Phoebe Moore: Interesting talk transcript with links to videos. Snippet: “Human resource and management practices involving AI have introduced the use of big data to make judgements to eliminate the supposed “people problem”. However, the ethical and moral questions this raises must be addressed, where the possibilities for discrimination and labour market exclusion are real. People’s autonomy must not be forgotten.”

Government responds to report by Lords Select Committee on Artificial Intelligence: “The Select Committee on Artificial Intelligence receives the Government response to the report: AI in the UK: Ready, willing and able?, published on 16 April 2018.”

How a Pioneer of Machine Learning Became One of Its Sharpest Critics, Kevin Hartnett – The Atlantic: “Judea Pearl helped artificial intelligence gain a strong grasp on probability, but laments that it still can’t compute cause and effect.”

Practising speculation and tech futures

Glitched AT&T 1990s advert

I’ve had a sort of moment of realisation this morning that a bunch of tabs I’ve had open, saved, reopened (etc etc) for the past few months are all more-or-less about doing speculative work around A.I., automation and suchlike.

This is interesting for me cos I wrote a PhD (and I am by no means the only one) about rationales for and forms of speculative practice in computing R&D (my fieldwork for this was, soberingly, now approximately ten years ago). It’s also interesting cos I have, in the last eight or so years, pitched for funding to do this sort of work and miserably failed three times.

I think what interests me most is the ways in which storytelling is more-or-less the method. I’m not sure how good we are at this, as academics. There’s some good work that analyses speculative things, such as architects’ visualisations, but I’m not sure I’ve seen much work doing speculation that is not design-oriented. I am not seeking to criticise speculative design practices – I really admire that work – I just wonder if there is a way of de-centring the ‘design’ bit to engage in broader forms of ‘speculation’. I’m also not sure how one can tread the line between evoking particular kinds of scenario/story (or dare I say imaginative geography) and affirming them. Likewise, I don’t think it is sufficient to simply refer to Black Mirror – it’s fun but it’s not the only way of doing speculation about technology (as afrofuturism demonstrates). I don’t think we want to merely replicate the sorts of ‘visioning’ practices of the likes of Microsoft, Samsung or Beko, not because they’re not interesting but because I’d like to think academics doing this kind of thing want to critically reflect on, not simply propose (or impose!), possibilities. Playful examples that I think are successful include Superflux’s excellent “Uninvited Guests” – though again, this is perhaps more design-oriented: it’s more about the function in relation to the individual rather than the kinds of world that are necessary for those functions to work.

I do not claim any special insight here – I’m curious about speculative methods – they seem to have some analytical/explanatory/critical power, but that also seems to be rather hard to negotiate. In practice, I think you may have to be in the right context, and I’m not convinced academic geography is (without quite a bit of work, given particular kinds of disciplinary assumptions and proclivities – happy to be proven wrong!), and you may have to work with non-academic partners in a way I am not skilled in doing. Good examples, I think, are work like Anne’s Counting Sheep project, which is a canonical example of interesting and provocative speculative design. As I’ve said – I’m not so sure about where non-design-oriented work sits and how this is, or can be, done well. I’m interested in some of the attempts anyway, and here are some examples, listed below.

UPDATE: Sam Hind shared this piece from Warwick concerning issue mapping techniques that allowed for speculative reflection on driverless cars:

Surfacing Social Aspects of Driverless Cars with Creative Methods, Noortje Marres, Rebecca Cain, Ana Gross, Lucy Kimbell and Arun Ulahannan – “The Warwick workshop explored the potential of creative social research methods – such as design research and debate mapping – to surface still hidden social dynamics around the operation of intelligent technologies in everyday environments, and to complement more established approaches to societal testing of these technologies.”

This made me also think of the speculative policy making practices that arose from “Open Policy” work at the British Cabinet Office’s PolicyLab, which I think involved folks from Strange Telemetry and Superflux.


Crafting stories of technology and progress: five considerations, Cian O’Donavan & Johan Schot – From Technology Stories, the website of the Society for the History of Technology, comes this brief post that refers to the longer report from the International Panel on Social Progress concerning the fairly classic Science and Technology Studies issue of how to tell stories about “progress” without necessarily resorting to (unreflexive) forms of determinism. There are four ‘stories’ by several researchers linked from this article that address a number of issues.

Economic Science Fictions, edited by William Davies – I’m not really sure why the “science” is in the title but there we go… From the blurb: “Rooted in the sense that our current economic reality is no longer credible or viable, this collection treats our economy as a series of fictions and science fiction as a means of anticipating different economic futures.”

Designing the future, Justin Reynolds – reviews the above book on the New Socialist site, with some interesting commentary.

Future Perfect conference/event, coordinated by Data & Society – characterised as “speculative fiction in the public interest” this event was first run in 2017 as an invitation-only thing but had an open call in 2018. From the 2018 event blurb: “Future Perfect is an annual workshop and conference dedicated to different approaches to understanding, living in, and challenging dominant narratives of speculative fiction in a time where powerful actors in technology and politics treat the future like a foregone conclusion.”

Robot Futures, Illah Reza Nourbakhsh – “Future robots will have superhuman abilities in both the physical and digital realms. They will be embedded in our physical spaces, with the ability to go where we cannot, and will have minds of their own, thanks to artificial intelligence. In Robot Futures, the roboticist Illah Reza Nourbakhsh considers how we will share our world with these creatures, and how our society could change as it incorporates a race of stronger, smarter beings.”

Brian Cox, cyberpunk

Man with a colander on his head attached to electrodes

Doing public comms of science is hard, and it’s good to have people trying to make things accessible, and to excite and interest people in finding things out about the world… but it can tip over into being daft pretty easily.

Here’s the great D:ream-er Brian Cox going all cyberpunk on brain/mind uploads… (note the lad raising his eyes to the ceiling at 0:44 🙂 )

This made me wonder how Hubert Dreyfus would attempt to dispel the d:ream (don’t all groan at once!) as the ‘simulation of brains/minds’ is precisely the version of AI that Dreyfus was critiquing in the 1970s. If you’re interested in further discussion of ‘mind uploading’, and not my flippant remarks, see John Danaher’s writing on this on his excellent blog.

Our friends electric

Another wonderful video from Superflux exploring how to think about the kinds of relationships we may or may not have with our ‘smart’ stuff…

Our Friends Electric from Superflux on Vimeo.
Our Friends Electric is a short film by Superflux about voice-enabled AI assistants who ask too many questions, swear & recite Marxist texts.

The film was commissioned by Mozilla’s Open IoT Studio. The devices in the film were made by Loraine Clarke and Martin Skelly from Mozilla’s Open IoT Studio and the University of Dundee.

For more information about the project visit: http://superflux.in/index.php/work/friends-electric/#

Talking with Mikayla

Talking with Mikayla, the Museum of Contemporary Commodities guide. Image credit: Mike Duggan.

At the RGS-IBG Annual International Conference 2017, co-originator of the Museum of Contemporary Commodities (MoCC) Paula Crutchlow and I staged a conversation with Mikayla the MoCC guide, a hacked ‘My Cayla Doll’. This was part of two sessions that capped off the presence of MoCC at the RGS-IBG and was performed alongside a range of other provocations on the theme(s) of ‘data-place-trade-value’. The doll was only mildly disobedient and it was fun to be able to show the subversion of an object of commercial surveillance in a playful way. Below are the visuals that were displayed during the conversation, with additional sound…

For more, please do go and read Paula’s excellent blogpost about Mikayla on the MoCC website.