Talk – Plymouth, 17 Oct: ‘New geographies of automation?’

Rachael in the film Blade Runner

I am looking forward to visiting Plymouth tomorrow, the 17th of October, to give a Geography department research seminar. It’s been nearly twenty years (argh!) since I began my first degree, in digital art, at Plymouth, so I’m looking forward to returning. I’ll be talking about a couple of aspects of ‘The Automative Imagination’ under a slightly different title – ‘New geographies of automation?’ The talk will take in archival BBC and newspaper automation anxieties, management consultant magical thinking (and the ‘Fourth Industrial Revolution’), gendered imaginings of domesticity (with the Jetsons amongst others) and some slightly under-cooked (at the moment) thoughts about ‘agency’ – what kinds of ‘beings’ or ‘things’ can do what kinds of action.

Do come along if you’re free and happen to be in the glorious gateway to the South West that is Plymouth.

CFP> Intelligent Futures: automation, AI & cognitive ecologies

statue of a man holding his head with his right hand

This looks like an interesting conference. Also – the keynote is Prof. Joanna Zylinska who really is both an excellent researcher and a wonderful speaker.

Call For Papers

Intelligent Futures: Automation, AI and Cognitive Ecologies

A Postgraduate Conference supported by CHASE DTP and Sussex Humanities Lab

1–2 October 2018, University of Sussex (UK)

CHASE DTP and the Sussex Humanities Lab (University of Sussex) seek to engage doctoral and early-career researchers working on philosophical, cultural and literary approaches to Artificial Intelligence. The aim of the event is to bring scholars from the humanities into discussion with their peers from the social sciences, informatics and engineering, psychology and the life sciences. The conference will promote critical and speculative engagements with questions of technical cognition, with special emphasis on sustainability and the emergence of new planetary ecologies of thought.

We are looking for papers addressing a wide range of approaches to AI. These could include, but need not be limited to, the following:

  • Natural and technical cognition
  • Automation
  • Planetary computing
  • Artificial Lives and Digital Selves
  • Narrative, Meaning and Images of the Future
  • Materiality of Memory
  • Sustainability and Technology

Please send a short abstract (250 words) for a 20-minute paper to intelligentfutures@sussex.ac.uk by 15 August 2018.

Conference Organising Committee:

Programme Chairs: M. Beatrice Fazi (Sussex) and Michael Jonik (Sussex)

CHASE Chair: Rob Witts (Sussex)

Administrative Assistance and Website: Gabriel Chin (Sussex)

Conference Website:

http://intelligentfutures.org/

Some more A.I. links

Twiki the robot from Buck Rogers

This post contains some tabs I have had open in my browser for a while that I’m pasting here both to save them in a place I may remember to look and to share them with others that might find them of interest. I’m afraid I don’t have time, at present, to offer any cogent commentary or analysis – just simply to share…

Untold A.I. – “What stories are we not telling ourselves about A.I.?”, Christopher Noessel: An interesting attempt to survey popular, sci-fi stories of A.I., compare them with contemporary A.I. research manifestos, and identify where we might not be telling ourselves stories about the things people are actually trying to do.

The ethics of crashes with self-driving cars: A roadmap, Sven Nyholm: A two-part series of papers [one and two ($$) / one and two (open)] published in Philosophy Compass concerning how to think through the ethical issues associated with self-driving cars. Nyholm recently talked about this with John Danaher on his podcast.

WEF on the Toronto Declaration and the “cognitive bias codex”: A post on the World Economic Forum’s website about “The Toronto Declaration on Machine Learning” and its guiding principles for protecting human rights in relation to automated systems. As part of the post they link to a nice diagram about cognitive bias – the ‘cognitive bias codex‘.

RSA report on public engagement with AI: “Our new report, launched today, argues that the public needs to be engaged early and more deeply in the use of AI if it is to be ethical. One reason why is because there is a real risk that if people feel like decisions about how technology is used are increasingly beyond their control, they may resist innovation, even if this means they could lose out on benefits.”

Artificial Unintelligence, Meredith Broussard: “In Artificial Unintelligence, Meredith Broussard argues that our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous amount of poorly designed systems. We are so eager to do everything digitally—hiring, driving, paying bills, even choosing romantic partners—that we have stopped demanding that our technology actually work.”

Data-driven discrimination: a new challenge for civil society: A blogpost on the LSE ‘Impact of Soc. Sci.’ blog: “Having recently published a report on automated discrimination in data-driven systems, Jędrzej Niklas and Seeta Peña Gangadharan explain how algorithms discriminate, why this raises concerns for civil society organisations across Europe, and what resources and support are needed by digital rights advocates and anti-discrimination groups in order to combat this problem.”

‘AI and the future of work’ – talk by Phoebe Moore: Interesting talk transcript with links to videos. Snippet: “Human resource and management practices involving AI have introduced the use of big data to make judgements to eliminate the supposed “people problem”. However, the ethical and moral questions this raises must be addressed, where the possibilities for discrimination and labour market exclusion are real. People’s autonomy must not be forgotten.”

Government responds to report by Lords Select Committee on Artificial Intelligence: “The Select Committee on Artificial Intelligence receives the Government response to the report: AI in the UK: Ready, willing and able?, published on 16 April 2018.”

How a Pioneer of Machine Learning Became One of Its Sharpest Critics, Kevin Hartnett – The Atlantic: “Judea Pearl helped artificial intelligence gain a strong grasp on probability, but laments that it still can’t compute cause and effect.”

Practising speculation and tech futures

Glitched AT&T 1990s advert

I’ve had a sort of moment of realisation this morning that a bunch of tabs I’ve had open, saved, reopened (etc etc) for the past few months are all more-or-less about doing speculative work around A.I., automation and suchlike.

This is interesting for me cos I wrote a PhD (and I am by no means the only one) about rationales for and forms of speculative practice in computing R&D (my fieldwork for this was, soberingly, now approximately ten years ago). It’s also interesting cos I have, in the last eight or so years, pitched for funding to do this sort of work and miserably failed three times.

I think what interests me most is the ways in which storytelling is more-or-less the method. I’m not sure how good we are at this, as academics. There’s some good work that analyses speculative things, such as architects’ visualisations, but I’m not sure I’ve seen much work doing speculation that is not design-oriented. I am not seeking to criticise speculative design practices – I really admire that work – I just wonder if there is a way of de-centring the ‘design’ bit to engage in broader forms of ‘speculation’. I’m also not sure how one can tread the line between evoking particular kinds of scenario/story (or dare I say imaginative geography) and affirming them. Likewise, I don’t think it is sufficient to simply refer to Black Mirror – it’s fun but it’s not the only way of doing speculation about technology (as afrofuturism demonstrates). I don’t think we want to merely replicate the sorts of ‘visioning’ practices of the likes of Microsoft, Samsung or Beko – not because they’re not interesting, but because I’d like to think academics doing this kind of thing want to critically reflect on, not simply propose (or impose!), possibilities. Playful examples that I think are successful include Superflux’s excellent “Uninvited Guests” – though again, this is perhaps more design-oriented: it’s more about the function in relation to the individual than about the kinds of world that are necessary for those functions to work.

I do not claim any special insight here – I’m curious about speculative methods: they seem to have some analytical/explanatory/critical power, but that also seems rather hard to negotiate. In practice, I think you may have to be in the right context – and I’m not convinced academic geography is (without quite a bit of work, given particular kinds of disciplinary assumptions and proclivities – happy to be proven wrong!) – and you may have to work with non-academic partners in a way I am not skilled in doing. Good examples, I think, are work like Anne’s Counting Sheep project, which is a canonical example of interesting and provocative speculative design. As I’ve said, I’m not so sure where non-design-oriented work sits and how it is, or can be, done well. I’m interested in some of the attempts anyway; here are some examples, listed below.

UPDATE: Sam Hind shared this piece from Warwick concerning issue mapping techniques that allowed for speculative reflection on driverless cars:

Surfacing Social Aspects of Driverless Cars with Creative Methods, Noortje Marres, Rebecca Cain, Ana Gross, Lucy Kimbell and Arun Ulahannan – “The Warwick workshop explored the potential of creative social research methods – such as design research and debate mapping – to surface still hidden social dynamics around the operation of intelligent technologies in everyday environments, and to complement more established approaches to societal testing of these technologies.”

This made me also think of the speculative policy making practices that arose from “Open Policy” work at the British Cabinet Office’s PolicyLab, which I think involved folks from Strange Telemetry and Superflux.

Crafting stories of technology and progress: five considerations, Cian O’Donovan & Johan Schot – From Technology Stories, the website of the Society for the History of Technology, comes this brief post that refers to the longer report from the International Panel on Social Progress, concerning the fairly classic Science and Technology Studies issue of how to tell stories about “progress” without necessarily resorting to (unreflexive) forms of determinism. Four ‘stories’ by several researchers, addressing a number of issues, are linked from the article.

Economic Science Fictions, edited by William Davies – I’m not really sure why the “science” is in the title but there we go… From the blurb: “Rooted in the sense that our current economic reality is no longer credible or viable, this collection treats our economy as a series of fictions and science fiction as a means of anticipating different economic futures.”

Designing the future, Justin Reynolds – reviews the above book on the New Socialist site, with some interesting commentary.

Future Perfect conference/event, coordinated by Data & Society – characterised as “speculative fiction in the public interest” this event was first run in 2017 as an invitation-only thing but had an open call in 2018. From the 2018 event blurb: “Future Perfect is an annual workshop and conference dedicated to different approaches to understanding, living in, and challenging dominant narratives of speculative fiction in a time where powerful actors in technology and politics treat the future like a foregone conclusion.”

Robot Futures, Illah Reza Nourbakhsh – “Future robots will have superhuman abilities in both the physical and digital realms. They will be embedded in our physical spaces, with the ability to go where we cannot, and will have minds of their own, thanks to artificial intelligence. In Robot Futures, the roboticist Illah Reza Nourbakhsh considers how we will share our world with these creatures, and how our society could change as it incorporates a race of stronger, smarter beings.”

Brian Cox, cyberpunk

Man with a colander on his head attached to electrodes

Doing public comms of science is hard, and it’s good to have people trying to make things accessible and to excite people about finding things out about the world… but it can tip over into being daft pretty easily.

Here’s the great D:ream-er Brian Cox going all cyberpunk on brain/mind uploads… (note the lad raising his eyes to the ceiling at 0:44 🙂 )

This made me wonder how Hubert Dreyfus would attempt to dispel the d:ream (don’t all groan at once!) as the ‘simulation of brains/minds’ is precisely the version of AI that Dreyfus was critiquing in the 1970s. If you’re interested in further discussion of ‘mind uploading’, and not my flippant remarks, see John Danaher’s writing on this on his excellent blog.

Our friends electric

Another wonderful video from superflux exploring how to think about the kinds of relationships we may or may not have with our ‘smart’ stuff…

Our Friends Electric from Superflux on Vimeo.
Our Friends Electric is a short film by Superflux about voice-enabled AI assistants who ask too many questions, swear & recite Marxist texts.

The film was commissioned by Mozilla’s Open IoT Studio. The devices in the film were made by Loraine Clarke and Martin Skelly from Mozilla’s Open IoT Studio and the University of Dundee.

For more information about the project visit: http://superflux.in/index.php/work/friends-electric/#

Talking with Mikayla

Talking with Mikayla, the Museum of Contemporary Commodities guide. Image credit: Mike Duggan.

At the RGS-IBG Annual International Conference 2017, co-originator of the Museum of Contemporary Commodities (MoCC) Paula Crutchlow and I staged a conversation with Mikayla the MoCC guide, a hacked ‘My Cayla Doll’. This was part of two sessions that capped off the presence of MoCC at the RGS-IBG and was performed alongside a range of other provocations on the theme(s) of ‘data-place-trade-value’. The doll was only mildly disobedient and it was fun to be able to show the subversion of an object of commercial surveillance in a playful way. Below are the visuals that were displayed during the conversation, with additional sound…

For more, please do go and read Paula’s excellent blogpost about Mikayla on the MoCC website.

Hyperland

Glitched image of a 1990s NASA VR experience

A bit of nostalgia… ‘practising tomorrows‘ and all that.

Lots of things to crit with the benefit of hindsight, which I’m sure some folks did – I mean, the peculiar sort of aesthetic policing implied is funny, and the fact that none of the folk used as talking heads can imagine a collaborative form of authorship is quite interesting. This programme came out in 1990, around the same time Berners-Lee was pioneering the web – a rather different, perhaps more “interactive”, vision of ‘multimedia’, insofar as with the web we can all contribute to the creation as well as the consumption of media [he writes in the dialog box of the “Add New Post” page of the WordPress interface]…

A slightly geeky thing I appreciate though is the very clear visual reference to the 1987 Apple Computer ‘video prototype’ called ‘Knowledge Navigator‘ (<–follow the link, third video down, see also), which I’m certain is deliberate.

19 ‘AI’-related links

Twiki the robot from Buck Rogers

Here are some links from various sources on what “AI” may or may not mean and what sorts of questions that prompts… If I were productive and not sleep-deprived (if… if… etc. etc.) I’d write something about this, but instead I’m just posting links.