Sophia – show robots and deception

Hanson Robotics' "Sophia"

…when first we practice to deceive…

Walter Scott

Prof Noel Sharkey has written a thoughtful, informative and entertaining piece for Forbes (so, for a general audience) that does some unpacking of ‘Sophia’ with reference to the history of ‘show robots’ (such as the Westinghouse show robots of the mid-C20, like Elektro, and of course Honda’s Asimo). It’s worth reading the piece in full but here are a couple of choice clips:

Sophia is not the first show robot to attain celebrity status. Yet accusations of hype and deception have proliferated about the misrepresentation of AI to public and policymakers alike. In an AI-hungry world where decisions about the application of the technologies will impact significantly on our lives, Sophia’s creators may have crossed a line. What might the negative consequences be? To get answers, we need to place Sophia in the context of earlier show robots.


The tradition extends back to the automata precursors of robots in antiquity. Moving statues were used in the temples of ancient Egypt and Greece to create the illusion of a manifestation of the gods. Hidden puppeteers pulled ropes and spoke with powerful booming voices emitted from hidden tubes. This is not so different from how show robots like Sophia operate today to create the illusion of a manifestation of AI.

For me, the biggest problem with the hype surrounding Sophia is that we have entered a critical moment in the history of AI where informed decisions need to be made. AI is sweeping through the business world and being delegated decisions that impact significantly on people’s lives, from mortgage and loan applications to job interviews, to prison sentences and bail guidance, to transport and delivery services, to medicine and care.


It is vitally important that our governments and policymakers are strongly grounded in the reality of AI at this time and are not misled by hype, speculation, and fantasy. It is not clear how much the Hanson Robotics team are aware of the dangers that they are creating by appearing on international platforms with government ministers and policymakers in the audience.

Sing the body electric… robots in music videos

Still from the video for All Is Full of Love by Björk

I recently saw the Chemical Brothers’ new-ish video for the song “Free Yourself”, featuring androids/robots apparently going feral and raving in a warehouse, and it made me consciously think about something I’ve known for some time – there are quite a few music videos with ‘robots’ in them.

So, here’s a very partial collection:

AI as organ -on|-ology

Kitt the 'intelligent' car from the TV show Knight Rider

Let’s begin with a postulate: there is either no “AI” – artificial intelligence – or every intelligence is, in fact, in some way artificial (following a recent talk by Bernard Stiegler). In doing so we commence from an observation that intelligence is not peculiar to one body; it is shared. A corollary is that there is no (‘human’) intelligence without artifice, insofar as we need to exteriorise thought (or what Stiegler refers to as ‘exosomatisation’) for that intelligence to function – as language, in writing and through tools – and that this is inherently collective. Further, we might argue that there is no AI. Taking that suggestion forward, we can say that there are, rather, forms of artificial (or functional) stupidity, following Alvesson & Spicer (2012: p. 1199), insofar as they inculcate forms of lack of capacity: “characterised by an unwillingness or inability to mobilize three aspects of cognitive capacity: reflexivity, justification, and substantive reasoning”. Following Alvesson & Spicer [& Stiegler] we might argue that such forms of stupidity are necessary passage points in our sense-making in/of the world, and thus are not morally ‘wrong’ or ‘negative’. Instead, the forms of functional stupidity that derive from technology/techniques are a form of pharmakon – both enabling and disabling in various registers of life.

Given such a postulate, we might categorise “AI” in particular ways. We might identify ‘AI’ not as ‘other’ to the ‘human’ but rather a part of our extended (exosomatic) capacities of reasoning and sense. This would be to think of AI ‘organologically’ (again following Stiegler) – as part of our widening, collective, ‘organs’ of action in the world. We might also identify ‘AI’ as an organising rationale in and of itself – a kind of ‘organon’ (following Aristotle). In this sense “AI” (the discipline, institutions and the outcome of their work [‘an AI’]) is/are an organisational framework for certain kinds of action, through particular forms of reasoning.

It would be tempting (in geographyland and across particular bits of the social sciences) to frame all of this as stemming from, or in terms of, an individual figure: ‘the subject’. In such an account, technology (AI) is a supplement that ‘the human subject’ precedes. ‘The subject’, in such an account, is the entity to which things get done by AI, but also the entity ultimately capable of action. Likewise, such an account might figure ‘the subject’ and its ‘other’ (AI) in terms of moral agency/patiency. However, for this postulate such a framing would be unhelpful (I would also add that thinking in terms of ‘affect’, especially through neuro-talk, would be just as unhelpful). If we think about AI organologically then we are prompted to think about the relation between what is figured as ‘the human’ and ‘AI’ (and the various other things that might be of concern in such a story) as ‘parasitic’ (in Derrida’s sense) – it’s a reciprocal (and, in Stiegler’s terms, ‘pharmacological’) relation with no a priori preceding entity. ‘Intelligence’ (and ‘stupidity’ too, of course) in such a formulation proceeds from various capacities for action/inaction.

If we don’t/shouldn’t think about Artificial Intelligence through the lens of the (‘sovereign’) individual ‘subject’ then we might look for other frames of reference. I think there are three recent articles/blogposts that may be instructive.

First, here’s David Runciman in the LRB:

Corporations are another form of artificial thinking machine, in that they are designed to be capable of taking decisions for themselves. Information goes in and decisions come out that cannot be reduced to the input of individual human beings. The corporation speaks and acts for itself. Many of the fears that people now have about the coming age of intelligent robots are the same ones they have had about corporations for hundreds of years.

Second, here’s Jonnie Penn riffing on Runciman in The Economist:

To reckon with this legacy of violence, the politics of corporate and computational agency must contend with profound questions arising from scholarship on race, gender, sexuality and colonialism, among other areas of identity.
A central promise of AI is that it enables large-scale automated categorisation. Machine learning, for instance, can be used to tell a cancerous mole from a benign one. This “promise” becomes a menace when directed at the complexities of everyday life. Careless labels can oppress and do harm when they assert false authority. 

Finally, here’s (the outstanding) Lucy Suchman discussing the ways in which figuring complex systems of ‘AI’-based categorisations as somehow exceeding our understanding does particular forms of political work that need questioning and resisting:

The invocation of Project Maven in this context is symptomatic of a wider problem, in other words. Raising alarm over the advent of machine superintelligence serves the self-serving purpose of reasserting AI’s promise, while redirecting the debate away from closer examination of more immediate and mundane problems in automation and algorithmic decision systems. The spectacle of the superintelligentsia at war with each other distracts us from the increasing entrenchment of digital infrastructures built out of unaccountable practices of classification, categorization, and profiling. The greatest danger (albeit one differentially distributed across populations) is that of injurious prejudice, intensified by ongoing processes of automation. Not worrying about superintelligence, in other words, doesn’t mean that there’s nothing about which we need to worry.
As critiques of the reliance on data analytics in military operations are joined by revelations of data bias in domestic systems, it is clear that present dangers call for immediate intervention in the governance of current technologies, rather than further debate over speculative futures. The admission by AI developers that so-called machine learning algorithms evade human understanding is taken to suggest the advent of forms of intelligence superior to the human. But an alternative explanation is that these are elaborations of pattern analysis based not on significance in the human sense, but on computationally-detectable correlations that, however meaningless, eventually produce results that are again legible to humans. From training data to the assessment of results, it is humans who inform the input and evaluate the output of the black box’s operations. And it is humans who must take the responsibility for the cultural assumptions, and political and economic interests, on which those operations are based and for the life-and-death consequences that already follow.

All of these quotes more-or-less exhibit my version of what an ‘organological’ take on AI might look like. Likewise, they illustrate the ways in which we might bring to bear a form of analysis that seeks to understand ‘intelligence’ as having ‘stupidity’ as a necessary component (it’s a pharmakon, see?), which in turn can be functional (following Alvesson & Spicer). In this sense, the framing of ‘the corporation’ from Runciman and Penn is instructive – AI qua corporation (as a thing, as a collective endeavour [a ‘discipline’]) has ‘parasitical’ organising principles through which the pharmacological tendencies of intelligence-stupidity play out.

I suspect this would also resonate strongly with Feminist Technology Studies approaches (following Judy Wajcman in particular) to thinking about contemporary technologies. An organological approach situates the knowledges that go towards and result from such an understanding of intelligence-stupidity. Likewise, to resist figuring ‘intelligence’ foremost in terms of the sovereign and universal ‘subject’ also resists the elision of difference. An organological approach as put forward here can (perhaps should[?]) also be intersectional.

That’s as far as I’ve got in my thinking-aloud, I welcome any responses/suggestions and may return to this again.

If you’d like to read more on how this sort of argument might play out in terms of ‘agency’, I blogged about this a little while ago.

ADD. If this sounds a little like the ‘extended mind’ (of Clark & Chalmers) or various forms of ‘extended self’ theory then it sort of is. What’s different is the starting assumptions: here, we’re not assuming a given (a priori) ‘mind’ or ‘self’. In Stiegler’s formulation the ‘internal’ isn’t realised until the external is apprehended: the mental interior is only recognised as such with the advent of the technical exterior. This is the aporia of the origin of ‘the human’ that Stiegler and Derrida diagnose, and that gives rise to the theory of ‘originary technics’. The interior and exterior, and with them the contemporary understanding of the experience of being ‘human’ and what we understand to be technology, are mutually co-constituted – and continue to be so [more here]. I choose to render this in a more-or-less epistemological, rather than ontological, manner – I am not so much interested in the categorisation of ‘what there is’ as in ‘how we know’.

Popular automative imagination (some novels)

Twiki the robot from Buck Rogers

I’ve had about six months of reading various versions of speculative/science fiction after not having read in that genre for a little while… so here’s a selection of books I’ve read (almost exclusively on an ereader) that have more-or-less been selected following the ‘people who read [a] also read [b]’ lists.

I’m not sure these books necessarily offer any novel insights but they do respond to the current milieu of imagining automation (AI, big data, platform-ing, robots, surveillance capitalism etc etc) and in that sense are a sort of very partial (and weird) guide to that imagination and the sorts of visions being promulgated.

I’d like to write more but I don’t have the time or energy so this is more or less a place-holder for trying to say something more interesting at a later date… I do welcome other suggestions though! Especially less conventionally Western ones.

ADD. Jennie Day kindly shared a recent blogpost by David Murakami Wood in which he makes some recommendations for SF books. Some of these may be of interest if you’re looking for wider recommendations. In particular, I agree with his recommendation of Okorafor’s “Lagoon”, which is a great novel.

(More) Gendered imaginings of automata

My Cayla Doll

A few more bits on how automation gets gendered in particular kinds of contexts and settings. In particular, the identification of ‘home’ or certain sorts of intimacy with certain kinds of domestic or caring work that then gets gendered is something that has been increasingly discussed.

Two PhD researchers I am lucky enough to be working with, Paula Crutchlow (Exeter) and Kate Byron (Bristol), have approached some of these issues from different directions. Paula has had to wrangle with this in a number of ways in relation to the Museum of Contemporary Commodities but it was most visible in the shape of Mikayla, the hacked ‘My Friend Cayla Doll’. Kate is doing some deep dives on the sorts of assumptions that are embedded into the doing of AI/machine learning through the practices of designing, programming and so on. They are not, of course, alone. Excellent work by folks like Kate Crawford, Kate Devlin and Gina Neff (below) inform all of our conversations and work.

Here’s a collection of things that may provoke thought… I welcome any further suggestions or comments 🙂

Alexa, does AI have gender?


Alexa is female. Why? As children and adults enthusiastically shout instructions, questions and demands at Alexa, what messages are being reinforced? Professor Neff wonders if this is how we would secretly like to treat women: ‘We are inadvertently reproducing stereotypical behaviour that we wouldn’t want to see,’ she says.

Prof Gina Neff in conversation with Ruth Abrahams, OII.

Predatory Data: Gender Bias in Artificial Intelligence

It has been reported that female-sounding assistive chatbots regularly receive sexually charged messages. It was recently cited that five percent of all interactions with Robin Labs, whose bot platform helps commercial drivers with routes and logistics, are sexually explicit. The fact that the earliest female chatbots were designed to respond to these suggestions deferentially or with sass was problematic as it normalised sexual harassment.

Vidisha Mishra and Madhulika Srikumar – Predatory Data: Gender Bias in Artificial Intelligence

The Gender of Artificial Intelligence

Chart showing that the gender of artificial intelligence (AI) is not neutral
The gendering, or not, of chatbots, digital assistants and AI movie characters – Tyler Schnoebelen

Consistently representing digital assistants as female hard-codes a connection between a woman’s voice and subservience.

Stop Giving Digital Assistants Female Voices – Jessica Nordell, The New Republic

“The good robot”

Anki Vector personal robot

A fascinating and very evocative example of the ‘automative imagination’ in action in the form of an advertisement for the “Vector” robot from a company called Anki.

How to narrate or analyse such a robot? Well, the advert runs through several almost-archetypal figures of ‘robot’ or automation. The cutesy and non-threatening pseudo-pet that the Vector invites us to assume it is marks the first. This owes a lot to Wall-E (also, the robots in Batteries Not Included and countless other examples) and the doe-eyed characterisation of the faithful assistant/companion/servant. The second is the all-seeing surveillant machine uploading all your data to “the cloud”. The third comprises two quasi-military monsters with shades of “The Terminator”, with a little bit of helpless-baby jeopardy for good measure. Finally, the brief nod to HAL 9000, and the flip of the master/slave relation that it represents, completes a whistle-stop tour of pop culture understandings of ‘robots’, stitched together in order to sell you something.

I assume that the Vector actually still does the kinds of surveillance it is sending up in the advert, but I have no evidence – there is no publicly accessible copy of the terms & conditions for the operation of the robot in your home. However, in an advertorial on ‘Robotics Business Review’, there is a quote that sort of pushes one to suspect that Vector is yet another device that on the face of it is an ‘assistant’ but is also likely to be hoovering up everything it can about you and your family’s habits in order to sell that data on:

“We don’t want a person to ever turn this robot off,” Palatucci said. “So if the lights go off and it’s on your nightstand and he starts snoring, it’s not going to work. He really needs to use his sensors, his vision system, and his microphone to understand the context of what’s going on, so he knows when you want to interact, and more importantly, when you don’t.”

If we were to be cynical we might ask: why else would it need to be able to do all of this?

Anki Vector “Alive and aware”

Regardless, the advert is a useful example of the bleed from fictional representations of ‘robots’ into contemporary commercial products we can take home – and perhaps even of what we might think of as camouflage for the increasingly prevalent ‘extractive’ business model of in-home surveillance.

“Emett” and “Miss Honeywell”

Twiki the robot from Buck Rogers

A couple of short films produced by British Pathé, both documenting what I guess were seen as whimsical takes on computerisation and automation originating from Honeywell. I don’t have much to say about these at the moment beyond noting the ways in which the videos demonstrate the biases and norms of their time (gender and sexism being the clearest here), but also the ways in which they say something about how ‘automation’, robots and novel forms of technology (and so on) have been bound up with ideas about invention (which again is coloured by contemporary assumptions about who does the inventing).

Thanks to Mar Hicks for sharing “Miss Honeywell” on Twitter.

The Computer by Emett (1966) – British Pathé
Miss Honeywell (1968) – British Pathé

CFP> International Labour Process Conference STREAM Artificial Intelligence, Technology and Work

Industrial factory robot arms

Via Phoebe Moore.

ILPC STREAM Artificial Intelligence, Technology and Work

INTERNATIONAL LABOUR PROCESS CONFERENCE

Artificial Intelligence, Technology and Work

ILPC 2019 Special Stream No. 5

Please submit abstracts via the International Labour Process Conference website (ilpc.org.uk) by the deadline of 26 October 2018.

Of all the social changes occurring over the past six or seven decades, perhaps most fascinating is the integration of computers and machines into the fabric of our lives and organizations. Machines are rapidly becoming direct competitors with humans for intelligence and decision-making powers. This is important for international labour process research because artificial intelligence (AI) brings about challenges and questions for how organizations, globally, are designed and established with respect to human resources planning and execution and industrial relations negotiations. We start with John McCarthy, who both coined and defined the term AI in 1955 as ‘making a machine behave in ways that would be called intelligent if humans were so behaving’. At the origin of the term, AI aligned humans directly with machines, expecting machines to behave symbolically like humans. Over time, programmers working on neural networks and machine learning have emphasised the cognitive rather than the symbolic. Now, AI is seen to have capabilities comparable to humans’ in both routine and non-routine ways, leading to new possibilities for automation. This draws on huge amounts of data, often produced originally by humans. In fact, every time we enter a search term on a computer we add to and train machinic ‘intelligence’. Every day, billions of actions are captured as part of this process, contributing to the development of AI. In doing so, people provide under-recognised cognitive and immaterial labour.
Therefore, this stream looks at the conditions and circumstances whereby machines begin to have the capacity to influence, and become integrated into, humans’ ways of thinking, decision-making and working. It also considers the possibilities of AI in resistance against neoliberal and even authoritarian capitalism in the global north and south. AI is a broad term that identifies the pinnacle of machine capabilities that have recently become possible based on: a) the extensive big data that has become available in organisations; b) data analytical tools, where programmers can identify what to track based on this data and what algorithms will allow one to gain the insights of interest; c) machine learning, where patterns across data sets can be identified; and d) AI, where the final frontier has become the ability of pattern recognition across myriad data sets that have already identified their own patterns. When applied to work and work design, the primary goals are efficiency, market capture, and control over workers.
The rise of autonomous machines leads to philosophical questions that Marx engaged with in theories of objectification and alienation. Later, critical theorists have dealt with these questions in labour process research, where technologies and digitalization have created unprecedented concerns for how workplaces and work design are structured and how control and resistance are pursued. In particular, the gig economy has become the frontline of these new changes. Workers here now face the automation of the management function, being supervised and even fired (or “deactivated”) without human intervention or interaction. This is creating intensified and precarious working conditions, leading to fragmentation over digital platforms and platform management methods (Moore and Joyce 2018), as well as new forms of resistance and solidarities. All of this is happening while workers’ own work is under the threat of digitalization, where control and resistance have taken new forms and humans are in danger of becoming resources for tools (see Moore 2018a, 2018b; Woodcock, 2017; Waters and Woodcock, 2017).
Ultimately, across the economy, technology and its integration may be leading to organisations that take on a life of their own. Human resource decisions are increasingly taken by algorithms, as new human resources techniques integrate machine learning into ‘people analytics’, where data patterns are used to make workplace decisions for hiring/firing/talent predictions, creating significant threats to the possibilities of workplace organising and social justice. Sometimes, AI-based decisions lead to automating aspects of the workplace, for example in the case of wearable devices in factories that allow human resource calculations based on AI and location management by GPS and RFID systems. In these ways and others, AI processes inform a number of decision-making processes and digitalized management methods that have led to significant changes to workplaces and working conditions. If machines can deal with ethically based questions and begin to mimic the nuances of experience and human judgement, will they become participants in humans’ already manifest ‘learned helplessness’? While currently humans train AI with the use of big data, could machines begin to train humans to be helpless?

This call builds upon the ‘Artificial Intelligence. A service revolution?’ stream that featured at the 36th ILPC conference in Buenos Aires. This year’s stream is intended as a forum to bring together researchers engaged with the topics of labour process, political economy, technology, and AI to discuss this topic. We invite submissions on the following topics (not limited to, but also considering the need not to overlap with other streams):
-The effect of AI on the labour process, where control and resistance rub against debates about exploitation vs. empowerment
-The implication of algorithmic management and control on the labour process, work replacement, and/or intensification from the factory to the office
-The “black box” of AI and related practices, algorithmic decision support, people analytics, performance management
-The impact of AI on the Global South: geographies and variegation of AI implementation, direct and indirect impact on jobs and differential effects of diverse socio-political setups
-Resistance and organising against/with AI and social media

Special Issue: We are also considering a submission for a journal special issue (though contributions may be requested before the conference). Please email Phoebe Moore pm358@leicester.ac.uk immediately if this is of interest.

Stream Organisers:

  • Juan Grigera (CONICET, Universidad de Quilmes, Buenos Aires, Argentina),
  • Lydia Hughes (Ruskin College, Oxford, UK),
  • Phoebe Moore (University of Leicester, School of Business, UK),
  • Jamie Woodcock (Oxford Internet Institute, University of Oxford, UK)

Please feel free to contact the stream organisers with any informal inquiries.

For information on the ILPC 2019 and the Calls for Papers for the General Conference and the other Special Streams please go to https://www.ilpc.org.uk/

References
Moore, P. (2018a): The Quantified Self in Precarity: Work, Technology and What Counts, Advances in Sociology series (Abingdon, Oxon: Routledge).
Moore, P. (2018b): ‘The Threat of Physical and Psychosocial Violence and Harassment in Digitalized Work’ International Labour Organization, ACTRAV, Geneva: Switzerland.
Woodcock, J. (2017): Working the Phones: Control and Resistance in Call Centres, London: Pluto.
Waters, F. and Woodcock, J. (2017): ‘Far From Seamless: a Workers’ Inquiry at Deliveroo’, Viewpoint Magazine.

40 years of automation anxiety in the UK through BBC clips [video]

Industrial factory robot arms

I’ve just done a rough edit of some snippets from BBC programmes that I think show an interesting pattern in the ways that automation has been discussed by the UK national broadcaster over the last 40 years. In each case, automation is framed as a significant issue that urgently needs to be addressed – and yet that still hasn’t happened.

Out of the three programmes, the first two are fairly significant in their onward influence.

The first, the 1978 Horizon episode “Now the Chips Are Down”, was reportedly a significant influence on the BBC’s own Computer Literacy Project, which spawned the BBC Micro computer.

The second, the 1980 (middle of three) episode(s) of “The Silicon Factor”, was part of the Computer Literacy Project. Alongside the three-part series, the producers created a report for the BBC Continuing Education Department, “Microelectronics”, which was commissioned by the outgoing late-1970s Labour government’s Department for Education and the Manpower Services Commission. I thoroughly recommend watching the programmes and looking at the report if you’re interested in the histories of computing and automation in the UK.

The third – as far as I know, less significant in its onward influence – is a 2015 episode of Panorama: “Could a Robot Do My Job?” It is interesting here because the rhetoric is nearly identical to that of “The Silicon Factor” – we need to take advantage of the revolution.

I’ve got more to say about this and I need to do a bit more thinking but wanted to share, because I think it’s interesting! This video (or a revised version) will form a part of my paper for a double session on ‘New Geographies of Automation(?)‘ at the RGS-IBG conference in August 2018 concerning the ways that we imagine automation.