Robots that are repeatedly coming, still

Industrial factory robot arms

In the second series of Tim Harford’s Fifty Things That Made the Modern Economy there is an interesting tendency to highlight how some of the ‘things’ tell wider stories about automation. There are two things I’d pull out here.

First, there’s the issue of job or task displacement. Harford argues that, for example, spreadsheets automate certain elements of accountancy but make accountancy so much more efficient that more accountancy takes place. Quite a nice, concise story about automation. This is indicative of a wider argument that often gets made about automation, perhaps in contradistinction to the ‘robots are stealing jobs’ hysteria – that automation may involve technology replacing people in certain tasks but often results in new tasks, or new forms of work (e.g. in the WEF ‘Future of Jobs Report 2018’).

Second, there’s the issue of us being told, by those with particular interests in automation and robotics, that robots are about to replace a particular kind of work. This is a story that gets trotted out rather a lot. ‘The robots are coming’ is a phrase often repeated in newspaper and web headlines. There is a host of ‘packages’ for modern, and not-so-modern, news programmes about a ‘new’ machine that is going to replace a particular kind of worker. Harford gives a great example right at the end of the programme about bricks. We get through a lot of bricks, and laying them into walls and building those walls into buildings is labour-intensive. There is a ‘new’ robot to displace that work: Construction Robotics’ Semi-Automated Mason (SAM – great name, eh?) works alongside builders to speed up building walls (video below).

The thing is – this is not actually new. As Harford points out in the ‘bricks’ programme, this is a story that has been told before. In the 1960s Pathé News reported on a remarkably similar mechanical system: the ‘motor mason’ (video below).

We can see, then, that in Harford’s popular economics podcast automation is a common theme – just as it is in wider discussions about social and political-economic ‘progress’. Yet it also nicely demonstrates some recurring tropes. First, there are now fairly established narratives about automation in relation to ‘jobs’ that are told in different ways, depending upon your political or theoretical persuasion – job ‘replacement’ and/or ‘creation’. Second, there is a common subsequent narrative when the ‘replacement’ story is playing out – that of the clever machine that is going to do a particular worker, such as a bricklayer, out of their job. Here we also see how that narrative can keep being repeated: the robot is always coming but, perhaps sometimes, not quite arriving.

"The robots are coming" headline from the Guardian in 1986
"The robots are coming" headline from the Guardian in 2019

“AI will displace 40 percent of world’s jobs in as soon as 15 years” – Kai-Fu Lee

Industrial factory robot arms

In a widely trailed CBS ‘60 Minutes’ interview, the AI-pioneer-cum-venture-capitalist Kai-Fu Lee makes the sorts of heady predictions about job replacement/displacement that the media like to lap up. The automative imagination of ‘automation as progress’ in full swagger…

We should perhaps see this in the context of, amongst other things, geopolitical machinations (i.e. China-USA) around trade and intellectual property; a recently published book; a wider trend for claims about robotic process automation (especially in relation to ‘offshoring‘); and a large investment fund predicated upon ‘disruption’.

CFP> International Labour Process Conference STREAM Artificial Intelligence, Technology and Work

Industrial factory robot arms

Via Phoebe Moore.

ILPC STREAM Artificial Intelligence, Technology and Work

INTERNATIONAL LABOUR PROCESS CONFERENCE

Artificial Intelligence, Technology and Work

ILPC 2019 Special Stream No. 5

Please submit abstracts via the International Labour Process Conference website (ilpc.org.uk) by the deadline of 26 October 2018.

Of all the social changes occurring over the past six or seven decades, perhaps most fascinating is the integration of computers and machines into the fabric of our lives and organizations. Machines are rapidly becoming direct competitors with humans for intelligence and decision-making powers. This is important for international labour process research because artificial intelligence (AI) brings about challenges and questions for how organizations, globally, are designed and established with respect to human resources planning and execution and industrial relations negotiations. We start with John McCarthy, who coined and defined the term in 1955: AI involves ‘making a machine behave in ways that would be called intelligent if humans were so behaving’. At the origin of the term, AI aligned humans directly with machines, expecting machines to behave symbolically like humans. Over time, programmers working on neural networks and machine learning have emphasised the cognitive rather than the symbolic. Now, AI is seen to have capabilities comparable to humans in both routine and non-routine ways, leading to new possibilities for automation. This draws on huge amounts of data, often produced originally by humans. In fact, every time we enter a search term on a computer we add to and train machinic ‘intelligence’. Every day, billions of actions are captured as part of this process, contributing to the development of AI. In doing so, people provide under-recognised cognitive and immaterial labour.
Therefore, this stream looks at the conditions and circumstances whereby machines begin to have the capacity to influence, and become integrated into, humans’ ways of thinking, decision-making and working. It also considers the possibilities of AI in resistance against neoliberal and even authoritarian capitalism in the global north and south. AI is a broad term that identifies the pinnacle of machine capabilities that have recently become possible based on a) the extensive big data that has become available in organisations, b) data analytical tools, where programmers can identify what to track based on this data and which algorithms will allow one to gain the insights of interest, c) machine learning, where patterns across data sets can be identified, and d) AI, where the final frontier has become the ability to recognise patterns across myriad data sets that have already identified their own patterns. When applied to work and work design, the primary goals are efficiency, market capture, and control over workers.
The rise of autonomous machines leads to philosophical questions that Marx engaged with in theories of objectification and alienation. Critical theorists have since dealt with these questions in labour process research, where technologies and digitalization have created unprecedented concerns for how workplaces and work design are structured and how control and resistance are pursued. In particular, the gig economy has become the frontline of these new changes. Workers here now face the automation of the management function, being supervised and even fired (or “deactivated”) without human intervention or interaction. This is creating intensified and precarious working conditions, leading to fragmentation across digital platforms and platform management methods (Moore and Joyce 2018), as well as new forms of resistance and solidarities. All of this is happening while workers’ own work is under the threat of digitalization, where control and resistance have taken new forms and humans are in danger of becoming resources for tools (see Moore 2018a, 2018b; Woodcock, 2017; Waters and Woodcock, 2017).
Ultimately, across the economy, technology and its integration may be leading to organisations that take on a life of their own. Human resource decisions are increasingly taken by algorithms, with new human resources techniques integrating machine learning into what is called ‘people analytics’, in which data patterns are used to make workplace decisions about hiring, firing and talent prediction, creating significant threats to the possibilities of workplace organising and social justice. Sometimes, AI-based decisions lead to automating aspects of the workplace, for example in the case of wearable devices in factories that allow human resource calculations based on AI and location-management by GPS and RFID systems. In these ways and others, AI processes inform a number of decision-making processes and digitalized management methods that have led to significant changes to workplaces and working conditions. If machines can deal with ethically based questions and begin to mimic the nuances of experience and human judgement, will they become participants in humans’ already manifest ‘learned helplessness’? While humans currently train AI with the use of big data, could machines begin to train humans to be helpless?

This call builds upon the ‘Artificial Intelligence. A service revolution?’ stream that featured at the 36th ILPC conference in Buenos Aires. This year’s stream is intended as a forum to bring together researchers engaged with the topics of labour process, political economy, technology, and AI. We invite submissions on the following topics (not limited to these, though mindful of the need not to overlap with other streams):
-The effect of AI on the labour process, where control and resistance rub against debates about exploitation vs. empowerment
-The implication of algorithmic management and control on the labour process, work replacement, and/or intensification from the factory to the office
-The “black box” of AI and related practices, algorithmic decision support, people analytics, performance management
-The impact of AI on the Global South: geographies and variegation of AI implementation, direct and indirect impact on jobs and differential effects of diverse socio-political setups
-Resistance and organising against/with AI and social media

Special Issue: We are also considering a submission for a journal special issue (though contributions may be requested before the conference). Please email Phoebe Moore pm358@leicester.ac.uk immediately if this is of interest.

Stream Organisers:

  • Juan Grigera (CONICET, Universidad de Quilmes, Buenos Aires, Argentina),
  • Lydia Hughes (Ruskin College, Oxford, UK),
  • Phoebe Moore (University of Leicester, School of Business, UK),
  • Jamie Woodcock (Oxford Internet Institute, University of Oxford, UK)

Please feel free to contact the stream organisers with any informal inquiries.

For information on the ILPC 2019 and the Calls for Papers for the General Conference and the other Special Streams please go to https://www.ilpc.org.uk/

References
Moore, P. (2018a): The Quantified Self in Precarity: Work, Technology and What Counts, Advances in Sociology series (Abingdon, Oxon: Routledge).
Moore, P. (2018b): ‘The Threat of Physical and Psychosocial Violence and Harassment in Digitalized Work’ International Labour Organization, ACTRAV, Geneva: Switzerland.
Woodcock, J. (2017): Working the Phones: Control and Resistance in Call Centres, London: Pluto.
Waters, F. and Woodcock, J. (2017): ‘Far From Seamless: a Workers’ Inquiry at Deliveroo’, Viewpoint Magazine.

Some more A.I. links

Twiki the robot from Buck Rogers

This post contains some tabs I have had open in my browser for a while that I’m pasting here both to save them in a place I may remember to look and to share them with others that might find them of interest. I’m afraid I don’t have time, at present, to offer any cogent commentary or analysis – just simply to share…

Untold A.I. – “What stories are we not telling ourselves about A.I.?”, Christopher Noessel: An interesting attempt to look at popular sci-fi stories of A.I., compare them with contemporary A.I. research manifestos, and identify where we might not be telling ourselves stories about the things people are actually trying to do.


The ethics of crashes with self-driving cars: A roadmap, Sven Nyholm: A two-part series of papers [one and two ($$) / one and two (open)] published in Philosophy Compass concerning how to think through the ethical issues associated with self-driving cars. Nyholm recently talked about this with John Danaher on his podcast.

WEF on the Toronto Declaration and the “cognitive bias codex”: A post on the World Economic Forum’s website about “The Toronto Declaration on Machine Learning”, on guiding principles for protecting human rights in relation to automated systems. As part of the post they link to a nice diagram about cognitive bias – the ‘cognitive bias codex’.

RSA report on public engagement with AI: “Our new report, launched today, argues that the public needs to be engaged early and more deeply in the use of AI if it is to be ethical. One reason why is because there is a real risk that if people feel like decisions about how technology is used are increasingly beyond their control, they may resist innovation, even if this means they could lose out on benefits.”

Artificial Unintelligence, Meredith Broussard: “In Artificial Unintelligence, Meredith Broussard argues that our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous amount of poorly designed systems. We are so eager to do everything digitally—hiring, driving, paying bills, even choosing romantic partners—that we have stopped demanding that our technology actually work.”

Data-driven discrimination: a new challenge for civil society: A blogpost on the LSE ‘Impact of Soc. Sci.’ blog: “Having recently published a report on automated discrimination in data-driven systems, Jędrzej Niklas and Seeta Peña Gangadharan explain how algorithms discriminate, why this raises concerns for civil society organisations across Europe, and what resources and support are needed by digital rights advocates and anti-discrimination groups in order to combat this problem.”

‘AI and the future of work’ – talk by Phoebe Moore: Interesting talk transcript with links to videos. Snippet: “Human resource and management practices involving AI have introduced the use of big data to make judgements to eliminate the supposed “people problem”. However, the ethical and moral questions this raises must be addressed, where the possibilities for discrimination and labour market exclusion are real. People’s autonomy must not be forgotten.”

Government responds to report by Lords Select Committee on Artificial Intelligence: “The Select Committee on Artificial Intelligence receives the Government response to the report: AI in the UK: Ready, willing and able?, published on 16 April 2018.”

How a Pioneer of Machine Learning Became One of Its Sharpest Critics, Kevin Hartnett – The Atlantic: “Judea Pearl helped artificial intelligence gain a strong grasp on probability, but laments that it still can’t compute cause and effect.”

“The Rise of the Robot Reserve Army” – interesting working paper

Charlie Chaplin in Modern Times

Saw this via Twitter somehow…

The Rise of the Robot Reserve Army: Automation and the Future of Economic Development, Work, and Wages in Developing Countries – Working Paper 487

Lukas Schlogl and Andy Sumner

Employment generation is crucial to spreading the benefits of economic growth broadly and to reducing global poverty. And yet, emerging economies face a contemporary challenge to traditional pathways to employment generation: automation, digitalization, and labor-saving technologies. An estimated 1.8 billion jobs—or two-thirds of the current labor force of developing countries—are susceptible to automation from today’s technological standpoint. Cumulative advances in industrial automation and labor-saving technologies could further exacerbate this trend. Or will they? In this paper we: (i) discuss the literature on automation; and in doing so (ii) discuss definitions and determinants of automation in the context of theories of economic development; (iii) assess the empirical estimates of employment-related impacts of automation; (iv) characterize the potential public policy responses to automation; and (v) highlight areas for further exploration in terms of employment and economic development strategies in developing countries. In an adaptation of the Lewis model of economic development, the paper uses a simple framework in which the potential for automation creates “unlimited supplies of artificial labor”, particularly in the agricultural and industrial sectors, due to technological feasibility. This is likely to create a push force for labor to move into the service sector, leading to a bloating of service-sector employment and wage stagnation but not to mass unemployment, at least in the short-to-medium term.

CFP> ‘The Spectre of Artificial Intelligence’

Still from George Lucas' THX1138

An interesting CFP for Spheres: Journal of Digital Culture. Heard through CDC Leuphana:

‘The Spectre of Artificial Intelligence’

Over the last few years we have been witnessing a shift in the conception of artificial intelligence, in particular with the explosion in machine learning technologies. These largely hidden systems determine how data is gathered, analyzed, and presented or used for decision-making. The data, and how it is handled, are not neutral but full of ambiguity and presumptions, which implies that machine learning algorithms are constantly fed with biases that mirror our everyday culture; what we teach these algorithms ultimately reflects back on us, and it is therefore no surprise when artificial neural networks start to classify and discriminate on the basis of race, class and gender. (Blockbuster news stories – that women are less likely to be shown well-paid job offers by recommendation systems, that an algorithm was labelling pictures of people of color as gorillas, or that a delivery service automatically cut out neighborhoods in big US cities where mainly African Americans and Hispanics live – show how trends of algorithmic classification can relate to the restructuring of the life chances of individuals and groups in society.) However, classification is an essential component of artificial intelligence, insofar as the whole point of machine learning is to distinguish ‘valuable’ information from a given set of data. By imposing identity on input data in order to filter, that is, to differentiate signals from noise, machine learning algorithms become a highly political issue. The crucial question in relation to machine learning therefore is: how can we systematically classify without being discriminatory? In the next issue of spheres, we want to focus on current discussions around automation, robotics and machine learning, from an explicitly political perspective.
Instead of invoking once more the spectre of artificial intelligence – both in its euphoric as well as apocalyptic form – we are interested in tracing human and non-human agency within automated processes, discussing the ethical implications of machine learning, and exploring the ideologies behind the imaginaries of AI. We ask for contributions that deal with new developments in artificial intelligence beyond the idiosyncratic description of specific features (e.g. symbolic versus connectionist AI, supervised versus unsupervised learning) by employing diverse perspectives from around the world, particularly the Global South. To fulfil this objective, we would like to arrange the upcoming issue around three focal points:

  1. Reflections dealing with theoretical (re-)conceptualisations of what artificial intelligence is and should be. What history do the terms artificiality, intelligence, learning, teaching and training have and what are their hidden assumptions? How can human intelligence and machine intelligence be understood and how is intelligence operationalised within AI? Is machine intelligence merely an enhanced form of pattern recognition? Why do ’human’ prejudices re-emerge in machine learning algorithms, allegedly devised to be blind to them?
  2. Implications focusing on the making of artificial intelligence. What kind of data analysis and algorithmic classification is being developed and what are its parameters? How do these decisions get made and by whom? How can we hold algorithms accountable? How can we integrate diversity, novelty and serendipity into the machines? How can we filter information out of data without reinserting racist, sexist, and classist beliefs? How is data defined in the context of specific geographies? Who becomes classified as threat according to algorithmic calculations and why?
  3. Imaginaries revealing the ideas shaping artificial intelligence. How do pop-cultural phenomena reflect the current reconfiguration of human-machine-relations? What can they tell us about the techno-capitalist unconscious? In which way can artistic practices address the current situation? What can we learn from historical examples (e.g. in computer art, gaming, music)? What would a different aesthetic of artificial intelligence look like? How can we make the largely hidden processes of algorithmic filtering visible? How to think of machine learning algorithms beyond accuracy, efficiency, and homophily?

Deadlines

If you would like to submit an article or other contribution – in particular an artistic contribution (music, sound, video, etc.) – to the issue, please get in touch with the editorial collective (contact details below) as soon as possible. We would be grateful if you would submit a provisional title and short abstract (250 words max) by 15 May 2018. We may have questions or suggestions that we raise at this point. Otherwise, final versions of articles and other contributions should be submitted by 31 August 2018. They will undergo review in accordance with the peer review process (see About spheres). Any revisions requested will need to be completed so that the issue can be published in Winter 2018.

>> Read more

Scaring you into ‘digital safety’

I’ve been thinking about the adverts from Barclays about ‘digital safety’ in relation to some of the themes of the module about technology that I run – in particular, the sense of risk and threat that sometimes gets articulated about digital media, and how this maybe carries with it other kinds of narrative about technology, like versions of determinism for instance. The topics of the videos are, of course, grounded in things that do happen – people do get scammed and it can be horrendous. Nevertheless, from an ‘academic’ standpoint, it’s interesting to ‘read’ these videos as particular articulations of perceived norms around safety/risk and responsibility – for whom, and why – in relation to ‘digital’ media… I could write more but I’ll just post a few of the videos for now…