Talk – Plymouth, 17 Oct: ‘New geographies of automation?’

Rachael in the film Blade Runner

I am looking forward to visiting Plymouth tomorrow, the 17th October, to give a Geography department research seminar. It’s been nearly twenty years (argh!) since I began my first degree, in digital art, at Plymouth so I’m looking forward to returning. I’ll be talking about a couple of aspects of ‘The Automative Imagination’ under a slightly different title – ‘New geographies of automation?’ The talk will take in archival BBC and newspaper automation anxieties, management consultant magical thinking (and the ‘Fourth Industrial Revolution’), gendered imaginings of domesticity (with the Jetsons amongst others) and some slightly under-cooked (at the moment) thoughts about ‘agency’ – what kinds of ‘beings’ or ‘things’ can do what kinds of action.

Do come along if you’re free and happen to be in the glorious gateway to the South West that is Plymouth.

(More) Gendered imaginings of automata

My Cayla Doll

A few more bits on how automation gets gendered in particular kinds of contexts and settings. In particular, the way that ‘home’, and certain sorts of intimacy, get identified with certain kinds of domestic or caring work – which in turn gets gendered – is something that has been increasingly discussed.

Two PhD researchers I am lucky enough to be working with, Paula Crutchlow (Exeter) and Kate Byron (Bristol), have approached some of these issues from different directions. Paula has had to wrangle with this in a number of ways in relation to the Museum of Contemporary Commodities but it was most visible in the shape of Mikayla, the hacked ‘My Friend Cayla Doll’. Kate is doing some deep dives on the sorts of assumptions that are embedded into the doing of AI/machine learning through the practices of designing, programming and so on. They are not, of course, alone. Excellent work by folks like Kate Crawford, Kate Devlin and Gina Neff (below) inform all of our conversations and work.

Here’s a collection of things that may provoke thought… I welcome any further suggestions or comments 🙂

Alexa, does AI have gender?


Alexa is female. Why? As children and adults enthusiastically shout instructions, questions and demands at Alexa, what messages are being reinforced? Professor Neff wonders if this is how we would secretly like to treat women: ‘We are inadvertently reproducing stereotypical behaviour that we wouldn’t want to see,’ she says.

Prof Gina Neff in conversation with Ruth Abrahams, OII.

Predatory Data: Gender Bias in Artificial Intelligence

It has been reported that female-sounding assistive chatbots regularly receive sexually charged messages. It was recently cited that five percent of all interactions with Robin Labs, whose bot platform helps commercial drivers with routes and logistics, are sexually explicit. The fact that the earliest female chatbots were designed to respond to these suggestions deferentially or with sass was problematic, as it normalised sexual harassment.

Vidisha Mishra and Madhulika Srikumar – Predatory Data: Gender Bias in Artificial Intelligence

The Gender of Artificial Intelligence

Chart showing that the gender of artificial intelligence (AI) is not neutral
The gendering, or not, of chatbots, digital assistants and AI movie characters – Tyler Schnoebelen

Consistently representing digital assistants as female hard-codes a connection between a woman’s voice and subservience.

Stop Giving Digital Assistants Female Voices – Jessica Nordell, The New Republic

“The good robot”

Anki Vector personal robot

A fascinating and very evocative example of the ‘automative imagination’ in action in the form of an advertisement for the “Vector” robot from a company called Anki.

How to narrate or analyse such a robot? Well, the advert runs through several almost-archetypal figures of the ‘robot’ or automation. The first is the cutesy and non-threatening pseudo-pet that the Vector invites us to assume it is. This owes a lot to Wall-E (also the robots in Batteries Not Included and countless other examples) and the doe-eyed characterisation of the faithful assistant/companion/servant. The second is the all-seeing surveillant machine uploading all your data to “the cloud”. The third comprises two quasi-military monsters with shades of “The Terminator”, with a little helpless-baby jeopardy for good measure. Finally, a brief nod to HAL 9000, and the flip of the master/slave relation that it represents, completes a whistle-stop tour of pop culture understandings of ‘robots’, stitched together in order to sell you something.

I assume that the Vector actually still does the kinds of surveillance it is sending up in the advert, but I have no evidence – there is no publicly accessible copy of the terms & conditions for the operation of the robot in your home. However, in an advertorial on ‘Robotics Business Review’, there is a quote that sort of pushes one to suspect that Vector is yet another device that on the face of it is an ‘assistant’ but is also likely to be hoovering up everything it can about you and your family’s habits in order to sell that data on:

“We don’t want a person to ever turn this robot off,” Palatucci said. “So if the lights go off and it’s on your nightstand and he starts snoring, it’s not going to work. He really needs to use his sensors, his vision system, and his microphone to understand the context of what’s going on, so he knows when you want to interact, and more importantly, when you don’t.”

If we were to be cynical we might ask: why else would it need to be able to do all of this?

Anki Vector “Alive and aware”

Regardless, the advert is a useful example of how fictional representations of ‘robots’ bleed into contemporary commercial products we can take home – and perhaps even of what we might think of as camouflage for the increasingly prevalent ‘extractive‘ business model of in-home surveillance.

Reblog> session on feminist digital geographies at AAG conference April 2019

Women Who Code

Via Gillian Rose. If you’re going to the AAG – this session is sure to be a good one.

Session on feminist digital geographies at AAG conference April 2019

This is a call for papers for a session at the next conference of the American Association of Geographers annual meeting in Washington DC 3-7 April next year on feminist digital geographies, organised by Agnieszka Leszczynski (Western University) and me. It’s sponsored by both the Digital Geographies and the Geographic Perspectives on Women Speciality Groups of the AAG.

In the context of a flurry of activities coalescing around digital geographies, we invite papers that consider the “enduring contours and new directions” of feminist theory, politics, and praxis for geographies concerned with the digital (Elwood and Leszczynski, 2018). We broadly welcome interventions that proceed from, utilize, and advance feminist epistemologies, methodologies, theory, critical practice, and activism.

We are open to submissions offering empirical, theoretical, critical, and methodological contributions across a range of topics, including but not limited to:

  • big data
  • digitally-mediated cities
  • artificial intelligence and algorithms
  • social media
  • feminist/digital/spatial theory
  • progressive alternatives and activism
  • feminist histories and genealogies

Please submit abstracts of no more than 200 words by October 15th to aleszczy@uwo.ca and gillian.rose@ouce.ox.ac.uk. Please include a title, your name, affiliation and email address in the abstract. We will respond to authors with confirmation by November 1st.

Reference:

Elwood S and Leszczynski A (2018) Feminist digital geographies. Gender, Place & Culture 25(5): 629–644.

‘New geographies of automation?’ at the RGS-IBG conference

Industrial factory robot arms

All of a sudden the summer is nearly over, apparently, and the annual conference of the Royal Geographical Society with the Institute of British Geographers is fast approaching, this year in Cardiff.

I am convening a double session on the theme of ‘New geographies of automation?’, with two sessions of papers from some fantastic colleagues that promise to be really interesting. I am pleased to have this opportunity to invite colleagues to collectively bring their work into conversation around a theme that is not only a contemporary topic in academic work but also, significantly, a renewed topic of interest in the wider public.

There are two halves of the session, broadly themed around ‘autonomy’ and ‘spacings’. Please find below the abstracts for the session.

Details: Sessions 92 & 123 (in slots 3 & 4 – 14:40-16:20 & 16:50-18:30) | Bates Building, Lecture Theatre 1.4

This information is also accessible, with all of the details of venue etc., on the RGS-IBG conference website: session 1 ‘autonomy’ and session 2 ‘spacings’.

New Geographies of Automation? (1): Autonomy

1.1 An Automative Imagination

Samuel Kinsley, University of Exeter

This paper sets out to review some of the key ways in which automation gets imagined – the sorts of cultural, economic and social forms of imagination that are drawn upon and generated when discussing how automation works and the kinds of future that may come as a result. The aim here is not to validate/invalidate particular narratives of automation – but instead to think about how they are produced and what they tell us about how we tell stories about what it means to be ‘human’, who/what has agency and what this may mean for how we think politically and spatially. To do this the concept of an ‘automative imagination’ is proposed as a means of articulating these different, sometimes competing – sometimes complementary, orientations towards automation.

 

1.2 The Future of Work: Feminist Geographical Engagements

Julie MacLeavy (Geographical Sciences, University of Bristol)

This paper considers the particular pertinence of feminist geographical scholarship to debates on the ‘future of work’. Drawing inspiration from Linda McDowell’s arguments that economic theories of epochal change rest on the problematic premise that economic and labour market changes are gender-neutral, it highlights the questions that are emerging from feminist economic geography research and commentary on the reorganisation of work, workers’ lives and labour markets. From this, the paper explores how feminist and anti-racist politics connect with the imagination of a ‘post-work’ world in which technological advancement is used to enable more equitable ways of practice (rather than more negative effects such as the intensification of work lifestyles). Political responses to the critical challenges that confront workers in the present moment of transformation are then examined, including calls for Universal Basic Income, which has the potential to reshape the landscape of labour-capital relations.

 

1.3 Narrating the relationship between automation and the changing geography of digital work

Daniel Cockayne, Geography and Environmental Management, University of Waterloo

Popular narratives about the relationship between automation and work often make a straightforward causal link between technological change and deskilling, job loss, or increased demand for jobs. Technological change – today, most commonly, automation and AI – is often scripted as threatening the integrity of labor, unionization, and traditional working practices or as creating more demand for jobs, in which the assumption is the more jobs the better. These narratives elide a close examination of the politics of work that include considerations of domestic and international racialized and gendered divisions of labor. Whether positive or negative, the supposed inevitability of technological transition positions labor as a passive victim of these changes, while diverting attention away from the workings of international financialized capital. Yet when juxtaposed against empirical data, straightforward cause and effect narratives become more complex. The unemployment rate in North America is at its lowest in 40 years (4.1% in the USA and 5.7% in Canada), which troubles the relationship between automation and job loss. Yet, though often touted by publications like The Economist as a marker of national economic well-being, unemployment rates ignore the kinds of work people are doing, effacing the qualitative changes in work practices over time. I examine these tropes and their relationship to qualitative changes in work practices, to argue that the link between technological change and the increasing precaritization of work is more primary than the diversionary relationship between technological change and job loss and gain or deskilling.

 

1.4 Sensing automation

David Bissell, University of Melbourne

Processes of industrial automation are intensifying in many sectors of the economy through the development of AI and robotics. Conventional accounts of industrial automation stress the economic imperatives to increase economic profitability and safety. Yet such coherent snapped-to-grid understandings risk short-circuiting the complexity and richness of the very processes and events that compose automation. ­­­This paper draws from and reflects through a series of encounters with workers engaged in the increasingly automated mining sector in Australia. Rather than thinking these encounters solely through their representational dimensions with an aim to building a coherent image of what automation is, this paper is an attempt at writing how automation becomes differently disclosed through the aesthetic dimensions of encounters. It acknowledges how automation is always caught up in multiple affective and symbolic ecologies which create new depths of association. Developing post-phenomenological thought in cultural geography, this paper articulates some of the political and ethical stakes for admitting ambiguity, incoherence and confusion as qualities of our relations with technological change.

 

1.5 Technological Sovereignty, Post-Human Subjectivity, and the Production of the Digital-Urban Commons

Casey Lynch (School of Geography and Development, University of Arizona)

As cities become increasingly monitored, planned, and controlled by the proliferation of digital technologies, urban geographers have sought to understand the role of software, big data, and connected infrastructures in producing urban space (French and Thrift 2002; Dodge, Kitchin, and Zook, 2009). Reflections on the “automatic production of space” have raised questions about the role and limitations of “human” agency in urban space (Rose 2017) and the possibilities for urban democracy. Yet, this literature largely considers the proliferation of digital infrastructures within the dominant capitalist, smart-city model, with few discussions of the possibilities for more radically democratic techno-urban projects. Engaging these debates, this paper considers alternative models of the techno-social production of urban space based around the collective production and management of a common digital-urban infrastructure. The paper reflects on the notion of “technological sovereignty” and the case of Guifinet, the world’s largest “community wireless network” covering much of Catalonia. The paper highlights the way its decentralized, DIY mode of producing and maintaining digital urban infrastructure points to the possibilities for more radically democratic models of co-production in which urban space, technological infrastructures, and subjectivities are continually reshaped in relation. Through this, the paper seeks to contribute to broader discussions about the digitalization of urban space and the possibilities for a radical techno-politics.

New Geographies of Automation? (2): Spacings

2.1 The urbanisation of robotics and automated systems – a research agenda
Andy Lockhart* (a.m.lockhart@sheffield.ac.uk), Aidan While* (a.h.while@sheffield.ac.uk), Simon Marvin (s.marvin@sheffield.ac.uk), Mateja Kovacic (m.kovacic@sheffield.ac.uk), Desiree Fields (d.fields@sheffield.ac.uk) and Rachel Macrorie (r.m.macrorie@sheffield.ac.uk) (Urban Institute, University of Sheffield)
*Attending authors
Pronouncements of a ‘fourth industrial revolution’ or ‘second machine age’ have stimulated significant public and academic interest in the implications of accelerating automation. The potential consequences for work and employment have dominated many debates, yet advances in robotics and automated systems (RAS) will have profound and geographically uneven ramifications far beyond the realm of labour. We argue that the urban is already being configured as a key site of application and experimentation with RAS technologies. This is unfolding across a range of domains, from the development of autonomous vehicles and robotic delivery systems, to the growing use of drone surveillance and predictive policing, to the rollout of novel assistive healthcare technologies and infrastructures. These processes and the logics underpinning them will significantly shape urban restructuring and new geographies of automation in the coming years. However, while there is growing research interest in particular domains, there remains little work to date which takes a more systemic view. In this paper we do three things, which look to address this gap and constitute the contours of a new urban research agenda. First, we sketch a synoptic view of the urbanisation of RAS, identifying what is new, what is being enabled as a result and what should concern critical scholars, policymakers and the wider public in debates about automation. Second, we map out the multiple and sometimes conflicting rationalities at play in the urbanisation of RAS, which have the potential to generate radically different urban futures, and may address or exacerbate existing socio-spatial inequalities and injustices. Third, and relatedly, we pose a series of questions for urban scholars and geographers, which constitute the basis for an urgent new programme of research and intervention.

 

2.2 Translating the signals: Utopia as a method for interrogating developments in autonomous mobility

Thomas Klinger1, 2
Brendan Doody2
Debbie Hopkins2
Tim Schwanen2
1. Institute of Human Geography, Goethe-University Frankfurt am Main
2. School of Geography and the Environment, University of Oxford

Connected and autonomous vehicles (CAVs) are often presented as technological ‘solutions’ to problems of road safety, congestion, fuel economy and the cost of transporting people, goods and services. In these dominant techno-economic narratives ‘non-technical’ factors such as public acceptance, legal and regulatory frameworks, cost and investment in testing, research and supporting infrastructure are the main ‘barriers’ to the otherwise steady roll-out of CAVs. Drawing on an empirical case study of traffic signalling, we trace the implications that advances in vehicle autonomy may have for such mundane and taken-for-granted infrastructure. We employ the three modes of analysis associated with Levitas’ (2013) ‘utopia as a method’. Starting with the architectural mode we identify the components, actors and visions underpinning ‘autonomobility’. The archaeological mode is then used to unpack the assumptions, contradictions and possible unintended effects that CAVs may have for societies. In the ontological mode we speculate upon the types of human and non-human subjectivities and agencies implied by alleged futures of autonomous mobility. Through this process we demonstrate that techno-economic accounts overemphasise the likely scale, benefits and impacts these advances may have for societies. In particular, they overlook how existing automobile-dependent mobility systems are the outcome of complex assemblages of social and technical elements (e.g., cars, car-drivers, roads, petroleum supplies, novel technologies and symbolic meanings) which have become interlinked in systemic and path-dependent ways over time. We conclude that utopia as method may provide one approach by which geographers can interrogate and open up alarmist/boosterish visions of autonomobility and automation.

 

2.3 Automating the laboratory? Folding securities of malware
Andrew Dwyer, University of Oxford
andrew.dwyer@cybersecurity.ox.ac.uk

Folding, weaving, and stitching is crucial to contemporary analyses of malicious software; generated and maintained through the spaces of the malware analysis laboratory. Technologies entangle (past) human analysis, action, and decision into ‘static’ and ‘contextual’ detections that we depend on today. A large growth in suspect software on which to draw decisions about maliciousness has driven a movement into (seemingly omnipresent) machine learning. Yet this is not the first intermingling of human and technology in malware analysis. It draws on a history of automation, enabling interactions to ‘read’ code in stasis; build knowledges in more-than-human collectives; allow ‘play’ through a monitoring of behaviours in ‘sandboxed’ environments; and draw on big data to develop senses of heuristic reputation scoring.

Though we can draw on past automation to explore how security is folded, made known, rendered as something knowable, contemporary machine learning performs something different. Drawing on Louise Amoore’s recent work on the ethics of the algorithm, this paper queries how points of decision are now more-than-human. Automation has always extended the human, led to loops, and driven alternative ways of living. Yet the contours, the multiple dimensions of the neural net, produce the malware ‘unknown’ that has become the narrative of the endpoint industry. This paper offers a history of the automation of malware analysis from static and contextual detection, to ask how automation is changing how cyberspace becomes secured and made governable; and how automation is not something to be feared, but tempered with the opportunities and challenges of our current epoch.

 

2.4 Robots and resistance: more-than-human geographies of automation on UK dairy farms

Chris Bear (Cardiff University; bearck@cardiff.ac.uk)
Lewis Holloway (University of Hull; l.holloway@hull.ac.uk)

This paper examines the automation of milking on UK dairy farms to explore how resistance develops in emerging human-animal-technology relations. Agricultural mechanisation has long been celebrated for its potential to increase the efficiency of production. Automation is often characterised as continuing this trajectory; proponents point to the potential for greater accuracy, the removal of less appealing work, the reduction of risks posed by unreliable labour, and the removal of labour costs. However, agricultural mechanisation has never been received wholly uncritically; studies refer to practices of resistance that have developed due to fears around (for instance) impacts on rural employment, landscapes, ecologies and traditional knowledge practices. Drawing on interviews with farmers, observational work on farms and analysis of promotional material, this paper examines resistant relations that emerge around the introduction of Automated Milking Systems (AMS) on UK dairy farms. While much previous work on resistance to agricultural technologies has pitted humans against machines, we follow Foucault in arguing that resistance can be heterogeneous and directionally ambiguous, emerging through ‘the capillary processes of counter-conduct’ (Holloway and Morris 2012). These capillary processes can have complex geographies and emerge through more-than-human relations. Where similar conceptualisations have been developed previously, technologies continue to appear rather inert – they are often the tools by which humans attempt to exert influence, rather than things which can themselves ‘object’ (Latour 2000), or which are co-produced by other nonhumans rather than simply imposed or applied by humans. We begin, therefore, to develop a more holistic approach to the geographies of more-than-human resistance in the context of automation.

 

2.5 Fly-by-Wire: The Ironies of Automation and the Space-Times of Decision-Making

Sam Hind (University of Siegen; hind@locatingmedia.uni-siegen.de)

This paper presents a ‘prehistory’ (Hu 2015) of automobile automation, by focusing on ‘fly-by-wire’ control systems in aircraft. Fly-by-wire systems, commonly referred to as ‘autopilots’, work by translating human control gestures into component movements, via digital soft/hardware. These differ historically from mechanical systems in which pilots have direct steering control through a ‘yoke’ to the physical components of an aircraft (ailerons etc.), via metal rods or wires. Since the launch of the first commercial aircraft with fly-by-wire in 1988, questions regarding the ‘ironies’ or ‘paradoxes’ of automation (Bainbridge 1983) have continued to be posed. I look at the occurrence of ‘mode confusion’ in cockpits to tease out one of these paradoxes; using automation in the aviation industry as a heuristic lens to analyze automation of the automobile. I then proceed by detailing a scoping study undertaken at the Geneva Motor Show in March this year, in which Nissan showcased an autonomous vehicle system. Unlike other manufacturers, Nissan is pitching the need for remote human support when vehicles encounter unexpected situations; further complicating and re-distributing navigational labour in, and throughout, the driving-machine. I will argue that whilst such developments plan to radically alter the ‘space-times of decision-making’ (McCormack and Schwanen 2011) in the future autonomous vehicle, they also exhibit clear ironies or paradoxes found similarly, and still fiercely discussed, in the aviation industry and with regards to fly-by-wire systems. It is wise, therefore, to consider how these debates have played out – and with what consequences.

CFP> International Labour Process Conference STREAM Artificial Intelligence, Technology and Work

Industrial factory robot arms

Via Phoebe Moore.

ILPC STREAM Artificial Intelligence, Technology and Work

INTERNATIONAL LABOUR PROCESS CONFERENCE

Artificial Intelligence, Technology and Work

ILPC 2019 Special Stream No. 5

Please submit abstracts via the International Labour Process Conference website (ilpc.org.uk) by the deadline of 26 October 2018.

Of all the social changes occurring over the past six or seven decades, perhaps most fascinating is the integration of computers and machines into the fabric of our lives and organizations. Machines are rapidly becoming direct competitors with humans for intelligence and decision-making powers. This is important for international labour process research because artificial intelligence (AI) brings about challenges and questions for how organizations, globally, are designed and established, with respective human resources planning and execution and industrial relations negotiations. We start with John McCarthy, who coined the term in 1955 and defined AI as the process of ‘making a machine behave in ways that would be called intelligent if humans were so behaving’. At the origin of the term, AI aligned humans directly with machines, expecting machines to behave symbolically like humans. Over time, programmers working on neural networks and machine learning have emphasised the cognitive rather than the symbolic. Now, AI is seen to have comparable capabilities to humans in both routine and non-routine ways, leading to new possibilities for automation. This draws on huge amounts of data often produced originally by humans. In fact, every time we enter a search term on a computer we add to and train machinic ‘intelligence.’ Every day, billions of actions are captured as part of this process, contributing to the development of AI. In doing so, people provide under-recognised cognitive and immaterial labour.
Therefore, this stream looks at the conditions and circumstances whereby machines begin to have the capacity to influence and become integrated into humans’ ways of thinking, decision-making and working. It also considers the possibilities of AI in resistance against neoliberal and even authoritarian capitalism in the global north and south. AI is a broad term that identifies the pinnacle of machine capabilities that have recently become possible based on the amount of a) extensive big data that has become available in organisations, b) data analytical tools where programmers can identify what to track based on this data and what algorithms will allow one to gain the insights of interest, c) machine learning, where patterns across data sets can be identified and d) AI, where the final frontier has become the ability of pattern recognition across myriad data sets that have already identified their own patterns. When applied to work and work design, the primary goals are efficiency, market capture, and control over workers.
The rise of autonomous machines leads to philosophical questions that Marx engaged with in theories of objectification and alienation. Later, critical theorists have dealt with these questions in labour process research, where technologies and digitalization have created unprecedented concerns for how workplaces and work design are structured and control and resistance are pursued. In particular, the gig economy has become the frontline of these new changes. Workers here now face automation of the management function, being supervised and even fired (or “deactivated”) without human intervention or interaction. This is creating intensified and precarious working conditions, leading to fragmentation over digital platforms and platform management methods (Moore and Joyce 2018), as well as new forms of resistance and solidarities. These are all happening while their own work is under the threat of digitalization, where control and resistance have taken new forms and humans are in danger of becoming resources for tools (see Moore 2018a, 2018b; Woodcock, 2017; Waters and Woodcock, 2017).
Ultimately, across the economy, technology and its integration may be leading to organisations that take on a life of their own. Human resource decisions are increasingly taken by algorithms, where new human resources techniques integrate machine learning to achieve a new technique called ‘people analytics’ where data patterns are used to make workplace decisions for hiring/firing/talent predictions, creating significant threats to the possibilities of workplace organising and social justice. Sometimes, AI-based decisions lead to automating aspects of the workplace, for example, in the case of wearable devices in factories that allow human resource calculations based on AI and location-management by GPS and RFID systems. In these ways and others, AI processes inform a number of decision-making processes and digitalized management methods that have led to significant changes to workplaces and working conditions. If machines can deal with ethically based questions and begin to mimic the nuances of experiences and human judgement, will they become participants in humans’ already manifest ‘learned helplessness’? While currently, humans train AI with the use of big data, could machines begin to train humans to be helpless?

This call builds upon the ‘Artificial Intelligence. A service revolution?’ stream that featured at the 36th ILPC conference in Buenos Aires. This year’s stream is intended as a forum to bring together researchers engaged with the topics of labour process, political economy, technology, and AI to discuss this topic. We invite submissions on the following topics (not limited to, but also considering the need not to overlap with other streams):
-The effect of AI on the labour process, where control and resistance rub against debates about exploitation vs. empowerment
-The implication of algorithmic management and control on the labour process, work replacement, and/or intensification from the factory to the office
-The “black box” of AI and related practices, algorithmic decision support, people analytics, performance management
-The impact of AI on the Global South: geographies and variegation of AI implementation, direct and indirect impact on jobs and differential effects of diverse socio-political setups
-Resistance and organising against/with AI and social media

Special Issue: We are also considering a submission for a journal special issue (though contributions may be requested before the conference). Please email Phoebe Moore pm358@leicester.ac.uk immediately if this is of interest.

Stream Organisers:

  • Juan Grigera (CONICET, Universidad de Quilmes, Buenos Aires, Argentina),
  • Lydia Hughes (Ruskin College, Oxford, UK),
  • Phoebe Moore (University of Leicester, School of Business, UK),
  • Jamie Woodcock (Oxford Internet Institute, University of Oxford, UK)

Please feel free to contact the stream organisers with any informal inquiries.

For information on the ILPC 2019 and the Calls for Papers for the General Conference and the other Special Streams please go to https://www.ilpc.org.uk/

References
Moore, P. (2018a): The Quantified Self in Precarity: Work, Technology and What Counts, Advances in Sociology series (Abingdon, Oxon: Routledge).
Moore, P. (2018b): ‘The Threat of Physical and Psychosocial Violence and Harassment in Digitalized Work’ International Labour Organization, ACTRAV, Geneva: Switzerland.
Woodcock, J. (2017): Working the Phones: Control and Resistance in Call Centres, London: Pluto.
Waters, F. and Woodcock, J. (2017): ‘Far From Seamless: a Workers’ Inquiry at Deliveroo’, Viewpoint Magazine.

Some more A.I. links

Twiki the robot from Buck Rogers

This post contains some tabs I have had open in my browser for a while that I’m pasting here both to save them in a place I may remember to look and to share them with others that might find them of interest. I’m afraid I don’t have time, at present, to offer any cogent commentary or analysis – just simply to share…

Untold A.I. – “What stories are we not telling ourselves about A.I.?”, Christopher Noessel: An interesting attempt to survey popular sci-fi stories of A.I., compare them with contemporary A.I. research manifestos, and identify where we are not telling ourselves stories about the things people are actually trying to do.

‘The ethics of crashes with self-driving cars: A roadmap’, Sven Nyholm: A two-part series of papers [one and two ($$) / one and two (open)] published in Philosophy Compass concerning how to think through the ethical issues associated with self-driving cars. Nyholm recently talked about this with John Danaher on his podcast.

WEF on the Toronto Declaration and the “cognitive bias codex”: A post on the World Economic Forum’s website about “The Toronto Declaration on Machine Learning” and its guiding principles for protecting human rights in relation to automated systems. As part of the post they link to a nice diagram about cognitive bias – the ‘cognitive bias codex‘.

RSA report on public engagement with AI: “Our new report, launched today, argues that the public needs to be engaged early and more deeply in the use of AI if it is to be ethical. One reason why is because there is a real risk that if people feel like decisions about how technology is used are increasingly beyond their control, they may resist innovation, even if this means they could lose out on benefits.”

Artificial Unintelligence, Meredith Broussard: “In Artificial Unintelligence, Meredith Broussard argues that our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous amount of poorly designed systems. We are so eager to do everything digitally—hiring, driving, paying bills, even choosing romantic partners—that we have stopped demanding that our technology actually work.”

Data-driven discrimination: a new challenge for civil society: A blogpost on the LSE ‘Impact of Soc. Sci.’ blog: “Having recently published a report on automated discrimination in data-driven systems, Jędrzej Niklas and Seeta Peña Gangadharan explain how algorithms discriminate, why this raises concerns for civil society organisations across Europe, and what resources and support are needed by digital rights advocates and anti-discrimination groups in order to combat this problem.”

‘AI and the future of work’ – talk by Phoebe Moore: Interesting talk transcript with links to videos. Snippet: “Human resource and management practices involving AI have introduced the use of big data to make judgements to eliminate the supposed “people problem”. However, the ethical and moral questions this raises must be addressed, where the possibilities for discrimination and labour market exclusion are real. People’s autonomy must not be forgotten.”

Government responds to report by Lords Select Committee on Artificial Intelligence: “The Select Committee on Artificial Intelligence receives the Government response to the report: AI in the UK: Ready, willing and able?, published on 16 April 2018.”

How a Pioneer of Machine Learning Became One of Its Sharpest Critics, Kevin Hartnett – The Atlantic: “Judea Pearl helped artificial intelligence gain a strong grasp on probability, but laments that it still can’t compute cause and effect.”

WhatsApp Research Awards for Social Science and Misinformation

A person removing a mask

Via Moira Weigel. Deadline is 12/08/2018.

WhatsApp Research Awards for Social Science and Misinformation

WhatsApp cares about the safety of our users and is seeking to inform our understanding of the safety problems people encounter on WhatsApp and what more we can do within WhatsApp and in partnership with civil society to address the problem. For this first phase of our program, WhatsApp is commissioning a competitive set of awards to researchers interested in exploring issues that are related to misinformation on WhatsApp. We welcome proposals from any social science or related discipline that foster insights into the impact of technology on contemporary society in this problem space. The WhatsApp Research Awards will provide funding for independent research proposals that are designed to be shared with WhatsApp, Facebook, and wider scholarly and policy communities. These are unrestricted monetary awards that offer investigators the freedom to deepen and extend their existing research portfolio. Applications are welcome from individuals with established experience studying online interaction and information technologies, as well as from persons seeking to expand their existing research into these areas.

Core Areas of Exploration

We will seriously consider proposals from any social science and technological perspective that propose projects that enrich our understanding of the problem of misinformation on WhatsApp. High priority areas include (but are not limited to):

  • Information processing of problematic content: We welcome proposals that explore the social, cognitive, and information processing variables involved in the consumption of content received on WhatsApp, its relation to the content’s credibility, and the decision to promote that content with others. This includes social cues and relationships, personal value systems, features of the content, content source etc. We are interested in understanding what aspects of the experience might help individuals engage more critically with potentially problematic content.
  • Election related information: We welcome proposals that examine how political actors are leveraging WhatsApp to organize and potentially influence elections in their constituencies. WhatsApp is a powerful medium for political actors to connect and communicate with their constituents. However, it can also be misused to share inaccurate or inflammatory political content. We are interested in understanding this space both from the perspective of political actors and the voter base. This includes understanding the unique characteristics of WhatsApp for political activity and its place in the ecosystem of social media and messaging platforms, distribution channels for political content, targeting strategies, etc.
  • Network effects and virality: We welcome proposals that explore the characteristics of networks and content. WhatsApp is designed to be a private, personal communication space and is not designed to facilitate trends or virality through algorithms or feedback. However, these behaviors do organically occur along social dimensions. We are interested in projects that inform our understanding of the spread of information through WhatsApp networks.
  • Digital literacy and misinformation: We welcome proposals that explore the relation between digital literacy and vulnerability to misinformation on WhatsApp. WhatsApp is very popular in some emerging markets, especially among new-to-Internet users and populations with lower exposure to technology. We are interested in research that informs our efforts to bring technology safely and effectively into underserved geographical regions. This includes studies of individuals, families and communities, but also wider inquiries into factors that shape the context for the user experience online.
  • Detection of problematic behavior within encrypted systems: We welcome proposals that examine technical solutions to detecting problematic behavior within the restrictions of and in keeping with the principles of encryption. WhatsApp’s end-to-end encrypted system facilitates privacy and security for all WhatsApp users, including people who might be using the platform for illegal activities. How might we detect illegal activity without monitoring the content of all our users? We are particularly interested in understanding and deterring activities that facilitate the distribution of verifiably false information.

Program Format

Our preference is for proposals based on independent research, in which the applicant develops conceptual tools, gathers and analyzes data, and/or investigates relevant issues. Each awardee will retain all intellectual property rights to their data and analyses. WhatsApp staff may provide guidance, but investigators are responsible for carrying out the scope of work.

The program will make unrestricted awards of up to $50,000 per research proposal. All applications will be reviewed by WhatsApp research staff, with consultation from external experts. Payment will be made to the proposer’s host university or organization as an unrestricted gift.

In addition to the award monies, WhatsApp invites award recipients to attend two workshops:

  1. The first workshop will provide awardees with a detailed introduction to how the WhatsApp product works as well as context on the focus area of misinformation. It will also enable participants to receive feedback from WhatsApp research staff and invited guests on their research proposals. We hope this will facilitate international collaborations across researchers and teams in this area. The tentative date for this event is October 29-30, in Menlo Park, CA.
  2. A second workshop will allow awardees to present their initial research findings to WhatsApp and other awardees, providing an opportunity to contextualize their findings with each other. Our hope is that upon completion of the research, award recipients will seek to share their research with the wider public. Tentative date is April 2019, exact date will be updated on this page at a later time.

WhatsApp will arrange and pay for the travel and accommodation of one representative from each awardee. This will be in addition to the research award amount.

Data

  • No WhatsApp data will be provided to award recipients;
  • All data from award research efforts will be owned by the researcher, and need not be shared with WhatsApp.

Applications, Eligibility & Participant Expectations

  • Applications must be written in English and include the following:
    • A research title, identification of the Principal Investigator (PI) and their institutional affiliation for the purposes of the proposed research;
    • A brief program statement (double-spaced, 12 point font, not to exceed 5 pages) that specifies the proposed work. This statement should include the following elements:
      • specification of question(s) being asked;
      • clear statement of the methodology together with examples of when/where this approach has given research insights;
      • plan for any data collection, analysis, and/or conceptual work;
      • description of the expected research outputs and findings;
      • relevance for our understanding of user experiences in online environments.
    • A 1-page bio and CV for the PI together with selected publication references. Summary bios of any other team members or collaborators.
    • A clear statement of the budget requested.
  • Preference will be given to research conducted in countries where WhatsApp is a prominent medium of communication (India, Brazil, Indonesia, Mexico, etc.).
  • Preference will be given to proposals from researchers, or collaborations with researchers, based in the country/countries being researched.
  • WhatsApp will accept applications from researchers who hold a PhD. In exceptional cases, we will review applications from individuals without PhDs who have shown a high level of achievement in social science or technological research.
  • The award is restricted to social science and technological research that contributes to generalized scientific knowledge and its application. Documentaries, journalism, and oral history projects are not eligible.
  • Awards will be made to an awardee’s university department, research institute or organization; all applicants must therefore be affiliated with an organization that supports research and can process external funding awards. All awards will be made in US dollars.
  • Proposals may be submitted by individuals with no prior experience in social media or Internet research. We welcome proposals from researchers who seek to expand their research portfolio into the area of information and communication technologies.
  • All award recipients are strongly encouraged to attend the two WhatsApp workshops associated with this program. Travel and accommodation will be arranged and paid for by WhatsApp.
  • The proposed research should be carried out by the date of the second workshop, in April 2019. Presentation materials that comprise the final report should be written in English and made available for WhatsApp and the other award recipients by the date of the final workshop. All rights to these materials will be held by the award recipient.
  • Once awardees have accepted their awards, WhatsApp will publicly share the details of the selected applicants by posting a summary of the results together with the PI’s name and the title of the proposal on the Facebook Research blog. This information may also be included in other presentations or posts relating to this effort.

By applying to this award, you are agreeing to the following:

  • You are affiliated with an institution that supports research and can process external funding awards.
  • If chosen, your institution will receive the award as a gift in US dollars and in the amount decided solely by WhatsApp.
  • You acknowledge that you have been invited to two in-person WhatsApp workshops (tentatively in October 2018 and April 2019).
  • You acknowledge that WhatsApp will publicly disclose your name and the proposal title as an award recipient.
  • You plan to attend and present the research findings at the second WhatsApp workshop, likely to be held in Menlo Park, CA, USA in late April 2019. The workshops and presentations will be conducted in English. Interpretation will be provided if needed. Note: airfare, hotel and transportation to be arranged and paid for by WhatsApp.

Timing and Dates

Applications are due by August 12, 2018, 11:59pm PST. Award recipients will be notified of the status of their application by email by September 14, 2018.

Questions

For all questions regarding these awards, please contact us.

“The Rise of the Robot Reserve Army” – interesting working paper

Charlie Chaplin in Modern Times

Saw this via Twitter somehow…

The Rise of the Robot Reserve Army: Automation and the Future of Economic Development, Work, and Wages in Developing Countries – Working Paper 487

Lukas Schlogl and Andy Sumner

Employment generation is crucial to spreading the benefits of economic growth broadly and to reducing global poverty. And yet, emerging economies face a contemporary challenge to traditional pathways to employment generation: automation, digitalization, and labor-saving technologies. 1.8 billion jobs—or two-thirds of the current labor force of developing countries—are estimated to be susceptible to automation from today’s technological standpoint. Cumulative advances in industrial automation and labor-saving technologies could further exacerbate this trend. Or will they? In this paper we: (i) discuss the literature on automation; and in doing so (ii) discuss definitions and determinants of automation in the context of theories of economic development; (iii) assess the empirical estimates of employment-related impacts of automation; (iv) characterize the potential public policy responses to automation; and (v) highlight areas for further exploration in terms of employment and economic development strategies in developing countries. In an adaption of the Lewis model of economic development, the paper uses a simple framework in which the potential for automation creates “unlimited supplies of artificial labor” particularly in the agricultural and industrial sectors due to technological feasibility. This is likely to create a push force for labor to move into the service sector, leading to a bloating of service-sector employment and wage stagnation but not to mass unemployment, at least in the short-to-medium term.