Algorithms in politics after Brexit


As Clive recently shared, Kuba Jablonowski, Clive and I have been very fortunate in successfully applying for a grant in the ESRC’s ‘Governance after Brexit’ scheme. The project, which begins in January ’21, focuses on ‘Algorithmic politics and administrative justice in the EU Settlement Scheme’. (The EUSS is the UK government scheme designed to determine the post-Brexit UK immigration status of EU citizens and their families currently living in the UK under EU free movement law.)

The project will run from the start of 2021 through to the end of 2023. Here’s a quick summary from the application: 

“The research aims to analyse the process of administrative reform associated with Brexit, and the intersection of this process with the digitalisation of administration and governance in the UK. It takes the evolution of the EU Settlement Scheme (EUSS) as its empirical entry-point. By investigating how grievances and claims of injustice emerge from the operation of the EUSS and are monitored and challenged in the public sphere, the research will seek to understand how practices of administrative justice are reconfigured by the interaction of automated algorithmic systems with rights-based practices of monitoring, advocacy and litigation.”

I’m sure we’ll post further information on our websites as the project gets underway.

Future of labour governance – a podcast with Jennifer Bair



From the really interesting Futures of Work journal(?)/project/website…

In a world dominated by the emergence of global supply chains, where the state-based system of labour governance has struggled to deal with the expanding influence of transnational corporations, how can workers resist exploitative labour practices and organise a future (or futures) of regulation that would guarantee decent work for all?
Jennifer Bair joins Huw Thomas in the studio to discuss the challenges and opportunities of cross-border labour governance and organisation in the contemporary global economy.

Jennifer Bair – Futures of Work

CFP> International Labour Process Conference STREAM Artificial Intelligence, Technology and Work


Via Phoebe Moore.

ILPC STREAM Artificial Intelligence, Technology and Work

INTERNATIONAL LABOUR PROCESS CONFERENCE

Artificial Intelligence, Technology and Work

ILPC 2019 Special Stream No. 5

Please submit abstracts via the International Labour Process Conference website (ilpc.org.uk) by the deadline of 26 October 2018.

Of all the social changes of the past six or seven decades, perhaps the most fascinating is the integration of computers and machines into the fabric of our lives and organizations. Machines are rapidly becoming direct competitors with humans for intelligence and decision-making powers. This matters for international labour process research because artificial intelligence (AI) raises challenges and questions for how organizations, globally, are designed and established, and for their human resources planning and execution and industrial relations negotiations. We start with John McCarthy, who coined the term in 1955 and defined AI as ‘making a machine behave in ways that would be called intelligent if humans were so behaving’. At the origin of the term, AI aligned humans directly with machines, expecting machines to behave symbolically like humans. Over time, programmers working on neural networks and machine learning have emphasised the cognitive rather than the symbolic. Now, AI is seen to have capabilities comparable to humans in both routine and non-routine ways, leading to new possibilities for automation. This draws on huge amounts of data, often originally produced by humans. In fact, every time we enter a search term on a computer we add to and train machinic ‘intelligence.’ Every day, billions of actions are captured as part of this process, contributing to the development of AI. In doing so, people provide under-recognised cognitive and immaterial labour.
This stream therefore looks at the conditions and circumstances whereby machines begin to have the capacity to influence, and become integrated into, humans’ ways of thinking, decision-making and working. It also considers the possibilities of AI in resistance against neoliberal and even authoritarian capitalism in the global north and south. AI is a broad term that identifies the pinnacle of machine capabilities that have recently become possible, based on: a) the extensive big data that has become available in organisations; b) data analytical tools, where programmers can identify what to track in this data and which algorithms will yield the insights of interest; c) machine learning, where patterns across data sets can be identified; and d) AI, where the final frontier has become pattern recognition across myriad data sets that have already identified their own patterns. When applied to work and work design, the primary goals are efficiency, market capture, and control over workers.
The rise of autonomous machines leads to philosophical questions that Marx engaged with in theories of objectification and alienation. Later critical theorists have dealt with these questions in labour process research, where technologies and digitalization have created unprecedented concerns about how workplaces and work design are structured and how control and resistance are pursued. The gig economy, in particular, has become the frontline of these changes. Workers here now face the automation of the management function, supervised and even fired (or “deactivated”) without human intervention or interaction. This is creating intensified and precarious working conditions, leading to fragmentation across digital platforms and platform management methods (Moore and Joyce 2018), as well as new forms of resistance and solidarity. All of this is happening while workers’ own jobs are under threat of digitalization, where control and resistance have taken new forms and humans are in danger of becoming resources for tools (see Moore 2018a, 2018b; Woodcock 2017; Waters and Woodcock 2017).
Ultimately, across the economy, technology and its integration may be leading to organisations that take on a life of their own. Human resource decisions are increasingly taken by algorithms, as new techniques integrate machine learning into ‘people analytics’, where data patterns are used to make workplace decisions about hiring, firing and talent prediction, creating significant threats to the possibilities of workplace organising and social justice. Sometimes AI-based decisions automate aspects of the workplace, for example through wearable devices in factories that allow human resource calculations based on AI and location management by GPS and RFID systems. In these ways and others, AI informs a range of decision-making processes and digitalized management methods that have led to significant changes to workplaces and working conditions. If machines can deal with ethically based questions and begin to mimic the nuances of experience and human judgement, will they become participants in humans’ already manifest ‘learned helplessness’? While humans currently train AI with big data, could machines begin to train humans to be helpless?

This call builds upon the ‘Artificial Intelligence. A service revolution?’ stream that featured at the 36th ILPC conference in Buenos Aires. This year’s stream is intended as a forum to bring together researchers engaged with labour process, political economy, technology, and AI. We invite submissions on the following topics (not limited to these, but bearing in mind the need not to overlap with other streams):
-The effect of AI on the labour process, where control and resistance rub against debates about exploitation vs. empowerment
-The implications of algorithmic management and control for the labour process, work replacement, and/or intensification, from the factory to the office
-The “black box” of AI and related practices, algorithmic decision support, people analytics, performance management
-The impact of AI on the Global South: geographies and variegation of AI implementation, direct and indirect impact on jobs and differential effects of diverse socio-political setups
-Resistance and organising against/with AI and social media

Special Issue: We are also considering a submission for a journal special issue (though contributions may be requested before the conference). Please email Phoebe Moore pm358@leicester.ac.uk immediately if this is of interest.

Stream Organisers:

  • Juan Grigera (CONICET, Universidad de Quilmes, Buenos Aires, Argentina),
  • Lydia Hughes (Ruskin College, Oxford, UK),
  • Phoebe Moore (University of Leicester, School of Business, UK),
  • Jamie Woodcock (Oxford Internet Institute, University of Oxford, UK)

Please feel free to contact the stream organisers with any informal inquiries.

For information on the ILPC 2019 and the Calls for Papers for the General Conference and the other Special Streams please go to https://www.ilpc.org.uk/

References
Moore, P. (2018a): The Quantified Self in Precarity: Work, Technology and What Counts, Advances in Sociology series (Abingdon, Oxon: Routledge).
Moore, P. (2018b): ‘The Threat of Physical and Psychosocial Violence and Harassment in Digitalized Work’, International Labour Organization, ACTRAV, Geneva, Switzerland.
Woodcock, J. (2017): Working the Phones: Control and Resistance in Call Centres, London: Pluto.
Waters, F. and Woodcock, J. (2017): ‘Far From Seamless: a Workers’ Inquiry at Deliveroo’, Viewpoint Magazine.

Some more A.I. links


This post contains some tabs I have had open in my browser for a while, which I’m pasting here both to save them in a place I may remember to look and to share them with others who might find them of interest. I’m afraid I don’t have time, at present, to offer any cogent commentary or analysis – simply to share…

Untold A.I. – “What stories are we not telling ourselves about A.I.?”, Christopher Noessel: An interesting attempt to look at popular sci-fi stories of A.I., compare them with contemporary A.I. research manifestos, and identify where we might not be telling ourselves stories about the things people are actually trying to do.

 

The ethics of crashes with self-driving cars: A roadmap, Sven Nyholm: A two-part series of papers [one and two ($$) / one and two (open)] published in Philosophy Compass concerning how to think through the ethical issues associated with self-driving cars. Nyholm recently discussed this with John Danaher on his podcast.

WEF on the Toronto Declaration and the “cognitive bias codex”: A post on the World Economic Forum’s website about “The Toronto Declaration on Machine Learning”, a set of guiding principles for protecting human rights in relation to automated systems. As part of the post they link to a nice diagram about cognitive bias – the ‘cognitive bias codex’.

RSA report on public engagement with AI: “Our new report, launched today, argues that the public needs to be engaged early and more deeply in the use of AI if it is to be ethical. One reason why is because there is a real risk that if people feel like decisions about how technology is used are increasingly beyond their control, they may resist innovation, even if this means they could lose out on benefits.”

Artificial Unintelligence, Meredith Broussard: “In Artificial Unintelligence, Meredith Broussard argues that our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous amount of poorly designed systems. We are so eager to do everything digitally—hiring, driving, paying bills, even choosing romantic partners—that we have stopped demanding that our technology actually work.”

Data-driven discrimination: a new challenge for civil society: A blogpost on the LSE ‘Impact of Soc. Sci.’ blog: “Having recently published a report on automated discrimination in data-driven systems, Jędrzej Niklas and Seeta Peña Gangadharan explain how algorithms discriminate, why this raises concerns for civil society organisations across Europe, and what resources and support are needed by digital rights advocates and anti-discrimination groups in order to combat this problem.”

‘AI and the future of work’ – talk by Phoebe Moore: Interesting talk transcript with links to videos. Snippet: “Human resource and management practices involving AI have introduced the use of big data to make judgements to eliminate the supposed “people problem”. However, the ethical and moral questions this raises must be addressed, where the possibilities for discrimination and labour market exclusion are real. People’s autonomy must not be forgotten.”

Government responds to report by Lords Select Committee on Artificial Intelligence: “The Select Committee on Artificial Intelligence receives the Government response to the report: AI in the UK: Ready, willing and able?, published on 16 April 2018.”

How a Pioneer of Machine Learning Became One of Its Sharpest Critics, Kevin Hartnett – The Atlantic: “Judea Pearl helped artificial intelligence gain a strong grasp on probability, but laments that it still can’t compute cause and effect.”

Unfathomable Scale – moderating social media platforms


There’s a really nice piece by Tarleton Gillespie in Issue 04 of Logic, themed on “scale”, which concerns the scale of social media platforms and how we might understand the qualitative as well as quantitative shifts that happen when things change in scale.

The scale is just unfathomable

But the question of scale is more than just the sheer number of users. Social media platforms are not just big; at this scale, they become fundamentally different than they once were. They are qualitatively more complex. While these platforms may speak of their online “community,” singular, at a billion active users there can be no such thing. Platforms must manage multiple and shifting communities, across multiple nations and cultures and religions, each participating for different reasons, often with incommensurable values and aims. And communities do not independently coexist on a platform. Rather, they overlap and intermingle—by proximity, and by design.

The huge scale of the platforms has robbed anyone who is at all acquainted with the torrent of reports coming in of the illusion that there was any such thing as a unique case… On any sufficiently large social network everything you could possibly imagine happens every week, right? So there are no hypothetical situations, and there are no cases that are different or really edgy. There’s no such thing as a true edge case. There’s just more and less frequent cases, all of which happen all the time.

No matter how they handle content moderation, what their politics and premises are, or what tactics they choose, platforms must work at an impersonal scale: the scale of data. Platforms must treat users as data points, subpopulations, and statistics, and their interventions must be semi-automated so as to keep up with the relentless pace of both violations and complaints. This is not customer service or community management but logistics—where concerns must be addressed not individually, but procedurally.

However, the user experiences moderation very differently. Even if a user knows, intellectually, that moderation is an industrial-sized effort, it feels like it happens on an intimate scale. “This is happening to me; I am under attack; I feel unsafe. Why won’t someone do something about this?” Or, “That’s my post you deleted; my account you suspended. What did I do that was so wrong?”

Tarleton Gillespie on “Custodians”


Over on the Culture Digitally site Tarleton Gillespie discusses his new book Custodians of the Internet, reflecting on some of the meanings of “custodian” and how they variously relate to the topic of the book – content moderators for social media services. Gillespie is an astute observer and analyst of contemporary ‘digital culture’ (I struggle to think of another noun right now) and the issue of moderation is certainly timely.

 I thought I would explain the book’s title, particularly my choice of the word “custodians.” This title came unnervingly late in the writing process, and after many, many conversations with my extremely patient friend and colleague Dylan Mulvin. “Custodians of the Internet” captured, better than many, many alternatives, the aspirations of social media platforms, the position they find themselves in, and my notion for how they should move forward.

Read more on Culture Digitally.

John Danaher interview – Robot Sex: Social and Ethical Implications


Via Philosophical Disquisitions.

Through the wonders of modern technology, Adam Ford and I sat down for an extended video chat about the new book Robot Sex: Social and Ethical Implications (MIT Press, 2017). You can watch the full thing above or on YouTube. Topics covered include:

  • Why did I start writing about this topic?
  • Sex work and technological unemployment
  • Can you have sex with a robot?
  • Is there a case to be made for the use of sex robots?
  • The Campaign Against Sex Robots
  • The possibility of valuable, loving relationships between humans and robots
  • Sexbots as a social experiment

Be sure to check out Adam’s other videos and support his work.

AI Now report


The AI Now Institute have published their second annual report with plenty of interesting things in it. I won’t try and summarise it or offer any analysis (yet). It’s worth a read:

The AI Now Institute, an interdisciplinary research center based at New York University, announced today the publication of its second annual research report. In advance of AI Now’s official launch in November, the 2017 report surveys the current use of AI across core domains, along with the challenges that the rapid introduction of these technologies is presenting. It also provides a set of ten core recommendations to guide future research and accountability mechanisms. The report focuses on key impact areas, including labor and automation, bias and inclusion, rights and liberties, and ethics and governance.

“The field of artificial intelligence is developing rapidly, and promises to help address some of the biggest challenges we face as a society,” said Kate Crawford, cofounder of AI Now and one of the lead authors of the report. “But the reason we founded the AI Now Institute is that we urgently need more research into the real-world implications of the adoption of AI and related technologies in our most sensitive social institutions. People are already being affected by these systems, be it while at school, looking for a job, reading news online, or interacting with the courts. With this report, we’re taking stock of the progress so far and the biggest emerging challenges that can guide our future research on the social implications of AI.”

There’s also a sort of Exec. Summary, a list of “10 Top Recommendations for the AI Field in 2017”, on Medium. Here’s the short version:

  1. Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g. “high stakes” domains), should no longer use ‘black box’ AI and algorithmic systems.
  2. Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design.
  3. After releasing an AI system, companies should continue to monitor its use across different contexts and communities.
  4. More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR.
  5. Develop standards to track the provenance, development, and use of training datasets throughout their life cycle.
  6. Expand AI bias research and mitigation strategies beyond a narrowly technical approach.
  7. Strong standards for auditing and understanding the use of AI systems “in the wild” are urgently needed.
  8. Companies, universities, conferences and other stakeholders in the AI field should release data on the participation of women, minorities and other marginalized groups within AI research and development.
  9. The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision-making power.
  10. Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms.

Which sort of reads, to me, as: “There should be more social scientists involved” 🙂

Scaring you into ‘digital safety’

I’ve been thinking about the adverts from Barclays about ‘digital safety’ in relation to some of the themes of the technology module that I run – in particular, the sense of risk and threat that sometimes gets articulated around digital media, and how this perhaps carries with it other kinds of narrative about technology, such as versions of determinism. The topics of the videos are, of course, grounded in things that do happen – people do get scammed and it can be horrendous. Nevertheless, from an ‘academic’ standpoint, it’s interesting to ‘read’ these videos as particular articulations of perceived norms around safety/risk and responsibility in relation to ‘digital’ media – for whom, and why… I could write more but I’ll just post a few of the videos for now…