Brave new-old world – gig economy as scientific management

They’re gonna be disrupted, yeah! Because your lives are being disrupted, yeah! This is the money you need to live!

An interesting article in the FT, “When your boss is an algorithm”, in which (if you ignore the anthropomorphism of “the algorithm” and its apparently supreme agency) the author draws out how the claims of efficiency etc. made for ‘gig economy’-type work platforms, such as Uber and Deliveroo, closely resemble Taylorism:

“Algorithmic management” might sound like the future but it has uncanny echoes from the past. A hundred years ago, a new theory called “scientific management” swept through the factories of America. It was the brainchild of Frederick W Taylor … Taylor wanted to replace this “rule of thumb” approach with “the establishment of many rules, laws and formulae which replace the judgment of the individual workman”. To that end, he sent managers with stopwatches and notebooks on to the shop floor. They observed, timed and recorded every stage of every job, and determined the most efficient way that each one should be done. “Perhaps the most prominent single element in modern scientific management is the task idea,” Taylor wrote in his 1911 book The Principles of Scientific Management. “This task specifies not only what is to be done but how it is to be done and the exact time allowed for doing it.”

This is exemplified by the following excerpt articulating the experience of a Deliveroo courier, Kyaw, whose working conditions are, of course, similar to those of other delivery drivers and Amazon warehouse pickers (as has been covered widely in the press)…

Kyaw whips out his phone. The app expects him to respond to new orders within 30 seconds. The screen shows a map and address for the local Carluccio’s, an Italian restaurant chain. A swipe bar says “Accept delivery”. That is the only option. The algorithm will not tell him the delivery address until he has picked up the food from Carluccio’s. Deliveroo couriers are assigned fairly small geographic areas but Kyaw says sometimes the delivery address is way outside his allocated zone. You can only decline an order by phoning the driver support line. “They say, ‘No, you have to do it, you already collected the food.’ If you want to return the food to the restaurant they mark it as a driver refusal — that’s bad.”
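The one-way flow Kyaw describes can be caricatured in a few lines of code – a purely illustrative sketch (the class, field and address below are mine, not Deliveroo’s actual system) of a dispatch object whose only affordance is “accept”, and which withholds the destination until the food has been collected:

```python
# Hypothetical sketch of the asymmetric dispatch flow described above.
# Names and values are illustrative, not taken from any real platform.

class Order:
    def __init__(self, restaurant, delivery_address):
        self.restaurant = restaurant
        self._delivery_address = delivery_address  # hidden until pickup
        self.state = "offered"

    def accept(self):
        # The app presents a single swipe bar: "Accept delivery".
        assert self.state == "offered"
        self.state = "accepted"

    def collect(self):
        # Only after collection does the destination become visible.
        assert self.state == "accepted"
        self.state = "collected"
        return self._delivery_address

    @property
    def delivery_address(self):
        # Attempting to see the destination early is simply refused.
        if self.state != "collected":
            raise PermissionError("address revealed only after pickup")
        return self._delivery_address


order = Order("Carluccio's", "12 Hypothetical Road")
order.accept()                   # the only option offered
destination = order.collect()    # address becomes visible here
```

The point of the caricature is that “decline” is not an operation the interface exposes at all – refusal has to happen out of band (the phone line), which is exactly the asymmetry the excerpt describes.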

‘Ways of Being in a Digital Age’ scoping

I’ve only just caught up with this, but the ESRC’s “Ways of Being in a Digital Age” scoping review, for their new theme of the same name, has been awarded to the Liverpool Institute of Cultural Capital (a collaboration between Liverpool and Liverpool John Moores) in a partnership with 17 other institutions (a core of eight in the UK, apparently). They say:

The project will undertake a Delphi review of expert opinion and a systematic literature review and overall synthesis to identify gaps in current research.

The project will also run a programme of events to build and extend networks among the academic community, other stakeholders and potential funding partners.

There’s a website, so you can read more there…

“Do you hear voices? You do. So you are possessed” – @mikedotphillips talk @exetergeography

I’m really pleased to share that Prof. Mike Phillips (i-DAT, Plymouth) will be speaking next week as part of the Exeter Geography seminar series. Mike is a founder of the Institute of Digital Art and Technology and one of the founders of the undergraduate programme I studied, MediaLab Arts, which is now called Digital Media Design.

Details: Thursday 16th March, 12:30: Amory 417. All welcome!

Automation in financial services and the ongoing re-imagination of work

From “Technology outsmarts the human investor” – FT

“It just gets harder and harder and harder,” reflected one money manager this week. His is the predicament of other professionals — anything done by a person that follows a pattern and can be coded into a form that a computer understands will soon get squeezed. Technology also has the advantage identified in 1970: algorithms stay constantly alert.

It does not imply the complete death — or automation — of the investment manager. A professional can still undertake original research on a company or a security that provides insight. As more of the market becomes automated, originality becomes rarer and more valuable: an idiosyncratic investor should achieve higher returns by standing out from the robotic crowd.

Nor can algorithmic efficiency be wholly divorced from human intelligence, as the Oregon study showed — the point was that humans needed to set parameters for computers to follow. Many asset managers use analysts and researchers to build investment models that then trade securities automatically; others blend their active risk-taking with passive elements.

But these difficulties demonstrate how automation eats into professions, not by taking away all the jobs in one day but by unbundling them — dividing them between tasks that only humans can perform and those of which an algorithm is quite capable. Then the boundary relentlessly shifts.

The last paragraph is key – there seems to be a growing consensus that automation doesn’t simply ‘destroy’ jobs; it makes particular aspects of, or kinds of, role redundant, and the implementation and development of automated systems requires the remaining workers to fit around those systems in different ways. In many ways, then, automation is a company- or institution-specific organisational or administrative problem as well as a wider political-economic problem.

AI under feminist scrutiny

Searching for something else, I came across a 1999 book review by Tiziana Terranova of Alison Adam’s book Artificial Knowing in New Media and Society. As with much of Terranova’s incisive writing, there is a set of observations about the ontological politics of technology studies, in this case a really interesting reflection on how we might understand the issue of embodiment in relation to mediation:

…although embodiment is a foundational feminist category, it is far from being a resolved question. It is much easier to charge malestream science and technology with disembodiment than to come up with a model of embodiment which won’t stir the opposition of this or that feminist quarter. In many ways, the debate about cyberculture and technology keeps stumbling against the same block: if disembodiment is the preferential bias of cybernetic technologies, how do we answer that? How do we bring back the body in cyberspace without essentializing it? Interestingly… if [we see] in Haraway’s cyborg (directly inspired by the skewed, labyrinthine technologic of cybernetics) a machinic assemblage of organic and inorganics, of identity and difference, maybe [we] could have come up with a constructive, non-essentialist, embodied model of AI. Maybe it is a sad testimony of the overuse of the term ‘cyborg’ that [some are] all too ready to dismiss it as another ruse of an a-political postmodern feminism. It is a pity indeed since it seems to me that the charge of disembodiment more and more often levelled at digital culture (in all its manifestations) has turned into the final word on the latter rather than the beginning of a really different understanding of technology and subjectivity (or in this case, ‘intelligence’). Maybe we need a model of embodiment which is more about connections and partialities, more akin to cybernetics itself, to make AI work for feminists. (p. 142)
Echoborg

Former colleagues of mine at UWE are developing an interesting project called Echoborg, which you may have seen or heard about through the BBC’s Click programme.

An echoborg is a hybrid agent composed of the body of a real person and the “mind” (or, rather, the words) of a conversational agent; the words the echoborg speaks are determined by the conversational agent, transmitted to the person via a covert audio-relay apparatus, and articulated by the person through speech shadowing[1].

Recently, the project team have demoed the project as part of an AHRC-funded network on Automation Anxiety and have written this up on the project website. Here’s a snippet – it sounds really compelling (I’ve not seen it in action):

Four people were interviewed by the AI which increasingly displayed an interest in eliciting help to reprogram itself. Proceedings were visible on a projector screen and the ‘audience’ of applicants gradually began to discuss the situation of the Echoborg and how to change it. At a certain point their reflections passed a threshold and the group fired into collective action, experimenting with various methods to bring the situation to a head in some way. The lively inventiveness of the group and the individual interviewees went a long way to confirming the interactive potential of this format of the work. It also gave Rik and Phil much to work with in considering the further development of the AI/Chatbot, the restricted delivery of narrative by the human Echoborg and the staging. This event also trialled a secondary, higher level, Echoborg character as part of the slow process of unfolding the potential for this Echoborg recruitment event to be a disruptive and thought and emotion provoking experience for all players.


  1. Corti, Kevin and Gillespie, Alex (2015) Offscreen and in the chair next to you: conversational agents speaking through actual human bodies. Lecture Notes in Computer Science, 9238, pp. 405–417.

Tony Sampson on neuroculture

From a piece in The Conversation on Huxley, dystopia and how we might think about Facebook etc. in relation to Huxley’s “College of Emotional Engineering”, this concise evocation of Sampson’s understanding of ‘neuroculture’ is interesting:

The origins of neuroculture begin in early anatomical drawings and subsequent neuron doctrine in the late 1800s. This was the first time that the brain was understood as a discontinuous network of cells connected by what became known as synaptic gaps. Initially, scientists assumed these gaps were connected by electrical charges, but later revealed the existence of neurochemical transmissions. Brain researchers went on to discover more about brain functionality and subsequently started to intervene in underlying chemical processes.

Interpretation of Cajal’s anatomy of a Purkinje neuron, by Dorota Piekorz.

On one hand, these chemical interventions point to possible inroads to understanding some crucial issues, relating to mental health, for example. But on the other, they warn of the potential of a looming dystopian future. Not, as we may think, defined by the forceful invasive probing of the brain in Room 101, but via much more subtle intermediations.