HKW Speaking to Racial Conditions Today [video]

racist facial recognition

This video of a panel session at HKW entitled “Speaking to Racial Conditions Today” is well worth watching.

Follow this link (the video is not available for embedding here).

Inputs, discussions, Mar 15, 2018. With Zimitri Erasmus, Maya Indira Ganesh, Ruth Wilson Gilmore, David Theo Goldberg, Serhat Karakayali, Shahram Khosravi, Françoise Vergès
English original version

“The Rise of the Robot Reserve Army” – interesting working paper

Charlie Chaplin in Modern Times

Saw this via Twitter somehow…

The Rise of the Robot Reserve Army: Automation and the Future of Economic Development, Work, and Wages in Developing Countries – Working Paper 487

Lukas Schlogl and Andy Sumner

Employment generation is crucial to spreading the benefits of economic growth broadly and to reducing global poverty. And yet, emerging economies face a contemporary challenge to traditional pathways to employment generation: automation, digitalization, and labor-saving technologies. 1.8 billion jobs—or two-thirds of the current labor force of developing countries—are estimated to be susceptible to automation from today’s technological standpoint. Cumulative advances in industrial automation and labor-saving technologies could further exacerbate this trend. Or will they? In this paper we: (i) discuss the literature on automation; and in doing so (ii) discuss definitions and determinants of automation in the context of theories of economic development; (iii) assess the empirical estimates of employment-related impacts of automation; (iv) characterize the potential public policy responses to automation; and (v) highlight areas for further exploration in terms of employment and economic development strategies in developing countries. In an adaption of the Lewis model of economic development, the paper uses a simple framework in which the potential for automation creates “unlimited supplies of artificial labor” particularly in the agricultural and industrial sectors due to technological feasibility. This is likely to create a push force for labor to move into the service sector, leading to a bloating of service-sector employment and wage stagnation but not to mass unemployment, at least in the short-to-medium term.

Reblog> WIAS Workshop: Academic Labour, Digital Media and Capitalism 31/01/18

Glitched screenshot of Antony Sher in Malcolm Bradbury's The History Man

Saw this via Phoebe Moore:

WIAS Workshop: Academic Labour, Digital Media and Capitalism

by Westminster Institute for Advanced Studies

This workshop marks the publication of the special issue “Academic Labour, Digital Media and Capitalism” in tripleC: Communication, Capitalism & Critique. We will hear presentations by experts who have contributed to the issue: guest editor Thomas Allmer (University of Stirling), Karen Gregory (University of Edinburgh) and Jamie Woodcock (LSE).

Modern universities have always been embedded in capitalism in political, economic and cultural terms. In 1971, at the culmination of the Vietnam War, a young student pointed a question towards Noam Chomsky: “How can you, with your very courageous attitude towards the war in Vietnam, survive in an institution like MIT, which is known here as one of the great war contractors and intellectual makers of this war?” Chomsky had to admit that his workplace was a major organisation conducting war research, thereby strengthening the political contradictions and inequalities in capitalist societies.

Today, universities are positioning themselves as active agents of global capital, transforming urban spaces into venues for capital accumulation and competing for international student populations for profit. Steep tuition fees are paid for precarious futures. Increasingly, we see that the value of academic labour is measured in capitalist terms and therefore subject to new forms of control, surveillance and productivity measures. Situated in this economic and political context, the new special issue of tripleC (edited by Thomas Allmer and Ergin Bulut) is a collection of critical contributions that examine universities, academic labour, digital media and capitalism.

Workshop presentations:

Anger in Academic Twitter: Sharing, Caring, and Getting Mad Online
Karen Gregory, University of Edinburgh

Digital Labour in the University: Understanding the Transformations of Academic Work in the UK
Jamie Woodcock, LSE

Theorising and Analysing Academic Labour
Thomas Allmer, University of Stirling

The workshop will be chaired by WIAS Director and tripleC co-editor Christian Fuchs. WIAS invites everybody interested to attend this afternoon of talks and discussions tackling the question of academic labour in the age of digital capitalism. A coffee break is provided.

Thomas Allmer is Lecturer in Digital Media at the University of Stirling, Scotland, UK, and a member of the Unified Theory of Information Research Group, Austria. His publications include Towards a Critical Theory of Surveillance in Informational Capitalism (Peter Lang, 2012) and Critical Theory and Social Media: Between Emancipation and Commodification (Routledge, 2015). For more information, see Thomas’ website.

Karen Gregory is a Lecturer in Digital Sociology at the University of Edinburgh, a digital sociologist and ethnographer. She researches the relationship between work, technology, and emerging forms of labour, exploring the intersection of work and labor, social media use, and contemporary spirituality. She is the co-editor of the book Digital Sociologies (Policy Press, 2017).

Jamie Woodcock is a fellow at the LSE and author of Working The Phones. His current research focuses on digital labour, the sociology of work, the gig economy, resistance, and videogames. He has previously worked as a postdoc on a research project about videogames, as well as another on the crowdsourcing of citizen science. Jamie completed his PhD in sociology at Goldsmiths, University of London and has held positions at Goldsmiths, University of Leeds, University of Manchester, Queen Mary, NYU London, and Cass Business School.

Christian Fuchs is Professor at the University of Westminster. He is the Director of the Communication and Media Research Institute (CAMRI) and Westminster Institute for Advanced Studies (WIAS). His fields of expertise are critical digital & social media studies, Internet & society, political economy of media and communication, information society theory, social theory and critical theory. He co-edits the open-access journal tripleC: Communication, Capitalism & Critique with Marisol Sandoval.

CFP – Beyond measure, ephemera journal

Generative Artwork Forkbomb, by Alex McLean, glitched

This looks interesting…

Beyond measure

submission deadline 1 March 2018

CfP Beyond measure [PDF]

Issue editors: Nick Butler, Helen Delaney, Emilie Hesselbo and Sverre Spoelstra

Measurement is a central task of capitalist organization. From the days of the industrial factory, when labour first came to be measured in hours, through to the time-motion studies under Taylorist regimes, measurement has involved the optimization of surplus value extraction from labour. During the 20th century, these techniques of measurement were complemented by more intrusive forms of quantification such as the use of psychological testing in the human relations school.

The will to quantify continues today with balanced scorecards and activity-based costing (Power, 2004), the discourse of employability (Chertkovskaya, et al., 2013), the monitoring of work in the service economy (Dowling, 2007), and the performativity of economics (Callon, 1998). At the same time, others point to the impossibility of measuring affective work and immaterial labour (Hardt and Negri, 2000). More generally, ‘trust in numbers’ (Porter, 1995) – based on a longstanding infatuation with the ideal of objectivity (Stengers, 2000) – is becoming characteristic of a totally quantified society in which we keep track of our diet, fitness, sleeping habits, and menstrual cycles via digital tracking technologies (Charitsis, 2016).

Quantification also lies at the heart of knowledge production in the business school (Zyphur et al., 2016). Ever since the early influence of Paul Lazarsfeld (1993) in the post-war years, management science has been preoccupied with the measurement of ‘objects’, ranging from things that are straightforwardly measurable (e.g. the height of employees in leadership positions) to things that are difficult, if not impossible, to quantify (e.g. charisma, authenticity, ethics). Despite a half-century of criticism directed at the positivist tradition in the social sciences, management science still holds to the McNamara fallacy: ‘If it can’t be measured, it doesn’t exist’. The politics of measurement in management theory and practice – and its link to the logic of capitalist exploitation – therefore deserve sustained critical scrutiny.

For this ephemera special issue, we invite papers that explore the stakes of measuring organizations and their members – especially in contested zones of quantification. For instance, what happens when employees are measured not just in terms of productivity but also their health and well-being (Cederström and Spicer, 2015)? What happens when leaders are measured not just in terms of bottom-line performance but also their authenticity or spirituality (Ford and Harding, 2011)? Closer to home, what happens when academics are measured not just in terms of the quality of their scholarship but also their citation rate and H-index (Nkomo, 2009)?

But we are also interested in what is beyond measure – that is, the relation between organizing and the immeasurable. Here, religion and spirituality come into view. One may think of themes such as the call for a ‘higher purpose’ in work, the role of faith and spirituality in business, and the presence of organizational figures who defy measurement (idols, spirits, ghosts, monsters, etc.). Deleuze (1995: 181) famously said that the idea that organizations have a soul is ‘the most terrifying news in the world’. For us, this is no longer news but perhaps all the more terrifying for it.

Efforts to quantify aspects of our organizational lives give rise to new and complex ethical questions around work, identity and politics. We therefore invite submissions that may include, but are not limited to, the following themes:

  • Measuring organizations, management and leadership
  • The excessive, the limitless and the infinite
  • Big data and algorithmic management
  • The quantified self and digital measurement technologies
  • The turn to ‘objectivity’ in the social sciences
  • Zero and nothingness
  • The performativity of measures
  • Value theory and the immeasurability of labour
  • The reevaluation of values
  • Faith and spirituality in business
  • Time-motion studies and their contemporary equivalents
  • Death and ‘the great beyond’ in organization
  • The use of psychometric instruments in management theory and practice
  • Commensuration and incommensurability in organizational theory and practice
  • The politics of performance audits
  • Measuring the immeasurable

Deadline and further information

The deadline for submissions is 1st March 2018. All submissions should be sent to one of the special issue editors: Nick Butler, Helen Delaney, Emilie Hesselbo or Sverre Spoelstra. We welcome contributions in a variety of formats including articles, notes, interviews, book reviews, photo essays and other experimental modes of representation. The submissions will undergo a double-blind review process. All submissions should follow ephemera’s submission guidelines (see the ‘Abc of formatting’ guide). For further information, please contact one of the special issue editors.

Dallas Smythe and ‘the consciousness industry’, or why ‘attention’ isn’t ‘labour’

Facial tracking system, showing gaze direction, emotion scores and demographic profiling

‘I’ll consume my consumers, [with] no sense of humour’ — Grace Jones, Corporate Cannibal

I keep failing to write up a talk I did over 18 months ago in Bristol for the Politics and Economics of Attention seminar convened by Jessica Pykett. My main argument was that the way a thing called ‘the attention economy’ gets figured by a lot of folks, including Christian Marazzi and, to an extent, Bernard Stiegler, is that ‘attention’ is ‘work’ and can therefore be factored through the marxian labour theory of value. Indeed, Jonathan Beller flips this and suggests that, rather, work is a form of attention and therefore it should be the ‘attention theory of value’.

There’s another version of this story, which is also based in marxian theory. This is perhaps best articulated (in my limited reading) by Dallas Smythe regarding advertising and the broadcast media but can also be accessed from a rather different angle via David Harvey’s work on wine. In this version of the economisation of attention, attention is not ‘work’, instead attention is a commodity. The advertiser rents an audience from the broadcaster, which is more-or-less demographically specific due to profiling. Smythe refers to this as a kind of ‘consciousness industry’ (presumably in a similar vein to the ‘culture industry’). This is more precisely the case with online advertising where, through all sorts of techniques, the profiling can appear to be a lot more specific. As Smythe has it in a 1977 paper for the Canadian Journal of Political and Social Theory:

What do advertisers buy with their advertising expenditures? … they are not paying for advertising for nothing… I suggest that what they buy are the services of audiences with predictable specifications who will pay attention in predictable numbers … at particular times to a particular means of communication… As collectives these audiences are commodities. … Such markets establish prices in the familiar mode of monopoly capitalism.

A sub-industry sector of the consciousness industry checks to determine [that the audience pays attention]. … The behaviour of the members of the audience product under the impact of the advertising … content is the object of market research by a large number of independent market research agencies as well as by… the advertising corporation and in media enterprises.

The most important resource employed in producing the audience commodity are [sic.] the individuals and families in the nations which permit advertising.

So, while it may be useful for some to consider that the ways in which we are addressed as an audience, and the kinds of ways we ‘pay attention’, can be figured as ‘labour’ that has a value and therefore exploits apparent leisure time as, instead, a continuation of work by other means, this does not perhaps tell the whole story. The advertising businesses are, certainly, using all sorts of ways to profile us and in some senses individualise a potential target for an advert, but this is in order to serve up aggregates of profiles to advertisers as a commodity from which they extract rent. I should note that Smythe does not see it that way but instead analyses the whole system of monopoly capitalism in which advertising operates, and in which there are what he expertly articulates as:

…the contradictions produced within the audience commodity [which] should be understood more clearly … as between audience members serving as producers’ goods in the marketing of mass produced consumer goods and their work in producing and reproducing labour power.

Maybe I should write this up… it’s quite an interesting argument and, as I said in my talk some time ago, I think it highlights the issues with contradiction in the ‘attention commodity’ between its apparent uniqueness and its reproducibility as a ‘type’. There are a few avenues of critique open as a result. One is to think the categorisation techniques as a pharmakon, perhaps to interrogate the processes of calculation/categorisation as rethink-able; another might be to look at the ‘derivative’ nature of the categorisations and how this produces a form of financialisation (e.g. following Louise Amoore); and another still might be to explore alternative valuations – whereby people take an active role in ‘selling’ themselves into this commodity market (e.g. following Sarah Gold).

Tragedy mistaken for management theory

statue of a man holding his head with his right hand

From the Verso blog, a piece by Sarah Brouillette on Kazuo Ishiguro as Nobel laureate and the ‘literary industry’.

The Remains of the Day is one of Jeff Bezos’s favourite books. He claims it is the foundation of his “regret-minimization framework” and helped him to find the courage to start Amazon. If he has noticed that the novel is about how class subordination ruins people’s lives, he hasn’t said so. The heart of the novel is the protagonist’s — and before him, his father’s — dependence on waged work. The story traces the process by which we begin to lose the ability to separate ourselves from our professional roles. It was published in 1989, and its concern with the subsumption of life by work was clearly occasioned in part by the circulation of images of the 1980s corporate crunch, with all those people working so much they forgot how to “really live.” It also denounces the British imperial project’s dependence on classed relationships: how much of the empire’s daily operation depended on people feeling that they didn’t have a right to object to their employers’ imperatives, or better, couldn’t fathom how to find another source of wealth that would allow them to say no?

Bezos wants new Amazon employees to do what Stevens never does: live life to the fullest, seize the day. He means that they should do all this at work, of course. Or, more accurately, he can assume there is no distinction for those he hires: work is life, life is work. Real leisure will just make them better employees, as will the feeling that they are pursuing their passions in all things. Bezos is glad to think that what Ishiguro’s novel fears has come to pass: the person and person-performing-at-work are now one. His use of the novel as a corporate management tool proves how easily a “follow your heart” mantra can be recuperated. Bezos isn’t reading Ishiguro right, of course. The novel concludes with a lament about precisely such recuperation. Stevens has been reading too much into Miss Kenton’s (now Mrs. Benn’s) letter; she won’t come back to Darlington Hall with him, and the love story is over. So, he plans to return to work, the only difference being that he will now practice “bantering,” which his new American employer would enjoy. This bantering for him symbolizes de-sublimation, freedom from constraint — a certain “human warmth,” he calls it, which he now admits he lacks. It is precisely by operationalizing the injunction to “enjoy life” that he will be able to keep working. It’s a tragic ending.

Getting in ‘the zone’: Luxury & Paranoia, Access & Exclusion – Capital and Public Space

Uber surge pricing in LA

Another interesting ‘long form’ essay on the Institute of Network Cultures site. This piece by Anastasia Kubrak and Sander Manse directly addresses some contemporary themes in geographyland – access, ‘digital’-ness, exclusion, ‘rights to the city’, technology & urbanism and ‘verticality’. The piece turns around an exploration of the idea of a ‘zone’ – ‘urban zoning’, ‘special economic zones’, ‘export processing zones’, ‘free economic/enterprise zones’, ‘no-go zones’. Some of this, of course, covers familiar ground for geographers but it’s interesting to see the argument play out. It seems to resonate, for example, with Matt Wilson’s book New Lines.

Here’s some blockquoted bits (all links are in the original).

Luxury & Paranoia, Access & Exclusion On Capital and Public Space

We get into an Uber car, and the driver passes by the Kremlin walls, guided by GPS. At the end of the ride, the bill turns out to be three times as expensive as usual. What is the matter? We check the route, and the screen shows that we travelled to an airport outside of Moscow. Impossible. We look again: the moment we approached the Kremlin, our location automatically jumped to Vnukovo. As we learned later, this was caused by a GPS fence set up to confuse and disorient aerial sensors, preventing unwanted drone flyovers.

How can we benefit as citizens from the increase in sensing technologies, remote data-crunching algorithms, leaching geolocation trackers and parasite mapping interfaces? Can the imposed verticality of platform capitalism by some means enrich the surface of the city, and not just exploit it? Maybe our cities deserve a truly augmented reality – reality in which value generated within urban space actually benefits its inhabitants, and is therefore ‘augmented’ in the sense of increased or made greater. Is it possible to consider the extension of zoning not only as an issue, but also as a solution, a way to create room for fairer, more social alternatives? Can we imagine the sprawling of augmented zones today, still of accidental nature, being utilized or artificially designed for purposes other than serving capital?

Gated urban enclaves also proliferate within our ‘normal’ cities, perforating through the existing social fabric. Privatization of urban landscape affects our spatial rights, such as simply the right of passage: luxury stores and guarded residential areas already deny access to the poor and marginalized. But how do these acts of exclusion happen in cities dominated by the logic of platform capitalism? What happens when more tools become available to scan, analyze and reject citizens on the basis of their citizenship or credit score? Accurate user profiles come in handy when security is automated in urban space: surveillance induced by smart technologies, from electronic checkpoints to geofencing, can amplify more exclusion.

This tendency becomes clearly visible with Facebook being able to allow for indirect urban discrimination through targeted advertising. This is triggered by Facebook’s ability to exclude entire social groups from seeing certain ads based on their user profile, so that upscale housing-related ads might be hidden from them, making it harder for them to leave poorer neighborhoods. Meanwhile Uber is charging customers based on the prediction of their wealth, varying prices for rides between richer and poorer areas. This speculation on value enabled by the aggregation of massive amounts of data crystallizes new forms of information inequality in which platforms observe users through a one-way mirror.

If platform economies take the city as a hostage, governmental bodies of the city can seek how to counter privatization on material grounds. The notorious Kremlin’s GPS spoofing fence sends false coordinates to any navigational app within the city center, thereby also disrupting the operation of Uber and Google Maps. Such gaps on the map, blank spaces are usually precoded in spatial software by platforms, and can expel certain technologies from a geographical site, leaving no room for negotiation. Following the example of Free Economic Zones, democratic bodies could gain control over the city again by artificially constructing such spaces of exception. Imagine rigorous cases of hard-line zoning such as geofenced Uber-free Zones, concealed neighborhoods on Airbnb, areas secured from data-mining or user-profile-extraction.

Vertical zoning can alter the very way in which capital manifests itself. The Bristol Pound is an example of city-scale local currency, created specifically to keep added value in circulation within one city. It is accepted by an impressive number of local businesses and for paying monthly wages and taxes. Though the Bristol Pound still circulates in paper, today we can witness a global sprawl of blockchain-based community currencies, landing within big cities or even limited to neighborhoods. Remarkably, Colu Local Digital Wallet can be used in Liverpool, the East London area, Tel Aviv and Haifa – areas with a booming tech landscape or strong sense of community.

Metrics Noir – measurement and stupidity

The unfit-bit parody

This review essay by Christopher Newfield and Heather Steffen makes for an entertaining and incisive read. The review covers three books: O’Neil’s Weapons of Math Destruction, Espeland & Sauder’s Engines of Anxiety and Merry’s The Seductions of Quantification, with a focus on how these come to bear on the turbocharged audit culture of academia. The diagnosis, drawn through the three books, of the shortcomings of what might be called the latest ‘quantitative revolution’ of ‘data science’ is pushed further into a reflexive diagnosis of a genre of writing about such things as Metrics Noir. A lovely term – I hope it gains traction. I definitely think there’s folk writing ‘Metrics Noir’ in geographyland.

I’ve blockquoted a nicely chewy bit below but I recommend reading the whole thing.

All of these scholars are well aware of the value of numbers. Numbers allow for abstract picturing of groups, societies, and cities. They regularize anomalies and exceptions, and allow us access to invisible worlds, social and physical alike. Numbers support distributed cognition and collective intelligence. Both are desperately needed in a world damaged by human stupidity. But quantification in its many forms now operates within a complex metrics culture — a contradictory and contested battleground, as these three books explain. Together, they offer an understory that we could call metrics noir.

In the first place, numerical measurement can too readily take on an unquestioned objectivity. It’s an easy mistake to make, because scientists and other experts have a longstanding reputation for unbiased handling of facts, insured by methodological procedures not accessible to the layperson. This objectivity bias is hardened by the production of indicators via expert negotiations hidden from public view, which means that metrics aren’t seen as emerging from the intellectual compromises and culturally conditioned choices that go into their making. The public can remain blissfully ignorant of their baked-in assumptions — say, the idea that the poor are more likely than the middle class to commit crimes. Criticism is easily dismissed as resting on shaky subjective grounds.

Second, metrics culture reinforces the perceived inadequacy of qualitative expertise, of the “liberal professions” that rely on interpretive skills grounded in social, philosophical, and historical learning. If a dean can make promotion or funding decisions by looking at a dashboard of indicators that compare her faculty members to those across the campus and the country (grant dollar totals, prizes, publication rates, citation counts), then he or she need not weigh complex quasi-imponderables and judge the strange mixture of ingredients that make up careers and disciplines. Twenty years ago, Michael Power noted a subtle but determinate feature of the “audit society”: audit slowly weakens judgment, and management becomes a matter of applying formulae whose opacity supplies a false objectivity.

With indicators ascendant over judgment itself, and tied to complicated, obscure, or proprietary procedures, metrics can pacify the interpretive powers of the public and professionals alike. The subjects of assessment rarely interact with quantitative procedures and never demand their abolition. This is a third tendency of metrics culture. Merry discusses “data inertia,” and all these authors note the near-impossibility of putting a finished indicator back in the oven. Policymakers have no stomach for revising indicators beyond the routine tweaking of weightings one sees in U.S. News and similar rankings. Very few scholars analyze the politics of such interventions or detail the losses they create for institutions, scholars, or students. Understanding the history of indicator formation is a minority knowledge project whose negative implications can be brushed aside even when their validity is acknowledged. Although reformers demand that metrics be used only in context, in conjunction with other information, and in collaboration with those being evaluated, metrics weaken the validity of exactly the forms of knowledge that are meant to check them. We thus encounter a Foucauldian nightmare, in which critiques of the ranking system only serve to make it stronger.

Fourth, indicators help create the inequality they measure, while assuring their consumers that the inequality is a natural, preexisting fact. They do this by ignoring distinctive qualities that cannot be quantified and compared. For example, not only is a legal clinic that focuses on the problems faced by recovering opioid addicts not likely to be esteemed or even seen in standard rankings, but the training for such work will be devalued if it is not already a regular component of the top law programs — its very uniqueness will make it incomparable across programs. To put this two-stage process somewhat formally: the set of relevant qualities is narrowed to a common denominator associated with the top schools, and the quantified hierarchy that results then overwhelms the underlying particularities of each school. The gap between the indicators and the actual qualities of a given school is ignored in favor of the gaps among the various institutions. The dominant quality of each school becomes its place in the hierarchy.

The wider effect of all this is particularly damaging in education: ranking renders a large share of any sector — community colleges, chemistry doctoral programs, business schools — inferior to the top programs, and therefore implicitly defective. The deficiencies that rankings always create then justify unequal respect and, more importantly, unequal funding. Rankings undermine the general provision across institutions that created the famous quality of the US public university system, encouraging instead more investment at the top. The general effect is that the rich get richer, which is precisely what has happened in American higher education in the three decades since the U.S. News rankings first appeared. The rise of rankings didn’t cause the breakdown in public funding, but it has naturalized the inequality that results.

The good news, as these books show, is that numbers don’t need to be used as we use them now. But for real change to take place, the wider society has to become involved in the conversation. These books do an excellent job of helping make that happen.

Remaking the University: Metrics Noir – Christopher Newfield and Heather Steffen

65% of future non-existent jobs (a figure which doesn’t exist), 70% of jobs automated (just not yet)

Twiki the robot from Buck Rogers

The future of work is the work of imagination. We are, repeatedly, and have been for a while, bombarded with (pseudo-)facts about what the future of work will bring. These are, of course, part of well-known, long-standing, narratives about ‘innovation’, ‘growth’, technological advance and, of course, ‘automation’.

Martin shared a good post on the site Long View on Education about some persistent kinds of story around the nature of work our schools are preparing children for, or not. Here’s an abridged, and selective, version of the story…

“The top 10 in demand jobs in 2010 did not exist in 2004. We are currently preparing students for jobs that don’t exist yet, using technologies that haven’t been invented, in order to solve problems we don’t even know are problems yet.”

Shift Happens videos (2007).

People repeat the claim again and again, but in slightly different forms. Sometimes they remove the dates and change the numbers; 65% is now in fashion. Respected academics who study education, such as Linda Darling-Hammond (1:30), have picked up and continue to repeat a mutated form of the factoid, as have the World Economic Forum and the OECD.


“By one popular estimate 65% of children entering primary schools today will ultimately work in new job types and functions that currently don’t yet exist. Technological trends such as the Fourth Industrial Revolution will create many new cross-functional roles for which employees will need both technical and social and analytical skills. Most existing education systems at all levels provide highly siloed training and continue a number of 20th century practices that are hindering progress on today’s talent and labour market issues. … Businesses should work closely with governments, education providers and others to imagine what a true 21st century curriculum might look like.”

The WEF Future of Jobs report


Cathy Davidson (May 2017) explains how she came to the factoid:

“I first read this figure in futurist Jim Carroll’s book, Ready, Set, Done (2007). I tracked his citation down to an Australian website where the “65%” figure was quoted with some visuals and categories of new jobs that hadn’t existed before. “Genetic counseling” was the one I cited in the book.

After Now You See It appeared, that 65% figure kept being quoted so I attempted to contact the authors of the study to be able to learn more about their findings but with no luck. By then, the site was down and even the Innovation Council of Australia had been closed by a new government.”

The BBC radio programme More or Less picks up the story from here, demonstrating how it most likely has no factual basis derived from any identifiable source (there never was an Innovation Council of Australia, for example).

Davidson sort of defends this through dissimulation, in an interview for More or Less, by saying she believes that 100% of jobs have been affected by ‘the digital era we now live in’.

As Audrey Watters has highlighted, statistics like this and the appeal for a ‘disruption’ of education by the tech sector to teach ‘the skills of the future’ etc. can be reasonably interpreted as a marketing smoke screen – ‘the best way to predict the future is to issue a press release’.

An allied claim, which falls within the same oeuvre as the “65%” of not-existing jobs (or should that be non-existent?), is the set of various statistics for the automation of job roles, with varying timescales. A canonical example, from another “thought leader” (excuse me while I just puke in this bin), is from WIRED maven Kevin Kelly:

There are an awful lot of variations on this theme, focusing on particular countries, especially the USA, or particular sectors, or calculating likelihoods for particular kinds of jobs and so on and so on. This is, of course, big business in and of itself – firms like Deloitte, McKinsey and others sell this bullshit to anyone willing to pay.

What should we make of all this..?

There are a few interpretations we can make of this genre of ‘foresight’. Alongside several other academics I have written about particular ways of communicating possible futures, making them malleable-yet-certain in some way, as a ‘politics of anticipation’. This politics has various implications, some banal, some perhaps more troubling.

First, you might say it’s a perfectly understandable tendency, of pretty much all of us, to try and lend some certainty to the future. So, in our adolescent know-it-all way, we are all wont to lend our speculations some authority, and statistics, however spurious, are a key tool for such a task.

Second, and perhaps allied to the first, is the sense in which methods for speculation become formalised and normative – they’re integrated into various parts of institutional life. So, it becomes normal to talk about speculative (spurious?!) statistics about a future of work, education etc. in the same tone, with the same seriousness, and the same confidence as statistics about a firm’s current inventory, or last year’s GDP trends. Of course, all statistics, all facts, have conditions and degrees of error, and so, if the calculation of trends for past events is open to revision, the rationale might run, future trends are perhaps just as reliable (there’s all sorts of critique available here but I’m not going to delve into that). In this way, consultancies can package up ‘foresight’ as a product/service that can be sold to others. “Futures” are, of course, readily commodified.

Third, an ideological critique might be that it is precisely these forms of storytelling about the redundancy or insufficiency of the labour force that allow those with large concentrations of capital to accrue more by demeaning the nature of work itself and privatising profits upwards. If we are repeatedly told that the work that generates the goods and services that move through our economy is worth less – because it can be automated, because it is ‘out-dated’, because there are other kinds of superior ‘skilled’ work – then it perhaps becomes easier to suppress wage growth, to chip away at labour rights and render work more precarious. Gloomy I know. However, some data (oh no! statistics!) Doxtdator has in his blogpost (and the kinds of data David Harvey uses in his books, such as The Enigma of Capital) could be seen as backing up such arguments. For example (source):

These sorts of graphs tell a different story about yesterday’s future – which didn’t lead to families reaping the rewards of automation and increased productivity by profiting from a share in increased leisure time (following JM Keynes), but rather delivered the profits of these trends to the “1%” (or even the “0.1%”) by massively increasing top executive salaries while keeping wider wage growth comparatively low, if not stagnant. I’m not an economist, so I don’t want to push my luck arguing this point, but there are folk out there who argue such points pretty convincingly, such as David Harvey (though see also economic critiques of the ‘zombie’ automation type of argument).

Ultimately, I am, personally, less interested in the numbers themselves – who knows if 65% of today’s school children will be doing new jobs that represent only 70% of the total work we currently undertake?!  I’m more interested in the kinds of (speculative) truth-making or arguing practices they illustrate. The forms of speculative discourse/practice/norms about technology and work we’re all involved in reproducing. It seems to me that if we can’t fathom those things, we’re less able to care for those of us materially affected by what such speculation does, because, of course, sometimes speculation is self-fulfilling.

To try to advance some discussions about the kinds of technological and economic future that get proposed, gain momentum and become something like “truths”, I’ve been puzzling over the various ways we might see the creation of these economic statistics, the narrating of technological ‘innovation’ in particular ways, and the kinds of stories ‘critical’ academics then tell in analysing these things, as together making up some form of collective imagination. I started out with ‘algorithms’ but I think that’s merely one aspect of a wider set of discourses about automation that I increasingly feel need to be addressed. My placeholder term for the moment is an “automative imaginary” ~ a collective set of discourses and practices by which particular versions of automation, in the present and the future, are brought into being.