“Reconstructing the economy by rediscovering the value of knowledge” an interview with Bernard Stiegler [translation]

Bernard Stiegler being interviewed

In this short interview published in Libération in March 2017, Bernard Stiegler reprises his argument for a contributory income, as is being trialled in the Plaine-Commune experiment. This is more or less the same argument and set of ideas presented in previous interviews I’ve translated, such as the Humanité interview, in which Stiegler attempts to provide an answer (albeit a rather sweeping one) to an incredibly gloomy prognosis: unemployment through full automation, penury for the majority, and with it an ever-increasing loss of knowledge. Stiegler’s solution is the economic recognition of the value of work that is not currently captured economically. The device to achieve this is the contributory income, which, unlike a Universal Basic Income, seems to have a (rather vague) set of conditions attached.

The main idea here is Stiegler’s interesting distinction between what you get paid for and what you *do* – your employment [emploi] and your work [travail] – which gets to the heart of a whole host of debates (some quite long-running) around what we do that is or is not ‘work’ and how/whether it gets economically valued. It also builds on Stiegler’s longstanding discussion of knowledge and care as a therapeutic relation [with each other, society, technology and so on; see, for example, the Disbelief and Discredit series]. I think this may be useful for some of the critical attention to the gig economy and the ways in which people are responding to the current bout of automation anxiety/‘rise of the robots’ hand-wringing.

What is interesting to me about this interview is how little it moves on from Stiegler’s past articulations of this argument. There are some sweeping generalisations about the extent and impact of automation based on questionable and contested sources (which I think does a disservice to Stiegler’s intellectual project). It is curious that the contributory income is still talked about in such vague terms: it is supposedly an active experiment in Plaine-Commune, so surely there’s a little more detail that could be elaborated? It would be interesting to see some more detailed discussion of this [sorry if I’ve missed it somewhere!].

The principal basis of the contributory income seems to be a fairly institutional (and, as far as I can tell, peculiarly French) state programme for supporting workers in the creative industries. As in previous written work and interviews, Stiegler uses the idea of the “intermittents” or “intermittents du spectacle” to signify work that is subsidised through some form of state-administered allowance, such as unemployment benefit. In France, people working in the performing arts are entitled to claim social security benefits designed for people without regular employment [as per the definition provided by Larousse online].

So, here’s a fairly quick translation of the interview. As usual, clarifications or queries about terms are in [square brackets]. I welcome comments or corrections!

Bernard Stiegler: “Reconstructing the economy by rediscovering the value of knowledge”

The philosopher Bernard Stiegler draws a distinction between employment, which today is largely proletarianised, and work, which transforms the world through knowledge and thus cultivates wealth.

Philosopher, Director of the Institute of Research and Innovation (IRI) of the Georges-Pompidou centre and founder of the Ars Industrialis association, Bernard Stiegler has for several years concerned himself with the effects of automation and robotisation. He has notably published The Automatic Society 1: The Future of Work (Fayard, 2015) and In Disruption: How not to go mad? (Les liens qui libèrent, 2016). Today he is deeply engaged with a project which brings together nine towns in the Plaine-Commune territory, in Seine-Saint-Denis, to develop and experiment with a “contributive income” which would fund activities that go unrecognised but are useful to the community.

Amazon intends to gain a foothold in the groceries sector with cashier-less convenience stores, like in Seattle. Is automation destined to destroy jobs?

For 47% of jobs in the US, the response from the Massachusetts Institute of Technology is potentially “yes”. The remaining 53% cannot be automated because they are professional roles. They are not proletarianised: they are valued for their knowledge, which gives a capacity for initiative. What makes a profession is what is not reducible to computation, or rather, to the processing of data by algorithms. Not all jobs can therefore be automated. But this does not mean they are entirely removed from the processes of automation: everyone will be integrated into automatisms.

For you “employment” and “work” are not the same thing…

For two hundred and fifty years the model of industrial employment has been proletarianised employment, which has continued to grow. At first it was only manual workers; today it has spread well into the tertiary sector and affects nearly every task. More and more functions of supervision and even of analysis have been proletarianised: through ‘big data’, for example, doctors have been proletarianised – which means they are performing less and less of their profession. Proletarianised employment is sublimated by a closed and immovable system. Work, on the contrary, transforms the world. So there is employment that does not produce work in this sense and, conversely, there is work outside of employment.

The big question for tomorrow is that of the link between automatisms, work outside of employment, and the new types of employment that enable the valorisation of work. The aim of contributive income is precisely to enable the reorganisation of the wealth produced by work in all its forms and to cultivate, using the time freed up by automation, the forms of knowledge that the economy will increasingly demand in the Anthropocene, this era in which the human has become a major geological factor. It is a case of surpassing the limits through an economy founded on the deproletarianisation of employment.

Optimistic accounts would have automation freeing individuals by eliminating arduous, alienating jobs. But today, it is mostly perceived as a threat…

It is a threat as long as we do not put in place the macroeconomic evolution required by deproletarianisation. The macroeconomy in which we have lived since the conservative revolution is a surreptitious, hypocritical and contradictory transformation of the Keynesian macroeconomics established in 1933. Contradictory because employment remains the central function of redistribution, whereas its reduction, and that of wages, drives the system as a whole towards insolvency.

Employment can no longer be the model for the redistribution of value. And this can no longer be limited to the relationship between use value and exchange value. Use value has become a value of usury, that is to say a disposable value that “trashes” the world – goods become waste, as do people, societies and cultures. The old American Way of Life no longer works, which is why Trump was elected… However, if employment is destroyed, it is necessary to redistribute not only purchasing power but also purchasing knowledge, by reorganising the alternation of employment and work.

In what way?

We must rebuild the economy by restoring value to knowledge. Proletarianised employment will disappear with full automation. We must create new kinds of employment, what we call irregular employment [emplois intermittents]. They will constitute intermittent periods in which instances of work that are not instances of employment are economically valued. The work itself will be remunerated by a contributory income allocated under conditions of intermittent employment, as is already the case in the creative industries.

In Intermittents et précaires [something like ~ Intermittent and precarious workers], Antonella Corsani and Maurizio Lazzarato show that irregular workers in the creative industries [intermittents du spectacle] work mostly when they are not employed: employment is foremost a moment of implementing the knowledge that they cultivate outside of employment. We must encourage the winning back of knowledge in every area [of work]. This implies, on the one hand, evolving the relationship between individuals and education systems, as well as professional associations, lifelong learning and so on.

And, on the other hand, to precisely distinguish between information and knowledge. Automated systems have transformed knowledge into information. But this is only dead knowledge. To overcome the Anthropocene we must resuscitate knowledge by intelligently practising information – through alternating periods of work and employment. Only in this way can we re-stabilise the economy, where the problems induced by climate change, for example, are only just beginning, and where vital constraint [contrainte vitale] is going to be exercised more and more as a criterion of value. It is a long-term objective… But today, what should we do? Nobody can pull a rabbit out of their hat to solve the problem. We must therefore experiment. This is what we are doing with the Plaine-Commune project in Seine-Saint-Denis, which in particular aims to gradually introduce a contributory income according to the model of irregular work [l’intermittence]. With the support of the Fondation de France, we are working with residents in partnership with the Établissement public territorial [something like a unitary authority area?], Orange, Dassault Systèmes and the Maison des sciences de l’homme Paris-Nord [a local Higher Education Institution] – and through them the universities Paris 8 and Paris 13 – in dialogue with small and medium enterprises, associations, cooperatives and mutual associations [les acteurs de l’économie sociale et solidaire], artists and cultural institutions. It’s a ten-year project.

How can we engage the ethics of data science in practice? Barocas & boyd

Stereotypical white male figure of a data scientist

From “Engaging the ethics of data science in practice” published in Communications of the ACM, available here.

The critical writing on data science has taken the paradoxical position of insisting that normative issues pervade all work with data while leaving unaddressed the issue of data scientists’ ethical agency. Critics need to consider how data scientists learn to think about and handle these trade-offs, while practicing data scientists need to be more forthcoming about all of the small choices that shape their decisions and systems.

Technical actors are often far more sophisticated than critics at understanding the limits of their analysis. In many ways, the work of data scientists is a qualitative practice: they are called upon to parse an amorphous problem, wrangle a messy collection of data, and make it amenable to systematic analysis. To do this work well, they must constantly struggle to understand the contours and the limitations of both the data and their analysis. Practitioners want their analysis to be accurate and they are deeply troubled by the limits of tests of validity, the problems with reproducibility, and the shortcomings of their methods.

Many data scientists are also deeply disturbed by those who are coming into the field without rigorous training and those who are playing into the hype by promising analyses that are not technically or socially responsible. In this way, they should serve as allies with critics. Both see a need for nuances within the field. Unfortunately, universalizing critiques may undermine critics’ opportunities to work with data scientists to address meaningfully some of the most urgent problems.

Great opportunity > Internship with the Social Media Collective (Microsoft)

Twitter

Via Nancy Baym:

Call for applications! 2018 summer internship, MSR Social Media Collective

APPLICATION DEADLINE: JANUARY 19, 2018

Microsoft Research New England (MSRNE) is looking for advanced PhD students to join the Social Media Collective (SMC) for its 12-week Internship program. The Social Media Collective (in New England, we are Nancy Baym, Tarleton Gillespie, and Mary Gray, with current postdocs Dan Greene and Dylan Mulvin) bring together empirical and critical perspectives to understand the political and cultural dynamics that underpin social media technologies. Learn more about us here.

MSRNE internships are 12-week paid stays in our lab in Cambridge, Massachusetts. During their stay, SMC interns are expected to devise and execute their own research project, distinct from the focus of their dissertation (see the project requirements below). The expected outcome is a draft of a publishable scholarly paper for an academic journal or conference of the intern’s choosing. Our goal is to help the intern advance their own career; interns are strongly encouraged to work towards a creative outcome that will help them on the academic job market.

The ideal candidate may be trained in any number of disciplines (including anthropology, communication, information studies, media studies, sociology, science and technology studies, or a related field), but should have a strong social scientific or humanistic methodological, analytical, and theoretical foundation, be interested in questions related to media or communication technologies and society or culture, and be interested in working in a highly interdisciplinary environment that includes computer scientists, mathematicians, and economists.

Primary mentors for this year will be Nancy Baym and Tarleton Gillespie, with additional guidance offered by other members of the SMC. We are looking for applicants working in one or more of the following areas:

  1. Personal relationships and digital media
  2. Audiences and the shifting landscapes of producer/consumer relations
  3. Affective, immaterial, and other frameworks for understanding digital labor
  4. How platforms, through their design and policies, shape public discourse
  5. The politics of algorithms, metrics, and big data for a computational culture
  6. The interactional dynamics, cultural understanding, or public impact of AI chatbots or intelligent agents

Interns are also expected to give short presentations on their project, contribute to the SMC blog, attend the weekly lab colloquia, and contribute to the life of the community through weekly lunches with fellow PhD interns and the broader lab community. There are also natural opportunities for collaboration with SMC researchers and visitors, and with others currently working at MSRNE, including computer scientists, economists, and mathematicians. PhD interns are expected to be on-site for the duration of their internship.

Applicants must have advanced to candidacy in their PhD program by the time they start their internship. (Unfortunately, there are no opportunities for Master’s students or early PhD students at this time). Applicants from historically marginalized communities, underrepresented in higher education, and students from universities outside of the United States are encouraged to apply.

PEOPLE AT MSRNE SOCIAL MEDIA COLLECTIVE

The Social Media Collective is comprised of full-time researchers, postdocs, visiting faculty, Ph.D. interns, and research assistants. Current projects in New England include:

  • How does the use of social media affect relationships between artists and audiences in creative industries, and what does that tell us about the future of work? (Nancy Baym)
  • How are social media platforms, through their algorithmic design and user policies, taking up the role of custodians of public discourse? (Tarleton Gillespie)
  • What are the cultural, political, and economic implications of crowdsourcing as a new form of semi-automated, globally-distributed digital labor? (Mary L. Gray)
  • How do public institutions like schools and libraries prepare workers for the information economy, and how are they changed in the process? (Dan Greene)
  • How are media standards made, and what do their histories tell us about the kinds of things we can represent? (Dylan Mulvin)

SMC PhD interns may also have the opportunity to connect with our sister Social Media Collective members in New York City. Related projects in New York City include:

  • What are the politics, ethics, and policy implications of artificial intelligence and data science? (Kate Crawford, MSR-NYC)
  • What are the social and cultural issues arising from data-centric technological development? (danah boyd, Data & Society Research Institute)

For more information about the Social Media Collective, and a list of past interns, visit the About page of our blog. For a complete list of all permanent researchers and current postdocs based at the New England lab, see: http://research.microsoft.com/en-us/labs/newengland/people/bios.aspx

Read more.

The Economist ‘Babbage’ podcast: “Deus Ex Machina”

Glitched still from the film "Her"

An interesting general (non-academic, non-technical) discussion about what “AI” is, what it means culturally and how it is variously thought about. Interesting to reflect on the way ideas about computation, “algorithms”, “intelligence” and so on play out… something that maybe isn’t discussed enough… I like the way the discussion turns around “thinking” and the suggestion of the word “reckoning”. Worth a listen…

AI Now report

My Cayla Doll

The AI Now Institute have published their second annual report with plenty of interesting things in it. I won’t try and summarise it or offer any analysis (yet). It’s worth a read:

The AI Now Institute, an interdisciplinary research center based at New York University, announced today the publication of its second annual research report. In advance of AI Now’s official launch in November, the 2017 report surveys the current use of AI across core domains, along with the challenges that the rapid introduction of these technologies are presenting. It also provides a set of ten core recommendations to guide future research and accountability mechanisms. The report focuses on key impact areas, including labor and automation, bias and inclusion, rights and liberties, and ethics and governance.

“The field of artificial intelligence is developing rapidly, and promises to help address some of the biggest challenges we face as a society,” said Kate Crawford, cofounder of AI Now and one of the lead authors of the report. “But the reason we founded the AI Now Institute is that we urgently need more research into the real-world implications of the adoption of AI and related technologies in our most sensitive social institutions. People are already being affected by these systems, be it while at school, looking for a job, reading news online, or interacting with the courts. With this report, we’re taking stock of the progress so far and the biggest emerging challenges that can guide our future research on the social implications of AI.”

There’s also a sort of Exec. Summary, a list of “10 Top Recommendations for the AI Field in 2017” on Medium too. Here’s the short version of that:

  1. Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g. “high stakes” domains) should no longer use ‘black box’ AI and algorithmic systems.
  2. Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design.
  3. After releasing an AI system, companies should continue to monitor its use across different contexts and communities.
  4. More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR.
  5. Develop standards to track the provenance, development, and use of training datasets throughout their life cycle.
  6. Expand AI bias research and mitigation strategies beyond a narrowly technical approach.
  7. Strong standards for auditing and understanding the use of AI systems “in the wild” are urgently needed.
  8. Companies, universities, conferences and other stakeholders in the AI field should release data on the participation of women, minorities and other marginalized groups within AI research and development.
  9. The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision making power.
  10. Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms.

Which sort of reads, to me, as: “There should be more social scientists involved” 🙂

Paul Dourish at Oxford – Exploring the Materialities of Digital Information

A huge array of overhead wires on a street

If you happen to be vaguely near Oxford and interested in digital-type things, then Paul Dourish’s lecture at the OII may be of interest. I’m guessing it’s related to his latest book… It’d be interesting to see if he cites any geographers.

Bellwether Lecture: Exploring the Materialities of Digital Information

Speakers: Paul Dourish

    • 26 October 2017 17:15 – 18:45

Location: Faculty of Classics, Ioannou Centre for Classical & Byzantine Studies, 66 St Giles, Oxford, OX1 3LY


The Oxford Internet Institute is excited to welcome Paul Dourish from the University of California, Irvine for the Bellwether talk ‘Exploring the Materialities of Digital Information’.

The talk will be followed by a short drinks reception.

Abstract

Social theorists of various stripes have countered the rhetoric of immateriality in the domain of the digital by pointing to the material foundations of digital systems, including their infrastructures and the material resources necessary to produce them. I will present a materialist account of digital forms and representations themselves, and show how practices around digital information are shaped and constrained by representational considerations. These digital materialities interpose themselves into processes of encoding and acting with information. I will illustrate the approach using examples from organizational decision-making and internet protocols.

Brian Cox, cyberpunk

Man with a colander on his head attached to electrodes

Doing public comms of science is hard, and it’s good to have people trying to make things accessible, and to excite and interest people in finding things out about the world… but it can tip over into being daft pretty easily.

Here’s the great D:ream-er Brian Cox going all cyberpunk on brain/mind uploads… (note the lad raising his eyes to the ceiling at 0:44 🙂 )

This made me wonder how Hubert Dreyfus would attempt to dispel the d:ream (don’t all groan at once!) as the ‘simulation of brains/minds’ is precisely the version of AI that Dreyfus was critiquing in the 1970s. If you’re interested in further discussion of ‘mind uploading’, and not my flippant remarks, see John Danaher’s writing on this on his excellent blog.

Idle digital signage

When you forget some essentials in your supermarket delivery and find yourself in the 24-hour supermarket at 10 o’clock at night you can see some vaguely (or not) interesting things… Here’s some “digital signage” (look, it says so – see?) that seems to have nothing to signify but tells you some stuff anyway… it doesn’t seem to be ‘online’, it has a measly 5GB of storage and it has a hardware ID. I sort of think it’s quite nice to see the crap-ness of things like this when we get buffeted with promises/warnings of digital signage that profiles us, our cars and so on…

Reblog> Humans and machines at work

A warehouse worker and robot

Via Phoebe Moore. Looks good >>

Humans and machines at work: monitoring, surveillance and automation in contemporary capitalism, edited by Phoebe V. Moore, Martin Upchurch and Xanthe Whittaker.
This edited collection is now in production/press (Palgrave, Dynamics of Virtual Work series; series editors Ursula Huws and Rosalind Gill). It is the result of the symposium I organised for last year’s International Labour Process Conference (ILPC). We are so fortunate to have 9 women and 3 men authors from all over the world, including researchers from Chinese University Hong Kong, Harvard, WA University St Louis, Milan, Sheffield, Lancaster, King’s College, Greenwich and Middlesex, two trade unionists from UNI Global Union and the Institute for Employment Rights, and both early career and more advanced contributors.

In the era of the so-called Fourth Industrial Revolution, we increasingly work with machines in both cognitive and manual workplaces. This collection provides a series of accounts of workers’ local experiences that reflect the ubiquity of work’s digitalisation. Precarious gig economy workers ride bikes and drive taxis in China and Britain; domestic workers’ timekeeping and movements are documented; call centre workers in India experience invasive tracking but creative forms of worker subversion are evident; warehouse workers discover that hidden data has been used for layoffs; academic researchers see their labour obscured by a ‘data foam’ that does not benefit us; and journalists suffer the algorithmic curse. These cases are couched in historical accounts of identity and selfhood experiments seen in the Hawthorne experiments and the lineage of automation. This collection will appeal to scholars in the sociology of work and digital labour studies and anyone interested in learning about monitoring and surveillance, automation, the gig economy and quantified self in workplaces.

Table of contents:

Chapter 1: Introduction. Phoebe V. Moore, Martin Upchurch, Xanthe Whittaker

Chapter 2: Digitalisation of work and resistance. Phoebe V. Moore, Pav Akhtar, Martin Upchurch

Chapter 3: Deep automation and the world of work. Martin Upchurch, Phoebe V. Moore

Chapter 4: There is only one thing in life worse than being watched, and that is not being watched: Digital data analytics and the reorganisation of newspaper production. Xanthe Whittaker

Chapter 5: The electronic monitoring of care work – the redefinition of paid working time. Sian Moore and L. J. B. Hayes

Chapter 6: Social recruiting: control and surveillance in a digitised job market. Alessandro Gandini and Ivana Pais

Chapter 7: Close watch of a distant manager: Multisurveillance by transnational clients in Indian call centres. Winifred R. Poster

Chapter 8: Hawthorne’s renewal: Quantified total self. Rebecca Lemov

Chapter 9: ‘Putting it together, that’s what counts’: Data foam, a Snowball and researcher evaluation. Penny C. S. Andrews

Chapter 10: Technologies of control, communication, and calculation: Taxi drivers’ labour in the platform economy. Julie Yujie Chen