Scaring you into ‘digital safety’

I’ve been thinking about the adverts from Barclays about ‘digital safety’ in relation to some of the themes of the module about technology that I run. In particular, about the sense of risk and threat that sometimes gets articulated about digital media and how this maybe carries with it other kinds of narrative about technology, like versions of determinism for instance. The topics of the videos are, of course, grounded in things that do happen – people do get scammed and it can be horrendous. Nevertheless, from an ‘academic’ standpoint, it’s interesting to ‘read’ these videos as particular articulations of perceived norms around safety/risk and responsibility, for whom, and why in relation to ‘digital’ media… I could write more but I’ll just post a few of the videos for now…

Ellen Ullman’s Life in Code

An interesting account of Ellen Ullman’s most recent book Life in Code – Ullman being the author of Close to the Machine – which sounds fantastic and very much worth a read (just like Close to the Machine), and something of its context. From the NYT:


A Personal History of Technology

By Ellen Ullman

Illustrated. 306 pp. Farrar, Straus & Giroux.

As milestone years go, 1997 was a pretty good one. The computers may have been mostly beige and balky, but certain developments were destined to pay off down the road. Steve Jobs returned to a floundering Apple after years of corporate exile, IBM’s Deep Blue computer finally nailed the world-champion chess master Garry Kasparov with a checkmate, and a couple of Stanford students registered the domain name for a new website called google.com. Nineteen ninety-seven also happened to be the year that the software engineer Ellen Ullman published “Close to the Machine: Technophilia and Its Discontents,” her first book about working as a programmer in a massively male-dominated field.

That slender volume became a classic of 20th-century digital culture literature and was critically praised for its sharp look at the industry, presented in a literary voice that ignored the biz-whiz braggadocio of the early dot-com era. The book had obvious appeal to technically inclined women — desktop-support people like myself then, computer-science majors, admirers of Donna J. Haraway’s feminist cyborg manifesto, those finding work in the newish world of website building — and served as a reminder that someone had already been through it all and taken notes for the future.

Then Ullman retired as a programmer, logging out to go write two intense character-driven thriller novels and the occasional nonfiction essay. The digital economy bounced back after the Epic Fail of 2000 and two decades later, those techno-seeds planted back in 1997 have bloomed. Just look at all those smartphones, constantly buzzing with news alerts and calendar notifications as we tell the virtual assistant to find us Google Maps directions to the new rice-bowl place. What would Ullman think of all this? We can now find out, as she’s written a new book, “Life in Code: A Personal History of Technology,” which manages to feel like both a prequel and a sequel to her first book.

Read the rest on the NYT website.

Reblog> Whither the Creative City? The Comeuppance of Richard Florida

Nice post from Jason Luger:

Whither the Creative City? The Comeuppance of Richard Florida

Talent, Technology, and Tolerance, said Florida (2002), were the pre-conditions for a successful urban economy. Florida’s ‘creative class’ theory, much copied, emulated and critically maligned, delineated urban regions with ‘talent’ (PhDs); ‘technology’ (things like patents granted); and ‘tolerance’ (represented by a rather arbitrary ‘gay index’ of same-sex households in census data).

This combination, according to Florida’s interpretation of his data, indicated urban creative ‘winners’ versus urban ‘losers’: blue collar cities with more traditional economies and traditional worldviews. Creative people want to be around other creative people, wrote Florida, so failing to provide an ideal urban environment for them will result in their ‘flight’ (2005) and the loss of all the benefits of the creative economy. Therefore, to win in the ‘new economy’ (Harvey, 1989), cities need to compete for, and win the affections of, the ‘creative class’. Or so Florida then believed.

Read the full post.

Growing criticism of Stiegler

I’ve begun to see some interesting criticism of Bernard Stiegler’s more recent activities, both in terms of books and the Plaine Commune project, which I confess resonate with some misgivings I have had for a little while [e.g. Chatonsky, Moatti, Silberzahn, Vial]. No doubt there are responses to these criticisms and so I’m not going to attempt to reiterate them wholesale or vouch for them – not least because I haven’t read the more recent work. Let me be clear in this – I think critical reflection on an argument or project is not only healthy but a necessary part of the humanities and sciences. I am neither seeking to ‘write off’ Stiegler’s work nor dogmatically defend it.

Nevertheless, I think there are two particular, related, points that occur in a number of different criticisms online that it seems to me may hold some water. These are:

(1) Increasingly in recent work, which is being produced at a breakneck speed, there is a quickness with concepts that has drawn criticism. Howells and Moore (2013), in their introduction to the first anglophone secondary text on Stiegler’s work, say: “In contrast to the patient, diligent deep-readings we have come to expect since Derrida, the image that emerges of Stiegler is perhaps a thinker who zooms in and out of texts, skimming them to fit what he is looking to find” (p. 9). Howells and Moore, in the end, see this as a strength. Nevertheless, there is a growing ‘jargon’ that loosens and perhaps undermines the analytical purchase of the arguments Stiegler attempts to make. This is particularly evident in the kinds of names for the over-arching project that have begun to emerge, for example: “organology”, “neganthropology” and “pharmacosophy”. Furthermore, it has been argued that, especially in relation to the use of the ideas of “disruption” and “entropy”, Stiegler makes all-too-quick analogies and equivalences between different meanings or applications of a given term, which water down or perhaps even undermine its use. So, for example, in terms of the increasing use of philosophical concepts, for which he stands accused of creating ever-more impenetrable jargon, we might look to his recent conjugation of the ideas of automation, the anthropocene and entropy/negentropy. In an extensive and rather forthright blogpost by Alexandre Moatti, an example is taken from The Automatic Society 1 (I’ve provided the full quote, whereas Moatti just takes a snippet):

“All noetic bifurcation, that is to say all quasi-causal bifurcation, derives from a cosmic potlatch that indeed destroys very large quantities of differences and orders, but it does so by projecting a very great difference on another plane, constituting another ‘order of magnitude’ against the disorder of a kosmos in becoming, a kosmos that, without this projection of a yet-to-come from the unknown, would be reduced to a universe without singularity. A neganthropological singularity (which does not submit to any ‘anthropology’) is a negentropic bifurcation in entropic becoming, of which it becomes the quasi-cause, and therein a point of origin – through this improbable singularity that establishes it and from which, intermittently and incrementally, it propagates a process of individuation” [p. 246].

I cannot honestly say that I can confidently interpret or translate the meaning of this passage; perhaps someone will comment below with their version. However, I am confident that the translator (from French), Dan Ross, will have done a thorough job of trying to capture the sense of the passage as best he can. Nevertheless, even with a knowledge of the various sources the terminology implies (made more or less explicit in the preceding parts of the book), the prose is perhaps problematic. Here’s my take, for what it’s worth:

What I think is being suggested in the quote above is that all life we call ‘human’ is supported by language, writing and other prostheses (‘noetic life’) and that when these social systems shift and are split (bifurcation) they destroy productive forms of difference – different forms of understanding, different ways of knowing and different ways of living perhaps (‘differences and orders’). In so doing, this hives off ever more of the potential for life to change in various ways (‘difference on another plane’ and ‘disorder of a kosmos‘), possibly prevents particular kinds of future (‘yet-to-come’ ~ quite similar to Derrida’s distinction between l’avenir and futur) and produces an impoverished form of being/life (‘a universe without singularity’). We can only really recognise these points of rupture after the fact, because the ruptures themselves are also the seeds of the moment of realisation. To act positively and sustainably (‘negentropic bifurcation in entropic becoming’), both for a positive projection of possible futures (‘improbable singularity’) and in response to these various kinds of ‘undoing’ of potential, we attempt to create new possibilities (‘a process of individuation’).

Another related, perhaps more serious, criticism is that Stiegler quickly moves between and analogises things in a way that might be considered to push the bounds of credulity. For example, the strategies of Daech/IS, management consultants and ‘GAFA’ (Google, Amazon, Facebook, Apple) are considered analogous by Stiegler when discussing ‘disruption’ as “a phenomenon of the acceleration of innovation, which is the basis of the strategy [of disruption] developed in Silicon Valley” [Here’s a strong response to that argument, in French]. Of course the idea of ‘disruptive practices’ as a force that can be diagnosed as ‘destroying social equilibrium’ in different domains is seductive. Nevertheless, isn’t part of that seduction an over-generalisation of ‘disruption’ that elides the mixing of people, objects and strategies that are simply too different? Isn’t there a danger that ‘disruption’ becomes yet another ‘neoliberalism’ – a catchall diagnosis of all things bad against which we should (ineffectually) rail? It sometimes seems to me that there is a peculiar, slightly snobby, anti-americanism that undergirds some of this, which, if so, does Stiegler’s work a disservice.

Taking this further, the analogies and the systemisation of the ‘jargon’-like concepts creates what Alexandre Moatti, in the Zilsel blogpost, calls two families of antonyms:

  • (automation) entropy, anthropisation, the Anthropocene era or the ‘Entropocene’, anthropocenologists
  • (deautomation) negentropy, neganthropy, neganthropisation and the ‘Neganthropocene’ era

There is a bit of a pattern here – to create systems of binaries. In the Disbelief and Discredit series it was Otium and Negotium. In Taking Care it was psycho- vs. noo-: politics, power, techniques and technologies. Perhaps this is what it means to put the pharmakon into practice for Stiegler? I worry that this habit of rendering systems of ideas that can be wielded authoritatively has the potential for a fairly negative implementation by ‘Stieglerian’ scholars – binary systems can be used as dogma. I think we’ve seen enough of that in the social sciences to be wary here.

I recognise that sometimes ‘difficult’ language is needed to get at difficult ideas. There has been controversy around these sorts of themes in anglophone scholarship in the past, not least in relation to the work of Judith Butler, and it’s possible to read about that elsewhere. I neither want to ‘throw stones’ nor ‘sit on a fence’ here; I admire aspects of Stiegler’s writing and thinking because it does, it seems to me anyway, get at some interesting and thorny issues. Nevertheless, the blizzard of concepts, the increasingly long and hard-to-follow sentences and, I think, the quickness of fairly sweeping arguments, especially when they write off big chunks of other people’s work (which is the case in the chapter that passage is from, in relation to Levi-Strauss), feel to me like a series of ungenerous moves. There may be all sorts of reasons for this but it feels like a shame to me…

(2) Following from (1), there is a sense in which the increasing use of philosophical jargon, analogies and the fairly rigid system of concepts used to interlink many of these themes creates what Moatti calls a kind of closed Stieglerian environment of thought, which is mutually reinforcing but perhaps then limits participation through its jargon and activities: “there is a Stieglerian environment: that of its association Ars Industrialis [and the Institut de Recherche et d’Innovation at the Centre Pompidou], of the summer academies it organizes in the country residence of Epineuil-le-Fleuriel… but also of certain contemporary authors whom he cites and who gravitate around him (very often the same group)” [Moatti]. There is a danger that both the systematisation of the concepts, as discussed above, and the sort of informal cabal-like behaviour of the associations, groups and schools produce a closed group.

There have also been concerns expressed about the nature of the support for the programmes and work being undertaken by Ars Industrialis and IRI, and how this feeds into the work itself. In particular, it has been noted that the Plaine Commune experiment [discussed in this interview] in producing a ‘contributory territory’, in the vein of Stiegler’s notion of the ‘economy of contribution‘, has relied upon corporate sponsorship (including Orange and Dassault Systèmes), and that funds have been channelled into some rather plum roles, such as the creation of a relatively well-paid and cushy “Participatory Research Chair” (see: Faire de Plaine Commune en Seine-St-Denis) based at MSH Paris-Nord – in the territory, yes, but arguably not of it.

A corollary to this is the observation, articulated by others, that Stiegler draws upon a narrow and fairly specific set of studies to support sweeping generalisations – something of which I think other philosophers have also been guilty. This has been the case in relation to attention, with the use of one particular paper by N. Katherine Hayles, selectively drawing upon a particular scientific study; for ‘the anthropocene’, for which he significantly relies on Bonneuil and Fressoz (2016); and for automation, with the use of a particular quote by Bill Gates [1] and a widely cited but also contested speculative study by Frey and Osborne (2013) from the Oxford Martin School. This habit, in particular, I fear rather undermines the cogency of Stiegler’s arguments.

Where does this leave someone interested in these ideas? Or, more specifically, given this is a post on a personal blog through which I have significantly drawn upon Stiegler’s work: where does it leave my own interest? I don’t think I would go as far as some to declare that the recent work by Bernard Stiegler should be disregarded, or, in the extreme case, that Stiegler should no longer be considered worthy of the title ‘philosopher’ [See: Stéphane Vial’s Bernard Stiegler : la fin d’un philosophe]. Nevertheless, I think that, sadly, there probably is cause to be reticent in engaging with Stiegler’s work on ‘the anthropocene’ and ‘automation’ for all of the reasons discussed above. I still think there are plenty of interesting arguments and ideas expressed by Bernard Stiegler, I just think, personally, I will need to take much greater care in working through the provenance of these ideas from now on.


1. see the Business Insider article ‘Bill Gates: People Don’t Realize How Many Jobs Will Soon Be Replaced By Software Bots‘, quoting from the 2014 conversation with Bill Gates at the American Enterprise Institute: From poverty to prosperity: A conversation with Bill Gates [approx 46 minutes in]. Quote: ‘Capitalism, in general, will over time create more inequality and technology, over time, will reduce demand for jobs, particularly at the lower end of the skill set. … Twenty years from now labour demand for lots of skill sets will be substantially lower and I don’t think people have that in their mental model’.

“Racist soap dispenser” and artifactual politics

'Racist' soap dispenser

Some videos have been widely shared concerning soap dispensers and taps in various public or restaurant toilets that appear to have been calibrated to work with light skin colour and so appear not to work with darker skin. See below for a couple of example videos.

Of course, there are (depressingly) all sorts of examples of technologies being calibrated to favour people who conform to a white racial appearance, from Kodak’s “Shirley” calibration cards, to Nikon’s “Did someone blink?” filter, to HP’s webcam face tracking software. There are unfortunately more examples, which I won’t list here, but suffice it to say this demonstrates an important aspect of artefactual and technological politics – things often carry the political assumptions of their designers. Even if this was an ‘innocent’ mistake, such as the result of a manufacturing error skewing the calibration, it demonstrates the sense in which there remains a politics to the artefact/technology in question, because the agency of the object remains skewed along lines of difference.
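To make the calibration point concrete, here is a deliberately simplified sketch – not any real dispenser’s firmware; the threshold and the sensor readings are invented for illustration – of how a single reflectance threshold, tuned against lighter skin during testing, can silently fail for darker skin:

```python
# A typical touch-free dispenser uses an infrared emitter/detector pair:
# it activates when enough emitted light bounces back off a nearby hand.
# Reflectance varies with skin tone, so a threshold chosen only against
# lighter-skinned testers can sit above the readings darker skin produces.

TRIGGER_THRESHOLD = 0.45  # assumed: fraction of emitted IR reflected back


def dispense(reflected_fraction: float) -> bool:
    """Return True if the sensor reading crosses the calibration threshold."""
    return reflected_fraction >= TRIGGER_THRESHOLD


# Invented example readings for a hand at the same distance:
light_skin_reading = 0.60  # above threshold -> soap dispensed
dark_skin_reading = 0.35   # below threshold -> nothing happens
```

The point of the sketch is that nothing in the logic mentions race at all; the exclusion is baked into a single constant chosen during testing, which is precisely how a design assumption becomes an artefactual politics.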

There are perhaps several sides to this politics, if we resurrect Langdon Winner’s (1980) well-known argument about artefactual politics and the resulting discussion. First, like the well-known story (cited by Winner, gleaned from Caro) of Robert Moses’ New York bridges: “someone wills a specific social state, and then subtly transfers this vision into an artefact” (Joerges 1999: p. 412). This is what Joerges (1999) calls the design-led version of ‘artefacts-have-politics’, following Winner (I am not condoning Joerges’ rather narrow reading of Winner, just using a useful short-hand).

Second, following Winner, artefacts can have politics by virtue of the kinds of economic, political, social (and so on) systems upon which they are predicated. There is the way in which a deliberate or mistaken development, such as the tap sensor, is facilitated, or at the least tolerated, by the kinds of standards that are used to govern the design, manufacture and sale or implementation of a given artefact/technology. So a bridge that apparently excludes particular groups of people, by preventing their most likely means of travel, a bus, from passing under it, or a tap that only works with lighter skin colour, can pass into circulation, or socialisation perhaps, by virtue of normative and bureaucratic frameworks of governance.

In this sense, and again following Winner, we might think about the ways these outcomes transcend “the simple categories of ‘intended’ and ‘unintended’ altogether”. Rather, they represent “instances in which the very process of technical development is so thoroughly biased in a particular direction that it regularly produces results heralded as wonderful breakthroughs by some social interests and crushing setbacks by others” (Winner 1980: pp. 125-6).

So, even when considered the results of error, and especially when the mechanism for regulating such errors is considered to be ‘the market’—with the expectation that if the thing doesn’t work it won’t sell and the manufacturer will be forced to change it—the assumptions behind the rectification of the ‘error’ carry a politics too (perhaps in the sense of Weber’s loaded value judgements).

Third, there is what Woolgar (1991 – in a critical response to Winner) calls the ‘contingent and contestable versions of the capacity of various technologies’, which might include the ‘manufacturing mistakes’ but would also include the videos produced and their support or contestation through responses in other videos and in media coverage.

This analysis might become further complicated by widening our consideration of the ways in which contingencies render a given artefact/ technology political.

Take, for example, an ‘Internet of Things’ device that might seem innocuous, such as a ‘smart thermostat’ that ‘learns’ when you use the heating and begins to automatically schedule your heating. There are immediate technical issues that might render such a device political, such as in terms of the strength of the security settings, and so whether or not it could be hacked and whether or not you as the ‘owner’ of the device would know and what you may be able to do in response.

Further, there are privacy issues if the ‘smart’ element is not actually embedded in the device but enabled through remote services ‘in the cloud’: do you know where your data is, how it is being used, whether it identifies you, and so on? Further still, the device might appear to be a one-off expense but may actually require a further payment or subscription to work in the way you expected. For example, I bought an Amazon Kindle that had advertising as the ‘screen saver’ and I had to pay an additional £10 to remove it.

Even further, it may be that even if the security, privacy and payment systems are all within the bounds of what one might consider politically or ethically acceptable, there may still be political contingencies that exclude or disproportionately affect particular groups of people. The thermostat might only work with particular boilers, or may require a ‘smart’ meter, so it may also only work with particular energy subscription plans. Such plans, even if they are no more expensive, might require good credit ratings or other pre-conditions that are not immediately obvious. Likewise, the thermostat may not work with pre-payment meter-driven systems, which necessarily disadvantages those without a choice – those renting, for example.

The thermostat may require a particular kind of smart phone to access its functionality, which again may require particular kinds of phone contract and these may require credit ratings and so on. The manufacturer of the thermostat might cease to trade, or get bought out, and the ‘smart’ software ‘in the cloud’ may cease to function – you may therefore find yourself without a thermostat. If the thermostat was installed in a ‘vulnerable’ person’s home in order to enable remote monitoring by concerned family members this might create anxiety and risk.
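One way to see how these contingencies stack up is a toy sketch of the hidden preconditions just described; every name, supported model and rule here is invented for illustration, not drawn from any real product:

```python
# A hypothetical 'smart thermostat' silently depends on boiler model,
# meter type, phone platform, and the vendor's cloud service staying up.
# Each check below corresponds to one of the contingencies discussed above.
from dataclasses import dataclass


@dataclass
class Household:
    boiler_model: str      # e.g. "AcmeCombi2000" (invented)
    meter: str             # e.g. "smart" or "prepayment"
    phone_os: str          # e.g. "iOS", "Android"
    vendor_cloud_up: bool  # does the remote 'smart' service still exist?


SUPPORTED_BOILERS = {"AcmeCombi2000"}  # invented compatibility list


def exclusion_reasons(h: Household) -> list[str]:
    """Return the (possibly empty) list of reasons the device fails this household."""
    reasons = []
    if h.boiler_model not in SUPPORTED_BOILERS:
        reasons.append("unsupported boiler")
    if h.meter == "prepayment":
        reasons.append("no prepayment-meter support")
    if h.phone_os not in {"iOS", "Android"}:
        reasons.append("no companion app for this phone")
    if not h.vendor_cloud_up:
        reasons.append("vendor cloud service discontinued")
    return reasons
```

Each condition looks like a neutral engineering constraint, but together they reproduce the pattern in the text: the device quietly works for some households (supported boiler, smart meter, recent phone, vendor still trading) and fails for others, with the politics hidden inside the compatibility checks.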

As apparently individual, or discrete, artefacts/technologies become more entangled in sociotechnical systems of use (as Kline says), with concomitant contingencies, the politics of these things has the potential to become more opaque.

So, all artefacts have politics and the examples within this post might be considered useful if troubling contemporary examples for discussion in research projects and in the classroom (as well as, one might hope, the committee rooms of regulators, or parliaments).

P.S. I think this now is a chunk of a lecture rewritten for my “Geographies of Technology” module at Exeter, heh.

Reblog> Angela Walch on the misunderstandings of blockchain technology

Another excellent, recent, episode of John Danaher’s podcast. In a wide-ranging discussion of blockchain technologies with Angela Walch there are lots of really useful explorations of some of the confusing (to me anyway) aspects of what is meant by ‘blockchain’.

Episode #28 – Walch on the Misunderstandings of Blockchain Technology

In this episode I am joined by Angela Walch. Angela is an Associate Professor at St. Mary’s University School of Law. Her research focuses on money and the law, blockchain technologies, governance of emerging technologies and financial stability. She is a Research Fellow of the Centre for Blockchain Technologies of University College London. Angela was nominated for “Blockchain Person of the Year” for 2016 by Crypto Coins News for her work on the governance of blockchain technologies. She joins me for a conversation about the misleading terms used to describe blockchain technologies.

You can download the episode here. You can also subscribe on iTunes or Stitcher.

Show Notes

  • 0:00 – Introduction
  • 2:06 – What is a blockchain?
  • 6:15 – Is the blockchain distributed or shared?
  • 7:57 – What’s the difference between a public and private blockchain?
  • 11:20 – What’s the relationship between blockchains and currencies?
  • 18:43 – What is a miner? What’s the difference between a full node and a partial node?
  • 22:25 – Why is there so much confusion associated with blockchains?
  • 29:50 – Should we regulate blockchain technologies?
  • 36:00 – The problems of inconsistency and perverse innovation
  • 41:40 – Why blockchains are not ‘immutable’
  • 58:04 – Why blockchains are not ‘trustless’
  • 1:00:00 – Definitional problems in practice
  • 1:02:37 – What is to be done about the problem?


Another new book from Bernard Stiegler – Neganthropocene

Bernard Stiegler being interviewed

Open Humanities Press has a(nother!) new book from Bernard Stiegler, blurb pasted below. This is an edited version of Stiegler’s public lectures in various places over the last three or so years, hence Dan Ross’ byline. Dan has done some fantastic work corralling the fast-moving blizzard of Stiegler’s concepts and sometimes flitting engagements with a wide range of other thinkers, and I am sure that this book surfaces that work.

It would be interesting to see some critical engagement with this; it seems that Stiegler simply isn’t as trendy as Latour and Sloterdijk or the ‘bromethean‘ object-oriented chaps for those ‘doing’ the ‘anthropocene’, for some reason. I’m not advocating his position especially, I have various misgivings if I’m honest (and maybe one day I’ll write them down), but it is funny that there’s a sort of anglophone intellectual snobbery about some people’s work…


by Bernard Stiegler
Edited and translated by Daniel Ross


As we drift past tipping points that put future biota at risk, while a post-truth regime institutes the denial of ‘climate change’ (as fake news), and as Silicon Valley assistants snatch decision and memory, and as gene-editing and a financially-engineered bifurcation advances over the rising hum of extinction events and the innumerable toxins and conceptual opiates that Anthropocene Talk fascinated itself with–in short, as ‘the Anthropocene’ discloses itself as a dead-end trap–Bernard Stiegler here produces the first counter-strike and moves beyond the entropic vortex and the mnemonically stripped Last Man socius feeding the vortex.

In the essays and lectures here titled Neganthropocene, Stiegler opens an entirely new front moving beyond the dead-end “banality” of the Anthropocene. Stiegler stakes out a battleplan to proceed beyond, indeed shrugging off, the fulfillment of nihilism that the era of climate chaos ushers in. Understood as the reinscription of philosophical, economic, anthropological and political concepts within a renewed thought of entropy and negentropy, Stiegler’s ‘Neganthropocene’ pursues encounters with Alfred North Whitehead, Jacques Derrida, Gilbert Simondon, Peter Sloterdijk, Karl Marx, Benjamin Bratton, and others in its address of a wide array of contemporary technics: cinema, automation, neurotechnology, platform capitalism, digital governance and terrorism. This is a work that will need be digested by all critical laborers who have invoked the Anthropocene in bemused, snarky, or pedagogic terms, only to find themselves having gone for the click-bait of the term itself–since even those who do not risk definition in and by the greater entropy.

Author Bio

Bernard Stiegler is a French philosopher who is director of the Institut de recherche et d’innovation, and a doctor of the Ecole des Hautes Etudes en Sciences Sociales. He has been a program director at the Collège international de philosophie, senior lecturer at Université de Compiègne, deputy director general of the Institut National de l’Audiovisuel, director of IRCAM, and director of the Cultural Development Department at the Centre Pompidou. He is also president of Ars Industrialis, an association he founded in 2006, as well as a distinguished professor of the Advanced Studies Institute of Nanjing, and visiting professor of the Academy of the Arts of Hangzhou, as well as a member of the French government’s Conseil national du numérique. Stiegler has published more than thirty books, all of which situate the question of technology as the repressed centre of philosophy, and in particular insofar as it constitutes an artificial, exteriorised memory that undergoes numerous transformations in the course of human existence.

Daniel Ross has translated eight books by Bernard Stiegler, including the forthcoming In the Disruption: How Not to Go Mad? (Polity Press). With David Barison, he is the co-director of the award-winning documentary about Martin Heidegger, The Ister, which premiered at the Rotterdam Film Festival and was the recipient of the Prix du Groupement National des Cinémas de Recherche (GNCR) and the Prix de l’AQCC at the Festival du Nouveau Cinéma, Montreal (2004). He is the author of Violent Democracy (Cambridge University Press, 2004) and numerous articles and chapters on the work of Bernard Stiegler.

Kathi Weeks interview – Feminism & the refusal of work

Glitched Rosie the Riveter poster

Interesting interview with Kathi Weeks, whose book The Problem with Work is really good. Follow the link to the whole interview, but please find a snippet below:

Marxist feminists went a long way towards demystifying the so-called “private” practices, relations, and institutions.

…let me offer a crude but I think useful distinction between two periods of Marxist feminist work, one past and one present.
First the past. In the 1970s, Anglo-American Marxist feminists focused on mapping the relationship between two systems of domination: capitalism and patriarchy. One could characterize this phase as the attempt to bring a Marxist critique of work into the field of domestic labor and the familial relations of production. By examining domestic based caring work, housework, consumption work, and community-creation work as forms of reproductive labor upon which productive labor more narrowly conceived depends, and by viewing the household as a workplace and the family as a regime that organizes, distributes and manages that labor, Marxist feminists went a long way towards demystifying these so-called “private” practices, relations, and institutions. On the one hand, they were concerned with the theoretical question of how to understand the relationship between capitalism and patriarchy: were they best conceived as two related systems or as one fully intertwined system? On the other hand, they were also focused on the closely related practical question of alliances: should feminist groups be autonomous from or integrated with other anticapitalist (and often antifeminist) movements?

Today we find ourselves in a different situation that holds new possibilities for the relationship between Marxism and feminism. Whereas 1970s feminists struggled to bring a Marxist analytic tailored to the study of waged labor to a very different kind of unwaged laboring practice that had not been considered part of capitalist production, today I think that in order to grasp new forms of waged work we need to draw on the older feminist analyses of waged and unwaged “women’s work.”

Some describe the present moment in terms of the “feminization of labor.” It’s not my favorite term, but what I understand by it is a way to describe how, in neoliberal post-Fordist economies, more and more waged jobs come to resemble traditional forms of feminized domestic work. This is particularly evident in the rise of precarious forms of low-wage, part-time, informal, and insecure employment, and in the growth of service sector jobs that draw on workers’ emotional, caring, and communicative capacities, which are undervalued and difficult to measure.

Feminist theory is no longer only optional for Marxist critique.

To confront this changing landscape of work, instead of using an unreconstructed Marxist analytic to study unwaged forms of domestic work, today we need to draw on Marxist feminist analyses of gendered forms of both waged and unwaged work for their insights into how these forms are exploited and how they are experienced. The practical implication of this is that, if we want both to understand and to resist contemporary forms of exploitation, Marxists can no longer remain ignorant of or separated from feminist theories and practices. As I see it, feminist theory is no longer optional for Marxist critique.

Read the whole interview on Political Critique

CFP: Workshop on Trustworthy Algorithmic Decision-Making

Not sure where I found this, but it may be of interest…

Workshop on Trustworthy Algorithmic Decision-Making
Call for Whitepapers

We seek participants for a National Science Foundation sponsored workshop on December 4-5, 2017 to work together to better understand algorithms that are currently being used to make decisions for and about people, and how those algorithms and decisions can be made more trustworthy. We invite interested scholars to submit whitepapers of no more than 2 pages (excluding references); attendees will be invited based on whitepaper submissions. Meals and travel expenses will be provided.

Online algorithms, often based on data-driven machine-learning approaches, are increasingly being used to make decisions for and about people in society. One very prominent example is the Facebook News Feed algorithm that ranks posts and stories for each person, and effectively prioritizes what news and information that person sees. Police are using “predictive policing” algorithms to choose where to patrol, and courts are using algorithms that predict the likelihood of repeat offending in sentencing. Face recognition algorithms are being implemented in airports in lieu of ID checks. Both Uber and Amazon use algorithms to set and adjust prices. Waymo/Google’s self-driving cars are using Google maps not just as a suggestion, but to actually make route choices.

As these algorithms become more integrated into people’s lives, they have the potential to have increasingly large impacts. However, if these algorithms cannot be trusted to perform fairly and without undue influences, then there may be some very bad unintentional effects. For example, some computer vision algorithms have mis-labeled African Americans as “gorillas”, and some algorithms predicting the likelihood of repeat offending have been shown to be racially biased. Many organizations employ “search engine optimization” techniques to alter the outcomes of search algorithms, and “social media optimization” to improve the ranking of their content on social media.

Researching and improving the trustworthiness of algorithmic decision-making will require a diverse set of skills and approaches. We look to involve participants from multiple sectors (academia, industry, government, popular scholarship) and from multiple intellectual and methodological approaches (computational, quantitative, qualitative, legal, social, critical, ethical, humanistic).


To help get the conversation started and to get new ideas into the workshop, we solicit whitepapers of no more than two pages in length that describe an important aspect of trustworthy algorithmic decision-making. These whitepapers can motivate specific questions that need more research; they can describe an approach to part of the problem that is particularly interesting or likely to help make progress; or they can describe a case study of a specific instance in the world of algorithmic decision-making and the issues or challenges that case brings up.

Some questions that these whitepapers can address include (but are not limited to):

  • What does it mean for an algorithm to be trustworthy?
  • What outcomes, goals, or metrics should be applied to algorithms and algorithm-made decisions (beyond classic machine-learning accuracy metrics)?
  • What does it mean for an algorithm to be fair? Are there multiple perspectives on this?
  • What threat models are appropriate for studying algorithms? For algorithm-made decisions?
  • What are ways we can study data-driven algorithms when researchers don’t always have access to the algorithms or to the data, and when the data is constantly changing?
  • Should algorithms that make recommendations be held to different standards than algorithms that make decisions? Should filtering algorithms have different standards than ranking or prioritization algorithms?
  • When systems use algorithms to make decisions, are there ways to institute checks and balances on those decisions? Should we automate those?
  • Does transparency really achieve trustworthiness? What are alternative approaches to trusting algorithms and algorithm-made decisions?

Please submit white papers along with a CV or current webpage by October 9, 2017 via email to We plan to post whitepapers publicly on the workshop website (with authors’ permission) to facilitate conversation ahead of, at, and after the workshop. More information about the workshop can be found at

We have limited funding for PhD students interested in these topics to attend the workshop. Interested students should also submit a whitepaper with a brief description of their research interests and thoughts on these topics, and indicate in their email that they are PhD students.

CFP: Theorising digital space

glitched image of a 1990s NASA VR experience

In another of a series of what feel dangerously like back-to-the-1990s moments, as some geographers attempt to wrangle ‘digital geographies’ into a brand (which I find problematic), I saw the below CFP for the AAG.

I am sorry if it seems like I’m picking on this one CFP; I have no doubt that it was written with the best of intentions, and if I were able to attend the conference I would apply to speak and would attend the session. I hope others will too. In terms of this post, it’s simply the latest in a line of conference sessions that unfortunately seem to miss, or even elide, long-standing debates in geography about mediation.

Maybe my reaction is in part because I cannot attend (I’m only human, I’d quite like to go to New Orleans!), but it is also in part because I am honestly shocked at the inability of debates, within what is after all a fairly small discipline, to move forward in terms of thinking about ‘space’ and mediation. This stands out because it follows from ‘digital’ sessions at the AAG last year that made similar sorts of omissions.

In the late 1990s a whole host of people theorised place/space in relation to what we’re now calling ‘the digital’. Quite a few were geographers. There exists a significant and, sometimes, sophisticated literature that lays out these debates, ranging from landmark journal articles to edited books and monographs that all offer different views on how to understand mediation spatially (some of this work features in a bibliography I made ages ago).

Ironically, perhaps, all of this is largely accessible ‘online’: you need only search for relevant key terms and follow citation chains using repositories – much of it is there, and many of the authors are accessible ‘digitally’ too. And yet, periodically, we see what is in effect the same call for papers asking similar questions: is there a ‘physical’/’digital’ binary [no], what might it do, how do we research the ‘digital’, the ‘virtual’, etc.

We, all kinds of geographers, are not only now beginning to look at digital geographies; this work has been going on for some time, and it would be great if that were acknowledged in the way that Prof. Dorothea Kleine did with rare clarity in her introduction to the RGS Digital Geographies Working Group symposium earlier this year (skip to 03:12 in this video).

So, I really hope that some of those authors of books like “Virtual Geographies“, to take just one example (there are loads more – I’m not seeking to be canonical!), might consider re-engaging with these discussions to lend some of the perspective that they have accrued over the last 20+ years and speak at, or at least attend, sessions like this.

I hope that others will consider speaking in this session, to engage productively and to open out debate, rather than to limit it to a kind of clique-y brand.

Theorizing Place and Space in Digital Geography: The Human Geography of the Digital Realm

In 1994 Doreen Massey released Space, Place and Gender, bringing together in a single volume her thoughts on many of the key discussions in geography in the 1980s and early 1990s. Of note was the chapter, A global sense of place, and the discussion on what constitutes a place. Massey argues that places, just like people, have multiple identities, and that multiple identities can be placed on the same space, creating multiple places inside space. Places can be created by different people and communities, and it is through social practice, particularly social interaction, that place is made. Throughout this book, Massey also argues that places are processual, that they are not frozen moments, and that they are not clearly defined through borders. As more and more human exchanges in the ‘physical realm’ move to, or at least involve in some way, the ‘digital realm’, how should we understand the sites of the social that happen to be in the digital? What does a human geography, place-orientated understanding of the digital sites of social interaction tell us about geography, both in the digital and the physical world?

Massey also notes that ‘communities can exist without being in the same place – from networks of friends with like interests, to major religious, ethnic or political communities’. Ever-evolving mobile technologies, the widening infrastructures that support them, and increasing access to smartphones – thanks in part to new smartphone makers in China releasing affordable yet powerful devices around the world – have made access to the digital realm, both fixed in place (through computers) and, ever more often, through mobile technologies, a possibility for an increasing number of people worldwide. How do impoverished or excluded groups use smart technologies to (re)produce place or a sense of place in ways that include links to the digital realm? From rural farming communities to refugees fleeing Syria and many more groups, in what ways does the digital realm afford spatial and place-making opportunities to those lacking in place or spatial security?

How are we to understand the digital geographies of platforms and the spaces that they give us access to? Do platforms themselves even have geographies? Recently geographers such as Mark Graham have begun mapping the dark net, but how should we understand the geographies of other digital spaces, from instant messaging platforms to social media or video streaming websites? What is visible and what is obscured? And what can we learn about traditional topics in social science, such as power and inequality, when we begin to look at digital geographies?

In this session of five papers, we are looking for contributions exploring:

  • Theories of place and space in the digital realm, including those that explore the relationship between the digital and physical realms
  • Research on the role of digital realm in (re)producing physical places, spaces and communities, or creating new places, spaces and communities, both in the digital realm and outside of it.
  • Papers considering the relationship between physical and digital realms and accounts of co-production within them.
  • The role of digital technologies in providing a sense of space and place, spatial security, and secure spaces and places to those lacking these things.
  • Research exploring the geographies of digital platforms, websites, games or applications, particularly qualitative accounts that examine the physical and digital geographies of platforms, websites, games or applications.
  • Research examining issues of power, inequality, visibility and distance inside of the digital realm.