The first instalment is: Arguing with theory.
This will be worth following!
Thinking back to Jason Luger’s engaging blogpost about ‘the comeuppance’ of Richard Florida and the evolution of his “creative class” thesis, this interview with Florida in the grauniad is sort of interesting to read… ‘Everything is gentrification now’: but Richard Florida isn’t sorry.
“I’m not sorry,” he barks, sitting in a hotel lobby in Mayfair, wearing a leather jacket and black T-shirt. “I will not apologise. I do not regret anything.”
Danaher offers his incisive analysis of recent additions to the debate on legal personhood re. “robots”. Seems interesting (to me) in relation to two things: agency, and how we imagine automation – since we don’t actually have such ‘robots’ at the moment. It’s quite a long post (so only a snippet below), but worth working through… not least because in geographyland it seems to me many of us have a fairly narrow understanding of these sorts of things, drawn from a simplified post-structuralist account of ‘subjectivity‘.
Via Clive. This will be worth going to if you’re going to the AAG in 2018…
The AI Now Institute have published their second annual report with plenty of interesting things in it. I won’t try and summarise it or offer any analysis (yet). It’s worth a read:
The AI Now Institute, an interdisciplinary research center based at New York University, announced today the publication of its second annual research report. In advance of AI Now’s official launch in November, the 2017 report surveys the current use of AI across core domains, along with the challenges that the rapid introduction of these technologies are presenting. It also provides a set of ten core recommendations to guide future research and accountability mechanisms. The report focuses on key impact areas, including labor and automation, bias and inclusion, rights and liberties, and ethics and governance.
“The field of artificial intelligence is developing rapidly, and promises to help address some of the biggest challenges we face as a society,” said Kate Crawford, cofounder of AI Now and one of the lead authors of the report. “But the reason we founded the AI Now Institute is that we urgently need more research into the real-world implications of the adoption of AI and related technologies in our most sensitive social institutions. People are already being affected by these systems, be it while at school, looking for a job, reading news online, or interacting with the courts. With this report, we’re taking stock of the progress so far and the biggest emerging challenges that can guide our future research on the social implications of AI.”
There’s also a sort of Exec. Summary, a list of “10 Top Recommendations for the AI Field in 2017” on Medium too. Here’s the short version of that:
- 1. Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g. “high stakes” domains) should no longer use ‘black box’ AI and algorithmic systems.
- 2. Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design.
- 3. After releasing an AI system, companies should continue to monitor its use across different contexts and communities.
- 4. More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR.
- 5. Develop standards to track the provenance, development, and use of training datasets throughout their life cycle.
- 6. Expand AI bias research and mitigation strategies beyond a narrowly technical approach.
- 7. Strong standards for auditing and understanding the use of AI systems “in the wild” are urgently needed.
- 8. Companies, universities, conferences and other stakeholders in the AI field should release data on the participation of women, minorities and other marginalized groups within AI research and development.
- 9. The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision making power.
- 10. Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms.
Which sort of reads, to me, as: “There should be more social scientists involved” 🙂
Another interesting ‘long form’ essay on the Institute of Network Cultures site. This piece by Anastasia Kubrak and Sander Manse directly addresses some contemporary themes in geographyland – access, ‘digital’-ness, exclusion, ‘rights to the city’, technology & urbanism and ‘verticality’. The piece turns around an exploration of the idea of a ‘zone’ – ‘urban zoning’, ‘special economic zones’, ‘export processing zones’, ‘free economic/enterprise zones’, ‘no-go zones’. Some of this, of course, covers familiar ground for geographers but it’s interesting to see the argument play out. It seems to resonate, for example, with Matt Wilson’s book New Lines…
Here’s some blockquoted bits (all links are in the original).
We get into an Uber car, and the driver passes by the Kremlin walls, guided by GPS. At the end of the ride, the bill turns out to be three times as expensive than usual. What is the matter? We check the route, and the screen shows that we travelled to an airport outside of Moscow. Impossible. We look again: the moment we approached the Kremlin, our location automatically jumped to Vnukovo. As we learned later, this was caused by a GPS fence set up to confuse and disorient aerial sensors, preventing unwanted drone flyovers.
How can we benefit as citizens from the increase in sensing technologies, remote data-crunching algorithms, leaching geolocation trackers and parasite mapping interfaces? Can the imposed verticality of platform capitalism by some means enrich the surface of the city, and not just exploit it? Maybe our cities deserve a truly augmented reality – reality in which value generated within urban space actually benefits its inhabitants, and is therefore ‘augmented’ in the sense of increased or made greater. Is it possible to consider the extension of zoning not only as an issue, but also as a solution, a way to create room for fairer, more social alternatives? Can we imagine the sprawling of augmented zones today, still of accidental nature, being utilized or artificially designed for purposes other than serving capital?
Gated urban enclaves also proliferate within our ‘normal’ cities, perforating through the existing social fabric. Privatization of urban landscape affects our spatial rights, such as simply the right of passage: luxury stores and guarded residential areas already deny access to the poor and marginalized. But how do these acts of exclusion happen in cities dominated by the logic of platform capitalism? What happens when more tools become available to scan, analyze and reject citizens on the basis of their citizenship or credit score? Accurate user profiles come in handy when security is automated in urban space: surveillance induced by smart technologies, from electronic checkpoints to geofencing, can amplify more exclusion.
This tendency becomes clearly visible with Facebook being able to allow for indirect urban discrimination through targeted advertising. This is triggered by Facebook’s ability to exclude entire social groups from seeing certain ads based on their user profile, so that upscale housing-related ads might be hidden from them, making it harder for them to leave poorer neighborhoods. Meanwhile Uber is charging customers based on the prediction of their wealth, varying prices for rides between richer and poorer areas. This speculation on value enabled by the aggregation of massive amounts of data crystallizes new forms of information inequality in which platforms observe users through a one-way mirror.
If platform economies take the city as a hostage, governmental bodies of the city can seek how to counter privatization on material grounds. The notorious Kremlin’s GPS spoofing fence sends false coordinates to any navigational app within the city center, thereby also disrupting the operation of Uber and Google Maps. Such gaps on the map, blank spaces are usually precoded in spatial software by platforms, and can expel certain technologies from a geographical site, leaving no room for negotiation. Following the example of Free Economic Zones, democratic bodies could gain control over the city again by artificially constructing such spaces of exception. Imagine rigorous cases of hard-line zoning such as geofenced Uber-free Zones, concealed neighborhoods on Airbnb, areas secured from data-mining or user-profile-extraction.
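As an aside from me (not the essay): the basic mechanic behind a ‘geofenced Uber-free Zone’ is just a point-in-zone test on the user’s reported location. A minimal sketch is below; the centre coordinates and radius are illustrative placeholders I’ve made up, not the parameters of any actual fence, and real platforms obviously implement this very differently.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical exclusion zone: centre point and radius are invented values.
ZONE_CENTRE = (55.7520, 37.6175)
ZONE_RADIUS_M = 1500

def in_exclusion_zone(lat, lon):
    """True if the reported location falls inside the (circular) zone."""
    return haversine_m(lat, lon, *ZONE_CENTRE) <= ZONE_RADIUS_M
```

A service could simply refuse to dispatch (or an app refuse to book) when `in_exclusion_zone` returns true – which also illustrates how blunt such zoning is: it acts on coordinates, not on people or purposes.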
Vertical zoning can alter the very way in which capital manifests itself. The ‘Bristol pound’ is an example of city-scale local currency, created specifically to keep added value in circulation within one city. It is accepted by an impressive number of local businesses and for paying monthly wages and taxes. Though the Bristol Pound still circulates in paper, today we can witness a global sprawl of blockchain based community currencies, landing within big cities or even limited to neighborhoods. Remarkably, Colu Local Digital Wallet can be used in Liverpool, the East London area, Tel Aviv and Haifa – areas with a booming tech landscape or strong sense of community.
I’ve begun to see some interesting criticism of Bernard Stiegler’s more recent activities, both in terms of books and the Plaine Commune project, which I confess resonate with some misgivings I have had for a little while [e.g. Chatonsky, Moatti, Silberzahn, Vial]. No doubt there are responses to these criticisms and so I’m not going to attempt to reiterate them wholesale or vouch for them – not least because I haven’t read the more recent work. Let me be clear in this – I think critical reflection on an argument or project is not only healthy but a necessary part of the humanities and science. I am neither seeking to ‘write off’ Stiegler’s work nor dogmatically defend it.
Nevertheless, I think there are two particular, related points that recur across a number of the criticisms online and that, it seems to me, may hold some water. These are:
(1) Increasingly in recent work, which is being produced at breakneck speed, there is a quickness with concepts that has drawn criticism. Howells and Moore (2013), in their introduction to the first anglophone secondary text on Stiegler’s work, say: “In contrast to the patient, diligent deep-readings we have come to expect since Derrida, the image that emerges of Stiegler is perhaps a thinker who zooms in and out of texts, skimming them to fit what he is looking to find” (p. 9). Howells and Moore, in the end, see this as a strength. Nevertheless, there is a growing ‘jargon’ that loosens and perhaps undermines the analytical purchase of the arguments Stiegler attempts to make. This is particularly evident in the kinds of names for the over-arching project that have begun to emerge, for example: “organology”, “neganthropology” and “pharmacosophy”. Furthermore, it has been argued that, especially in relation to the use of the ideas of “disruption” and “entropy”, Stiegler makes all-too-quick analogies and equivalences between different meanings or applications of a given term, which water down or perhaps even undermine its use. So, for example, in terms of the increasing use of philosophical concepts, for which he stands accused of creating ever-more impenetrable jargon, we might look to his recent conjugation of the ideas of automation, the anthropocene and entropy/negentropy. In an extensive and rather forthright blogpost by Alexandre Moatti, an example is taken from The Automatic Society 1 (I’ve provided the full quote, whereas Moatti just takes a snippet):
“All noetic bifurcation, that is to say all quasi-causal bifurcation, derives from a cosmic potlatch that indeed destroys very large quantities of differences and orders, but it does so by projecting a very great difference on another plane, constituting another ‘order of magnitude’ against the disorder of a kosmos in becoming, a kosmos that, without this projection of a yet-to-come from the unknown, would be reduced to a universe without singularity. A neganthropological singularity (which does not submit to any ‘anthropology’) is a negentropic bifurcation in entropic becoming, of which it becomes the quasi-cause, and therein a point of origin – through this improbable singularity that establishes it and from which, intermittently and incrementally, it propagates a process of individuation” [p. 246].
I cannot honestly say that I can confidently interpret or translate the meaning of this passage; perhaps someone will comment below with their version. However, I am confident that the translator (from French), Dan Ross, will have done a thorough job of trying to capture the sense of the passage as best he can. Nevertheless, even with a knowledge of the various sources the terminology implies (made more or less explicit in the preceding parts of the book), the prose is perhaps problematic. Here’s my take, for what it’s worth:
What I think is being suggested in the quote above is that all life we call ‘human’ is supported by language, writing and other prostheses (‘noetic life’) and that when these social systems shift and are split (bifurcation) they destroy productive forms of difference – different forms of understanding, different ways of knowing and different ways of living perhaps (‘differences and orders’). In so doing, this projects a bigger hiving off of the potential for life (‘difference on another plane’ and ‘disorder of a kosmos‘) to change in various ways and possibly prevents particular kinds of future (‘yet-to-come’ ~ quite similar to Derrida’s distinction between l’avenir and futur), leaving an impoverished form of being/life (‘a universe without singularity’). We can only really recognise these points of rupture after the fact, because the ruptures themselves are also the seeds of the moment of realisation. To act positively and sustainably (‘negentropic bifurcation in entropic becoming’), both projecting possible futures (‘improbable singularity’) and responding to these various kinds of ‘undoing’ of potential, we attempt to create new possibilities (‘a process of individuation’).
Another related, perhaps more serious, criticism is that Stiegler quickly moves between and analogises things that might be considered to push the bounds of credulity. For example, the strategies of Daech/IS, management consultants and ‘GAFA’ (Google, Amazon, Facebook, Apple) are considered analogous by Stiegler when discussing ‘disruption’ as “a phenomenon of the acceleration of innovation, which is the basis of the strategy [of disruption] developed in Silicon Valley” [Here’s a strong response to that argument, in French]. Of course the idea of ‘disruptive practices’ as a force that can be diagnosed as ‘destroying social equilibrium’ in different domains is seductive. Nevertheless, isn’t part of that seduction in an over-generalisation of ‘disruption’ that elides the mixing of people, objects and strategies that are simply too different? Isn’t there a danger that ‘disruption’ becomes yet another ‘neoliberalism’ – a catchall diagnosis of all things bad against which we should (ineffectually) rail? It sometimes seems to me that there is a peculiar, slightly snobby, anti-Americanism that undergirds some of this, which, if so, does Stiegler’s work a disservice.
Taking this further, the analogies and the systemisation of the ‘jargon’-like concepts create what Alexandre Moatti, in the Zilsel blogpost, calls two families of antonyms.
There is a bit of a pattern here – to create systems of binaries. In the Disbelief and Discredit series it was Otium and Negotium. In Taking Care it was psycho- vs. noo-: politics, power, techniques and technologies. Perhaps this is what it means to put the pharmakon into practice for Stiegler? I worry that this habit of rendering systems of ideas that can be wielded authoritatively has the potential for a fairly negative implementation by ‘Stieglerian’ scholars – binary systems can be used as dogma. I think we’ve seen enough of that in the social sciences to be wary here.
I recognise that sometimes ‘difficult’ language is needed to get at difficult ideas. There has been controversy around these sorts of themes in anglophone scholarship before, not least in relation to the work of Judith Butler, and it’s possible to read about that elsewhere. I neither want to ‘throw stones’ nor ‘sit on a fence’ here; I admire aspects of Stiegler’s writing and thinking because it does, it seems to me anyway, get at some interesting and thorny issues. Nevertheless, the blizzard of concepts, the increasingly long and hard-to-follow sentences and, I think, the quickness of fairly sweeping arguments, especially when they write off big chunks of other people’s work (which is the case in the chapter that passage is from, in relation to Lévi-Strauss), feel to me like a series of ungenerous moves. There may be all sorts of reasons for this but it feels like a shame to me…
(2) Following from (1), there is a sense in which the increasing use of philosophical jargon, analogies and the fairly rigid system of concepts that is used to interlink many of these themes creates what Moatti calls a kind of closed Stieglerian environment of thought, which is mutually reinforcing but perhaps then limits participation, through the jargon and activities of its thinking: “there is a Stieglerian environment: that of its association Ars Industrialis [and the Institut de Recherche et d’Innovation at the Centre Pompidou], of the summer academies it organizes in the country residence of Epineuil-le-Fleuriel… but also of certain contemporary authors whom he cites and who gravitate around him (very often the same group)” [Moatti]. There is a danger that, both in the systematisation of the concepts discussed above and in the sort of informal, cabal-like behaviour of the associations, groups and schools, a closed group forms.
There have also been concerns expressed about the nature of the support for the programmes and work being undertaken by Ars Industrialis and IRI, and how this feeds into the work itself. In particular, it has been noted that the Plaine Commune experiment [discussed in this interview] in producing a ‘contributory territory’, in the vein of Stiegler’s notion of the ‘economy of contribution‘, relies on corporate sponsorship (including Orange and Dassault Systèmes) and channels funds into some rather plum roles, such as a relatively well-paid and cushy “Participatory Research Chair” (see: Faire de Plaine Commune en Seine-St-Denis) based at MSH Paris-Nord – in the territory, yes, but arguably not of it perhaps.
A corollary to this is the observation, articulated by others, that Stiegler draws upon a narrow and fairly specific set of studies to support sweeping generalisations – something of which I think other philosophers have also been guilty. For Stiegler this has been the case in relation to attention, with the use of one particular paper by N. Katherine Hayles that itself selectively draws upon a particular scientific study; ‘the anthropocene’, for which he relies significantly on Bonneuil and Fressoz (2016); and automation, with the use of a particular quote by Bill Gates and a widely cited but also contested speculative study by Frey and Osborne (2013) from the Oxford Martin School. This habit, in particular, I fear rather undermines the cogency of Stiegler’s arguments.
Where does this leave someone interested in these ideas? Or, more specifically, given this is a post on a personal blog through which I have significantly drawn upon Stiegler’s work: where does it leave my own interest? I don’t think I would go as far as some to declare that the recent work by Bernard Stiegler should be disregarded, or, in the extreme case, that Stiegler should no longer be considered worthy of the title ‘philosopher’ [See: Stéphane Vial’s Bernard Stiegler : la fin d’un philosophe]. Nevertheless, I think that, sadly, there probably is cause to be hesitant about engaging with Stiegler’s work on ‘the anthropocene’ and ‘automation’ for all of the reasons discussed above. I still think there are plenty of interesting arguments and ideas expressed by Bernard Stiegler, I just think, personally, I will need to take much greater care in working through the provenance of these ideas from now on.
1. See the Business Insider article ‘Bill Gates: People Don’t Realize How Many Jobs Will Soon Be Replaced By Software Bots‘, quoting from the 2014 conversation with Bill Gates at the American Enterprise Institute: From poverty to prosperity: A conversation with Bill Gates [approx. 46 minutes in]. Quote: ‘Capitalism, in general, will over time create more inequality and technology, over time, will reduce demand for jobs, particularly at the lower end of the skill set. … Twenty years from now labour demand for lots of skill sets will be substantially lower and I don’t think people have that in their mental model’.