The unapologetic ‘rockstar of regeneration’

A bike café

Thinking back to Jason Luger’s engaging blogpost about ‘the comeuppance’ of Richard Florida and the evolution of his “creative class” thesis, this interview with Florida in the grauniad is sort of interesting to read… ‘Everything is gentrification now’: but Richard Florida isn’t sorry.

“I’m not sorry,” he barks, sitting in a hotel lobby in Mayfair, wearing a leather jacket and black T-shirt. “I will not apologise. I do not regret anything.”

*cough*

Reblog> Should robots be granted the status of legal personhood?

Twiki the robot from Buck Rogers

From John Danaher’s Philosophical Disquisitions.

Danaher offers his incisive analysis of recent additions to the debate on legal personhood re. “robots”. It seems interesting (to me) in relation to two things: agency, and how we imagine automation – since we don’t actually have such ‘robots’ at the moment. It’s quite a long post (so only a snippet below), but worth working through… not least because in geographyland it seems to me many of us approach these sorts of things from a fairly narrow (simplified) post-structuralist account of ‘subjectivity’.

Should robots be granted the status of legal personhood?

The EU parliament attracted a good deal of notoriety in 2016 when its draft report on civil liability for robots suggested that at least some sophisticated robots should be granted the legal status of ‘electronic personhood’. British tabloids were quick to seize upon the idea — the report came out just before the Brexit vote — as part of their campaign to highlight the absurdity of the EU. But is the idea really that absurd? Could robots ever count as legal persons?

A recent article by Bryson, Diamantis and Grant (hereinafter ‘BDG’) takes up these questions. In ‘Of, for, and by the people: the legal lacuna of synthetic persons’, they argue that the idea of electronic legal personhood is not at all absurd. It is a real but dangerous possibility – one that we should actively resist. Robots can, but should not, be given the legal status of personhood.

BDG’s article is the best thing I have read on the topic of legal personhood for robots. I believe it presents exactly the right framework for thinking about and understanding the debate. But I also think it is misleading on a couple of critical points. In what follows, I will set out BDG’s framework, explain their central argument, and present my own criticisms thereof.

Read the full blogpost.

Reblog> Author-Meets-More-or-Less-Friendlies: The Priority of Injustice at AAG 2018

The Priority of Injustice – Clive Barnett

Via Clive. This will be worth going to if you’re going to the AAG in 2018…

Author-Meets-More-or-Less-Friendlies: The Priority of Injustice at AAG 2018

I’m delighted to announce that the very wonderful Michael Samers has arranged an Author Meets Critics session on The Priority of Injustice, my new book (did I mention that?) at the annual meeting of the Association of American Geographers in New Orleans in April. It’s a great panel, with Joshua Barkan (U. of Georgia), Jennifer Fluri (U. of Colorado, Boulder), Leila Harris (UBC), and Kirsi Kallio (University of Tampere) all commenting on the book. The session is sponsored by AAG’s Political Geography Specialty Group and Ethics, Justice, and Human Rights Specialty Group. There’s a nice symmetry about the prospect of discussing the book in New Orleans – the last time the conference was there, in 2003, I presented a paper on theories of radical democracy that was my first post-Culture and Democracy effort at articulating the limits of broadly post-structuralist approaches to that topic, an effort that led eventually to the shape of The Priority of Injustice (yes, I’m a slow thinker).

Reblog > The Priority of Injustice

The Priority of Injustice – Clive Barnett

My colleague Prof. Clive Barnett’s excellent new book is out. He introduces it in a recent blogpost:

The Priority of Injustice

So, finally, the book that I have been writing, on and off, for the last four years, The Priority of Injustice, has been published – or at least, it’s real, since the formal publication date is next month (so I reserve the right to blog further about it as and when). It arrived earlier this week – a rather hectic week, which has oddly meant I have been too busy to experience the strange sense of anti-climax that often accompanies the arrival of the finished form of something that you have been making for so long.

This is, in one sense, my Exeter book – the first thing I did in my very first week here, four years ago, was write the proposal and send it off to prospective publishers. It’s also, though, my Swindon book, a book which attempts to articulate an approach to theorising in an ordinary spirit, which has been published just a few weeks after moving away from that very ordinary town where I have lived while writing it.

It’s a beautiful object, with a great cover image by Helen Burgess (I bought one of her pictures once, in one of those open-house art trail events that you get in places like Bishopston in Bristol, so that’s why I knew of her work; it turns out she is part of a geography-friendly network of artists). And I am honoured and humbled to have the book published in the University of Georgia Press’s very excellent Geographies of Justice and Social Transformation series.

I’m now faced with the challenge of promoting the book. I’m quite fond of the Coetzee-esque principle that books should have to make their own way in the world without the help of the author; on the other hand, I have some sense of responsibility towards the argument made in the book, a responsibility to help project it into the world. I’ve already realised that it’s not the sort of book that lends itself to an easy press release – ‘THEORY COULD BE THEORISED DIFFERENTLY’, SAYS THEORY-BOY doesn’t really work as a headline, does it?

Read the full blogpost.

AI Now report

My Cayla Doll

The AI Now Institute have published their second annual report with plenty of interesting things in it. I won’t try and summarise it or offer any analysis (yet). It’s worth a read:

The AI Now Institute, an interdisciplinary research center based at New York University, announced today the publication of its second annual research report. In advance of AI Now’s official launch in November, the 2017 report surveys the current use of AI across core domains, along with the challenges that the rapid introduction of these technologies is presenting. It also provides a set of ten core recommendations to guide future research and accountability mechanisms. The report focuses on key impact areas, including labor and automation, bias and inclusion, rights and liberties, and ethics and governance.

“The field of artificial intelligence is developing rapidly, and promises to help address some of the biggest challenges we face as a society,” said Kate Crawford, cofounder of AI Now and one of the lead authors of the report. “But the reason we founded the AI Now Institute is that we urgently need more research into the real-world implications of the adoption of AI and related technologies in our most sensitive social institutions. People are already being affected by these systems, be it while at school, looking for a job, reading news online, or interacting with the courts. With this report, we’re taking stock of the progress so far and the biggest emerging challenges that can guide our future research on the social implications of AI.”

There’s also a sort of Exec. Summary, a list of “10 Top Recommendations for the AI Field in 2017” on Medium too. Here’s the short version of that:

  1. Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g. “high stakes” domains) should no longer use ‘black box’ AI and algorithmic systems.
  2. Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design.
  3. After releasing an AI system, companies should continue to monitor its use across different contexts and communities.
  4. More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR.
  5. Develop standards to track the provenance, development, and use of training datasets throughout their life cycle.
  6. Expand AI bias research and mitigation strategies beyond a narrowly technical approach.
  7. Strong standards for auditing and understanding the use of AI systems “in the wild” are urgently needed.
  8. Companies, universities, conferences and other stakeholders in the AI field should release data on the participation of women, minorities and other marginalized groups within AI research and development.
  9. The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision making power.
  10. Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms.

Which sort of reads, to me, as: “There should be more social scientists involved” 🙂

Getting in ‘the zone’: Luxury & Paranoia, Access & Exclusion – Capital and Public Space

Uber surge pricing in LA

Another interesting ‘long form’ essay on the Institute of Network Cultures site. This piece by Anastasia Kubrak and Sander Manse directly addresses some contemporary themes in geographyland – access, ‘digital’-ness, exclusion, ‘rights to the city’, technology & urbanism and ‘verticality’. The piece turns around an exploration of the idea of a ‘zone’ – ‘urban zoning’, ‘special economic zones’, ‘export processing zones’, ‘free economic/enterprise zones’, ‘no-go zones’. Some of this, of course, covers familiar ground for geographers but it’s interesting to see the argument play out. It seems to resonate, for example, with Matt Wilson’s book New Lines.

Here’s some blockquoted bits (all links are in the original).

Luxury & Paranoia, Access & Exclusion On Capital and Public Space

We get into an Uber car, and the driver passes by the Kremlin walls, guided by GPS. At the end of the ride, the bill turns out to be three times as expensive as usual. What is the matter? We check the route, and the screen shows that we travelled to an airport outside of Moscow. Impossible. We look again: the moment we approached the Kremlin, our location automatically jumped to Vnukovo. As we learned later, this was caused by a GPS fence set up to confuse and disorient aerial sensors, preventing unwanted drone flyovers.

How can we benefit as citizens from the increase in sensing technologies, remote data-crunching algorithms, leaching geolocation trackers and parasite mapping interfaces? Can the imposed verticality of platform capitalism by some means enrich the surface of the city, and not just exploit it? Maybe our cities deserve a truly augmented reality – reality in which value generated within urban space actually benefits its inhabitants, and is therefore ‘augmented’ in the sense of increased or made greater. Is it possible to consider the extension of zoning not only as an issue, but also as a solution, a way to create room for fairer, more social alternatives? Can we imagine the sprawling of augmented zones today, still of accidental nature, being utilized or artificially designed for purposes other than serving capital?

Gated urban enclaves also proliferate within our ‘normal’ cities, perforating the existing social fabric. Privatization of the urban landscape affects our spatial rights, such as simply the right of passage: luxury stores and guarded residential areas already deny access to the poor and marginalized. But how do these acts of exclusion happen in cities dominated by the logic of platform capitalism? What happens when more tools become available to scan, analyze and reject citizens on the basis of their citizenship or credit score? Accurate user profiles come in handy when security is automated in urban space: surveillance induced by smart technologies, from electronic checkpoints to geofencing, can amplify exclusion.

This tendency becomes clearly visible with Facebook being able to allow for indirect urban discrimination through targeted advertising. This is triggered by Facebook’s ability to exclude entire social groups from seeing certain ads based on their user profile, so that upscale housing-related ads might be hidden from them, making it harder for them to leave poorer neighborhoods. Meanwhile Uber is charging customers based on the prediction of their wealth, varying prices for rides between richer and poorer areas. This speculation on value enabled by the aggregation of massive amounts of data crystallizes new forms of information inequality in which platforms observe users through a one-way mirror.

If platform economies take the city hostage, governmental bodies of the city can seek ways to counter privatization on material grounds. The notorious Kremlin GPS spoofing fence sends false coordinates to any navigational app within the city center, thereby also disrupting the operation of Uber and Google Maps. Such gaps on the map – blank spaces – are usually precoded in spatial software by platforms, and can expel certain technologies from a geographical site, leaving no room for negotiation. Following the example of Free Economic Zones, democratic bodies could gain control over the city again by artificially constructing such spaces of exception. Imagine rigorous cases of hard-line zoning such as geofenced Uber-free Zones, concealed neighborhoods on Airbnb, areas secured from data-mining or user-profile-extraction.

Vertical zoning can alter the very way in which capital manifests itself. The Bristol Pound is an example of a city-scale local currency, created specifically to keep added value in circulation within one city. It is accepted by an impressive number of local businesses and for paying monthly wages and taxes. Though the Bristol Pound still circulates in paper, today we can witness a global sprawl of blockchain-based community currencies, landing within big cities or even limited to neighborhoods. Remarkably, the Colu Local Digital Wallet can be used in Liverpool, the East London area, Tel Aviv and Haifa – areas with a booming tech landscape or strong sense of community.
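To make the mechanism in those excerpts a little more concrete: a geofence is, at bottom, just a membership test on coordinates, and a ‘spoofing fence’ of the Kremlin kind remaps any position that passes the test to a decoy. Here’s a minimal sketch in Python – the fence polygon, the decoy coordinates and the helper names are my own illustrative assumptions, not anything from the essay:

```python
# Sketch of a geofence: a point-in-polygon membership test, plus a
# "spoofing fence" that remaps positions inside the fence to a fixed
# decoy location (a la the Vnukovo jump described in the essay).

def point_in_polygon(lat, lon, polygon):
    """Ray-casting test: is (lat, lon) inside `polygon`?
    `polygon` is a list of (lat, lon) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        y1, x1 = polygon[i]
        y2, x2 = polygon[(i + 1) % n]
        # Does a ray cast eastward from the point cross this edge?
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Hypothetical fence roughly around a city-centre block (illustrative).
FENCE = [(55.748, 37.612), (55.748, 37.623),
         (55.755, 37.623), (55.755, 37.612)]
DECOY = (55.596, 37.267)  # a faraway fixed point, standing in for Vnukovo

def reported_position(lat, lon):
    """What a receiver inside a spoofing fence would report."""
    return DECOY if point_in_polygon(lat, lon, FENCE) else (lat, lon)
```

Real systems would use a geodesy-aware library rather than this flat-earth ray cast, but the inclusion/exclusion logic the essay describes – Uber-free zones, neighbourhoods concealed from a platform – is exactly this kind of test applied at the platform’s discretion.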

Ellen Ullman’s Life in Code

An interesting account of Close to the Machine author Ellen Ullman’s most recent book Life in Code, which sounds fantastic and very much worth a read (just like Close to the Machine), and of something of its context. From the NYT:

LIFE IN CODE

A Personal History of Technology

By Ellen Ullman

Illustrated. 306 pp. Farrar, Straus & Giroux.

As milestone years go, 1997 was a pretty good one. The computers may have been mostly beige and balky, but certain developments were destined to pay off down the road. Steve Jobs returned to a floundering Apple after years of corporate exile, IBM’s Deep Blue computer finally nailed the world-champion chess master Garry Kasparov with a checkmate, and a couple of Stanford students registered the domain name for a new website called google.com. Nineteen ninety-seven also happened to be the year that the software engineer Ellen Ullman published “Close to the Machine: Technophilia and Its Discontents,” her first book about working as a programmer in a massively male-dominated field.

That slender volume became a classic of 20th-century digital culture literature and was critically praised for its sharp look at the industry, presented in a literary voice that ignored the biz-whiz braggadocio of the early dot-com era. The book had obvious appeal to technically inclined women — desktop-support people like myself then, computer-science majors, admirers of Donna J. Haraway’s feminist cyborg manifesto, those finding work in the newish world of website building — and served as a reminder that someone had already been through it all and took notes for the future.

Then Ullman retired as a programmer, logging out to go write two intense character-driven thriller novels and the occasional nonfiction essay. The digital economy bounced back after the Epic Fail of 2000 and two decades later, those techno-seeds planted back in 1997 have bloomed. Just look at all those smartphones, constantly buzzing with news alerts and calendar notifications as we tell the virtual assistant to find us Google Maps directions to the new rice-bowl place. What would Ullman think of all this? We can now find out, as she’s written a new book, “Life in Code: A Personal History of Technology,” which manages to feel like both a prequel and a sequel to her first book.

Read the rest on the NYT website.

Reblog> Whither the Creative City? The Comeuppance of Richard Florida

Nice post from Jason Luger:

Whither the Creative City? The Comeuppance of Richard Florida

Talent, Technology, and Tolerance, said Florida (2002), were the pre-conditions for a successful urban economy. Florida’s ‘creative class’ theory, much copied, emulated and critically maligned, delineated urban regions with ‘talent’ (PhDs); ‘technology’ (things like patents granted); and ‘tolerance’ (represented by a rather arbitrary ‘gay index’ of same-sex households in census data).

This combination, according to Florida’s interpretation of his data, indicated urban creative ‘winners’ versus urban ‘losers’: blue collar cities with more traditional economies and traditional worldviews. Creative people want to be around other creative people, wrote Florida, so failing to provide an ideal urban environment for them will result in their ‘flight’ (2005) and the loss of all the benefits of the creative economy. Therefore, to win in the ‘new economy’ (Harvey, 1989), cities need to compete for, and win the affections of, the ‘creative class’. Or so Florida then-believed.

Read the full post.

Growing criticism of Stiegler

I’ve begun to see some interesting criticism of Bernard Stiegler’s more recent activities, both in terms of books and the Plaine Commune project, which I confess resonate with some misgivings I have had for a little while [e.g. Chatonsky, Moatti, Silberzahn, Vial]. No doubt there are responses to these criticisms and so I’m not going to attempt to reiterate them wholesale or vouch for them – not least because I haven’t read the more recent work. Let me be clear in this – I think critical reflection on an argument or project is not only healthy but a necessary part of the humanities and science. I am neither seeking to ‘write off’ Stiegler’s work nor dogmatically defend it.

Nevertheless, I think there are two particular, related, points that occur in a number of different criticisms online that it seems to me may hold some water. These are:

(1) Increasingly in recent work, which is being produced at a breakneck speed, there is a quickness with concepts that has drawn criticism. Howells and Moore (2013), in their introduction to the first anglophone secondary text on Stiegler’s work, say: “In contrast to the patient, diligent deep-readings we have come to expect since Derrida, the image that emerges of Stiegler is perhaps a thinker who zooms in and out of texts, skimming them to fit what he is looking to find” (p. 9). Howells and Moore, in the end, see this as a strength. Nevertheless, there is a growing ‘jargon’ that loosens and perhaps undermines the analytical purchase within the arguments Stiegler attempts to make. This is particularly evident in the kinds of names for the over-arching project that have begun to emerge, for example: “organology”, “neganthropology” and “pharmacosophy”. Furthermore, it has been argued that, especially in relation to the use of the ideas of “disruption” and “entropy”, Stiegler makes all-too-quick analogies and equivalences between different meanings or applications of a given term that water down or perhaps even undermine its use. So, for example, in terms of the increasing use of philosophical concepts, for which he stands accused of creating ever-more impenetrable jargon, we might look to his recent conjugation of the ideas of automation, anthropocene and entropy/negentropy. In an extensive and rather forthright blogpost by Alexandre Moatti, an example is taken from The Automatic Society 1 (I’ve provided the full quote, whereas Moatti just takes a snippet):

“All noetic bifurcation, that is to say all quasi-causal bifurcation, derives from a cosmic potlatch that indeed destroys very large quantities of differences and orders, but it does so by projecting a very great difference on another plane, constituting another ‘order of magnitude’ against the disorder of a kosmos in becoming, a kosmos that, without this projection of a yet-to-come from the unknown, would be reduced to a universe without singularity. A neganthropological singularity (which does not submit to any ‘anthropology’) is a negentropic bifurcation in entropic becoming, of which it becomes the quasi-cause, and therein a point of origin – through this improbable singularity that establishes it and from which, intermittently and incrementally, it propagates a process of individuation” [p. 246].

I cannot honestly say that I can confidently interpret or translate the meaning of this passage; perhaps someone will comment below with their version. However, I am confident that the translator (from French), Dan Ross, will have done a thorough job of trying to capture the sense of the passage as best he can. Nevertheless, even with a knowledge of the various sources the terminology implies (made more or less explicit in the preceding parts of the book), the prose is perhaps problematic. Here’s my take, for what it’s worth:

What I think is being suggested in the quote above is that all life we call ‘human’ is supported by language, writing and other prostheses (‘noetic life’) and that when these social systems shift and are split (bifurcation) they destroy productive forms of difference – different forms of understanding, different ways of knowing and different ways of living perhaps (‘differences and orders’). In so doing, this projects a bigger hiving off of the potential for life (‘difference on another plane’ and ‘disorder of a kosmos‘) to change in various ways, possibly preventing particular kinds of future (‘yet-to-come’ ~ quite similar to Derrida’s distinction between l’avenir and futur) and leaving an impoverished form of being/life (‘a universe without singularity’). We can only really recognise these points of rupture after the fact, because the ruptures themselves are also the seeds of the moment of realisation. To act positively and sustainably (‘negentropic bifurcation in entropic becoming’), both for a positive projection of possible futures (‘improbable singularity’) and in response to these various kinds of ‘undoing’ of potential, we attempt to create new possibilities (‘a process of individuation’).

Another related, perhaps more serious, criticism is that Stiegler quickly moves between and analogises things that might be considered to push the bounds of credulity. For example, the strategies of Daech/IS, management consultants and ‘GAFA’ (Google, Amazon, Facebook, Apple) are considered analogous by Stiegler when discussing ‘disruption’ as “a phenomenon of the acceleration of innovation, which is the basis of the strategy [of disruption] developed in Silicon Valley” [Here’s a strong response to that argument, in French]. Of course the idea of ‘disruptive practices’ as a force that can be diagnosed as ‘destroying social equilibrium’ in different domains is seductive. Nevertheless, isn’t part of that seduction in an over-generalisation of ‘disruption’ that elides the mixing of people, objects and strategies that are simply too different? Isn’t there a danger that ‘disruption’ becomes yet another ‘neoliberalism’ – a catchall diagnosis of all things bad against which we should (ineffectually) rail? It sometimes seems to me that there is a peculiar, slightly snobby, anti-americanism that undergirds some of this, which, if so, does Stiegler’s work a disservice.

Taking this further, the analogies and the systemisation of the ‘jargon’-like concepts creates what Alexandre Moatti, in the Zilsel blogpost, calls two families of antonyms:

  • (automation) entropy, anthropisation, the Anthropocene era or the ‘Entropocene’, anthropocenologists
  • (deautomation) negentropy, neganthropy, neganthropisation and the ‘Neganthropocene’ era

There is a bit of a pattern here – to create systems of binaries. In the Disbelief and Discredit series it was Otium and Negotium. In Taking Care it was psycho- vs. noo-: politics, power, techniques and technologies. Perhaps this is what it means to put the pharmakon into practice for Stiegler? I worry that this habit of rendering systems of ideas that can be wielded authoritatively has the potential for a fairly negative implementation by ‘Stieglerian’ scholars – binary systems can be used as dogma. I think we’ve seen enough of that in the social sciences to be wary here.

I recognise that sometimes ‘difficult’ language is needed to get at difficult ideas. There has been previous controversy around these sorts of themes in anglophone scholarship, not least in relation to the work of Judith Butler, and it’s possible to read about that elsewhere. I neither want to ‘throw stones’ nor ‘sit on a fence’ here; I admire aspects of Stiegler’s writing and thinking because it does, it seems to me anyway, get at some interesting and thorny issues. Nevertheless, the blizzard of concepts, the increasingly long and hard-to-follow sentences and, I think, the quickness of fairly sweeping arguments, especially when they write off big chunks of other people’s work (which is the case in the chapter that passage is from, in relation to Lévi-Strauss), feel to me like a series of ungenerous moves. There may be all sorts of reasons for this but it feels like a shame to me…

(2) Following from (1), there is a sense in which the increasing use of philosophical jargon, analogies and the fairly rigid system of concepts used to interlink many of these themes creates what Moatti calls a kind of closed Stieglerian environment of thought, which is mutually reinforcing but perhaps then limits participation through its jargon and activities: “there is a Stieglerian environment: that of its association Ars Industrialis [and the Institut de Recherche et d’Innovation at the Centre Pompidou], of the summer academies it organizes in the country residence of Epineuil-le-Fleuriel… but also of certain contemporary authors whom he cites and who gravitate around him (very often the same group)” [Moatti]. There is a danger that both the systematisation of the concepts, as discussed above, and the sort of informal cabal-like behaviour of the associations, groups and schools produce a closed group.

There have also been concerns expressed about the nature of the support for the programmes and the work being undertaken by Ars Industrialis and IRI, and how this feeds into the work itself. In particular, it has been noted that the Plaine Commune experiment [discussed in this interview] in producing a ‘contributory territory’, in the vein of Stiegler’s notion of the ‘economy of contribution‘, relies on corporate sponsorship (including Orange and Dassault Systèmes) and has channelled funds into some rather plum roles, such as the creation of a relatively well-paid and cushy “Participatory Research Chair” (see: Faire de Plaine Commune en Seine-St-Denis) based at MSH Paris-Nord – in the territory, yes, but arguably not of it perhaps.

A corollary to this is the observation articulated by others, and something of which I think other philosophers have also been guilty, of drawing upon a narrow and fairly specific set of studies to support sweeping generalisations. For Stiegler this has been the case in relation to attention, with the use of one particular paper by N. Katherine Hayles, selectively drawing upon a particular scientific study; for ‘the anthropocene’, for which he significantly relies on Bonneuil and Fressoz (2016); and for automation, with the use of a particular quote by Bill Gates [1] and a widely cited but also contested speculative study by Frey and Osborne (2013) from the Oxford Martin School. This habit, in particular, I fear rather undermines the cogency of Stiegler’s arguments.

Where does this leave someone interested in these ideas? Or, more specifically, given this is a post on a personal blog through which I have significantly drawn upon Stiegler’s work: where does it leave my own interest? I don’t think I would go as far as some to declare that the recent work by Bernard Stiegler should be disregarded, or, in the extreme case, that Stiegler should no longer be considered worthy of the title ‘philosopher’ [See: Stéphane Vial’s Bernard Stiegler : la fin d’un philosophe]. Nevertheless, I think that, sadly, there probably is cause to be reticent about engaging with Stiegler’s work on ‘the anthropocene’ and ‘automation’ for all of the reasons discussed above. I still think there are plenty of interesting arguments and ideas expressed by Bernard Stiegler; I just think, personally, I will need to take much greater care in working through the provenance of these ideas from now on.

Notes

1. see the Business Insider article ‘Bill Gates: People Don’t Realize How Many Jobs Will Soon Be Replaced By Software Bots‘, quoting from the 2014 conversation with Bill Gates at the American Enterprise Institute: From poverty to prosperity: A conversation with Bill Gates [approx. 46 minutes in]. Quote: ‘Capitalism, in general, will over time create more inequality and technology, over time, will reduce demand for jobs, particularly at the lower end of the skill set. … Twenty years from now labour demand for lots of skill sets will be substantially lower and I don’t think people have that in their mental model’.