Reading Clive Barnett’s The Priority of Injustice


There is now a ‘review forum’ in Political Geography around Clive’s The Priority of Injustice, featuring excellent reflections by Jack Layton, Juliet Davis, Jane Wills, David Featherstone and Cristina Temenos, with concluding remarks from Clive himself. I hope you will take the time to read these thoughtful pieces and perhaps consider reading Clive’s excellent book.

In my introduction I suggest:

The Priority of Injustice is an articulation of theory-in-practice, not the reified practice of theory as mastery but an ‘ordinary’ practice of scepticism and puzzling out. Barnett articulates the book as a form of “prolegomena for democratic inquiry”, as a means of rigorously laying the groundwork for asking questions about how democracy and politics actually play out. To respond to Barnett’s provocation might provoke another question: is this a clarion for ‘radical’ geographical theory? In The Priority of Injustice Barnett is doing theory, which is (differently) radical – insofar as it has perhaps become common for critical/radical geographers to (very ably) ‘evaluate’, ‘translate’ or ‘use’ theory, for example by applying theoretical ideas to empirical case studies. The invitation of The Priority of Injustice is to put theory in action as a part of ‘ordinary’ democratic practice. The principle of ‘charitable interpretation’, with the aim of “maximising understanding”, invoked by Barnett throughout the book, should, I think, be a tenet to which we all aspire.

Hope that encourages you to read on. If you do not have access please do get in touch.

Published: A very public cull – the anatomy of an online issue public


I am pleased to share that an article I co-authored with Rebecca Sandover (first author) and Steve Hinchliffe has finally been published in Geoforum. I would like to congratulate Rebecca on this achievement – the article went through a lengthy review process but is now available, open access, on the Geoforum website, so you can read the whole thing for free. To get a sense of the argument, here is the abstract:

Geographers and other social scientists have for some time been interested in how scientific and environmental controversies emerge and become public or collective issues. Social media are now key platforms through which these issues are publicly raised and through which groups or publics can organise themselves. As media that generate data and traces of networking activity, these platforms also provide an opportunity for scholars to study the character and constitution of those groupings. In this paper we lay out a method for studying these ‘issue publics’: emergent groupings involved in publicising an issue. We focus on the controversy surrounding the state-sanctioned cull of wild badgers in England as a contested means of disease management in cattle. We analyse two overlapping groupings to demonstrate how online issue publics function in a variety of ways – from the ‘echo chambers’ of online sharing of information, to the marshalling of agreements on strategies for action, to more dialogic patterns of debate. We demonstrate the ways in which digital media platforms are themselves performative in the formation of issue publics and that, while this creates issues, we should not retreat into debates around the ‘proper object’ of research but rather engage with the productive complications of mapping social media data into knowledge (Whatmore, 2009). In turn, we argue that online issue publics are not homogeneous and that the lines of heterogeneity are neither simple nor to be expected, and merit study as a means to understand the suite of processes and novel contexts involved in the emergence of a public.
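The paper’s actual method isn’t reproduced here, but as a rough illustration of what ‘mapping social media data’ of this kind can involve, the toy sketch below (entirely my own construction – the record fields and the retweet/reply heuristic are assumptions, not the authors’ method) tallies interaction kinds per hashtag, so that groupings dominated by one-way sharing (the ‘echo chamber’ pattern) can be crudely distinguished from more dialogic ones:

```python
from collections import Counter, defaultdict

def classify_interactions(tweets):
    """Crudely characterise groupings within an issue public.

    Each record is a dict with 'user', 'kind' ('retweet' | 'reply' |
    'original') and 'hashtags'. Returns per-hashtag counts of interaction
    kinds: a hashtag dominated by retweets suggests one-way sharing,
    while a higher share of replies suggests more dialogic debate.
    """
    by_tag = defaultdict(Counter)
    for t in tweets:
        for tag in t["hashtags"]:
            by_tag[tag][t["kind"]] += 1
    return {tag: dict(counts) for tag, counts in by_tag.items()}

# Invented sample data, loosely themed on the badger cull controversy.
sample = [
    {"user": "a", "kind": "retweet", "hashtags": ["badgercull"]},
    {"user": "b", "kind": "retweet", "hashtags": ["badgercull"]},
    {"user": "c", "kind": "reply", "hashtags": ["bovineTB"]},
    {"user": "d", "kind": "original", "hashtags": ["bovineTB", "badgercull"]},
]

print(classify_interactions(sample))
# → {'badgercull': {'retweet': 2, 'original': 1}, 'bovineTB': {'reply': 1, 'original': 1}}
```

A real analysis would of course work from platform API data and far richer relational measures; the point is simply that the heterogeneity the paper describes is something one can begin to operationalise from interaction traces.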

Unfathomable Scale – moderating social media platforms

Facebook logo reflected in a human eye

There’s a really nice piece by Tarleton Gillespie in Issue 04 of Logic themed on “scale” that concerns the scale of social media platforms and how we might understand the qualitative as well as quantitative shifts that happen when things change in scale.

The Scale is just unfathomable

But the question of scale is more than just the sheer number of users. Social media platforms are not just big; at this scale, they become fundamentally different than they once were. They are qualitatively more complex. While these platforms may speak of their online “community,” singular, at a billion active users there can be no such thing. Platforms must manage multiple and shifting communities, across multiple nations and cultures and religions, each participating for different reasons, often with incommensurable values and aims. And communities do not independently coexist on a platform. Rather, they overlap and intermingle—by proximity, and by design.

The huge scale of the platforms has robbed anyone who is at all acquainted with the torrent of reports coming in of the illusion that there was any such thing as a unique case… On any sufficiently large social network everything you could possibly imagine happens every week, right? So there are no hypothetical situations, and there are no cases that are different or really edgy. There’s no such thing as a true edge case. There’s just more and less frequent cases, all of which happen all the time.

No matter how they handle content moderation, what their politics and premises are, or what tactics they choose, platforms must work at an impersonal scale: the scale of data. Platforms must treat users as data points, subpopulations, and statistics, and their interventions must be semi-automated so as to keep up with the relentless pace of both violations and complaints. This is not customer service or community management but logistics—where concerns must be addressed not individually, but procedurally.

However, the user experiences moderation very differently. Even if a user knows, intellectually, that moderation is an industrial-sized effort, it feels like it happens on an intimate scale. “This is happening to me; I am under attack; I feel unsafe. Why won’t someone do something about this?” Or, “That’s my post you deleted; my account you suspended. What did I do that was so wrong?”

The Guardian of automation

Still from the video for All Is Full of Love by Björk

I have been looking back over the links to news articles I’ve been collecting about automation, and I’ve been struck by how the UK newspaper The Guardian has run at least one story a week on automation over the last few months (see their “AI” category for examples, or the list below). Many are spurred by reports and press releases about particular things, so The Guardian is hardly unique in pushing these narratives. It is striking nonetheless, not least because lots of academics (that I follow, anyway) share these stories on Twitter, where they feed a self-reinforcing, somewhat dystopian (‘rise of the robots’) narrative. I’m sure we all adopt an appropriate critical distance when reading such things, but there is a sense in which the ‘robots are coming for our jobs’ sort of arguments are being normalised and sedimented without a great deal of public critical reflection.

We might ask, in response to the ‘automation is taking jobs’ arguments: who says? (quite often: management consultants and think tanks) and: how do they know? It seems to me that the answers to those questions are pertinent and probably less clear – and so more interesting(!) – than one might imagine.

Here’s a selection of the Graun’s recent automation coverage:

AI Now report

My Cayla Doll

The AI Now Institute have published their second annual report with plenty of interesting things in it. I won’t try and summarise it or offer any analysis (yet). It’s worth a read:

The AI Now Institute, an interdisciplinary research center based at New York University, announced today the publication of its second annual research report. In advance of AI Now’s official launch in November, the 2017 report surveys the current use of AI across core domains, along with the challenges that the rapid introduction of these technologies are presenting. It also provides a set of ten core recommendations to guide future research and accountability mechanisms. The report focuses on key impact areas, including labor and automation, bias and inclusion, rights and liberties, and ethics and governance.

“The field of artificial intelligence is developing rapidly, and promises to help address some of the biggest challenges we face as a society,” said Kate Crawford, cofounder of AI Now and one of the lead authors of the report. “But the reason we founded the AI Now Institute is that we urgently need more research into the real-world implications of the adoption of AI and related technologies in our most sensitive social institutions. People are already being affected by these systems, be it while at school, looking for a job, reading news online, or interacting with the courts. With this report, we’re taking stock of the progress so far and the biggest emerging challenges that can guide our future research on the social implications of AI.”

There’s also a sort of Exec. Summary, a list of “10 Top Recommendations for the AI Field in 2017” on Medium too. Here’s the short version of that:

  1. Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g. “high stakes” domains) should no longer use ‘black box’ AI and algorithmic systems.
  2. Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design.
  3. After releasing an AI system, companies should continue to monitor its use across different contexts and communities.
  4. More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR.
  5. Develop standards to track the provenance, development, and use of training datasets throughout their life cycle.
  6. Expand AI bias research and mitigation strategies beyond a narrowly technical approach.
  7. Strong standards for auditing and understanding the use of AI systems “in the wild” are urgently needed.
  8. Companies, universities, conferences and other stakeholders in the AI field should release data on the participation of women, minorities and other marginalized groups within AI research and development.
  9. The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision making power.
  10. Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms.

Which sort of reads, to me, as: “There should be more social scientists involved” 🙂

Thinking in public – Baskin on Wurgaft

Bernard-Henri Lévy hit with a custard pie

From an interesting review by Jon Baskin of Benjamin Aldes Wurgaft’s “Thinking in Public: Strauss, Levinas, Arendt”. Found via Anne Galloway.

Arendt, Wurgaft suggests, may remain important today less for her writing on totalitarianism than for her warnings about the rise of the “technocrats” – a new breed of “intellectuals” who pictured political life as involving the accomplishment of pre-established tasks, rather than as an ongoing argument involving perennial questions about what we value, and why.

The technocrats, undoubtedly, are still with us. At one point in his article, Wurgaft cites a widely praised review of Daniel Drezner’s recent book, The Ideas Industry, by the intellectual historian David Sessions. Drezner’s book, says Sessions, shows how today’s would-be public intellectuals are being drowned out by the rise of “thought leaders.” Thought leaders are glorified technicians and TED Talk evangelists, like Sheryl Sandberg, Thomas Friedman, and Parag Khanna, who nevertheless are treated by large audiences as emissaries from the world of ideas. Such figures would seem to fulfill Arendt’s prophecy about the danger of a culture coming to revere elite technocratic authority.

Sessions’s article, though, is not just about the superficiality and corruption of thought leaders – a seductively soft target for his New Republic readership. Sessions also hazards a positive description of what makes someone a real or authentic intellectual, and it is in these passages that his article is truly, if unwittingly, revealing. Whereas the thought leaders are guilty of flattering the whims of the superrich, Sessions claims, a group he approvingly calls the “new intellectuals on the left” have demonstrated their independence by being “willing to expose the prattle of thought leaders, to attack the rhetorical smoke screens of the liberal center, and to defend working-class voters.” Later, crediting a cluster of leftist-associated magazines (including this one) with the revival of American intellectual life, Sessions leaves little doubt as to what he considers qualifies someone to be a genuine public intellectual. To be a genuine public intellectual is to agitate for the working class, and against the “liberal center” or the superrich (also, apparently, to reflexively conflate those two terms). To be a genuine public intellectual is to have the “courage,” as he calls it, to speak truth to power.
[…]
What does it mean, then, to be an “intellectual on the left”? Although I confess the phrase strikes me as somewhat mysterious, it is not impossible to imagine a definition: an intellectual on the left, having arrived at certainty about the correct direction for society, helps formulate and disseminate arguments for moving society in that direction. But if we accept this definition as meaningful, we are compelled to agree with Strauss and Arendt that the figure of the public intellectual represents a debasement of thinking, rather than a model for it. There are plenty of reasons to commit as citizens to political parties or movements – and there may even be reasons to consider that commitment as partly the product of philosophical reasoning. But someone who speaks as a representative of a fixed ideology or group has subjugated the philosopher within themselves to the partisan.

Read the whole article here.